12 Appendix
12.1 Purpose of the Appendix
This appendix complements the main body of the whitepaper by providing reference material, technical notation, and implementation-oriented detail. The main chapters established the economic motivation, framework, architecture, use cases, examples, limitations, and roadmap of LSDx. The appendix serves a different purpose: it is a structured technical companion.
In particular, the appendix has four roles.
First, it provides a reference taxonomy of Liquid Staking Derivatives and related token forms relevant to the LSDx universe.
Second, it defines the main terminology used throughout the paper so that conceptual distinctions remain clear and stable.
Third, it expands the description of the risk-factor system in a more granular and implementation-friendly way.
Fourth, it collects the key equations and formula structures used in the framework, including valuation decomposition, adjusted-yield logic, score construction, and horizon-sensitive interpretation.
The appendix should therefore be read as both a technical reference and a bridge between the whitepaper and future implementation.
12.2 List of LSD Tokens
12.2.1 Purpose of the token list
The purpose of this section is not to provide a final or exhaustive census of all liquid staking derivatives in existence. The market evolves continuously, token wrappers change, and new variants emerge over time. Instead, the purpose is to define a useful reference universe and token classification approach for LSDx.
The list below should be interpreted as a working taxonomy. It helps distinguish between direct LSDs, wrapped forms, exchange-linked forms, and more complex staking-related derivative tokens. The analytical framework of LSDx begins with this taxonomy because not every staking-linked token should be treated identically.
12.2.2 Token classification principles
For the purposes of LSDx, tokens may be grouped into several broad classes.
12.2.2.1 Native or direct LSDs
These are tokenised claims linked relatively directly to staked underlying assets through a liquid staking protocol. Their value is primarily driven by:
- underlying staking economics,
- protocol fee structure,
- validator set quality,
- redemption mechanics,
- and market liquidity.
These are the core instruments for the first versions of LSDx.
12.2.2.2 Wrapped LSDs
These are wrappers around direct LSDs or around claim structures that themselves already represent staking-linked value. Wrapped forms may alter:
- transferability,
- accounting representation,
- DeFi usability,
- and in some cases the effective liquidity profile.
Wrapped tokens require careful treatment because their price and usability may depend on both the wrapped asset and the wrapper structure.
12.2.2.3 Exchange-linked staking tokens
Some staking-linked tokens originate from centralised or semi-centralised exchange environments. They may provide useful yield-bearing exposure, but their analytical treatment can differ because:
- governance and control structures differ,
- custody assumptions differ,
- and redemption or operational mechanics may differ materially from fully protocol-native structures.
These tokens may still belong in the analytical universe, but they may require a distinct structural treatment.
12.2.2.4 Restaked or recursively integrated staking derivatives
Some token forms represent more than a simple claim on staked underlying value. They may embed additional layers of risk or utility through restaking, recursive use, structured wrappers, or dependence on secondary protocols.
These instruments are important but should be treated cautiously in early versions of LSDx. Their inclusion requires richer dependency mapping and more explicit structural modelling.
12.2.3 Reference token table
The following table provides a reference-style list of token categories relevant to LSDx. The entries are illustrative and should be updated as the supported universe evolves.
| Token / Category | Type | Base asset linkage | Notes for LSDx treatment |
|---|---|---|---|
| stETH-type token | Direct LSD | ETH staking exposure | Core reference case for direct LSD analysis |
| rETH-type token | Direct LSD | ETH staking exposure | Important for comparative ETH-LSD analysis |
| cbETH-type token | Exchange-linked LSD | ETH staking exposure | Useful but structurally distinct from protocol-native forms |
| Wrapped stETH-type token | Wrapped LSD | Derived from direct ETH-LSD | Requires wrapper-aware treatment |
| sfrxETH-type token | Yield-accumulating staking-linked form | ETH staking exposure | Needs careful carry and accounting interpretation |
| osETH-type token | Structured staking derivative | ETH staking exposure | May require richer integration and design analysis |
| swETH-type token | Direct LSD | ETH staking exposure | Relevant in broader ETH-LSD comparative universe |
| mETH-type token | Direct LSD | ETH staking exposure | Potential candidate for extended comparative coverage |
| Chain-specific LSD tokens | Direct LSD | Non-ETH PoS asset | Relevant for multi-chain future extension |
| Wrapped cross-chain representations | Wrapped / bridged form | Indirect staking exposure | Requires conservative treatment in early versions |
This table is intentionally generic in wording. In production implementation, LSDx should maintain a canonical token registry with precise identifiers, metadata, structural labels, and methodology support status.
12.2.4 Canonical token registry fields
A practical internal registry for LSDx may contain fields such as:
- canonical token identifier,
- display name,
- ticker symbol,
- base asset,
- token class,
- wrapper class,
- rebasing or exchange-rate growth flag,
- protocol family,
- supported chains,
- redemption pathway category,
- liquidity coverage status,
- methodology support status,
- and confidence or maturity tag.
An illustrative internal schema may be represented as follows:
| Field | Description |
|---|---|
| token_id | Canonical internal identifier |
| symbol | Public-facing token symbol |
| display_name | Human-readable token name |
| base_asset | Underlying economic asset |
| token_class | Direct LSD, wrapped LSD, exchange-linked, structured, etc. |
| wrapper_parent | Parent token if wrapped |
| yield_representation | Rebasing, exchange-rate growth, hybrid |
| protocol_family | Issuing or originating protocol family |
| redemption_class | Direct, queued, indirect, market-dependent |
| liquidity_class | Preliminary liquidity bucket |
| coverage_status | Complete, partial, degraded, unsupported |
| methodology_version | Current supported methodology version |
This registry is foundational because the rest of the framework depends on consistent treatment of token identity and structure.
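As a minimal sketch of how such a registry might be represented in code, the following Python fragment models the fields above as an immutable record and resolves wrapper chains back to their direct parent. All identifiers, example values, and the `resolve_direct_parent` helper are hypothetical illustrations, not part of any existing LSDx implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TokenRecord:
    """One entry in a hypothetical LSDx canonical token registry."""
    token_id: str
    symbol: str
    display_name: str
    base_asset: str
    token_class: str           # e.g. "direct_lsd", "wrapped_lsd", "exchange_linked"
    yield_representation: str  # "rebasing", "exchange_rate", or "hybrid"
    redemption_class: str      # "direct", "queued", "indirect", "market_dependent"
    coverage_status: str       # "complete", "partial", "degraded", "unsupported"
    methodology_version: str
    wrapper_parent: Optional[str] = None  # set only for wrapped forms

# Illustrative entries only -- identifiers and values are placeholders.
registry = {
    r.token_id: r
    for r in [
        TokenRecord("eth-lsd-001", "stETH", "Example direct LSD", "ETH",
                    "direct_lsd", "rebasing", "queued", "complete", "v0.1"),
        TokenRecord("eth-lsd-002", "wstETH", "Example wrapped LSD", "ETH",
                    "wrapped_lsd", "exchange_rate", "indirect", "complete", "v0.1",
                    wrapper_parent="eth-lsd-001"),
    ]
}

def resolve_direct_parent(token_id: str) -> str:
    """Follow wrapper_parent links until a non-wrapped record is reached."""
    rec = registry[token_id]
    while rec.wrapper_parent is not None:
        rec = registry[rec.wrapper_parent]
    return rec.token_id
```

Keeping the record immutable (`frozen=True`) reflects the point made above: downstream factor construction depends on token identity and structure remaining consistent.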
12.2.5 Token universe maturity tiers
It is useful for LSDx to distinguish tokens by coverage and maturity tier.
12.2.5.1 Tier 1: fully supported analytical universe
These are tokens for which LSDx has:
- sufficient data coverage,
- stable canonical mapping,
- reliable liquidity diagnostics,
- and methodology support across the main factor dimensions.
These tokens should form the first comparative production universe.
12.2.5.2 Tier 2: partially supported analytical universe
These are tokens that can be included with partial confidence. Some dimensions may be covered, while others remain sparse or manually curated.
These tokens may appear in extended research views but should carry explicit caution.
12.2.5.3 Tier 3: observational universe only
These are tokens for which LSDx may maintain metadata or basic descriptive information, but not yet full scoring or fair-value outputs.
This distinction is important because it prevents false equivalence between mature and immature analytical coverage.
12.3 Glossary of Terms
12.3.1 Purpose of the glossary
The vocabulary of LSDx must remain precise. Several terms used in the paper may sound familiar while carrying technical distinctions that matter for the framework. The glossary therefore defines the key concepts used throughout the whitepaper.
12.3.2 Core terms
12.3.2.1 Liquid Staking Derivative (LSD)
A tokenised claim linked to staked underlying capital and designed to preserve transferability or broader capital usability while retaining staking-linked economic exposure.
12.3.2.2 Base asset
The underlying native asset from which the staking exposure originates. In many early LSDx applications this is ETH, but the framework is not conceptually restricted to ETH.
12.3.2.3 Direct LSD
An LSD whose economic exposure is relatively directly linked to a staking protocol’s issuance and reward mechanics, without an additional wrapper layer dominating interpretation.
12.3.2.4 Wrapped LSD
A tokenised wrapper around another staking-linked token. Wrapped forms may preserve economic exposure while changing accounting mechanics, transferability, integration behaviour, or yield representation.
12.3.2.5 Exchange-linked staking token
A staking-linked token originating through an exchange or exchange-controlled system rather than a fully protocol-native staking layer.
12.3.2.6 Redemption-based value
The value implied by the economic claim that could in principle be realised through the token’s redemption, withdrawal, or economically equivalent conversion pathway, subject to practical friction.
12.3.2.7 Market price
The observed trading price of the token in the relevant market or reference venue.
12.3.2.8 Model fair value
The estimated value generated by LSDx after combining redemption anchor, expected carry, liquidity adjustment, structural risk adjustment, and any convenience premium logic.
12.3.2.9 Liquidity discount
The reduction in effective economic attractiveness associated with weaker tradeability, limited depth, high slippage, venue concentration, or stress fragility.
12.3.2.10 Exit friction
The collection of practical barriers that reduce the ease or certainty of converting a token into economically realised underlying value.
12.3.2.11 Adjusted yield
A yield measure in which nominal net staking return is corrected for relevant liquidity, structural, or exit-related penalties.
12.3.2.12 Composite risk score
A summary risk metric produced by aggregating multiple factor dimensions into a single headline number or class.
12.3.2.13 Suitability score
A use-case-specific score indicating how well a given token fits a defined purpose such as treasury reserve holding, collateral use, or long-horizon passive exposure.
12.3.2.14 Regime classification
A categorisation of the token’s current market-analytical state, for example normal, watch, stress, dislocation, or recovery.
12.3.2.15 Coverage status
An annotation indicating how complete and reliable the available analytical inputs are for a given token and evaluation time.
12.3.2.16 Methodology version
The specific analytical model version under which a score, valuation estimate, or output record was produced.
12.3.3 Additional implementation terms
12.3.3.1 Canonical token object
The standard internal representation into which all tokens are translated before feature construction and scoring.
12.3.3.2 Factor engine
The part of the analytical pipeline responsible for constructing component-level features and risk factors from normalised token data.
12.3.3.3 Analytical store
The structured storage layer that preserves model outputs, associated metadata, methodology versions, and coverage states.
12.3.3.4 Confidence annotation
A signal or metadata field indicating how robust or complete the analytical output is, given the available data and methodology support.
12.3.3.5 Stress liquidity
The expected quality of liquidity under adverse or abnormal conditions rather than ordinary market conditions.
12.3.3.6 Relative-value indicator
A metric capturing the divergence between observed market price and model fair value, typically interpreted together with structural and liquidity context.
12.4 Risk Factor Descriptions
12.4.1 Purpose of the risk-factor section
The main body of the whitepaper defined the existence of a multi-dimensional risk architecture. This section expands those factors in a more granular manner so that the framework becomes easier to implement, refine, and audit.
The factor system in LSDx is designed to satisfy four conditions:
- interpretability,
- modularity,
- extensibility,
- and use-case relevance.
Each factor should therefore be measurable, explainable, and separable from the rest, even when a later composite score is formed.
12.4.2 Peg and market dislocation risk
This factor captures the risk that the token trades materially away from its model anchor or expected economic value.
12.4.2.1 Why it matters
A token may still be economically linked to staked underlying value and yet display significant market discount or unstable premium behaviour. Such dislocation can affect collateral usability, risk-adjusted attractiveness, and treasury confidence.
12.4.2.2 Possible sub-components
- deviation from redemption-based anchor,
- deviation from model fair value,
- realised discount volatility,
- maximum observed drawdown from anchor,
- time spent outside tolerance bands,
- speed of reversion after stress.
12.4.2.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| peg_dev_mean | Average deviation from anchor over window |
| peg_dev_vol | Volatility of deviation |
| peg_tail_event | Tail dislocation metric |
| peg_reversion_speed | Mean-reversion characteristic |
| peg_stress_flag | Binary or ordinal stress indicator |
12.4.3 Liquidity risk
This factor captures the risk that the token cannot be traded or unwound efficiently at meaningful size.
12.4.3.1 Why it matters
Liquidity affects both immediate usability and stress resilience. It matters for treasury policy, collateral frameworks, and structured strategy design.
12.4.3.2 Possible sub-components
- quoted or effective depth,
- expected slippage by trade size,
- concentration of liquidity across venues,
- persistence of volume,
- stress-fragility estimate,
- directional imbalance sensitivity.
12.4.3.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| liq_depth_small | Depth for small representative size |
| liq_depth_large | Depth for larger size |
| liq_slippage_1 | Slippage estimate at size 1 |
| liq_slippage_2 | Slippage estimate at size 2 |
| liq_venue_conc | Venue concentration index |
| liq_stress_score | Stress-liquidity quality estimate |
12.4.4 Redemption and exit risk
This factor captures the uncertainty and practical friction associated with converting the token into realised underlying value.
12.4.4.1 Why it matters
Formal redeemability is not identical to practical exit quality. Queue conditions, operational overhead, time dependence, and reliance on secondary market mechanisms can all matter.
12.4.4.2 Possible sub-components
- directness of redemption path,
- expected queue friction,
- number of viable exit routes,
- dependence on secondary-market exit,
- operational complexity,
- regime sensitivity of exit conditions.
12.4.4.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| exit_directness | Direct versus indirect path score |
| exit_queue_est | Estimated queue burden |
| exit_route_count | Number of credible exit paths |
| exit_market_dependence | Degree of reliance on trading exit |
| exit_complexity | Operational complexity assessment |
12.4.5 Validator and staking-layer risk
This factor captures the resilience and concentration characteristics of the staking substrate supporting the token.
12.4.5.1 Why it matters
An LSD inherits part of its quality from the quality of the staking environment behind it. Validator concentration, operational fragility, and performance inconsistency can all reduce confidence in the token.
12.4.5.2 Possible sub-components
- validator concentration,
- operator diversity,
- staking performance stability,
- slashing exposure,
- operational quality,
- delegation structure.
12.4.5.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| val_concentration | Concentration measure across validator set |
| val_operator_div | Diversity indicator |
| val_performance_stability | Stability of staking performance |
| val_slashing_exposure | Exposure proxy or category |
| val_operational_quality | Operational robustness score |
12.4.6 Protocol and smart-contract risk
This factor captures risks introduced by the technical and contractual implementation of the LSD protocol.
12.4.6.1 Why it matters
An LSD is not only an economic design. It is also a deployed technical system. Complexity, upgradability, dependency structure, and implementation quality matter.
12.4.6.2 Possible sub-components
- contract complexity,
- upgradeability,
- audit maturity,
- dependency on external modules,
- admin authority structure,
- emergency mechanism design.
12.4.6.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| prot_complexity | Complexity proxy |
| prot_upgradeability | Degree of mutable logic |
| prot_audit_maturity | Audit coverage maturity indicator |
| prot_dependency_count | Number of material dependencies |
| prot_admin_surface | Administrative control exposure |
12.4.7 Governance and structural risk
This factor captures risk associated with control, concentration, discretion, and structural power within the protocol ecosystem.
12.4.7.1 Why it matters
Governance quality can influence token resilience, policy stability, and user confidence. Highly concentrated governance can create abrupt policy changes or asymmetric control risk.
12.4.7.2 Possible sub-components
- governance concentration,
- voting power asymmetry,
- emergency authority concentration,
- policy-change sensitivity,
- off-chain coordination dependence,
- structural centralisation.
12.4.7.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| gov_concentration | Governance concentration metric |
| gov_privilege_asym | Privilege asymmetry score |
| gov_emergency_control | Emergency control intensity |
| gov_policy_sensitivity | Sensitivity to governance action |
| gov_centralisation_class | Structural centralisation category |
12.4.8 Composability and contagion risk
This factor captures the extent to which a token’s use across the broader DeFi system may become a source of fragility.
12.4.8.1 Why it matters
Broad integration is often a strength, but it can also amplify systemic exposure. The more widely a token is used in collateral loops, LP systems, lending structures, or recursive strategies, the more sensitive it may become to ecosystem-wide stress.
12.4.8.2 Possible sub-components
- collateral-system exposure,
- leverage-loop exposure,
- integration concentration,
- dependence on a few key downstream protocols,
- recursive use intensity,
- correlation to system-wide risk events.
12.4.8.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| comp_collateral_exposure | Use as collateral across protocols |
| comp_recursive_intensity | Recursive usage indicator |
| comp_integration_conc | Concentration across downstream integrations |
| comp_systemic_beta | Sensitivity to broader system stress |
| comp_dependency_graph_score | Dependency-network fragility measure |
12.4.9 Model uncertainty factor
It is useful to define an explicit model-uncertainty overlay rather than pretending that all supported tokens can be scored with equal confidence.
12.4.9.1 Why it matters
Outputs based on sparse or unstable data should not appear as reliable as outputs based on mature coverage.
12.4.9.2 Possible sub-components
- data completeness,
- source disagreement,
- freshness weakness,
- structural novelty,
- unsupported wrapper complexity,
- methodology immaturity for the token class.
12.4.9.3 Illustrative implementation variables
| Variable | Description |
|---|---|
| mu_data_completeness | Completeness score |
| mu_source_disagreement | Degree of cross-source discrepancy |
| mu_freshness_risk | Staleness-related uncertainty |
| mu_structure_novelty | Novel structure penalty |
| mu_method_gap | Methodology support weakness |
12.4.10 Composite risk construction
The factor architecture may be summarised through the vector:
\[ \mathbf{R}_{i,t} = \left( R_{i,t}^{peg}, R_{i,t}^{liq}, R_{i,t}^{exit}, R_{i,t}^{val}, R_{i,t}^{prot}, R_{i,t}^{gov}, R_{i,t}^{comp}, R_{i,t}^{mu} \right) \]
where \(R_{i,t}^{mu}\) is the model-uncertainty component.
A general composite form may be written as:
\[ R_{i,t}^{total} = \sum_{k=1}^{K} w_k \, \phi_k\!\left(R_{i,t}^{(k)}\right) \]
with:
- \(w_k\) denoting use-case-specific or baseline weights,
- \(\phi_k(\cdot)\) denoting optional non-linear transformations or bucket mappings.
The framework should preserve the factor decomposition even when producing the composite output.
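The composite form above can be sketched directly in code. The following Python fragment is an illustration only: the factor values, equal weights, and the convex transform applied to the model-uncertainty component are placeholder assumptions, not calibrated LSDx parameters. Note that it returns the per-factor contributions alongside the total, preserving the decomposition as the text requires.

```python
def composite_risk(factors, weights, transforms=None):
    """
    Composite risk R_total = sum_k w_k * phi_k(R_k).
    `factors` and `weights` are dicts keyed by factor name; `transforms`
    optionally maps a factor name to a non-linear phi_k (identity otherwise).
    Returns the headline total together with the per-factor contributions.
    """
    transforms = transforms or {}
    contributions = {
        k: weights[k] * transforms.get(k, lambda x: x)(v)
        for k, v in factors.items()
    }
    return sum(contributions.values()), contributions

# Illustrative factor values in [0, 1]; weights and the mu transform
# are placeholders, not calibrated values.
factors = {"peg": 0.2, "liq": 0.4, "exit": 0.3, "val": 0.1,
           "prot": 0.25, "gov": 0.15, "comp": 0.35, "mu": 0.5}
weights = {k: 0.125 for k in factors}    # equal baseline weights
transforms = {"mu": lambda x: x ** 2}    # example convex uncertainty penalty

total, parts = composite_risk(factors, weights, transforms)
```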
12.5 Equations and Yield Formulas
12.5.1 Purpose of the equation section
This section collects the principal formulas used throughout the LSDx framework in one place. The equations are not intended to imply that every term is directly observable or finally calibrated. Rather, they define the mathematical structure of the framework and make the modelling logic explicit.
12.5.2 Normalised instrument state
For token \(i\) at time \(t\), define the normalised instrument state:
\[ \mathcal{S}_{i,t} = \Big( P_{i,t}^{mkt}, V_{i,t}^{red}, Y_{i,t}^{gross}, Y_{i,t}^{net}, F_{i,t}^{prot}, L_{i,t}, E_{i,t}^{exit}, Q_{i,t}^{val}, G_{i,t}, C_{i,t} \Big) \]
where:
- \(P_{i,t}^{mkt}\) is observed market price,
- \(V_{i,t}^{red}\) is redemption-based value,
- \(Y_{i,t}^{gross}\) is gross staking-linked yield,
- \(Y_{i,t}^{net}\) is net yield after protocol fee treatment,
- \(F_{i,t}^{prot}\) is the protocol fee component,
- \(L_{i,t}\) is liquidity state,
- \(E_{i,t}^{exit}\) is exit-friction state,
- \(Q_{i,t}^{val}\) is validator-quality state,
- \(G_{i,t}\) is governance/structural state,
- \(C_{i,t}\) is composability or strategic utility state.
12.5.3 Gross and net yield relationship
A simple representation of net yield is:
\[ Y_{i,t}^{net} = Y_{i,t}^{gross} - F_{i,t}^{prot} \]
where \(F_{i,t}^{prot}\) may be represented either as an absolute annualised deduction or as an equivalent netting transformation.
If the fee is charged proportionally, an alternative expression may be written as:
\[ Y_{i,t}^{net} = Y_{i,t}^{gross}\,(1 - f_{i,t}) \]
where \(f_{i,t}\) is the proportional fee rate.
12.5.4 Total return over a horizon
For horizon \(h\), the total-return representation may be defined as:
\[ TR_{i,t \to t+h} = \frac{P_{i,t+h}^{mkt} - P_{i,t}^{mkt} + A_{i,t \to t+h}}{P_{i,t}^{mkt}} \]
where \(A_{i,t \to t+h}\) denotes accumulated economic accrual over the horizon, including rebasing or exchange-rate appreciation translated into comparable form.
This allows rebasing and non-rebasing tokens to be compared in a common economic language.
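A short sketch shows the common-language point: a rebasing token whose accrual is paid in extra units and an exchange-rate token whose accrual is embedded in price produce the same total return once the accrual term \(A\) is expressed in price units. The numbers are illustrative.

```python
def total_return(p_start: float, p_end: float, accrual: float) -> float:
    """
    TR over a horizon: (P_end - P_start + A) / P_start, where A is
    accumulated economic accrual translated into price units.
    """
    return (p_end - p_start + accrual) / p_start

# Rebasing token: price flat, accrual delivered as extra units worth 0.03.
tr_rebasing = total_return(1.00, 1.00, 0.03)
# Exchange-rate token: the same accrual embedded in price appreciation.
tr_exchange = total_return(1.00, 1.03, 0.00)
```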
12.5.5 Baseline fair value decomposition
A high-level fair-value decomposition can be written as:
\[ FV_{i,t} = U_{i,t} + A_{i,t} - D_{i,t} + \Pi_{i,t}^{conv} \]
where:
- \(U_{i,t}\) is underlying economic claim value,
- \(A_{i,t}\) is accrued or expected staking-related economic benefit,
- \(D_{i,t}\) is total discount for risk and friction,
- \(\Pi_{i,t}^{conv}\) is convenience or strategic premium.
12.5.6 Fair value midpoint formulation
A more implementation-oriented midpoint fair-value expression is:
\[ FV_{i,t}^{mid} = V_{i,t}^{red} + Carry_{i,t}^{(h)} - Disc_{i,t}^{risk} - Disc_{i,t}^{liq} - Disc_{i,t}^{exit} + Prem_{i,t}^{conv} \]
where:
- \(V_{i,t}^{red}\) is the redemption-based anchor,
- \(Carry_{i,t}^{(h)}\) is expected horizon-specific net carry,
- \(Disc_{i,t}^{risk}\) is structural risk discount,
- \(Disc_{i,t}^{liq}\) is liquidity discount,
- \(Disc_{i,t}^{exit}\) is exit-friction discount,
- \(Prem_{i,t}^{conv}\) is convenience premium.
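The midpoint expression is additive, so a direct sketch suffices. All input magnitudes below are illustrative placeholders, not calibrated discounts.

```python
def fair_value_mid(v_red: float, carry: float,
                   disc_risk: float, disc_liq: float, disc_exit: float,
                   prem_conv: float = 0.0) -> float:
    """FV_mid = V_red + Carry - Disc_risk - Disc_liq - Disc_exit + Prem_conv."""
    return v_red + carry - disc_risk - disc_liq - disc_exit + prem_conv

# Illustrative values, expressed in units of the base asset.
fv = fair_value_mid(v_red=1.000, carry=0.010,
                    disc_risk=0.004, disc_liq=0.003, disc_exit=0.002,
                    prem_conv=0.001)
```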
12.5.7 Fair value range
To avoid false precision, LSDx may represent fair value as a range:
\[ FV_{i,t}^{low} \leq FV_{i,t}^{mid} \leq FV_{i,t}^{high} \]
The width of the range may depend on model uncertainty, data weakness, market instability, or sensitivity of the token structure.
12.5.8 Relative-value indicator
The basic relative-value or mispricing indicator is:
\[ \Delta_{i,t} = \frac{P_{i,t}^{mkt} - FV_{i,t}^{mid}}{FV_{i,t}^{mid}} \]
Interpretation:
- \(\Delta_{i,t} > 0\): market trades above model midpoint,
- \(\Delta_{i,t} < 0\): market trades below model midpoint.
A bounded or z-score style variant may be used later if historical calibration becomes more mature.
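A minimal implementation of the basic indicator, with illustrative inputs:

```python
def relative_value(p_mkt: float, fv_mid: float) -> float:
    """Delta = (P_mkt - FV_mid) / FV_mid; positive means the market trades above model."""
    if fv_mid <= 0:
        raise ValueError("fair value midpoint must be positive")
    return (p_mkt - fv_mid) / fv_mid

# e.g. a token trading at a 1.5% discount to the model midpoint
delta = relative_value(p_mkt=0.985, fv_mid=1.000)
```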
12.5.9 Adjusted yield formula
A baseline adjusted-yield expression is:
\[ AY_{i,t}^{(h)} = Y_{i,t}^{net} - \Lambda_{i,t}^{risk} - \Lambda_{i,t}^{liq} - \Lambda_{i,t}^{exit} \]
where:
- \(Y_{i,t}^{net}\) is net yield,
- \(\Lambda_{i,t}^{risk}\) is the structural-risk penalty,
- \(\Lambda_{i,t}^{liq}\) is the liquidity penalty,
- \(\Lambda_{i,t}^{exit}\) is the exit-friction penalty.
An expanded form may also include model uncertainty:
\[ AY_{i,t}^{(h)} = Y_{i,t}^{net} - \Lambda_{i,t}^{risk} - \Lambda_{i,t}^{liq} - \Lambda_{i,t}^{exit} - \Lambda_{i,t}^{mu} \]
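Both the baseline and expanded forms reduce to subtraction of penalty terms, which the following sketch captures. The penalty magnitudes are illustrative assumptions.

```python
def adjusted_yield(y_net: float, pen_risk: float, pen_liq: float,
                   pen_exit: float, pen_mu: float = 0.0) -> float:
    """
    AY = Y_net minus structural-risk, liquidity, exit-friction and
    (optionally) model-uncertainty penalties.
    """
    return y_net - pen_risk - pen_liq - pen_exit - pen_mu

ay = adjusted_yield(0.036, 0.004, 0.003, 0.002)            # baseline form
ay_mu = adjusted_yield(0.036, 0.004, 0.003, 0.002, 0.005)  # with uncertainty penalty
```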
12.5.10 Carry over horizon
A simple horizon-scaled carry approximation may be written as:
\[ Carry_{i,t}^{(h)} = Y_{i,t}^{net} \cdot \frac{h}{H} \]
where:
- \(h\) is the evaluation horizon,
- \(H\) is the annualisation base.
This is deliberately simple and may later be replaced with more refined compounding or scenario-aware structures.
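In code, the linear scaling is a one-liner; the 365-day annualisation base and 90-day horizon below are illustrative choices, not fixed framework parameters.

```python
def carry_over_horizon(y_net: float, h_days: float,
                       annual_base: float = 365.0) -> float:
    """Linear horizon scaling of net yield: Carry = Y_net * h / H."""
    if h_days < 0 or annual_base <= 0:
        raise ValueError("horizon must be non-negative and base positive")
    return y_net * h_days / annual_base

carry_90d = carry_over_horizon(0.036, 90)   # 90-day carry on a 3.6% net yield
```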
12.5.11 Yield-efficiency metric
A compact yield-efficiency representation may be written as:
\[ Eff_{i,t} = \frac{AY_{i,t}^{(h)}}{R_{i,t}^{total} + \varepsilon} \]
where \(\varepsilon > 0\) is a stabilising constant.
This metric is not intended to replace decomposition. It is a compact comparative aid.
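A direct sketch of the ratio, with the stabilising constant as an explicit parameter; input values are illustrative.

```python
def yield_efficiency(adj_yield: float, total_risk: float,
                     eps: float = 1e-6) -> float:
    """Eff = AY / (R_total + eps); eps > 0 keeps the ratio stable near zero risk."""
    if total_risk < 0 or eps <= 0:
        raise ValueError("total risk must be non-negative and eps positive")
    return adj_yield / (total_risk + eps)

eff = yield_efficiency(adj_yield=0.027, total_risk=0.25)
```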
12.5.12 Liquidity-quality function
Liquidity quality may be expressed as a score or transformed index:
\[ LQ_{i,t} = \psi\!\left( Depth_{i,t}, Slip_{i,t}, VenueDiv_{i,t}, VolPersist_{i,t}, Fragility_{i,t} \right) \]
where \(\psi(\cdot)\) is a chosen mapping from raw liquidity features to a normalised liquidity-quality value.
In a more explicit weighted form:
\[ LQ_{i,t} = \alpha_1 D_{i,t}^{depth} - \alpha_2 S_{i,t}^{slip} + \alpha_3 V_{i,t}^{div} + \alpha_4 P_{i,t}^{persist} - \alpha_5 F_{i,t}^{frag} \]
after suitable normalisation of the component terms.
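The explicit weighted form can be sketched as follows. The alpha weights and component values are placeholders assumed normalised to [0, 1]; in production they would come from calibration, not from this illustration.

```python
def liquidity_quality(depth, slip, venue_div, vol_persist, fragility,
                      alphas=(0.3, 0.2, 0.2, 0.2, 0.1)):
    """
    Weighted form LQ = a1*D - a2*S + a3*V + a4*P - a5*F.
    Slippage and fragility enter with negative sign: they reduce quality.
    All inputs are assumed pre-normalised to [0, 1].
    """
    a1, a2, a3, a4, a5 = alphas
    return (a1 * depth - a2 * slip + a3 * venue_div
            + a4 * vol_persist - a5 * fragility)

lq = liquidity_quality(depth=0.8, slip=0.1, venue_div=0.6,
                       vol_persist=0.7, fragility=0.2)
```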
12.5.13 Treasury suitability score
For treasury use cases, an illustrative form is:
\[ TS_i = \omega_1 LQ_i + \omega_2 SR_i + \omega_3 RC_i + \omega_4 AY_i^{norm} + \omega_5 IG_i \]
where:
- \(LQ_i\) is liquidity quality,
- \(SR_i\) is structural robustness,
- \(RC_i\) is redemption confidence,
- \(AY_i^{norm}\) is normalised adjusted yield,
- \(IG_i\) is integration strength.
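A sketch of the linear treasury score follows. The omega weights and component values are illustrative placeholders; a real deployment would set them by treasury policy.

```python
def treasury_suitability(lq, sr, rc, ay_norm, ig,
                         omegas=(0.25, 0.25, 0.2, 0.2, 0.1)):
    """
    TS = w1*LQ + w2*SR + w3*RC + w4*AY_norm + w5*IG.
    Components are assumed normalised to [0, 1], so TS also lies in [0, 1]
    when the weights sum to one.
    """
    components = (lq, sr, rc, ay_norm, ig)
    return sum(w * c for w, c in zip(omegas, components))

ts = treasury_suitability(lq=0.46, sr=0.7, rc=0.8, ay_norm=0.5, ig=0.6)
```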
12.5.14 General use-case suitability score
For use case \(u\), a more general suitability mapping is:
\[ SS_{i,t}^{(u)} = f_u\Big( FV_{i,t}, AY_{i,t}^{(h)}, \mathbf{R}_{i,t}, LQ_{i,t}, C_{i,t}, M_{i,t} \Big) \]
where:
- \(f_u(\cdot)\) is a use-case-specific mapping,
- \(M_{i,t}\) may capture methodology confidence or coverage.
This makes clear that suitability is context-dependent rather than universal.
12.5.15 Confidence-adjusted output logic
To avoid overclaiming confidence, an output may be scaled or annotated through a confidence term \(Conf_{i,t}\in[0,1]\). One possible schematic form is:
\[ Score_{i,t}^{adj} = Conf_{i,t} \cdot Score_{i,t}^{raw} \]
This is only one design option. In many cases it may be preferable to keep the raw score and display confidence separately rather than mechanically multiplying them.
12.5.16 Regime classification function
A general regime function may be expressed as:
\[ \Gamma_{i,t} = g\Big( \Delta_{i,t}, \mathbf{R}_{i,t}, LQ_{i,t}, Trend_{i,t}, Conf_{i,t} \Big) \]
where \(\Gamma_{i,t}\) takes values in a finite regime set such as:
\[ \Gamma_{i,t} \in \{\text{Normal},\ \text{Watch},\ \text{Stress},\ \text{Dislocation},\ \text{Recovery}\} \]
In early implementations, \(g(\cdot)\) may be rule-based rather than fully statistical.
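A rule-based \(g(\cdot)\) of the kind mentioned above might look like the following sketch. All thresholds are illustrative assumptions, and the Recovery state is omitted because it requires trend history rather than a single-snapshot evaluation.

```python
def classify_regime(delta: float, total_risk: float,
                    lq: float, conf: float) -> str:
    """
    Minimal rule-based sketch of the regime function g(.).
    Inputs: mispricing delta, composite risk, liquidity quality, confidence,
    all assumed normalised; thresholds are illustrative, not calibrated.
    """
    if conf < 0.3:
        return "Watch"                 # too little confidence to call Normal
    if abs(delta) > 0.05 and lq < 0.3:
        return "Dislocation"           # large mispricing with thin liquidity
    if abs(delta) > 0.05 or total_risk > 0.7:
        return "Stress"
    if abs(delta) > 0.02 or total_risk > 0.5:
        return "Watch"
    return "Normal"

state = classify_regime(delta=-0.08, total_risk=0.4, lq=0.2, conf=0.9)
```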
12.5.17 Historical state persistence
For stored analytical history, each record may be represented as:
\[ \mathcal{H}_{i,t} = \left( t,\, \text{token}_i,\, \text{methodology\_version},\, FV_{i,t}^{mid},\, AY_{i,t}^{(h)},\, \mathbf{R}_{i,t},\, LQ_{i,t},\, \Gamma_{i,t},\, Conf_{i,t} \right) \]
This makes explicit that the platform should preserve not only the score but the full analytical context of the score.
12.6 Closing Note on the Appendix
This appendix has provided the technical reference layer of the LSDx whitepaper. It has outlined a working token taxonomy, defined the principal terms used throughout the framework, expanded the risk-factor architecture, and collected the main equations that structure the analytical logic of the system.
The appendix is not meant to imply that every parameter, factor weight, or functional form is already final. Rather, it establishes the formal scaffolding within which the framework can be implemented, calibrated, extended, and governed. In that sense, it is both a reference document and a bridge to the next stage of LSDx development.