5 Core Analytical Framework
5.1 Purpose of the Framework
The purpose of the LSDx analytical framework is to convert a heterogeneous universe of Liquid Staking Derivatives into a consistent set of quantitative objects that can be compared, ranked, valued, monitored, and integrated into decision systems.
This chapter is the methodological core of the whitepaper. It formalises the logic behind LSDx and defines the analytical pipeline that transforms raw protocol and market information into structured outputs. These outputs include fair value estimates, risk decompositions, adjusted yield metrics, liquidity diagnostics, relative-value indicators, and strategy suitability signals.
The central problem is that LSDs are often treated as if they were simple yield wrappers around the same base asset. This is analytically insufficient. An LSD is not only a claim on staked capital. It is a financial object with protocol-dependent mechanics, embedded frictions, liquidity characteristics, market dislocations, governance exposure, and composability effects. Even when two LSDs are linked to the same native asset, they need not exhibit the same economic quality.
The LSDx framework is built on five principles:
1. Normalisation before comparison. LSDs with different mechanics must first be translated into a common analytical language before comparison becomes meaningful.
2. Decomposition before ranking. No composite score should exist without underlying components. If one token ranks above another, the reason should be analytically traceable.
3. Fair value is model-based, not observed. Market price is a fact. Fair value is an estimate. The framework must keep these distinct.
4. Yield is not sufficient without risk and liquidity adjustment. A higher nominal yield may be less attractive after accounting for liquidity fragility, validator concentration, redemption uncertainty, or peg instability.
5. The framework must be extensible. The first version of LSDx should be rigorous but modular. New factors, new chains, and new token designs must be incorporable without breaking the analytical structure.
The goal is therefore not merely to produce an attractive dashboard. The goal is to define a reusable quant framework for LSD-aware financial intelligence.
5.2 Analytical View of an LSD
5.2.1 An LSD as a Structured Claim
For the purpose of LSDx, an LSD is viewed as a structured economic claim on a staked underlying asset, modified by protocol design, subject to market trading conditions, and exposed to several layers of operational and financial risk.
At a high level, the observed market value of an LSD may be understood as the interaction of the following elements:
- value of the underlying staked base asset,
- accumulated or expected staking rewards,
- protocol fee structure,
- validator set quality and diversification,
- redemption and exit mechanics,
- secondary market liquidity,
- market demand for the token as a DeFi primitive,
- and a range of structural and tail risks.
This means the market price of an LSD is not reducible to a single simple rule. It may trade near the value of its redeemable underlying, above it, or below it, depending on convenience, scarcity, strategic utility, liquidity depth, stress conditions, and market expectations.
5.2.2 Three Distinct Notions of Value
A central distinction in LSDx is between three notions of value.
5.2.2.1 Market Price
This is the observed secondary-market trading price of the LSD relative to the base asset or a common numeraire. It is a market fact.
5.2.2.2 Redemption-Based Value
This is the value implied by the claim that can be redeemed or realised through the protocol’s native staking and withdrawal mechanics, subject to fees, queue delays, and operational constraints.
5.2.2.3 Model Fair Value
This is the value estimated by LSDx after incorporating expected carry, liquidity conditions, structural risk, token-specific convenience value, and relevant frictions.
The distinction matters because these three values may diverge materially.
A token may trade at a premium to redemption value because it offers superior liquidity or broad collateral acceptance.
A token may trade at a discount because liquidity is weak, stress is elevated, or the market demands compensation for protocol-specific risk.
A token may trade close to redemption value yet still appear unattractive after risk-adjusted comparison with alternatives.
The role of LSDx is not to deny these deviations. The role of LSDx is to interpret them systematically.
5.3 Overview of the Framework
The LSDx analytical framework can be understood as a six-layer pipeline.
5.3.1 Layer 1: Instrument Normalisation
Each LSD is translated into a common representation. Differences in rebasing, exchange-rate growth, wrapper structure, redemption design, and reward accrual are standardised so that tokens become analytically comparable.
5.3.2 Layer 2: Value Driver Extraction
The framework identifies the components that economically matter for valuation and comparison. These include expected carry, fee drag, liquidity depth, redemption delay, validator concentration, and strategic integration value.
5.3.3 Layer 3: Risk Factor Construction
Relevant risk dimensions are quantified or scored through a structured factor architecture. Examples include peg risk, liquidity risk, redemption risk, protocol risk, and composability risk.
5.3.4 Layer 4: Fair Value and Adjusted Yield Estimation
The framework combines value drivers and risk adjustments into model fair value ranges and adjusted yield metrics.
5.3.5 Layer 5: Relative-Value and Suitability Analysis
Tokens are compared across use cases such as passive holding, treasury reserve, collateral posting, or liquidity deployment.
5.3.6 Layer 6: Monitoring and Alert Logic
The framework evaluates whether a token remains in a stable regime or is entering deterioration, stress, or dislocation.
This layered approach is intentional. It prevents the common error of jumping directly from raw token metrics to a headline ranking without defining the economic logic in between.
5.4 Instrument Normalisation
5.4.1 Why Normalisation Is Necessary
LSDs may differ in fundamental mechanics:
- some are rebasing,
- some are non-rebasing and appreciate through exchange rate,
- some expose clean claim structures,
- some are wrapped forms of other staking tokens,
- some allow relatively direct redemption,
- some depend more heavily on secondary-market exit,
- some are deeply integrated into DeFi,
- some remain operationally simple but strategically limited.
These differences make raw comparison misleading. A token with visible APY is not automatically easier to evaluate than one whose value appreciation occurs through balance mechanics. A token with higher liquidity may justify a premium over one with similar staking economics but weaker market depth.
The first task of LSDx is therefore to bring all tokens into a common analytical representation.
5.4.2 Normalised Token Representation
For each LSD \(i\) at time \(t\), LSDx defines a normalised instrument state consisting of:
\[ \mathcal{S}_{i,t} = \Big( P_{i,t}^{mkt}, V_{i,t}^{red}, Y_{i,t}^{gross}, F_{i,t}^{prot}, L_{i,t}, R_{i,t}^{exit}, Q_{i,t}^{val}, G_{i,t}, C_{i,t} \Big) \]
where:
- \(P_{i,t}^{mkt}\) is the observed market price,
- \(V_{i,t}^{red}\) is redemption-based value,
- \(Y_{i,t}^{gross}\) is gross staking yield,
- \(F_{i,t}^{prot}\) is the protocol fee drag,
- \(L_{i,t}\) captures liquidity conditions,
- \(R_{i,t}^{exit}\) captures redemption and exit friction,
- \(Q_{i,t}^{val}\) summarises validator and operational quality,
- \(G_{i,t}\) captures governance and structural protocol characteristics,
- \(C_{i,t}\) captures composability and strategic integration value.
This does not yet produce a score. It produces a normalised state vector from which valuation and risk analytics can be derived.
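The normalised state can be sketched as a plain data container. Field names mirror the symbols above; the values and the schema itself are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LSDState:
    """Normalised instrument state S_{i,t} for one LSD at one timestamp.

    Field names mirror the symbols in the text; all values below are
    illustrative placeholders, not a prescribed schema.
    """
    market_price: float        # P^mkt: observed market price
    redemption_value: float    # V^red: redemption-based value
    gross_yield: float         # Y^gross: gross staking yield (annualised)
    protocol_fee: float        # F^prot: protocol fee drag (fraction of rewards)
    liquidity: float           # L: liquidity condition score
    exit_friction: float       # R^exit: redemption/exit friction score
    validator_quality: float   # Q^val: validator and operational quality
    governance: float          # G: governance/structural characteristics
    composability: float       # C: composability/integration value

# Example: a hypothetical token trading at a small discount to redemption value
state = LSDState(
    market_price=0.995, redemption_value=1.000, gross_yield=0.045,
    protocol_fee=0.10, liquidity=0.8, exit_friction=0.3,
    validator_quality=0.7, governance=0.6, composability=0.9,
)
assert state.market_price < state.redemption_value  # trades at a discount
```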
5.4.3 Rebasing and Non-Rebasing Equivalence
One practical challenge in LSD comparison is that token mechanics differ. A rebasing token reflects yield through increasing balance, while a non-rebasing token may reflect yield through exchange-rate appreciation. Economically, these can be made comparable by converting both into a total-return representation over a chosen horizon.
Let \(TR_{i,t \to t+h}\) denote the total return over horizon \(h\), computed from the change in total redeemable value per unit initially held: \[ TR_{i,t \to t+h} = \frac{B_{i,t+h} \, P_{i,t+h}^{mkt}}{B_{i,t} \, P_{i,t}^{mkt}} - 1 \] where \(B_{i,t}\) is the token balance. Under this measure, a rebasing token (growing \(B\), stable \(P^{mkt}\)) and a non-rebasing token (stable \(B\), appreciating \(P^{mkt}\)) are alternative accounting forms of the same economic object.
This matters because the framework should compare economic substance, not UI representation.
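Assuming both mechanics reduce to the change in total redeemable value per unit initially held, the equivalence can be illustrated numerically (all figures hypothetical):

```python
def total_return(balance_start, price_start, balance_end, price_end):
    """Total return over a horizon, agnostic to balance mechanics."""
    return (balance_end * price_end) / (balance_start * price_start) - 1.0

# Rebasing token: balance grows ~4% while price stays near the base asset.
tr_rebasing = total_return(100.0, 1.00, 104.0, 1.00)

# Non-rebasing token: balance fixed, exchange rate appreciates ~4%.
tr_exchange_rate = total_return(100.0, 1.00, 100.0, 1.04)

assert abs(tr_rebasing - tr_exchange_rate) < 1e-12  # same economic object
```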
5.4.4 Horizon Dependence
Some LSD characteristics matter more for some horizons than others.
A short-horizon trader may care heavily about liquidity and slippage.
A medium-horizon treasury may care more about stability and redemption confidence.
A long-horizon holder may care more about sustainable net yield and protocol durability.
The normalisation step therefore preserves horizon sensitivity. The same token may be acceptable at one horizon and weak at another.
5.5 Value Decomposition
5.5.1 Economic Components of LSD Value
LSDx decomposes the economic value of an LSD into interpretable components. At a high level, the model fair value of token \(i\) at time \(t\) may be written as:
\[ FV_{i,t} = U_{i,t} + A_{i,t} - D_{i,t} + \Pi_{i,t}^{conv} \]
where:
- \(U_{i,t}\) is the underlying base claim value,
- \(A_{i,t}\) is accrued or expected staking-related economic benefit,
- \(D_{i,t}\) is total discount due to risk, friction, and inefficiency,
- \(\Pi_{i,t}^{conv}\) is convenience or strategic premium.
This expression is intentionally high-level. It is not meant to imply that all terms are directly observable. It is meant to state the economic structure of the framework.
5.5.2 Underlying Base Claim
The first component is the value of the underlying staked asset claim. This is the anchor from which the token derives its economic relevance. Without this anchor, the LSD would be a pure synthetic. With it, the token inherits a base layer of value tied to the native asset and the staking system.
In many cases, this component is close to one unit of the base asset adjusted by exchange-rate mechanics. However, this should not be assumed blindly. Wrapped structures, fee accumulation, claim dilution, or redemption asymmetry may matter.
5.5.3 Accrual and Carry Component
The second component is the value contribution from expected staking accrual. This includes:
- validator rewards,
- protocol distribution mechanics,
- expected net accrual over the relevant horizon,
- and any economically equivalent exchange-rate appreciation.
This component should be measured net of protocol fee structure wherever possible, or at least decomposed into gross and net versions so that users understand where yield originates and where it is lost.
5.5.4 Discount Components
The discount term is central. It reflects the fact that not all accrued yield translates into fair value on a one-for-one basis. Economic frictions and risks matter. These may include:
- liquidity discount,
- redemption delay discount,
- structural protocol risk discount,
- governance discount,
- validator concentration discount,
- smart-contract or operational fragility discount,
- model uncertainty adjustment.
The point is not to force every token into a deterministic formula. The point is to acknowledge explicitly that nominal yield must be discounted when the structure embedding that yield is fragile or constrained.
5.6 Risk Factor Architecture
5.6.1 Why a Factor Architecture Is Needed
Risk in LSDs is multi-dimensional. It is not well captured by a single scalar notion of “riskiness.” A token may be strong in validator diversification but weak in liquidity. Another may be liquid but governance-sensitive. Another may offer good net yield but exhibit unstable discount behaviour during stress.
For this reason, LSDx adopts a factor architecture. Each LSD is evaluated across several structured dimensions. Composite scores are then built on top of these components, not instead of them.
5.6.2 Core Risk Dimensions
The first version of LSDx should include at least the following core risk dimensions.
5.6.2.1 Peg and Market Dislocation Risk
This captures the tendency of the token to deviate from its reference value or model value. Persistent or abrupt discounts may indicate liquidity fragility, exit constraints, or stress amplification.
Indicative sub-factors may include:
- historical discount volatility,
- maximum observed dislocation,
- speed of mean reversion,
- stress sensitivity during market events,
- divergence from redemption-implied value.
5.6.2.2 Liquidity Risk
This captures how easily the token can be traded or unwound without material price impact.
Indicative sub-factors may include:
- on-chain depth,
- centralised exchange support where relevant,
- slippage for representative trade sizes,
- liquidity concentration across venues,
- volume persistence,
- fragility under volatility spikes.
5.6.2.3 Redemption and Exit Risk
This captures the uncertainty and friction involved in converting the token into its underlying economic value through the protocol or equivalent pathways.
Indicative sub-factors may include:
- queue dependence,
- expected waiting time,
- operational complexity,
- exit-route diversity,
- reliance on secondary market exit,
- mismatch between nominal and practical redeemability.
5.6.2.4 Validator and Staking Risk
This captures the quality and resilience of the staking layer supporting the token.
Indicative sub-factors may include:
- validator concentration,
- slashing exposure,
- node-operator diversity,
- performance stability,
- reliance on delegated intermediaries,
- operational robustness.
5.6.2.5 Protocol and Smart Contract Risk
This captures risks associated with the protocol implementation and broader dependency structure.
Indicative sub-factors may include:
- contract complexity,
- upgradeability,
- dependency on external modules,
- audit maturity,
- governance attack surface,
- emergency mechanism design.
5.6.2.6 Governance and Structural Risk
This captures the extent to which key outcomes depend on governance quality, centralised discretion, or structural concentration.
Indicative sub-factors may include:
- governance concentration,
- privilege asymmetry,
- policy-change sensitivity,
- custody or control assumptions,
- dependency on a small decision-making set.
5.6.2.7 Composability and Contagion Risk
This captures the token’s exposure to the broader DeFi stack. Broad integration can be a strength, but it can also become a transmission channel for systemic stress.
Indicative sub-factors may include:
- use as collateral across protocols,
- dependence on leveraged strategies,
- recursive exposure loops,
- concentration in a few major integrations,
- correlation with external protocol failure modes.
5.6.3 Factor Vector
For token \(i\) at time \(t\), let the risk factor vector be:
\[ \mathbf{R}_{i,t} = \Big( R_{i,t}^{peg}, R_{i,t}^{liq}, R_{i,t}^{exit}, R_{i,t}^{val}, R_{i,t}^{prot}, R_{i,t}^{gov}, R_{i,t}^{comp} \Big) \]
Each component may itself be a score, percentile, model output, or composite sub-index. The important point is that the architecture remains modular and interpretable.
5.6.4 Composite Risk Score
A composite risk score can then be defined as:
\[ R_{i,t}^{comp\_tot} = \sum_{k=1}^{K} w_k \, \phi_k(R_{i,t}^{(k)}) \]
where:
- \(R_{i,t}^{(k)}\) is the \(k\)-th risk dimension,
- \(\phi_k(\cdot)\) is a normalisation or transformation function,
- \(w_k\) is the weight assigned to that dimension.
The use of \(\phi_k\) is important. Not all factors should be treated linearly. A small deterioration in liquidity may matter little until a threshold is crossed, after which risk rises sharply. Likewise, extreme validator concentration may deserve disproportionate penalty.
The composite score therefore exists, but only as the summary layer above a more important component structure.
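One way to realise the nonlinear transforms \(\phi_k\) is a penalty that is linear below a threshold and rises sharply beyond it. The functional form, thresholds, and weights below are illustrative assumptions, not calibrated values:

```python
import math

def convex_penalty(score, threshold=0.6, steepness=6.0):
    """One possible phi_k: identity below the threshold, then a sharply
    convex penalty above it (e.g. for liquidity or concentration risk)."""
    if score <= threshold:
        return score
    return threshold + (math.exp(steepness * (score - threshold)) - 1) / steepness

def composite_risk(factors, weights, transforms):
    """R^comp_tot = sum_k w_k * phi_k(R^(k))."""
    return sum(w * phi(r) for r, w, phi in zip(factors, weights, transforms))

# Hypothetical factor readings: peg, liq, exit, val, prot, gov, comp
factors    = [0.3, 0.8, 0.4, 0.7, 0.2, 0.3, 0.5]
weights    = [0.2, 0.2, 0.15, 0.15, 0.1, 0.1, 0.1]
identity   = lambda x: x
# Penalise liquidity and validator concentration nonlinearly, others linearly.
transforms = [identity, convex_penalty, identity, convex_penalty,
              identity, identity, identity]

score = composite_risk(factors, weights, transforms)
```

Note how the convex transform makes a liquidity reading of 0.8 contribute far more than a linear treatment would, matching the threshold logic described above.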
5.7 Fair Value Estimation
5.7.1 Fair Value as a Range, Not a Point Estimate
LSDx should avoid pretending that there is one exact true fair value at all times. In practice, fair value is better represented as a model-based range or central estimate with uncertainty bounds.
This is especially important in DeFi, where liquidity conditions can change rapidly and where some relevant risks are not continuously priced in a stable way.
Therefore, for token \(i\) at time \(t\), LSDx defines:
\[ FV_{i,t}^{low} \leq FV_{i,t}^{mid} \leq FV_{i,t}^{high} \]
where the width of the range reflects model uncertainty, market instability, and factor ambiguity.
5.7.2 Baseline Fair Value Structure
A practical first-version fair value formulation may be written as:
\[ FV_{i,t}^{mid} = V_{i,t}^{red} + Carry_{i,t}^{(h)} - Disc_{i,t}^{risk} - Disc_{i,t}^{liq} - Disc_{i,t}^{exit} + Prem_{i,t}^{conv} \]
for horizon \(h\), where:
- \(V_{i,t}^{red}\) is redemption-based anchor value,
- \(Carry_{i,t}^{(h)}\) is expected net carry over horizon \(h\),
- \(Disc_{i,t}^{risk}\) is structural risk discount,
- \(Disc_{i,t}^{liq}\) is liquidity-related discount,
- \(Disc_{i,t}^{exit}\) is redemption and exit-friction discount,
- \(Prem_{i,t}^{conv}\) is convenience or strategic premium.
This is not the final empirical implementation. It is the conceptual model. It tells the user what kinds of economic forces the framework considers legitimate drivers of value.
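The conceptual model can be sketched as follows; all inputs are hypothetical, expressed in units of the base asset, and the symmetric widening rule is just one of many possible ways to form the range:

```python
def fair_value_mid(red_value, carry, disc_risk, disc_liq, disc_exit, conv_premium):
    """FV^mid = V^red + Carry - Disc^risk - Disc^liq - Disc^exit + Prem^conv."""
    return red_value + carry - disc_risk - disc_liq - disc_exit + conv_premium

def fair_value_range(fv_mid, uncertainty):
    """Widen the point estimate into [FV^low, FV^high] via a symmetric
    uncertainty band (illustrative; other widening rules are possible)."""
    return fv_mid * (1 - uncertainty), fv_mid, fv_mid * (1 + uncertainty)

low, mid, high = fair_value_range(
    fair_value_mid(red_value=1.000, carry=0.012,
                   disc_risk=0.004, disc_liq=0.003, disc_exit=0.002,
                   conv_premium=0.001),
    uncertainty=0.005,
)
assert low <= mid <= high
```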
5.7.3 Mispricing Indicator
Given observed market price \(P_{i,t}^{mkt}\), the relative-value or mispricing indicator may be defined as:
\[ \Delta_{i,t} = \frac{P_{i,t}^{mkt} - FV_{i,t}^{mid}}{FV_{i,t}^{mid}} \]
Interpretation:
- \(\Delta_{i,t} > 0\): market trades above model fair value,
- \(\Delta_{i,t} < 0\): market trades below model fair value,
- \(|\Delta_{i,t}|\) large: greater divergence between market pricing and model valuation.
This indicator should never be used in isolation. A premium may be justified by strategic utility. A discount may reflect stress or real hidden weakness. Relative-value signals require interpretation through the rest of the factor system.
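A minimal computation of the indicator, with hypothetical numbers:

```python
def mispricing(market_price, fv_mid):
    """Delta_{i,t} = (P^mkt - FV^mid) / FV^mid."""
    return (market_price - fv_mid) / fv_mid

delta = mispricing(market_price=0.995, fv_mid=1.004)
assert delta < 0  # market trades below model fair value
```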
5.7.4 Horizon-Aware Fair Value
Fair value may differ by holding horizon.
For a short-horizon user, convenience and liquidity may dominate.
For a medium-horizon user, carry and liquidity both matter.
For a long-horizon user, sustainable net accrual and structural durability may dominate.
Accordingly, LSDx should allow:
\[ FV_{i,t}^{(h)} \]
to vary by horizon \(h\), rather than pretending that one universal fair value always serves all users equally well.
5.8 Adjusted Yield Framework
5.8.1 Why Adjusted Yield Matters
Headline staking APY is one of the most visible metrics in the LSD market, but it is also one of the most incomplete. A higher nominal yield is not automatically superior if it comes with weaker liquidity, poorer exit reliability, higher concentration, or elevated peg stress.
For this reason, LSDx defines an adjusted yield framework that aims to capture the economically relevant portion of reward after accounting for meaningful frictions and risks.
5.8.2 Baseline Adjusted Yield
A first-order adjusted yield may be written as:
\[ AY_{i,t}^{(h)} = Y_{i,t}^{net} - \Lambda_{i,t}^{risk} - \Lambda_{i,t}^{liq} - \Lambda_{i,t}^{exit} \]
where:
- \(Y_{i,t}^{net}\) is net expected yield,
- \(\Lambda_{i,t}^{risk}\) is the annualised or horizon-scaled structural risk penalty,
- \(\Lambda_{i,t}^{liq}\) is liquidity penalty,
- \(\Lambda_{i,t}^{exit}\) is exit-friction penalty.
This formulation is deliberately conservative. It reminds the user that observed yield must be interpreted through the lens of tradability and robustness.
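The formulation can be sketched as follows; the figures are hypothetical and chosen to illustrate how a higher nominal yield can lose to a lower one after adjustment:

```python
def adjusted_yield(y_net, risk_penalty, liq_penalty, exit_penalty):
    """AY = Y^net - Lambda^risk - Lambda^liq - Lambda^exit."""
    return y_net - risk_penalty - liq_penalty - exit_penalty

# Token A: lower nominal yield, light frictions.
ay_a = adjusted_yield(y_net=0.040, risk_penalty=0.003,
                      liq_penalty=0.002, exit_penalty=0.001)
# Token B: higher nominal yield, heavy frictions.
ay_b = adjusted_yield(y_net=0.052, risk_penalty=0.010,
                      liq_penalty=0.008, exit_penalty=0.005)

assert ay_a > ay_b  # the lower nominal yield wins after adjustment
```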
5.8.3 Risk-Adjusted Yield versus Strategic Yield
Not all users want the same yield notion.
A passive holder may want a risk-adjusted carry measure.
A treasury may want a reserve-quality yield measure, where strong liquidity and durable integration are rewarded.
A leveraged strategy may want a collateral-aware effective yield that incorporates liquidation quality and margin usability.
LSDx should therefore support multiple adjusted-yield variants rather than one rigid definition.
5.8.4 Relative Yield Efficiency
For comparison across tokens, LSDx may define a relative yield efficiency metric such as:
\[ E_{i,t} = \frac{AY_{i,t}^{(h)}}{R_{i,t}^{comp\_tot} + \epsilon} \]
where \(\epsilon > 0\) avoids instability near zero. This is not meant as a universal truth metric. It is a compact comparison aid that relates adjusted yield to aggregate risk burden.
Used carefully, it helps identify tokens that are not merely high yielding, but efficient in the return-per-risk sense.
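A minimal sketch, with hypothetical inputs, showing how a lower adjusted yield on a much lower risk burden can still be the more efficient choice:

```python
def yield_efficiency(adj_yield, composite_risk, eps=1e-6):
    """E = AY / (R^comp_tot + eps); eps avoids instability near zero risk."""
    return adj_yield / (composite_risk + eps)

eff_low_risk  = yield_efficiency(adj_yield=0.034, composite_risk=0.30)
eff_high_risk = yield_efficiency(adj_yield=0.040, composite_risk=0.55)

assert eff_low_risk > eff_high_risk  # more return per unit of risk burden
```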
5.9 Liquidity Diagnostics
5.9.1 Liquidity as More Than Volume
Liquidity is often reduced to recent trading volume. That is inadequate. A token can show good historical volume and still be fragile if depth is thin, routing is concentrated, or stress quickly destroys execution quality.
LSDx therefore treats liquidity as a multi-dimensional object.
5.9.2 Liquidity Components
A robust liquidity diagnostic should include at least:
- depth at representative trade sizes,
- expected slippage,
- venue diversity,
- persistence of market activity,
- concentration of liquidity providers,
- behaviour during volatility spikes,
- and sensitivity to one-sided order flow.
5.9.3 Liquidity Score
A liquidity quality measure may be defined as:
\[ LQ_{i,t} = \psi\big( Depth_{i,t}, Slip_{i,t}, VenueDiv_{i,t}, VolPersist_{i,t}, Fragility_{i,t} \big) \]
where \(\psi(\cdot)\) is a normalised scoring map. The exact functional form can be chosen later, but the logic is clear: liquidity should be treated as a structural input, not a decorative chart.
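One possible choice of \(\psi\) is a weighted average of bounded sub-scores, with slippage and fragility entering as penalties; the weights and inputs below are illustrative only. The same map can be applied to normal and stressed inputs to distinguish \(LQ^{norm}\) from \(LQ^{stress}\):

```python
def liquidity_score(depth, slippage, venue_diversity, volume_persistence,
                    fragility, weights=(0.3, 0.25, 0.15, 0.15, 0.15)):
    """One possible psi: weighted average of sub-scores in [0, 1], with
    slippage and fragility entering as penalties (1 - x). Illustrative only."""
    components = (depth, 1.0 - slippage, venue_diversity,
                  volume_persistence, 1.0 - fragility)
    score = sum(w * c for w, c in zip(weights, components))
    return max(0.0, min(1.0, score))

# Same token, ordinary versus stressed conditions (hypothetical readings).
lq_norm = liquidity_score(depth=0.8, slippage=0.05, venue_diversity=0.7,
                          volume_persistence=0.75, fragility=0.2)
lq_stress = liquidity_score(depth=0.4, slippage=0.30, venue_diversity=0.5,
                            volume_persistence=0.4, fragility=0.6)

assert lq_stress < lq_norm  # stress liquidity is the binding constraint
```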
5.9.4 Stress Liquidity
Normal liquidity and stress liquidity are not the same. A token may behave well under ordinary conditions but fail under market-wide deleveraging or rapid risk-off dynamics.
For this reason, LSDx should ideally distinguish:
\[ LQ_{i,t}^{norm} \quad \text{and} \quad LQ_{i,t}^{stress} \]
The second is often more relevant for collateral and treasury decisions.
5.10 Strategy Suitability Layer
5.10.1 Why Suitability Matters
There is no universally best LSD. Suitability depends on the intended use.
A token well suited for passive holding may be weak collateral.
A token excellent for liquidity deployment may be suboptimal for reserve policy.
A token with strong convenience value may be too expensive for simple carry capture.
LSDx therefore introduces a suitability layer rather than a universal winner-takes-all ranking.
5.10.2 Use-Case Profiles
At minimum, the framework should support suitability views for:
- passive long-horizon holding,
- treasury reserve allocation,
- collateral usage,
- liquidity provisioning,
- basis and relative-value strategies,
- leveraged carry structures.
5.10.3 Suitability Score
For use case \(u\), define:
\[ SS_{i,t}^{(u)} = f_u\Big( FV_{i,t}, AY_{i,t}^{(h)}, \mathbf{R}_{i,t}, LQ_{i,t}, C_{i,t} \Big) \]
The function \(f_u\) depends on the use case. For collateral, liquidity and stress resilience may receive heavy weight. For long-horizon passive holding, sustainable adjusted yield and validator quality may matter more.
This is a major conceptual advantage of LSDx. It acknowledges that “best token” is not absolute. It is context-dependent.
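A minimal sketch of a use-case-dependent \(f_u\), here a weighted average over a few component scores; the use cases, weights, and token readings are illustrative assumptions:

```python
# Use-case-specific weights over normalised component scores in [0, 1].
# Both the component set and the weights are illustrative, not prescribed.
USE_CASE_WEIGHTS = {
    "collateral":      {"yield": 0.1, "liquidity": 0.4, "stress": 0.4, "validator": 0.1},
    "passive_holding": {"yield": 0.5, "liquidity": 0.1, "stress": 0.1, "validator": 0.3},
}

def suitability(metrics, use_case):
    """SS^(u) = f_u(...): here f_u is a use-case-weighted average."""
    w = USE_CASE_WEIGHTS[use_case]
    return sum(w[k] * metrics[k] for k in w)

# A hypothetical high-yield token with weak liquidity and stress resilience.
token = {"yield": 0.9, "liquidity": 0.4, "stress": 0.3, "validator": 0.8}

# The same token scores differently depending on intended use.
assert suitability(token, "passive_holding") > suitability(token, "collateral")
```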
5.11 Regime Classification and Monitoring
5.11.1 Need for Regime Awareness
LSD behaviour is not static. A token may move from stable conditions to mild stress, then to dislocation, then back to normality. Static ranking without regime awareness can mislead users precisely when they need analytics most.
The framework should therefore support regime classification.
5.11.2 Illustrative Regimes
A first-version monitoring layer may classify token state into regimes such as:
- Normal: price behaviour, liquidity, and factor readings remain within expected ranges.
- Watch: mild deterioration in one or more dimensions.
- Stress: significant dislocation, weakening liquidity, or elevated structural concern.
- Dislocation: market price diverges sharply from model range, or multiple factors deteriorate simultaneously.
- Recovery: stress indicators begin to normalise, but conditions remain under review.
5.11.3 Regime Function
Let regime be represented by:
\[ \Gamma_{i,t} = g\Big( \Delta_{i,t}, \mathbf{R}_{i,t}, LQ_{i,t}, \text{trend dynamics} \Big) \]
where \(g(\cdot)\) is a rule-based or probabilistic classifier. In early versions, a transparent rule-based approach may be preferable to a complex machine-learning model.
Interpretability matters more than sophistication at this stage.
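A transparent rule-based \(g(\cdot)\) might look like the following; every threshold is an illustrative placeholder rather than a calibrated value:

```python
def classify_regime(delta, liquidity, composite_risk, prev_regime="Normal"):
    """Rule-based g(.) mapping current readings to a regime label.
    Thresholds are illustrative and would be calibrated in practice."""
    dislocated = abs(delta) > 0.05
    stressed   = abs(delta) > 0.02 or liquidity < 0.4 or composite_risk > 0.7
    watch      = abs(delta) > 0.01 or liquidity < 0.6 or composite_risk > 0.5

    if dislocated:
        return "Dislocation"
    if stressed:
        return "Stress"
    if prev_regime in ("Stress", "Dislocation"):
        return "Recovery"  # indicators normalising but still under review
    if watch:
        return "Watch"
    return "Normal"

assert classify_regime(delta=0.001, liquidity=0.8, composite_risk=0.3) == "Normal"
assert classify_regime(delta=-0.03, liquidity=0.8, composite_risk=0.3) == "Stress"
assert classify_regime(delta=-0.08, liquidity=0.3, composite_risk=0.8) == "Dislocation"
assert classify_regime(delta=0.001, liquidity=0.8, composite_risk=0.3,
                       prev_regime="Stress") == "Recovery"
```

The `prev_regime` argument captures the path dependence noted above: a token exiting stress is labelled Recovery rather than immediately returning to Normal.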
5.11.4 Alert Logic
The regime layer can support alerts such as:
- fair-value deviation exceeds tolerance,
- liquidity deteriorates beyond threshold,
- validator concentration worsens materially,
- redemption friction rises,
- correlation to system-wide stress increases,
- composite score drops by more than a defined amount.
The importance of this layer is operational. It turns LSDx from a static scoring system into a monitoring framework.
5.12 Methodological Philosophy
5.12.1 Model Discipline over False Precision
A major design principle of LSDx is to avoid false precision. DeFi analytics often oscillate between two weak extremes: superficial dashboards with little depth, or pseudo-scientific models that hide assumptions behind excessive mathematical symbolism.
LSDx should avoid both.
The framework should be explicit enough to be credible and structured enough to be extendable, while still honest about uncertainty, calibration limits, and context dependence.
5.12.2 Hybrid Quantification
Not every factor needs to begin as a fully continuous model variable. Some dimensions may initially be measured through robust ordinal or banded scores, later refined into richer models when better data and empirical history are available.
This is not a weakness. It is a disciplined development path. A transparent, well-structured factor score is better than a fragile pseudo-exact model built on insufficient data.
5.12.3 Decomposition as Governance
The decomposition architecture is not only a modelling convenience. It is a governance mechanism. It allows the user to challenge assumptions, inspect components, revise weights, and understand why outputs change over time.
This becomes especially important if LSDx is later used in treasury or protocol settings.
5.12.4 Extensibility
The framework should remain open to extension in at least four directions:
- more refined valuation models,
- more granular liquidity and stress diagnostics,
- multi-chain and wrapped-LSD support,
- integration of probabilistic scenario analysis.
The first version does not need to solve every future problem. It needs to define a structure that can grow without conceptual inconsistency.
5.13 Summary of the Framework
The LSDx analytical framework is built to transform LSDs from loosely compared staking tokens into structured financial instruments.
It does so by:
- normalising token mechanics,
- decomposing value into interpretable drivers,
- constructing a modular risk factor architecture,
- estimating fair value as a range rather than a naive point claim,
- adjusting yield for meaningful frictions,
- diagnosing liquidity structurally,
- defining use-case-specific suitability,
- and monitoring regime changes over time.
The result is not merely a ranking engine. It is a decision framework.
This is the foundation on which the later chapters will build. The next chapters translate this framework into architecture, use cases, worked examples, limitations, and implementation pathways.