```mermaid
flowchart LR
A[Token Universe] --> B[Input Collection]
B --> C[Normalisation]
C --> D[Factor Construction]
D --> E[Fair Value & Adjusted Yield]
E --> F[Suitability / Interpretation]
```
This process is the operational expression of the LSDx framework.
## Example One: Comparative Evaluation of Three ETH-Based LSDs
### Purpose of the example
The first example considers a simple but economically important setting: three LSDs linked to the same base asset, ETH. The purpose is to show why tokens with common underlying exposure should not automatically be treated as equivalent.
Even when three tokens all represent staked ETH exposure, they may differ materially in:
* fee structure,
* market liquidity,
* DeFi integration,
* validator diversification,
* redemption confidence,
* and observed premium or discount behaviour.
The objective of the example is therefore not merely to compare tokens. It is to show how LSDx separates shared exposure from token-specific quality.
### Illustrative setup
Assume three ETH-based LSDs, denoted here as:
* Token A,
* Token B,
* Token C.
These labels are used for the methodological illustration. In practical deployment, the same framework would be applied to specific instruments such as stETH, rETH, cbETH, or other relevant tokens. The naming is kept abstract here so that the logic remains general rather than over-attached to one moment in market history.
Suppose the system collects the following broad observations for a given evaluation date:
* Token A has strong market liquidity, broad DeFi integration, moderate net yield, and relatively stable market pricing.
* Token B has somewhat higher nominal net yield, lower liquidity depth, and somewhat weaker redemption convenience.
* Token C has attractive protocol design features and decent yield, but thinner market depth and less established integration.
At the raw-data level, the comparison might still appear ambiguous. The purpose of LSDx is to remove that ambiguity by structuring the evaluation.
### Step one: normalised instrument view
The first step is to represent each token through the same analytical state. Conceptually, LSDx transforms the instruments into a comparable matrix.
| Token | Market price state | Net carry profile | Liquidity quality | Exit friction | Structural risk | Integration strength |
| ------- | ------------------ | ----------------: | ----------------: | -------------: | --------------: | -------------------: |
| Token A | Near anchor | Moderate | High | Low | Moderate | High |
| Token B | Mild discount | High | Medium | Medium | Medium | Medium |
| Token C | Slight discount | Moderate to high | Lower | Medium to high | Medium | Lower |
This table is intentionally stylised. The point is not the exact numbers. The point is that the framework does not begin from APY alone. It begins from a normalised cross-token representation.
### Step two: factor interpretation
Once the token states are normalised, the next task is factor interpretation.
#### Token A
Token A appears strong in liquidity and integration. This may justify some convenience value in the market. Even if its yield is not the highest, its reserve quality and broad utility may make it especially attractive for treasuries, collateral systems, or users who value deep secondary-market support.
#### Token B
Token B appears attractive from a carry perspective, but somewhat weaker in liquidity and exit conditions. It may be appealing for longer-horizon users who place more weight on net accrual and less weight on immediate market depth. It may be less attractive in stress-sensitive collateral contexts.
#### Token C
Token C may offer a structurally interesting profile but suffer from weaker secondary liquidity and lower ecosystem penetration. It may still be suitable in selected strategies, but the burden of proof becomes greater. Its discount could reflect undervaluation, but it could also reflect weaker strategic utility.
### Step three: illustrative adjusted yield logic
Suppose nominal annualised net yields are approximately as follows:
* Token A: 3.6%
* Token B: 4.1%
* Token C: 3.9%
Now suppose the LSDx framework applies stylised annualised penalties for liquidity and structural fragility.
For illustration only:
* Token A receives a total adjustment of 0.4%,
* Token B receives a total adjustment of 1.0%,
* Token C receives a total adjustment of 1.3%.
Then adjusted yield becomes:
\[
AY_A = 3.6\% - 0.4\% = 3.2\%
\]
\[
AY_B = 4.1\% - 1.0\% = 3.1\%
\]
\[
AY_C = 3.9\% - 1.3\% = 2.6\%
\]
This is a simple but important result: the nominal ranking and the adjusted ranking differ. Token B initially appeared strongest by carry, but after incorporating the risk and liquidity penalties, Token A becomes more competitive, or even preferable, depending on the use case.
This is exactly the type of analytical correction LSDx is designed to provide.
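The adjusted-yield arithmetic above can be reproduced in a few lines. This is a minimal sketch: the figures are the stylised values from this example, not market data, and the variable names are illustrative.

```python
# Stylised annualised figures from the example above, as decimals.
nominal_yield = {"A": 0.036, "B": 0.041, "C": 0.039}
total_penalty = {"A": 0.004, "B": 0.010, "C": 0.013}

# Adjusted yield = nominal net yield minus liquidity/structural penalties.
adjusted_yield = {t: nominal_yield[t] - total_penalty[t] for t in nominal_yield}

# Rank tokens before and after adjustment.
nominal_rank = sorted(nominal_yield, key=nominal_yield.get, reverse=True)
adjusted_rank = sorted(adjusted_yield, key=adjusted_yield.get, reverse=True)

print(nominal_rank)   # Token B leads on raw carry
print(adjusted_rank)  # Token A leads once penalties are applied
```

The ranking flip, with Token B first on nominal carry but Token A first on adjusted yield, is exactly the correction described above.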
### Step four: use-case-specific interpretation
The same three-token set may lead to different conclusions depending on purpose.
#### Treasury reserve perspective
A treasury may prefer Token A because:
* liquidity is stronger,
* integration is broader,
* and adjusted yield remains competitive after penalty.
#### Long-horizon passive holding perspective
A long-horizon holder may still consider Token B attractive if:
* they accept somewhat weaker liquidity,
* they do not expect forced exit,
* and they believe the carry premium compensates the additional risk.
#### Collateral perspective
A collateral manager may heavily penalise Token C if thin liquidity and weaker integration raise concerns about unwind quality.
The same analytical framework therefore produces differentiated but coherent outputs.
### Visual summary
```mermaid
flowchart TD
A[Three ETH-Based LSDs] --> B[Normalised Comparison]
B --> C[Adjusted Yield]
B --> D[Liquidity & Exit Review]
B --> E[Structural Risk Review]
C --> F[Treasury View]
D --> G[Collateral View]
E --> H[Passive Holder View]
```
# 8 Worked Examples and Technical Illustrations
## 8.1 Why This Chapter Matters
The previous chapters defined the vision, framework, architecture, and practical use cases of LSDx. Those chapters established what the platform is, how it is organised, and where it can be applied. This chapter serves a different purpose. It makes the framework concrete.
A quantitative whitepaper becomes more convincing when the reader can see how its logic behaves in example settings. Without such illustrations, even a well-structured framework may remain abstract. The objective here is therefore not to claim empirical finality, nor to pretend that all parameter choices are already production-calibrated. Instead, the objective is to demonstrate how the LSDx framework operates in practice.
This chapter proceeds through worked examples. These examples show how LSDx can:
- compare several LSDs referencing the same base asset,
- distinguish between nominal yield and adjusted yield,
- interpret a premium or discount relative to model fair value,
- produce use-case-specific suitability views,
- and translate raw observations into structured judgement.
The examples are illustrative by design. They are not intended to serve as definitive market calls. Their role is methodological. They show the reader how the framework reasons.
## 8.2 Structure of the Example Layer
Each example in this chapter follows the same broad logic:
- define the token universe and analytical context,
- collect the economically relevant inputs,
- normalise the instruments into comparable form,
- compute factor-level diagnostics,
- estimate fair value and adjusted yield,
- derive context-specific interpretation.
Each worked example then applies this sequence to a different decision context.
The important lesson is that common underlying exposure does not imply common economic quality.
## 8.3 Example Two: Interpreting a Market Discount
### 8.3.1 Purpose of the example
One of the most important practical questions in LSD markets is whether a token trading at discount should be viewed as cheap or dangerous.
A market discount can arise for many reasons:
- temporary liquidity imbalance,
- general market stress,
- redemption queue friction,
- reduced confidence in the protocol,
- forced selling,
- or structural deterioration.
Without a framework, the same discount can be interpreted in contradictory ways. LSDx addresses this by combining fair value logic with factor-level context.
### 8.3.2 Illustrative setup
Assume Token B trades at a market discount of 2.8% relative to its model anchor. At first glance, that may appear attractive. But the proper question is not simply “is it below anchor?” The proper question is “is the observed discount wider than what the current risk and liquidity environment justifies?”
Suppose LSDx estimates the following stylised fair value range:
\[
FV_B^{low} = 0.975, \qquad FV_B^{mid} = 0.983, \qquad FV_B^{high} = 0.989
\]
where all values are expressed relative to one unit of underlying economic reference.
Suppose observed market price is:
\[
P_B^{mkt} = 0.972
\]
Then the market is below the central estimate and slightly below even the lower bound of the illustrative range.
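A minimal sketch of this range comparison, using the stylised numbers above (the function name and return labels are illustrative, not part of any production interface):

```python
def position_vs_fair_value(price: float, low: float, mid: float, high: float) -> str:
    """Classify an observed market price against a model fair-value range."""
    if price < low:
        return "below range"   # discount wider than the modelled range
    if price > high:
        return "above range"   # premium beyond the modelled range
    return "below mid" if price < mid else "above mid"

# Stylised values from the example: the observed price sits slightly
# below even the lower bound of the illustrative fair-value range.
signal = position_vs_fair_value(0.972, low=0.975, mid=0.983, high=0.989)
print(signal)  # below range
```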
### 8.3.3 First interpretation layer
This creates an initial signal:
- the token may be undervalued,
- or the model may still be underestimating some risk not yet captured.
The point is that the discount becomes analytically interesting, not automatically attractive.
### 8.3.4 Adding context through factor diagnostics
Suppose additional diagnostics show:
- liquidity has weakened, but remains functional,
- redemption conditions are unchanged,
- no major governance deterioration is detected,
- peg volatility has risen moderately,
- and overall regime is classified as Watch, not Stress.
In that case, the framework may interpret the discount as a possible mild dislocation rather than a deep structural impairment.
By contrast, if the factor layer showed:
- sharply worsening exit conditions,
- severe liquidity thinning,
- deteriorating governance confidence,
- and regime classified as Stress,
then the same observed discount would be interpreted very differently.
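The contrast above can be condensed into a simple rule: the same below-range signal maps to different readings depending on the regime label. The decision rule below is an illustrative sketch, not the framework's actual logic:

```python
def interpret_discount(below_fair_range: bool, regime: str) -> str:
    """Combine a price-vs-range signal with a regime label (toy rule)."""
    if not below_fair_range:
        return "within modelled range"
    if regime in ("Normal", "Watch"):
        return "possible mild dislocation; monitor factor diagnostics"
    if regime == "Stress":
        return "discount likely risk-justified; not automatically attractive"
    return "further review needed"

print(interpret_discount(True, "Watch"))   # benign-dislocation reading
print(interpret_discount(True, "Stress"))  # risk-justified reading
```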
### 8.3.5 Why this matters
This is one of the strongest analytical advantages of LSDx. It prevents the user from treating all discounts as bargains and all premiums as overvaluation. It asks whether pricing deviation is consistent with the rest of the token’s state.
### 8.3.6 Visual interpretation flow
```mermaid
flowchart LR
A[Observed Discount] --> B[Compare to Fair Value Range]
B --> C[Check Liquidity State]
C --> D[Check Exit / Structural Factors]
D --> E[Regime Classification]
E --> F[Benign Dislocation]
E --> G[Stress-Justified Discount]
E --> H[Further Review Needed]
```
This turns a raw market observation into a disciplined interpretive sequence.
## 8.4 Example Three: Treasury-Oriented Token Ranking
### 8.4.1 Purpose of the example
The previous examples focused on adjusted yield and discount interpretation. This example shifts to a treasury use case. The purpose is to show how a treasury-focused ranking differs from a generic “best LSD” ranking.
Treasury reserve management places special importance on:
- liquidity reliability,
- structural durability,
- policy defensibility,
- and resilience under stress.
That means the weighting scheme should differ from that of a speculative allocator.
### 8.4.2 Illustrative treasury scoring structure
Suppose LSDx defines a treasury reserve score as a weighted combination of:
- liquidity quality,
- structural risk quality,
- redemption confidence,
- adjusted yield,
- and integration strength.
An illustrative form is:
\[
TS_i = 0.30\,LQ_i + 0.25\,SR_i + 0.20\,RC_i + 0.15\,AY_i^{norm} + 0.10\,IG_i
\]
where:
- \(LQ_i\) is liquidity quality,
- \(SR_i\) is structural robustness,
- \(RC_i\) is redemption confidence,
- \(AY_i^{norm}\) is normalised adjusted yield,
- \(IG_i\) is integration strength.
This is not a universal formula. It is an illustrative treasury-oriented weighting scheme.
### 8.4.3 Stylised comparison
Suppose three tokens score as follows on a 0 to 100 basis.
| Token | Liquidity quality | Structural robustness | Redemption confidence | Normalised adjusted yield | Integration strength | Treasury score |
|---|---|---|---|---|---|---|
| Token A | 92 | 82 | 88 | 71 | 95 | 85.9 |
| Token B | 74 | 78 | 73 | 84 | 76 | 76.5 |
| Token C | 61 | 75 | 66 | 68 | 60 | 66.5 |
The ranking now reflects treasury priorities rather than pure yield. Token A emerges strongest not because it maximises nominal carry, but because it combines liquidity, confidence, and integration in a way that supports reserve policy.
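The treasury scores can be recomputed directly from the weighted formula, with weights and factor scores as in this stylised example:

```python
# Illustrative treasury-score weights from the formula above.
WEIGHTS = {"LQ": 0.30, "SR": 0.25, "RC": 0.20, "AY": 0.15, "IG": 0.10}

# Stylised 0-100 factor scores for the three tokens.
tokens = {
    "Token A": {"LQ": 92, "SR": 82, "RC": 88, "AY": 71, "IG": 95},
    "Token B": {"LQ": 74, "SR": 78, "RC": 73, "AY": 84, "IG": 76},
    "Token C": {"LQ": 61, "SR": 75, "RC": 66, "AY": 68, "IG": 60},
}

def treasury_score(factors: dict) -> float:
    """Weighted treasury reserve score TS_i."""
    return sum(w * factors[k] for k, w in WEIGHTS.items())

scores = {name: treasury_score(f) for name, f in tokens.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # Token A leads despite not having the highest carry
```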
### 8.4.4 Analytical insight
This example demonstrates why one universal ranking is misleading. A token that leads in nominal yield may not lead in treasury suitability. That is not a contradiction. It is the entire point of use-case-aware analytics.
## 8.5 Example Four: Collateral Eligibility Thought Experiment
### 8.5.1 Purpose of the example
This example illustrates how LSDx can help a lending protocol decide whether a token should be acceptable as collateral.
Collateral use imposes stricter requirements than passive holding. The protocol must think about liquidation, not only long-run value.
### 8.5.2 Simplified logic
Assume the protocol cares primarily about:
- stress liquidity,
- peg stability,
- exit confidence,
- and protocol fragility.
An illustrative collateral suitability rule could be expressed schematically as follows:
```mermaid
flowchart TD
A[Candidate LSD] --> B{Stress Liquidity Sufficient?}
B -- No --> X[Reject or Severe Haircut]
B -- Yes --> C{Peg Stability Acceptable?}
C -- No --> X
C -- Yes --> D{Exit Path Credible?}
D -- No --> Y[Conditional Acceptance]
D -- Yes --> E{Structural Risk Within Policy?}
E -- No --> Y
E -- Yes --> Z[Collateral Eligible]
```
This figure is useful because it shows that collateral evaluation need not begin from APY at all. It begins from emergency usability and structural confidence.
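The gate sequence can be written as a short function. The gate order mirrors the diagram; the boolean inputs and outcome labels are illustrative assumptions rather than protocol policy:

```python
def collateral_decision(stress_liquidity_ok: bool,
                        peg_stability_ok: bool,
                        exit_path_credible: bool,
                        structural_risk_ok: bool) -> str:
    """Sequential gate logic mirroring the collateral-eligibility flow."""
    if not stress_liquidity_ok or not peg_stability_ok:
        return "reject or severe haircut"
    if not exit_path_credible or not structural_risk_ok:
        return "conditional acceptance"
    return "collateral eligible"

# Stylised outcomes: a token passing all four gates versus one
# failing the peg-stability gate.
print(collateral_decision(True, True, True, True))
print(collateral_decision(True, False, False, True))
```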
### 8.5.3 Stylised decision outcome
Suppose:
- Token A passes all four gates,
- Token B passes three but shows weaker stress liquidity,
- Token C fails peg-stability and exit-confidence thresholds.
Then the protocol may conclude:
- Token A: fully eligible,
- Token B: conditionally eligible with more conservative haircut,
- Token C: not currently eligible.
This is exactly the kind of structured policy output that makes LSDx relevant for DeFi infrastructure rather than merely for portfolio commentary.
## 8.6 Example Five: Regime Monitoring Through Time
### 8.6.1 Purpose of the example
A useful analytical framework should not only compare tokens cross-sectionally. It should also monitor them over time. This final example shows how LSDx can observe the same token across changing conditions.
### 8.6.2 Stylised time path
Assume a token moves through the following sequence over several periods:
- stable market pricing,
- mild widening of discount,
- weakening liquidity,
- persistent dislocation,
- gradual recovery.
LSDx may then assign the following regimes:
| Period | Discount state | Liquidity state | Structural change | Regime |
|---|---|---|---|---|
| \(t_1\) | Stable | Strong | None | Normal |
| \(t_2\) | Mild widening | Strong | None | Watch |
| \(t_3\) | Wider discount | Weakening | None | Stress |
| \(t_4\) | Persistent wide discount | Weak | Mild negative | Dislocation |
| \(t_5\) | Narrowing | Improving | Stable | Recovery |
This is a useful illustration because it shows that analytical state is dynamic. The token is not permanently “good” or permanently “bad.” It moves across conditions, and the framework should reflect that movement.
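The period-by-period assignment can be encoded as data plus a toy classifier. The rule below simply reproduces the stylised table; it is an illustration, not the production regime engine:

```python
def classify_regime(discount: str, liquidity: str) -> str:
    """Toy regime rule keyed off discount and liquidity states."""
    rules = {
        ("stable", "strong"): "Normal",
        ("mild widening", "strong"): "Watch",
        ("wider", "weakening"): "Stress",
        ("persistent wide", "weak"): "Dislocation",
        ("narrowing", "improving"): "Recovery",
    }
    return rules.get((discount, liquidity), "Further review")

# Stylised path from the table above (t1 through t5).
path = [
    ("stable", "strong"),
    ("mild widening", "strong"),
    ("wider", "weakening"),
    ("persistent wide", "weak"),
    ("narrowing", "improving"),
]
regimes = [classify_regime(d, l) for d, l in path]
print(regimes)  # ['Normal', 'Watch', 'Stress', 'Dislocation', 'Recovery']
```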
### 8.6.3 Monitoring flow
```mermaid
flowchart LR
A[Normal] --> B[Watch]
B --> C[Stress]
C --> D[Dislocation]
D --> E[Recovery]
```
The point is not to force every token into a rigid sequence. The point is to demonstrate that LSDx can provide temporal oversight rather than one-time ranking.
## 8.7 Example Summary Table
The examples in this chapter illustrate different dimensions of the framework.
| Example | Core analytical question | Main lesson |
|---|---|---|
| Comparative three-token evaluation | How do common-underlying LSDs differ in quality? | Shared base asset does not imply shared economic attractiveness |
| Discount interpretation | Is a discount attractive or justified by risk? | Price deviation must be interpreted with context |
| Treasury ranking | Which token is strongest as a reserve asset? | Use-case weighting changes the ranking |
| Collateral thought experiment | Is the token suitable for lending protocol collateral? | Liquidation quality matters more than nominal yield |
| Regime monitoring | How does token state evolve through time? | Analytical quality is dynamic, not static |
The combined effect of these examples is important. They show that LSDx is not merely a scoring framework. It is an interpretive system.
## 8.8 What These Examples Demonstrate
Taken together, the worked examples establish several core points.
First, LSD analytics must be multi-dimensional. Yield alone is not enough.
Second, fair value is not identical to market price. A token can trade above or below model value for reasons that may or may not be justified.
Third, the same token can look attractive in one use case and weak in another. This is not a failure of the framework. It is an honest reflection of financial reality.
Fourth, monitoring through time is as important as initial comparison. A useful system must capture change, not only rank snapshots.
Fifth, the architecture of LSDx is coherent across examples. The same core engine drives all interpretations. What changes is the lens through which the output is consumed.
## 8.9 Closing Remarks
This chapter has turned the LSDx framework from abstract structure into example-based reasoning. Through comparative token analysis, discount interpretation, treasury scoring, collateral thought experiments, and regime monitoring, the chapter has shown how the platform can organise real decision problems into a disciplined analytical process.
The examples are intentionally illustrative rather than final. Their purpose is not to claim complete calibration, but to demonstrate that the framework is implementable, interpretable, and economically meaningful.
The next chapter should therefore take the opposite direction. After showing what the framework can do, the paper should openly address where the framework is limited, where judgement enters, and what risks remain unresolved. That is essential if LSDx is to remain intellectually serious.