Designing a liquidity incentive program for Lido

All projects with a native token face a common challenge on the road to widespread adoption: ensuring the token has sufficient liquidity and trading volume on exchanges. To tackle this, it is common to use incentive programs to bootstrap and grow liquidity in target pools across decentralized exchanges (DEXs).

In this post, we will explore how these programs work, using Lido as an example. Lido is one of the largest protocols on Ethereum and plays a crucial role in the ecosystem, making it an excellent basis for our investigation. Specifically, we will focus on how liquidity mining programs can be used to enhance the trading volumes of Lido’s liquid tokens, stETH and wstETH.

We will start by providing an overview of Lido and its liquid tokens. Then, we will discuss previous liquidity mining programs, with a closer look at the recent Uniswap program designed and implemented by Gauntlet. Finally, we will examine a framework for testing and designing such a program for Lido.

Lido and stETH

Lido is the largest liquid staking protocol for Ethereum. It allows its users to participate in Ethereum Proof-of-Stake by staking their ETH and earning daily staking rewards without maintaining validator infrastructure. Instead, a group of node operators runs the validator operations on behalf of the users who staked their tokens.

There is a significant advantage for users who stake their tokens in Lido. When users stake their ETH through Lido, they receive stETH in return. stETH can be traded, used in DeFi applications, or even locked as collateral on various lending platforms. This means that the staked ETH remains liquid and can be used for other purposes.

The chart below shows the various stakers in Ethereum, where we can clearly see that Lido is the dominant player, with more than 30% of all staked ETH.

It’s crucial to understand that stETH is a rebasable token. This means that the amount of stETH in a user’s balance changes daily based on the rewards they have received from staking their ETH in Lido. Some DeFi platforms, such as Curve and Yearn, support rebasable tokens, and therefore users can interact with these applications directly with their stETH. However, most DApps are not designed for rebasable tokens, and as such, users lose their portion of Lido rewards when they lock stETH in these applications. To solve this, Lido provides wstETH, a wrapped version of stETH that is fully DeFi-compatible. Instead of updating the balance daily, wstETH uses an underlying share system to reflect the allocation of staking rewards.

This is important because it means that in order to get a more accurate view of the trading volume of stETH, we must also consider the trading volume of its wrapped version, wstETH.
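
To make the distinction concrete, below is a minimal sketch of share-based accounting with illustrative numbers; the actual logic lives in Lido’s stETH and wstETH contracts.

```python
# Minimal sketch of Lido-style share accounting (illustrative numbers,
# not the actual contract implementation).

total_pooled_eth = 1_050_000.0  # ETH backing the protocol; grows as rewards accrue
total_shares = 1_000_000.0      # underlying shares; fixed between deposits/withdrawals

def steth_balance(shares: float) -> float:
    """stETH rebases: a balance is the current ETH value of the holder's shares."""
    return shares * total_pooled_eth / total_shares

user_shares = 10.0                 # a wstETH balance is simply this share amount
print(steth_balance(user_shares))  # 10.5 stETH today

total_pooled_eth *= 1.001          # staking rewards accrue to the protocol
print(steth_balance(user_shares))  # stETH balance rebased upward; wstETH still 10.0
```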

Three key factors make Lido so dominant. First, the perception of Lido as a safe and performant validator operator, which is earned by running reliable ETH validator operations and by maintaining a sizeable staking market share. Second, Lido’s transparent and community-driven approach to governance: Lido is structured as a DAO, where holders of the DAO token, LDO, have the right to propose, discuss, and vote on critical management decisions for the protocol. Third, and most importantly for this post, the usability of (w)stETH across the Ethereum ecosystem.

Incentivizing stETH trading

A key factor that impacts the usability of a token is its liquidity and trading depth across exchanges and DEXs. For Lido, we can measure liquidity and depth by looking at the Total Value Locked (TVL) of (w)stETH. Another important metric to track is the token’s daily Trading Volume since it represents its actual trading activity.

As an example, the plot below shows the TVL and daily trading volume for the ETH/stETH pair, both of which fluctuated significantly over the previous two years.

If Lido wanted to encourage more trading of (w)stETH, they could create a rewards program. This program would allocate tokens from the Lido treasury to offer additional rewards to Liquidity Providers (LPs) on (w)stETH pools. It is important to note that this should be done across various DEXs. The chart below shows the cumulative trading volume of stETH and wstETH on various DEXs over a 7-day period. From the chart, we can infer that Uniswap, Balancer, and Curve are the most popular DEXs for these tokens.

By temporarily providing additional rewards to LPs, these programs aim to increase the Total Value Locked (TVL) on the pools, improving their trade execution by decreasing the slippage experienced by traders. This, in turn, should bring more users, which increases the Trading Volume and the fees collected by LPs. Thus, once the reward program finishes, the newly improved pool state should be enough to maintain higher liquidity and trading volumes.
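
To see why deeper pools execute trades better, consider a simplified constant-product pool (the Uniswap v2 model; Curve and Uniswap v3 use different curves, but the intuition carries over): doubling the reserves roughly halves the price impact of a fixed-size trade.

```python
# Price impact of a swap in a constant-product (x*y = k) pool, ignoring fees.

def output_amount(dx: float, x: float, y: float) -> float:
    """Tokens received for selling dx into a pool with reserves (x, y)."""
    return y - (x * y) / (x + dx)

trade_size = 10.0
for reserve in (1_000.0, 2_000.0):     # doubling the pool's depth
    out = output_amount(trade_size, reserve, reserve)
    slippage = 1 - (out / trade_size)  # spot price is 1.0 in a balanced pool
    print(f"reserves={reserve:,.0f}  slippage={slippage:.2%}")
# reserves=1,000  slippage=0.99%
# reserves=2,000  slippage=0.50%
```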

This is not a new concept, as many DEXs have previously implemented liquidity mining programs. In the case of Lido, several proposals were made in 2021 on Lido’s DAO forum to allocate LDO tokens to specific liquidity pools containing (w)stETH. These pools include a stETH/ETH pool on Curve, a wstETH/wETH pool on Balancer v2, and a wstETH/DAI pool on SushiSwap.

At the time, beacon chain withdrawals, meaning the ability to withdraw staked ETH from the beacon chain, were not yet enabled. The beacon chain operated alongside the Ethereum main chain but used a Proof-of-Stake (PoS) consensus mechanism, in what was then known as Ethereum 2.0. Without withdrawal functionality, ETH stakers couldn’t directly access their staked funds. As a result, Lido strategically increased the liquidity of its staked ETH tokens by allocating LDO tokens to different liquidity pools (ref).

We can still see these reward programs in action today for (w)stETH liquidity pools. This Dune dashboard contains an analysis of the active rewards programs on various DEXs on Ethereum.

Another excellent example is the work that Gauntlet did for Uniswap to incentivize liquidity on Uniswap v3 pools. To decide on an optimal strategy, they performed an analysis to quantitatively select the correct pools to incentivize.

They began by analyzing previous Uniswap liquidity mining programs to understand how successful they were. They selected 5 pools that received incentives in the past (i.e. the treatment group) and paired each of them with a similar pool that did not receive incentives (i.e. the control group).

They then calculated each treatment pool’s share of TVL and trading volume relative to its control pool. Using a relative share accounts for market-wide swings in usage that affect all pools at the same time. Finally, they conducted one-sided t-tests to determine whether there was a significant increase in volume or TVL after the incentives began, and again after the incentives ended, in comparison to the levels before the incentives started.
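
A sketch of this kind of test, assuming daily series of a treatment pool’s usage and its paired control’s usage (Gauntlet’s exact methodology may differ):

```python
import numpy as np
from scipy import stats

def relative_share(treatment: np.ndarray, control: np.ndarray) -> np.ndarray:
    """Treatment pool's share of combined usage; nets out market-wide swings."""
    return treatment / (treatment + control)

def significant_increase(share_before: np.ndarray, share_after: np.ndarray,
                         alpha: float = 0.05) -> bool:
    """One-sided Welch t-test: did the relative share rise after incentives began?"""
    _, p_value = stats.ttest_ind(share_after, share_before,
                                 equal_var=False, alternative="greater")
    return p_value < alpha
```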

In the end, they concluded that liquidity mining programs can help improve the market share of pools, in terms of both TVL and trading volume. However, the results were not consistent across all pools and programs, which highlights the importance of carefully selecting the pools and continuously monitoring the program to ensure that capital is efficiently allocated.

This analysis served as motivation for a new way of designing these types of incentive programs. Concretely, the team at Gauntlet developed a set of Machine Learning (ML) models to simulate the impact of liquidity mining programs on Uniswap pools. Then, instead of selecting which pools to incentivize based on popularity or other qualitative heuristics, they proposed selecting the pools that the simulation indicated would lead to the optimal results for the liquidity program. They also used the simulation to select the program’s duration and the amount of rewards allocated.

Subsequently, they ran the incentive program with Uniswap for 4 weeks and gathered the results.

From the results, they concluded that LPs did not always respond to incentives, with some appearing to be completely unaware of the program and failing to claim their rewards. Additionally, an increase in liquidity did not always translate into a proportional increase in trading volume market share within the program’s timeframe. They also noticed that these incentive programs were less successful in more established pools, which showed lower liquidity elasticity. In other words, there are modest returns to incentivizing an already large pool to grow even larger. This was the case for the wstETH/ETH pool.

One of the insights we can take from Gauntlet’s work is that it is not straightforward to know beforehand which pools will benefit the most from incentive programs. Using ML models and simulations to design these programs is an exciting avenue of research! However, there is still room for improvement. One possibility is to gather more data and/or iterate on the feature engineering to obtain more representative models.

Another interesting approach could be to use an Active Learning framework to arrive at a better-performing model. Active Learning models are especially useful in situations where unlabeled data is abundant but labeling is expensive. These models are initially trained on a small labeled dataset and then iteratively improved: at each iteration, the model selects the unlabeled samples it would learn the most from, obtains labels for them, and is retrained on the expanded dataset.
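
As a rough illustration of the idea, here is a generic uncertainty-sampling loop (a classification toy; a real model for this problem would likely be regression-based and considerably more involved, and `oracle` stands in for whatever process produces a label, e.g. running a small incentive experiment):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_lab, y_lab, X_pool, oracle, rounds=5, batch=3):
    """Iteratively query labels for the points the model is least sure about."""
    model = LogisticRegression()
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        # Uncertainty = distance of the predicted probability from 0.5.
        uncertainty = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
        query = np.argsort(uncertainty)[:batch]      # most uncertain points
        X_lab = np.vstack([X_lab, X_pool[query]])
        y_lab = np.concatenate([y_lab, oracle(X_pool[query])])
        X_pool = np.delete(X_pool, query, axis=0)
    return model.fit(X_lab, y_lab)
```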

In addition, we would very likely need to build bespoke models for each DEX we wish to target.

An alternative approach is to run a first experiment where we distribute a small part of the rewards across a wider array of pools, for a short period of time. Then, we can analyze the experiment’s impact and pick the pools with the best results to allocate the bulk of the incentives. This is a common approach when we need to allocate resources over a large selection of options and it is not known beforehand which options will lead to the best results.
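
In bandit terminology, this is an explore-then-commit strategy. A sketch of the allocation logic, where `run_experiment` is a hypothetical helper that distributes a small reward to a pool and returns the measured lift:

```python
def two_phase_allocation(pools, total_budget, explore_frac=0.1, top_k=5):
    # Phase 1: spread a small slice of the budget evenly across all candidates.
    explore_budget = total_budget * explore_frac
    per_pool = explore_budget / len(pools)
    lifts = {p: run_experiment(p, per_pool) for p in pools}  # hypothetical helper
    # Phase 2: commit the remaining budget to the best-performing pools.
    best = sorted(pools, key=lifts.get, reverse=True)[:top_k]
    per_winner = (total_budget - explore_budget) / len(best)
    return {p: per_winner for p in best}
```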

Specs and validation

To effectively implement a two-part incentives program for Lido’s liquid tokens, it’s crucial to establish clear specifications and a robust validation framework. This section outlines the key components and methodologies necessary to design, monitor, and evaluate the program’s success.

As a first step, we need to set the parameters of the initial experiment, which include:

  1. Selecting the initial pool set. By taking a look at apy.vision, we see there are 62 pools containing (w)stETH. This includes pools on Ethereum as well as on L2s such as Arbitrum, Polygon, and Optimism. Of these, only 23 have been deployed for at least 3 months and have at least 100k USD in TVL. This is a good set to start with.

  2. Choosing the duration of the experiment. Programs like these are commonly run on a monthly basis. However, we want results early and to spend as little as possible in this “exploration” phase. As such, two weeks is a good middle ground: it provides enough time for LPs and traders to experience the rewards and respond to the incentives, and it is also the time period Gauntlet used in their Uniswap analysis. It is also crucial that the program runs simultaneously for all the targeted pools.

  3. Allocating rewards to pools. Here, we want to use as little capital as possible in the first experiment, with the aim of making the DAO’s capital allocation as efficient as possible. This means allocating the minimum funds such that the rewards offered are meaningful when compared to the fees collected by LPs. To compute this value, we can take the average fees collected by each pool over a two-week period in USD and allocate between 10% and 30% of this value in additional rewards, as sketched in the code after this list. Of course, these values would have to be discussed and approved by the Lido DAO.
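
A minimal sketch of this sizing rule, with hypothetical fee figures:

```python
def reward_budget(avg_two_week_fees_usd: float, incentive_ratio: float = 0.2) -> float:
    """Size rewards relative to the fees LPs already earn: meaningful to LPs,
    yet a small capital outlay for the DAO during exploration."""
    assert 0.10 <= incentive_ratio <= 0.30, "stay within the proposed 10-30% band"
    return avg_two_week_fees_usd * incentive_ratio

# Hypothetical two-week fee averages per pool, in USD.
pool_fees = {"wstETH/ETH": 120_000.0, "wstETH/USDC": 40_000.0}
budgets = {pool: reward_budget(fees) for pool, fees in pool_fees.items()}
print(budgets)  # {'wstETH/ETH': 24000.0, 'wstETH/USDC': 8000.0}
```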

Next, we need to define a framework for monitoring the experiment and identifying the most successful pools. In particular, we need to collect metrics for each targeted pool, analyze the results, and pick the optimal subset of pools to receive rewards in the second phase. Note that we should collect all data from 2 weeks before to 2 weeks after the experiment, so that we capture not only the period when the program was in place but also the state of each pool before and after it.

In terms of the metrics being collected, we could consider the following:

  • Daily TVL in USD for each pool

  • Daily trading volume in USD for each pool

  • Daily fees in USD for each pool

  • Daily reward claim rate for each pool

  • Daily share of TVL across a comparable market for each pool (similar to what Gauntlet did in their analysis)

  • Daily share of trading volume across a comparable market for each pool (similar to what Gauntlet did in their analysis)

  • Daily trading volume in USD for all (w)stETH pools

  • Daily share of trading volume for all (w)stETH pools across all comparable markets

  • Daily share of trading volume for each pool across all (w)stETH pools

Note that in order to select the “comparable market” for a given pool, we can use similar pools that, instead of (w)stETH, contain a different liquid staked ETH or wrapped ETH token.
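
As a sketch of how the share metrics above could be derived from raw daily data, assuming a DataFrame with one row per pool per day:

```python
import pandas as pd

def add_share_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: date, pool, tvl_usd, volume_usd, market_volume_usd,
    where market_volume_usd is the volume of the pool's comparable market."""
    df = df.copy()
    df["volume_share_vs_market"] = df["volume_usd"] / df["market_volume_usd"]
    daily_total = df.groupby("date")["volume_usd"].transform("sum")
    df["share_of_all_wsteth_volume"] = df["volume_usd"] / daily_total
    return df
```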

Now, to select the best-performing pools in the experiment, we can use two key indicators. The first is the observed change in trading volume after the reward program. This is key, since the end goal of the program is to improve the trading activity of (w)stETH. However, we need to account for price volatility and for each pool’s impact on the total trading activity of (w)stETH. Thus, we can use a metric that measures the impact of a given pool $k$ on the change in the share of trading volume across all (w)stETH pools. Concretely, we can use $\text{lift}_k$, where:

$$ \text{lift}_k = \text{lift} - \text{lift}_{-k} $$

with

$$ \text{lift} = \frac{\sum_p V_p^\text{after}}{\sum_p V_\text{p-market}^\text{after}} - \frac{\sum_p V_p^\text{before}}{\sum_p V_\text{p-market}^\text{before}} $$

and

$$ \text{lift}_{-k} = \frac{\sum_{p\neq k} V_p^\text{after}}{\sum_{p\neq k} V_\text{p-market}^\text{after}} - \frac{\sum_{p\neq k} V_p^\text{before}}{\sum_{p\neq k} V_\text{p-market}^\text{before}} $$

Here, $V_p$ is the trading volume of pool $p$ and $V_\text{p-market}$ is the trading volume of its comparable market, with “before” and “after” referring to the two-week windows around the incentives program.

The second indicator is an adaptation of the $\text{lift}_k$ metric that takes capital efficiency into consideration, which is also important when deploying limited resources. We want to invest in pools that show a long-lasting effect on the overall trading volume of (w)stETH, while avoiding deploying large amounts of funds on a given pool only to get marginal improvements in trading volume. Thus, the second indicator is $\text{lift}_k$ divided by the total USD value of rewards allocated to pool $k$.
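
A direct translation of these indicators, where `vol[p]` and `mkt[p]` hold a pool’s (before, after) USD volumes and its comparable market’s (before, after) USD volumes:

```python
def lift(vol, mkt, exclude=None):
    """lift (or lift_{-k} when `exclude` is set), per the formulas above."""
    pools = [p for p in vol if p != exclude]
    after = sum(vol[p][1] for p in pools) / sum(mkt[p][1] for p in pools)
    before = sum(vol[p][0] for p in pools) / sum(mkt[p][0] for p in pools)
    return after - before

def lift_k(vol, mkt, k):
    """Pool k's contribution: total lift minus the lift with k excluded."""
    return lift(vol, mkt) - lift(vol, mkt, exclude=k)

def adjusted_lift_k(vol, mkt, k, rewards_usd):
    """Second indicator: capital efficiency, i.e. lift per USD of rewards."""
    return lift_k(vol, mkt, k) / rewards_usd
```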

Finally, we can verify whether the results are statistically significant. For this, we can leverage the fact that we are not collecting data at a single point in time: we have 2 weeks of daily data before and after the incentives program. This means we can compute the lift and adjusted lift on different days and use those measurements to derive empirical distributions for the indicators. We can then run hypothesis tests, such as whether $\text{lift}_k$ is greater than 0.
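
For instance, given a series of daily $\text{lift}_k$ measurements, a one-sample, one-sided test could look as follows (a t-test here for simplicity; a bootstrap over the daily values would also work):

```python
import numpy as np
from scipy import stats

def lift_is_positive(daily_lift_k: np.ndarray, alpha: float = 0.05) -> bool:
    """Test H0: E[lift_k] <= 0 against H1: E[lift_k] > 0."""
    _, p_value = stats.ttest_1samp(daily_lift_k, popmean=0.0, alternative="greater")
    return p_value < alpha
```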