TIP-003: Threshold Network Reward Mechanisms Proposal I – Stable Yield for Non-Institutional Staker Welfare

:heavy_minus_sign: Proposal Rationale :heavy_minus_sign:

  • The number of genuinely economically independent stakers (hereafter the ‘# nodes’) is a critical measure of the security and utility of virtually all decentralized networks. The modular services hosted on Threshold Network are no exception – tBTCv2 above all (requiring > 1,000 nodes). However, the # nodes network statistic is only discernible anecdotally or on trust (i.e. not currently verifiable via consensus). Moreover, attempting to maximize the # nodes directly via the protocol remains an ill-advised strategy, lest the mechanism fall prey to Sybil-style manipulation.
  • An often overlooked and measurable network statistic is the break-even stake size – the minimum sum of tokens a staker must lock to avoid operating at a loss at a given timestep. The volatility of the break-even stake size, and how close it stays to the min. stake invariant (parametrized at genesis), are major determinants of the theoretical number of stakers that are capable of surviving and thriving as the backbone of a decentralized service.
  • Many protocols target a staking rate (% circulating supply locked) as a proxy for optimizing the # nodes, dividing a fixed reward ‘budget’ amongst a fluctuating array of active stakers. Some go further by engineering a second-order variance via the nominal inflation rate (Cosmos, Livepeer, ETH2). Both these designs foster conditions that may actually further centralization trends over the medium/long-term. Elevated staking rates and correspondingly low yields do little to persuade deeper-pocketed stakers to scale back capacity, but do (a) debilitate non-Institutional (smaller) staker balance sheets and (b) increase the ‘stake hoarding’ payoff for those who can afford it. The combination of dynamics (a) & (b) can engender detrimental feedback loops and widen the gap in fractional token share between large and small. This wealth concentration trend eventually diminishes the # nodes. See the <Subsidization & Participation Bivariate Analysis notebook> for a deep-dive on this issue, and the driving empirical evidence behind this proposal.

:heavy_minus_sign: Proposed Mechanism :heavy_minus_sign:

  • The proposed mechanism minimizes + stabilizes the break-even stake size by programmatically adjusting the nominal inflation rate – thereby targeting a minimum effective yield to all stakers and stake sizes, independent of fluctuations in the staking rate or number of active nodes from timestep-to-timestep.
  • The mechanism confers economic sustainability by guaranteeing a yield greater than or equal to a minimum target rate, and encourages longer-term financial planning via lower variance in that rate.
  • The Stable Yield is set to a provisional target rate of 10% APY. This, along with other parameters, can and should be modified by the Threshold DAO to accommodate complementary reward models (e.g. buyback-and-distribute – see note below), or to tune the (now more variable) dilution burden placed on passive token-holders.
  • Since non-Institutional and smaller stakers are adversely affected when staking rates are elevated – i.e. when yields are unsustainably low – the proposed mechanism only stabilizes the yield when participation is above a DAO-selected threshold. Below this threshold, the yield is inversely proportional to the staking rate, as is typical in prevailing reward mechanisms. This participation parameter is provisionally set to 50% of the circulating supply.

[Note that the design presented here is compatible with various other reward models. For example, combining a Stable Yield with dynamic incentives, such as a fee-driven buyback-and-distribute model – see Placeholder’s high-level <design> as an example – would provide a safety net for non-Institutional stakers AND align all stakeholder interests towards maximizing fee generation. Indeed, this combination shifts the ‘bet’ made by stakers/hodlers alike towards service traction and away from subsidy hoarding. A buyback mechanism also supports the price+liquidity of the native token, further stabilizing and minimizing the break-even stake size. Separately, this proposal is also compatible with delegation-driven node diversification such as Solana’s Yield Throttle + DelegationBot <ideas>.]

:heavy_minus_sign: Provisional method to compute annual issuance for Stable Yield mechanism (SY_Issuance): :heavy_minus_sign:

  1. Base_Issuance = Target_Yield * Supply
    Example: Target_Yield = 10% & Supply = 1bn tokens → Base_Issuance = 100m tokens

  2. Scenario A (Staking_Rate >= 50%)
    SY_Issuance = Base_Issuance * Staking_Rate
    Base_Issuance = 100m tokens & Staking_Rate = 75% → SY_Issuance = 75m tokens
    Scenario B (Staking_Rate < 50%)
    SY_Issuance = Base_Issuance * Constant
    Base_Issuance = 100m tokens & Staking_Rate = 25% & Constant = 0.5 → SY_Issuance = 50m tokens

  3. Inflation_Rate = SY_Issuance / Supply
    Scenario A: SY_Issuance = 75m tokens & Supply = 1bn → Inflation_Rate = 7.5%
    Scenario B: SY_Issuance = 50m tokens & Supply = 1bn → Inflation_Rate = 5%

  4. Yield = Inflation_Rate / Staking_Rate
    Scenario A: Inflation_Rate = 7.5% & Staking_Rate = 75% → Yield = 10%
    Scenario B: Inflation_Rate = 5% & Staking_Rate = 25% → Yield = 20%
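To make the four steps above easier to play with, here is a minimal Python sketch of the provisional method (function and parameter names are mine, not a reference implementation; the participation threshold and Constant are both 0.5, as in the worked examples):

```python
def stable_yield_issuance(supply, staking_rate,
                          target_yield=0.10,
                          participation_threshold=0.50,
                          constant=0.50):
    """Return annual issuance, nominal inflation rate and resulting yield."""
    base_issuance = target_yield * supply                  # step 1
    if staking_rate >= participation_threshold:            # step 2, Scenario A
        sy_issuance = base_issuance * staking_rate
    else:                                                   # step 2, Scenario B
        sy_issuance = base_issuance * constant
    inflation_rate = sy_issuance / supply                   # step 3
    yield_apy = inflation_rate / staking_rate               # step 4
    return sy_issuance, inflation_rate, yield_apy

# Scenario A: 1bn supply, 75% staked -> 75m issued, 7.5% inflation, 10% yield
print(stable_yield_issuance(1e9, 0.75))
# Scenario B: 1bn supply, 25% staked -> 50m issued, 5% inflation, 20% yield
print(stable_yield_issuance(1e9, 0.25))
```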

Hence the yield never falls below the 10% target, regardless of the staking rate. Plotting this with simulated inputs:

The outputs of the proposed Stable Yield reward mechanism: (1) nominal inflation rate and (2) yield (APY). Note that the staking rates are simulated (rand[0.15, 0.99]), which allows the incorporation of realistic (time-dependent, monotonic) supply growth and the subsequent issuance adjustments needed to meet the target rates.

14 Likes

I think this mechanism is novel and brilliant. I’m completely onboard with the design.

A part of the conversation seems to be missing from the proposal, however. As a community, before implementing an inflation mechanism, we have to address and justify one value in particular: the stable yield rate itself.

If I understand the proposal correctly:

  • the stable yield rate is a constant (which can be modified)
  • the nominal inflation rate is a dependent variable
  • the participation rate is an independent variable

So, where do we set the stable yield rate? (keeping in mind that we can change this in the future, but it will likely be a difficult/sticky conversation once the ball is in motion)

Here are my thoughts on the matter:

The mechanism detailed in the post above will help sustain independent nodes by establishing a floor, but it won’t directly put them in business. The inflation model accounts for operational expenses, but needs to also address startup costs (which in this industry are considerable and worth subsidizing IMO).

So instead of setting a static stable yield rate, I propose that we set a decaying rate with a set perpetual inflation. This would be similar to the Synthetix model (SIP-24) here:

However, I think our rate should be much less aggressive with regard to the:

  1. Initial Rate
  2. Decay Rate

The reason for (1) is obvious: we don’t want to over-dilute the DAO or people who cannot run a node. The reason for (2) is that a slower decay will allow more time for independent operators to learn and boot up.

In combination with the stable base yield mechanism that @arj proposed above, I believe this plan can help us maximize independent nodes in a deliberate manner.

As to the exact numbers, I’ve created a spreadsheet to play around with what that might look like:

Feel free to adjust any of the values in blue.
These are my initial thoughts on those parameters:

Perhaps we could drill down a little further into whether these values would be more/less effective than other values at incubating operators.

In the spreadsheet and image above, these two parameters are designed to grow the number of independent operators:

  • Annual Yield Decay Rate
  • Initial Target Yield

While these two parameters are designed to sustain network growth and independent operators:

  • Constant
  • Terminal Target Yield
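
To make the decaying-rate idea concrete, here is a rough Python sketch of what such a schedule could look like (the numbers and names are placeholders of mine, not the values in the spreadsheet):

```python
def decaying_target_yield(year,
                          initial_yield=0.15,     # placeholder Initial Target Yield
                          annual_decay=0.25,      # placeholder Annual Yield Decay Rate
                          terminal_yield=0.05):   # placeholder Terminal Target Yield
    """Target yield for a given year: geometric decay down to a perpetual floor."""
    return max(initial_yield * (1 - annual_decay) ** year, terminal_yield)

for year in range(6):
    print(year, f"{decaying_target_yield(year):.2%}")
# prints 15.00%, 11.25%, 8.44%, 6.33%, then the 5.00% floor from year 4 onward
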
10 Likes

Thanks all!

The main issue I see is that “staking” isn’t really a binary on/off state due to all the different applications, so distributing the inflation based on “staking” status wouldn’t be viable.

We could update it in the following way: by adding an inflation subsidy to the T-denominated rewards paid out, so that if someone earns 1000 T normally, the inflation might add 500 T on top of that.

That means centralizing our reward logic from all the different applications to a single place which tracks the payment amounts and adds the subsidy on top where appropriate; similar to how we have a central staking contract we’d have a central reward contract.

On the application side the change wouldn’t be that major: any T tokens paid in (or even non-T tokens, if we centralize the buyback of T tokens) would just be routed through the reward contract before being received by the sortition pool (or equivalent) that actually distributes them to the stakers.

Also, what this means is that you have to participate in all the available applications to maximize your yield, and each application (unless we configure the subsidy differently to encourage staking for particular applications) would pay a subsidy proportionate to the amount of rewards the application pays.
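
If it helps, here is a toy sketch of the subsidy layer described above (the 50% subsidy rate and the names are purely illustrative):

```python
def pay_with_subsidy(application_reward_t, subsidy_rate=0.5):
    """Total T forwarded to stakers: the application's payout plus an inflation subsidy."""
    return application_reward_t * (1 + subsidy_rate)

# e.g. someone earning 1000 T normally would receive 1500 T in total
print(pay_with_subsidy(1000))  # 1500.0
```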

5 Likes

great proposal @arj, would it be fair to say the eli5 version is:

Proposal Rationale

  • We need lots of stakers for security.
  • The break-even cost to run a node will determine the number of stakers.
  • Variable minimum returns affect small stakers disproportionately and can cause centralisation.

Proposed Mechanism

  • Stabilise the minimum return to create more certainty for small stakers.
  • Target minimum of 10% APY.
  • Network inflation is bounded between 5% and 10%, depending on how much of the circulating supply is staked.
  • Yield gets stabilised to the minimum APY when the stake rate of the circulating supply is high.
  • Yield is variable (high) when the stake rate is low, which encourages more stakers to join the network.
  • Stakers are paid in T. Network fees are used to buy back T on the market to put upward price pressure on T. This aligns everyone’s interests with increasing the adoption of T.
7 Likes

Thank you @arj.

One point to clarify, am I right that the target yield is based on the assumption that a staker is opting-in to running every service? If they only run a subset of services, then the earned yield would be less than the target, correct?

I think diverting some of the fees to an Olympus Pro-style bond program in order to build up protocol-owned liquidity would be worth exploring as well.

6 Likes

My understanding here has been that the DAO can choose how and how frequently to buy back T, and that rewards can be from either freshly minted or freshly bought-back T. I could be missing a passed proposal though, correct me if I’m wrong @ben

We need to avoid being too prescriptive here, as past designs haven’t been flexible enough. Making decisions that cleanly distinguish between emissions behavior, staking rewards, and what to do with revenue means the DAO can be more responsive to the market…

… and make decisions like this :slight_smile:

3 Likes

To add on, we should work to have a separate schedule + knobs for governance to adjust around supply emissions, DAO treasury usage, staker rewards (either in newly minted T or bought back T), and when and how buybacks land.

I also think we should work to have a clear emissions cap, outside this whole proposal, so market participants can grasp it more easily.

5 Likes

It took me a few reads before I felt like I grok’d it, but here was the bit that confused me.

Originally in this bit:

I was thinking about “break-even” in terms of either ETH or USD for running the node. You have to pay server costs, there are other ways to invest, you have to pay gas costs, etc. So, after getting your T token rewards, I would expect that you would look at the price oracle for T to USD, and then be able to say “I earned $1200 this month”, and that would let you know if you were “breaking even” or not. Then, you compare that to other investment opportunities to decide if the T network is a good risk-adjusted place to put your money.

But then, the actual math does all of its APY calculation strictly in T tokens. The yield is calculated as inflation rate / staking rate, which further boils down to target_yield * max(staking_rate, constant) / staking_rate.
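
(For anyone following along, here is a quick numerical check of that closed form, assuming the 50% participation threshold equals the Constant, as in the worked examples:)

```python
def yield_closed_form(staking_rate, target_yield=0.10, constant=0.5):
    # yield = inflation_rate / staking_rate
    #       = target_yield * max(staking_rate, constant) / staking_rate
    return target_yield * max(staking_rate, constant) / staking_rate

print(yield_closed_form(0.75))  # Scenario A -> 0.1 (10%)
print(yield_closed_form(0.25))  # Scenario B -> 0.2 (20%)
```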

The important bit here is that none of these calculations seem to touch the outside world, or adjust for how I would traditionally account for “break-even costs”.

That said, I can see how giving out dynamic token emissions based on the stake_rate, rather than fixed token emissions, would help out the little guy in relative terms, and thus help counteract the centralization of capital.

Let me know if anything I’m saying is totally off-base!

4 Likes

Thank you @arj, really a great design! And cool Eli5 @ben, ty.

Being an independent staker myself, I find this design very clever and economically appealing.
I’m not sure about the need expressed by @jakelynch (great analysis btw!) to reward more initially, since there are two existing networks merging with a lot of nodes already spun up.

Some open questions I have:

  • When you talk about APY, does this mean compounding? As far as I know this actually happens with rewards on Nu but not on Keep.
  • Will rewards to stakers come only out of issuance? And would fees generated by the different services go to the treasury?
  • And is the treasury to handle all the other needs, like grants, liquidity, bonds (like this!), marketing, support, etc.?
2 Likes

@beaushinkle Thank you for your response! It is correct that this mechanism would not adjust the nominal inflation rate based on any of the other factors which influence the breakeven stake. Even if there were a suitable price oracle, regulating the issuance of new tokens based on fluctuations in the T:USD rate would likely engineer the network into oblivion :smiley: You’d run the risk of a positive feedback loop where price depreciation due to market forces would trigger faster issuance, which is itself price-depreciating (all else equal), and so would trigger even more tokens flooding the market, and so on and so forth…

Conversely, this proposal increases the nominal inflation as a response to an elevated ‘supply’ of stakers. To make this more concrete, here is an expression for NuCypher’s breakeven stake size:


[Equation: NuCypher’s break-even stake size, expressed in terms of genesis-parametrized invariants and time-sensitive variables such as S_committed.]

Although clearly several factors influence the breakeven stake size, you can see that it is proportional to S_committed – the greater the sum of other stakes which are active in a given reward timestep, the greater the size of one’s stake must be in order to avoid a loss. Of course, across a diversity of node operations, the number of loss-making periods has a non-uniform impact on (in)solvency. Regardless, the greater the break-even stake size is, and the longer it remains elevated, the fewer stakers will be able and/or willing to maintain their service to the network.

@Eastban Thanks, I appreciate your comments!

Compounding via restaking is perfectly compatible with this mechanism. If you restake your rewards and your fractional share of the supply grows as a result (because some other stakers withdraw), then you are eligible for more rewards. The novel feature is that, despite that increase in your fractional share, everyone will still see 10% growth on their stake, even though yours will now be larger and receive more tokens in absolute terms.

Conceptually, rewards = issuance = inflation. I use issuance to describe the absolute number of tokens generated per timestep and (nominal) inflation to describe the annual growth in the circulating supply. Fee mechanisms and pricing structures are TBP (‘To Be Proposed’ – can we make that a thing?). But one (as yet unwritten) proposal that’s been floating around involves certain service fees being used to buy T on the open market, which is then distributed to stakers in addition to the Stable Yield discussed here. The answer to your last question is yes.

5 Likes

Stable Yield + Conventional Rewards Architecture

The overall reward mechanism architecture comprises three layers, from top to bottom:
(1) DAO + Multi-sig council
(2) Inflation Contract
(3) Application

(1) The DAO mints a top-level sum of issuance (e.g. 25m tokens) on a quarterly basis. This is then split into subsidy budgets by the Multi-sig council and allocated to Threshold’s various applications – e.g. tBTCv2 will receive 10m tokens, PRE gets 5m, etc.

(2) The Inflation Contract provides special methods for applications which require the maximization of node population (at network genesis: tBTCv2; in future: other apps). This is achieved via the dynamic nominal inflation + stable yield mechanism detailed above. For those population-sensitive apps, the DAO mints, and the Multi-sig council allocates, sufficient issuance for the ‘worst-case’/maximal participation scenario – e.g. if 90% of all tokens are authorized to tBTCv2, then a 10% target yield (APY) requires 9% nominal inflation (this equals some absolute # of tokens sent to the Inflation Contract – accounting for the fact that allocations are quarterly and the total supply evolves). The Inflation Contract contains methods that calculate a daily issuance figure for each application, based on the inputs of (a) total supply and (b) number of tokens authorized to the application in question. The contract then transfers this output issuance to the application contract once per day.
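
For illustration, a hypothetical sketch of the daily figure the Inflation Contract might compute for a stable-yield application, assuming it simply pro-rates the annual Stable Yield issuance (the function, parameter names and 365-day convention are my own placeholders):

```python
def daily_issuance(total_supply, tokens_authorized,
                   target_yield=0.10, constant=0.5, days_per_year=365):
    """Daily issuance for a population-sensitive app, pro-rated from the annual Stable Yield issuance."""
    staking_rate = tokens_authorized / total_supply
    annual_issuance = target_yield * total_supply * max(staking_rate, constant)
    return annual_issuance / days_per_year

# e.g. 90% of a 1bn supply authorized to tBTCv2 -> 90m T/year (~9% nominal inflation),
# i.e. roughly 246.6k T transferred to the application per day
print(daily_issuance(1e9, 0.9e9))
```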

(3) The application contract then distributes the daily sum of tokens received from the Inflation Contract to stakers using simple, Unipool-style logic. The application distributes all of the tokens it receives per day.

*Note that population-sensitive apps like tBTCv2 will almost certainly have tokens left over and undistributed, held in the Inflation Contract, at the end of every quarter. The Multi-sig council manually adjusts subsidy budgets to account for this – e.g. if the initial (‘worst-case’) subsidy budget was 10m, and 5m were distributed, then in the next quarter the Multi-sig council would allocate ~5m to that application.

*This design allows the flexibility of applying the stable yield methods to any application in the future – for example, if a design change means that Random Beacon actually needs to maximize its node count or stabilize yield for some other reason.

*Note that if it proves to be overly costly (e.g. due to a spike in gas prices), the Inflation Contract could transfer issuance to applications less frequently (e.g. weekly) while still largely preserving the predictability and sustainability benefits of the Stable Yield mechanism.

*This model leverages Multi-sig council-driven incentive adjustment, rather than capping staker numbers or adding a delegation system, in order to drive participation into/away from certain applications – e.g. too many PRE nodes might mean dropping the annual nom. inflation from 4% to 2%.

*Note that all applications will receive issuance via the Inflation Contract layer, but not all have this issuance adjusted daily by stable yield methods.

*The maximum dilution of passive holders annually is the sum of (a) the target yield for stable yield applications and (b) the nom. inflation for other applications – e.g. 6% tBTCv2 + 2% PRE + 1% Random Beacon implies a dilution ceiling of 9% per year.

2 Likes

Thank you for your work here. I enjoyed reading this and the notebook on github.

If I understand correctly, this is working to address the only reason I have not yet run a node for any project (despite having symmetric gigabit and a few Linux servers).

Your findings are in line with what I found when looking at some projects to participate in.
Without significant initial capital backing/stake (sometimes nearing $1M USD), my machine would not have a chance to be ‘active’.

5 Likes