Stake-Assured Human Feedback
Reppo uses a custom veToken mechanism that lets researchers and AI/ML teams capture the strength of preferences in training data, backed by real economic stake. It extends traditional vote-escrowed tokenomics (veTokenomics) beyond governance and applies it directly to AI training data.
How It Works
Lock REPPO → Receive veREPPO
Voters, who are also data annotators, lock $REPPO tokens for a chosen duration.
In return, they receive veREPPO, which represents voting power.
The amount of veREPPO scales with both the quantity of tokens locked and the lock duration; longer locks grant disproportionately higher voting power.
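As a minimal sketch of that relationship, assuming a four-year maximum lock and a quadratic duration bonus (the docs do not specify the actual curve, so `MAX_LOCK_WEEKS` and `veReppoForLock` are hypothetical):

```typescript
// Hypothetical curve: the docs only say veREPPO grows with amount and
// duration, with a superlinear duration bonus. The quadratic term and
// the 4-year cap below are illustrative assumptions.

const MAX_LOCK_WEEKS = 208; // assumed 4-year maximum lock

// veREPPO received for locking `amount` REPPO for `lockWeeks` weeks.
function veReppoForLock(amount: number, lockWeeks: number): number {
  if (lockWeeks <= 0 || lockWeeks > MAX_LOCK_WEEKS) {
    throw new Error(`lockWeeks must be in (0, ${MAX_LOCK_WEEKS}]`);
  }
  const t = lockWeeks / MAX_LOCK_WEEKS; // normalized duration in (0, 1]
  return amount * t * t;                // quadratic => superlinear in duration
}

// Doubling the lock length more than doubles voting power:
console.log(veReppoForLock(1_000, 52));  // 62.5
console.log(veReppoForLock(1_000, 104)); // 250
```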
Vote Each Epoch
Each epoch, veREPPO holders vote to predict which collabs (AI content published by creators) will receive the most support.
Voting power decays linearly within the epoch, so earlier votes carry more weight than later ones.
At the end of each epoch, net new $REPPO emissions are split 50/50 between creators who received votes and the voters who backed them.
This creates a prediction market around AI content while also crowdsourcing AI training data.
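The sketch below puts the two rules together: votes lose weight linearly over the epoch, and at settlement the emission pool is split 50/50 between each vote's creator and backer, pro rata by decayed weight. The names (`EPOCH_LENGTH`, `Vote`, `settleEpoch`) and the pro-rata reading of the split are assumptions, not Reppo's implementation:

```typescript
// Hypothetical epoch accounting; all structures here are illustrative.

const EPOCH_LENGTH = 7 * 24 * 3600; // assumed one-week epochs, in seconds

interface Vote {
  voter: string;
  creator: string;       // the collab's creator
  vePower: number;       // veREPPO committed to this vote
  timeIntoEpoch: number; // seconds elapsed when the vote was cast
}

// Linear decay: a vote cast at the epoch open keeps full weight,
// one cast at the close carries none.
function effectiveWeight(v: Vote): number {
  return v.vePower * (1 - v.timeIntoEpoch / EPOCH_LENGTH);
}

// Split net new emissions 50/50: half to the creators who received votes,
// half to the voters who backed them, each share pro rata by decayed weight.
function settleEpoch(votes: Vote[], emissions: number): Map<string, number> {
  const totalWeight = votes.reduce((sum, v) => sum + effectiveWeight(v), 0);
  const rewards = new Map<string, number>();
  if (totalWeight === 0) return rewards;
  const credit = (who: string, amount: number) =>
    rewards.set(who, (rewards.get(who) ?? 0) + amount);
  for (const v of votes) {
    const share = effectiveWeight(v) / totalWeight;
    credit(v.creator, 0.5 * emissions * share); // creator half
    credit(v.voter, 0.5 * emissions * share);   // voter half
  }
  return rewards;
}
```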
Adjust Votes, Not the Lock
Voters can adjust their allocations every epoch, but their locked $REPPO remains illiquid for the chosen duration.
This balances long-term capital stability with short-term governance flexibility.
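A hypothetical sketch of that constraint: vote targets can change each epoch, but the lock's `amount` and `unlockTime` stay fixed until expiry. Types and field names are illustrative, not Reppo's actual on-chain structures:

```typescript
interface Lock {
  amount: number;     // REPPO locked; illiquid until unlockTime
  unlockTime: number; // fixed when the lock is created
  vePower: number;    // veREPPO derived from amount and duration
}

type Allocation = Map<string, number>; // creator -> fraction of vePower

// Re-point voting power for the next epoch. The lock itself is never
// modified here; only the vote targets change.
function reallocate(lock: Lock, next: Allocation, now: number): Allocation {
  if (now >= lock.unlockTime) {
    throw new Error("lock expired; withdraw instead of voting");
  }
  const total = [...next.values()].reduce((sum, f) => sum + f, 0);
  if (total > 1 + 1e-9) {
    throw new Error("allocations exceed available voting power");
  }
  return next;
}
```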
Key Properties
Alignment of incentives: Token holders are rewarded for surfacing the highest-quality content and creators.
Sticky economics: veREPPO creates persistent relationships between voters and creators, reducing short-term churn.
Anti-farming mechanisms: Locked stake and epoch-based participation make short-term reward farming harder to sustain.