Incentive Framework

veREPPO is the governance and incentive-alignment mechanism that lets researchers and AI/ML teams capture the strength of preferences in training data, backed by real economic stake. It is a first-of-its-kind extension of the traditional vote-escrowed tokenomics (veTokenomics) model, applied here to allocate capital efficiently toward generating AI training data.

How It Works

  1. Locking REPPO → veREPPO

    • Voters, who are also data annotators, lock $REPPO tokens for a chosen duration.

    • In return, they receive veREPPO, which represents voting power.

    • The amount of veREPPO scales with both the quantity of tokens locked and the lock duration; longer locks grant disproportionately higher voting power (a non-linear function of duration, see the sketch below).
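
The exact lock curve is not specified above, only that it is non-linear in duration. The sketch below assumes a quadratic duration multiplier as one plausible choice; MAX_LOCK_WEEKS and ve_reppo are illustrative names, not the protocol's actual parameters.

```python
# Illustrative sketch only: the curve is stated to be non-linear but is not
# specified, so a quadratic duration multiplier is assumed here.

MAX_LOCK_WEEKS = 104  # assumed maximum lock duration (about 2 years)

def ve_reppo(amount_locked: float, lock_weeks: int) -> float:
    """Voting power from locking amount_locked REPPO for lock_weeks.

    Linear in the amount locked, super-linear in the lock duration, so
    longer locks earn disproportionately more voting power per token.
    """
    if not 0 < lock_weeks <= MAX_LOCK_WEEKS:
        raise ValueError("lock duration out of range")
    duration_weight = (lock_weeks / MAX_LOCK_WEEKS) ** 2  # assumed curve
    return amount_locked * duration_weight

# Doubling the lock duration quadruples voting power per token:
assert ve_reppo(1_000, 104) == 4 * ve_reppo(1_000, 52)
```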

  2. Voting & Emissions

    • veREPPO holders vote each epoch to predict which collabs (AI content published by creators) will receive the most votes. A commit-reveal scheme keeps each vote private during the voting process (see the sketch after this list).

    • At the end of each epoch, net new $REPPO emissions are distributed, split 50-50 between the content creators who received votes and the voters who backed them.

    • This creates a dynamic prediction market for AI content while crowdsourcing AI training data.
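
A minimal sketch of how the commit-reveal flow and the 50-50 emission split could look. The hash format, the salt handling, and every function name here are assumptions for illustration, not the protocol's actual interface; how the voter half is divided among individual voters is not specified above, so it is left as a single pool.

```python
import hashlib
import secrets

def commit_vote(voter: str, collab_id: str, weight: int, salt: bytes) -> str:
    """Commit phase: publish only a hash, keeping the vote itself private."""
    payload = f"{voter}|{collab_id}|{weight}|{salt.hex()}".encode()
    return hashlib.sha256(payload).hexdigest()

def reveal_matches(commitment: str, voter: str, collab_id: str,
                   weight: int, salt: bytes) -> bool:
    """Reveal phase: a vote counts only if it matches the earlier commitment."""
    return commit_vote(voter, collab_id, weight, salt) == commitment

def split_emissions(epoch_emissions: float,
                    creator_votes: dict[str, float]) -> tuple[dict[str, float], float]:
    """Split net new emissions 50-50: half to creators pro rata by votes
    received, half to a pool for the voters who backed them."""
    creator_pool = voter_pool = epoch_emissions / 2
    total_votes = sum(creator_votes.values())
    creator_rewards = {c: creator_pool * v / total_votes
                       for c, v in creator_votes.items()}
    return creator_rewards, voter_pool

# A voter commits privately, reveals after voting closes, then emissions split:
salt = secrets.token_bytes(16)
c = commit_vote("alice", "collab-42", 100, salt)
assert reveal_matches(c, "alice", "collab-42", 100, salt)
rewards, voter_pool = split_emissions(10_000, {"collab-42": 300, "collab-7": 100})
# rewards == {"collab-42": 3750.0, "collab-7": 1250.0}; voter_pool == 5000.0
```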

  3. Epoch-Based Flexibility

    • Voters can adjust their voting allocations each epoch, but the locked REPPO remains illiquid for the chosen duration.

    • This balances long-term capital stability with short-term governance flexibility (sketched below).
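
To make the distinction concrete, here is a hypothetical lock record: the unlock epoch is fixed once the lock is created, while the vote allocation is a per-epoch field that can be freely rewritten. All names here are illustrative, not the protocol's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Lock:
    """Hypothetical lock record: unlock_epoch is fixed once the lock is
    created; allocations can be rewritten every epoch."""
    amount: float
    unlock_epoch: int  # immutable for the chosen lock duration
    allocations: dict[str, float] = field(default_factory=dict)  # collab -> weight

    def reallocate(self, new_allocations: dict[str, float]) -> None:
        # Voting power can move freely between collabs each epoch...
        self.allocations = dict(new_allocations)

    def withdraw(self, current_epoch: int) -> float:
        # ...but the underlying REPPO stays illiquid until the lock expires.
        if current_epoch < self.unlock_epoch:
            raise RuntimeError("REPPO is still locked")
        return self.amount
```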

Key Properties

  • Alignment of Incentives: Token holders earn rewards for surfacing the highest-quality content and publishers in the system.

  • Sticky Economics: veREPPO creates persistent relationships between voters and creators, reducing short-term churn.

  • Anti-Farming Mechanisms: The commit-reveal scheme adds a delay to how and when emissions are distributed, making low-effort reward farming harder while still delivering real economic value to creators and voters.
