Sovereign AGI: Millions of Experts

As AI systems become more advanced, there is a growing risk that they become misaligned with human values and goals, leading to unintended consequences. Mitigating this risk requires new approaches to AI development that prioritize human values and control.

Co-ownership and co-monetization are central to keeping human values and control at the core of that development.

Our ambition is to support the training of foundation models, such as those being built by the NEAR Foundation and Pond GNN, through a collection of Reppo models and agents that prioritize human values and control. These models and agents are built by Reppo Pods: each is trained on a niche dataset, runs on decentralized infrastructure, and maps to individual and/or group ownership and monetization schemes.