HumanChain
HumanChain is building the infrastructure for AI safety: the guardrail against AI failure. Failure arises not from the technology intrinsically, but from the context in which it operates, the governance structures it is subject to, and the networks of power and uses to which it is put.
The AI era has begun. We are living in a time of immense opportunity and peril. To make the most of it, we need to create a safe, trustworthy, and human-centric digital world. Too few of us are focused on building a future that humanity can enjoy, without stifling innovation.
The AI seismic wave is unlike previous technological revolutions, and pessimism aversion will not suffice, for the following reasons: it is inherently general and therefore omni-use, it can hyper-evolve, it has asymmetric impacts, and it is increasingly autonomous.
We are techno-optimists who believe that humanity deserves the abundance and significant societal improvements that AI can deliver. But today, AI development is driven by powerful incentives: geopolitical competition, massive financial rewards, and a research culture that is slowly closing off. State and non-state actors will race ahead to build these systems, taking risks that affect everyone, whether we like it or not.
The time is nigh. Join us.
We are assembling a team of the world's best engineers and researchers dedicated to safe AI diffusion.
Connect with us if you want to read our whitepaper or to contribute as a team member, advisor, partner, or investor.
Read our Concept Note
Email: team@humanchain.network