Consensus 2024 provided bountiful opportunities for learning about the burgeoning web3 x AI space. The Autonomys team attended numerous insightful talks, panels and side-events on this theme, cultivating our network of potential DePIN and deAI ecosystem partners in the process. Read highlights from some of our favorite talks on ethical decentralized AI (deAI), decentralized physical infrastructure networks (DePIN), and self-sovereign identities (SSIs) below.
Hernandez spoke about the importance of decentralized digital identities, or self-sovereign identities (SSIs), and of having exclusive control over your personal information.
A privacy-centric world that lacks proper identity controls is just as dangerous as a highly transparent world protected from identity fraud. SSIs balance decentralization, privacy and security, and are thus the next-generation architecture for identity management on the Internet.
Verifiable credentials have built-in proof of authenticity and ownership, massively reducing operational costs. Instead of having multiple siloed identities for individual services, we will have a single identity across all our digital services, with the ability to grant and revoke access to specific personal information easily.
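As a rough sketch of how a single identity with selective disclosure might work, the toy Python below has an issuer sign a credential's claims so a verifier can check authenticity, while the holder reveals only the fields a service needs. The key, claims and function names are all invented for illustration; real SSI systems follow the W3C Verifiable Credentials model with public-key signatures rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-demo-secret"  # hypothetical key; real issuers use public-key signatures

def issue_credential(claims: dict) -> dict:
    """Issuer signs the full claim set so any verifier can check authenticity."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """Recompute the signature over the claims and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

def present(credential: dict, fields: list) -> dict:
    """Holder reveals only the claims a given service actually needs.
    (Real selective disclosure uses per-claim signatures or ZK proofs so the
    partial presentation itself stays verifiable.)"""
    return {f: credential["claims"][f] for f in fields}

vc = issue_credential({"name": "Alice", "birth_year": 1990, "country": "PT"})
assert verify_credential(vc)
print(present(vc, ["country"]))  # the service learns the country, nothing else
```

Granting a service access is a `present` call; revoking it is simply never presenting that field again, with the issuer's signature carrying the proof of authenticity.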
SSIs are coming very soon. “It is time that we shift our collective indifference towards privacy to a collective effort in building an improved version of the Internet where privacy is not just a nice idea, but the cornerstone that sustains it all.”
Representatives from Gensyn, Akash Network, FLock, and Foundry spoke about the state of deAI and DePIN and the future of machine learning (ML).
Jiahao Sun highlighted the importance of GPU drivers and data privacy in deAI agent development. Decentralized compute DePINs need drivers to ensure distributed GPUs can collaborate on model training effectively and efficiently; NVIDIA's CUDA software currently dominates this layer. And if you want to train an effective personal assistant AI agent, you must provide it with all your personal data, which you don't want to hand over to Big Tech companies. Sun suggested a deAI Agents Summer is thus on the horizon.
Dr. Ben Fielding offered two solutions to the latency and bandwidth issues in decentralized ML training.
Fielding also prophesied richer, bespoke interactions based on probabilistic AI replacing traditional app and website interfaces. However, he conceded that far more decentralized training and fine-tuning infrastructure is needed before that future can exist. He foresees the future of ML as networks of mini models distributed across devices that compose into larger models when needed; inference queries become paths through these mini models, similar to Internet routing protocols. An open-source compiler stack for ML training would connect high-level AI frameworks with the growing variety of devices as more chips are designed.
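The "inference as a routed path through mini models" idea can be illustrated with a toy sketch. The registry, model names and rule-based "models" below are all invented stand-ins; real mini models would be learned networks, and routing would happen across devices rather than a local dictionary.

```python
# Hypothetical registry of mini models; each handles one narrow capability.

def sentiment_model(text: str) -> dict:
    """Toy stand-in for a small sentiment model."""
    return {"sentiment": "positive" if "good" in text.lower() else "negative"}

def summary_model(text: str) -> dict:
    """Toy stand-in for a small summarization model."""
    return {"summary": text.split(".")[0]}

REGISTRY = {"sentiment": sentiment_model, "summary": summary_model}

def route_query(text: str, path: list) -> dict:
    """Treat inference as a routed path: each hop is one mini model,
    and the hops' outputs compose into the final answer."""
    result = {}
    for hop in path:
        result.update(REGISTRY[hop](text))
    return result

answer = route_query("The launch went good. Details tomorrow.", ["summary", "sentiment"])
print(answer)
```

The analogy to Internet routing is that the path itself, not any single node, determines the answer, so capacity can be added one small model at a time.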
There is much ongoing research into techniques for data privacy in ML training and inference, including federated learning, homomorphic and functional encryption, and differential privacy. The number one problem Gensyn is trying to solve is verification of remote ML tasks, as ZK-proofs are too expensive for verifying this computation. General auditing (rerunning portions of the task) is currently the most viable verification mechanism, but because different devices produce different results, it requires reproducible, deterministic execution backed by cryptographic, game-theoretic or probabilistic proofs.
According to Greg Osuri, the GPU supply chain is extremely broken today. It's impossible to get enough high-density GPUs to do any meaningful training or inference at scale, as demand from the Big Tech quintet (Alphabet, Amazon, Apple, Meta, Microsoft) currently strangles the supply of NVIDIA GPUs. The complexity of the global supply chain has meant supply has failed to keep pace with ever-increasing demand, and creating chip fabricators requires lots of time and investment. However, there are billions being invested globally into compute for AI and more specialized GPUs for ML. Although AMD and Intel GPU driver software is unstable right now, NVIDIA's dominance will wane within 3–5 years, Osuri suggests.
There are now lots of chips available, as ML projects have upgraded to different chipsets and crypto companies such as Foundry have moved away from PoW mining as it became less profitable. Many now supply Akash, making it the only way of accessing H100s on-demand. Akash powers numerous apps and dApps, including Brave, FLock and Venice.ai, and is able to reduce costs by 50–90% by tapping into this underutilized compute capacity and decoupling the resource from its control. As Foundry's Tommy Eastman points out, "This represents a unique opportunity to onboard people into crypto as there's no other options for them to access these GPUs. If we make the experience good, they will stay."
Osuri believes the narrative is changing around crypto and AI, citing this October 2023 article in Semafor. As he asserts, “Open-source, decentralized systems are significantly better at guaranteeing data privacy than closed, centralized ones because of their transparency.”
David Johnston (Morpheus) and Erik Voorhees (Venice.ai) spoke about smart agents and the goals of building open AI.
A reaction to regulatory capture, Morpheus has no founder, company or foundation: just a whitepaper written by the pseudonymous co-authors Morpheus, Trinity and Neo. However, it now also has 255 contributing members and $500 million in ETH staked toward the project, with the yield bootstrapping it into existence. Johnston claims AI experts are already finding it genuinely useful for cheap compute.
In Johnston’s words, “Ethereum pioneered smart contracts. Morpheus is pioneering smart agents.” Built using open-source LLMs (Llama 3, Mistral), which caught up with closed LLMs (ChatGPT) last year, Morpheus agents can run on personal hardware, lowering the cost of access to AI for everyone.
Agents need web3 for economic action. The future is wallet-connected personal agents that you own and control, Johnston claims. AI-powered interfaces will be web3's equivalent of the mid-90s Internet search engine moment, improving UX, matching intent with the best results, and helping onboard billions without technical knowledge, truly bringing web3 to the mass market. For example, you tell MetaMask to send 1 ETH to Bob; an agent finds the most efficient path, then lets you approve the transaction with a single tap.
Voorhees’s example is more complex, giving the user the ability to interact with a financial intelligence: You tell your agent to put your USDC (on MetaMask) into the highest yielding stablecoin. The agent responds that it’s XYZ with a rate of 32.3%. However, it has a market cap of $50 million. The second-highest is ABC with a market cap of $10 billion. Do you want to buy it? Tap yes and it buys it.
“You don’t want all that stuff going through a centralized company. That should be obvious to anyone in crypto,” Voorhees suggests. Crypto’s permissionless composability will transform AI. “If you can incentivize a decentralized compute function, everything else AI can organically build on top.” deAI will grow like DeFi did despite established players, because it is open, permissionless and composable. We are building systems of interaction and exchange which are easier and more frictionless, which will draw developers and users.
Venice pulls inference data from open-source models run on decentralized GPUs via encrypted proxy servers. Data is otherwise stored locally. Why use Venice? Everything you’ve ever sent to ChatGPT goes to a centralized company (and any other entity, government or hacker they give, sell or leak your unencrypted data to) tied to your identity forever. If you don’t want to be spied on and don’t want censored answers, use Venice.
Dragonfly Capital’s Haseeb Qureshi spoke about decentralized inference and the state of the deAI industry: “Crypto is a technology that creates trust. AI is a technology that needs trust.” web3 x AI is big, but right now it’s early and hard. The next generation of AI protocols needs to be cheap, easy and fast for mass adoption, but needs to solve core scalability issues first.
The current deAI stack:
- Decentralized GPU compute networks, including Render, Akash and io.net, are the largest category in web3 x AI.
- Decentralized training protocols include Bittensor (the most famous) and Gensyn (still in the research stage).
- Decentralized inference (running the model and verifying it returns the right output) is being worked on by Ritual (the most famous) and Modulus (doing zero-knowledge inference). AI inference is so computationally intensive that it is virtually impossible to do directly on a decentralized blockchain.
- Lastly, there are AI-powered web3 apps, including Kaito (market intelligence), MyShell (character chatbots) and Alethea (interactive AI companions).
There are several approaches to verifying LLM computation on-chain. These are not new ideas; they map almost perfectly onto how we think about blockchain security.
ML is ultimately a type of computation, but it differs in important ways from the computation blockchains normally handle. A key challenge is the fact that ML was not designed to be proven: zk-proofs break down when runs differ by even minute amounts, as happens when the same operations execute on different hardware or in a different order.
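One concrete source of such divergence is that floating-point addition is not associative: the same mathematical sum, evaluated in a different order, can produce a different bit pattern, so any verification scheme that demands exact equality (hash comparison, or a zk circuit over exact values) will reject an honest run. A minimal demonstration:

```python
import hashlib

# Floating-point addition is not associative:
a = (1.0 + 1e16) - 1e16   # the 1.0 is lost to rounding before the subtraction
b = 1.0 + (1e16 - 1e16)   # the large terms cancel first, so the 1.0 survives
print(a, b)               # 0.0 1.0

# Identical mathematics, different values, so naive hash comparison fails:
hash_a = hashlib.sha256(repr(a).encode()).hexdigest()
hash_b = hashlib.sha256(repr(b).encode()).hexdigest()
print(hash_a == hash_b)   # False
```

GPU kernels reorder reductions freely for speed, so two honest devices routinely disagree at the last bits, which is exactly why deAI verification needs either enforced determinism or tolerance-aware proofs.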
There is a fourth way of verifying deAI computation: using Trusted Execution Environments (TEEs), special hardware-secured enclaves for storing private information. The newest generation of NVIDIA chips includes TEEs that can guarantee the privacy and verifiability of computation far faster than any other verification method. However, you have to trust NVIDIA (which everyone working in AI ultimately has to do anyway). Qureshi says the market will decide how to answer this question.
At present, few decentralized cloud projects have a good privacy solution for sensitive data, but this also applies to managed cloud providers. Data is the single hardest thing for deAI players, Qureshi claims. Although data marketplaces can aggregate and auction data, licensing agreement enforcement on data is very difficult. Vana is building Reddit Data DAOs to allow users to sell their collective user-generated data for AI training (Reddit data is very valuable for this), and undercut Reddit in the process (Reddit licenses their site data for $50m a year — none going to users).
Teana Baker-Taylor (Venice.ai), Ben Goertzel (SingularityNET) and Arif Khan (Alethea) spoke about the opportunities and challenges of deAI.
Crypto’s ‘killer app’ could be AI smart agents, Baker-Taylor suggested. Tokenization is very useful in a decentralized, digital-native ecosystem as it allows for efficient incentivization and automatic financial transactions between agents via smart contracts.
She offered this as an example: I type in natural language my instructions for my agent (permissioned by me) into a generative interface. That agent works with other agents to complete tasks and automatically settles with them via a tokenized currency and smart contracts.
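A minimal sketch of the settlement mechanics in that example, with the ledger class standing in for an on-chain token contract and the agent names invented for illustration:

```python
class Ledger:
    """Toy token ledger standing in for an on-chain smart contract."""

    def __init__(self, balances: dict):
        self.balances = dict(balances)

    def transfer(self, sender: str, recipient: str, amount: int):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

def delegate_task(ledger: Ledger, principal: str, worker: str, task: str, fee: int) -> str:
    """Principal agent pays a worker agent automatically on task completion."""
    result = f"done:{task}"            # stand-in for the worker agent's output
    ledger.transfer(principal, worker, fee)
    return result

ledger = Ledger({"my_agent": 100, "translator_agent": 0})
delegate_task(ledger, "my_agent", "translator_agent", "translate doc", 10)
print(ledger.balances)  # {'my_agent': 90, 'translator_agent': 10}
```

The tokenized currency is what lets settlement happen machine-to-machine, with no invoicing or human in the payment loop, which is the efficiency Baker-Taylor is pointing at.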
However, Taylor warned, “This is uncharted territory. Unlike cryptocurrencies, we don’t have an existing relationship with AI in the same way we did with money. The adoption cycle for these technologies could be faster, meaning we have less time to figure out what good looks like. We really need to think about our privacy in a different way” because the more we give to AI, the more open we leave ourselves to manipulation. We need to translate crypto’s notion of financial sovereignty into an AI context.
Dr. Ben Goertzel (who popularized the term AGI) said he tried doing decentralized AI in the 2000s and it wasn’t possible. Ethereum’s revolutionary smart contract ecosystem inspired and enabled SingularityNET’s founding in 2017 with the goal of building infrastructure to run various AI systems across large numbers of decentralized machines.
deAI tools still work a lot worse than Big Tech AI tools, Goertzel claimed. We need to fix that because "the amount of people who will use a decentralized AI because it's decentralized is not enough". We need to build systems that are smarter, more powerful and easier to use; near-parity between open and closed-source LLMs isn't enough. On its side, the deAI community has tremendous size and diversity to draw on. Goertzel is working on OpenCog Hyperon, an open-source AGI architecture project that brings together neural, symbolic and evolutionary AI in an integrated cognitive architecture built on decentralized infrastructure.
“Although we’re not there yet, when you take seriously our aspiration to make machines that can think as creatively and originally as people, and not be constrained by their programming or data, it becomes clear that who owns and controls that system is a big deal. Ultimately, the answer is going to be they will own and control themselves,” Goertzel said.
Alethea allows people to tokenize their generative AI models, characters, agents, datasets, etc. via The AI Protocol. Its key application is ALIagents, which allows you to create emotive, human-like characters that can be interacted with in real time. Khan posits that the main challenge in building deAI at the moment is the coordination of capital, compute and intelligence. He suggests this is partly because it's incredibly difficult to step away from existing structures with strong network effects, whether capital-based (VCs), compute-based (cloud computing) or intelligence-based (post-ChatGPT).
To create deAI agents, Khan claims, you need to solve the 3 Cs.
Blockchain tech is the perfect solution, particularly when you integrate primitives like EigenLayer, which uses Ethereum’s security and yield to provide opportunities for developers to build applications.
Thanks for reading this summary of the best talks and panels on decentralized AI from Consensus 2024. We hope you found this crash-course on one of the most exciting emerging technological trends informative and insightful. We’re looking forward to further building on our deAI knowledge and developing our partnership network at future industry events, including EthCC[7] (and its many side-events) in Brussels on 8–11 July!
Bonus Consensus 2024 Autonomys Team Pics 👀