a16z Backs OpenGradient in $9.5M Verifiable AI Round
OpenGradient, a startup building infrastructure for auditable AI model execution, has raised $9.5 million led by a16z crypto. The company, co-founded and led by CEO Matthew Wang, positions itself as a decentralized network where AI models can be hosted, run, and cryptographically verified at scale. Coinbase Ventures, SV Angel, Foresight Ventures, Pragma, SALT, Symbolic Capital, Canonical Crypto, Black Dragon, NEAR, Celestia, and Thanefield Capital also participated. No valuation was disclosed.
AI systems are no longer just answering questions. They are executing trades, managing assets, and issuing decisions with minimal human oversight. Reported AI-related incidents rose 21% from 2024 to 2025, according to the AI Incident Database, and a recent BCG-MIT survey found that only 10% of companies currently allow AI agents to make decisions autonomously — a figure expected to climb to 35% within three years. What makes this acceleration risky is not the autonomy itself. It is the fact that most AI infrastructure gives developers no way to verify what a model actually did: which version ran, what data it received, or what it returned. JustAINews has a broader look at how this accountability gap is playing out across industries.
Why AI infrastructure has an accountability problem
Most AI applications today run through a small number of large cloud providers. The companies and developers building on top of those providers have no reliable way to audit what is actually running underneath. They cannot confirm which model version executed a request, what input it received, or whether the output reflects what the model genuinely produced. BCG has flagged this as a growing operational risk: autonomous AI systems can initiate actions with limited supervision, and when the infrastructure is opaque, tracing failures becomes difficult or impossible.
The stakes are higher in regulated sectors. In finance, healthcare, legal services, and government, AI-driven recommendations that affect real decisions require documented oversight to guard against errors and regulatory violations. A model that cannot be inspected creates accountability problems that no governance policy can fully fix. That pressure is pushing demand toward infrastructure that makes AI execution verifiable by design.
Investor interest is following. The AI and crypto sector, once associated primarily with speculation, has moved into what analysts are calling a phase of structural maturity. Projects offering verifiable compute and live utility are increasingly drawing institutional capital, and verifiable AI infrastructure is emerging as one of the more consequential categories within that shift.
How OpenGradient plans to spend the funding
OpenGradient will use the capital to scale its network ahead of a broader ecosystem launch. The company reports more than 2 million users, 2 million verifiable inferences processed, 500,000 cryptographic proofs generated, and over 2,000 models from more than 100 developers listed on its Model Hub. Six revenue streams are active across the platform.
The money will go toward expanding compute capacity, building out the developer community around its SDKs and APIs, and bringing in more models and enterprise clients. Wang described the company’s direction plainly:
“The AI stack is consolidating around a handful of closed providers, and the applications being built on top have no way to audit what’s running underneath. We’re building the open alternative — infrastructure where models are inspectable, execution is provable, and developers own the intelligence their products depend on. This funding lets us scale that vision.”
Matthew Wang, Co-founder and CEO of OpenGradient
What OpenGradient actually builds
OpenGradient functions as a specialized AI coprocessor. Rather than operating as a standalone blockchain, it works alongside existing systems, letting applications, smart contracts, and autonomous agents hand off computationally intensive AI tasks to a dedicated network of GPU and Trusted Execution Environment nodes. A Trusted Execution Environment, or TEE, is a secure hardware enclave that executes code in isolation and produces attestations outside parties can check, without exposing the private data involved.
The platform has three parts. The Verifiable Inference Network attaches a cryptographic proof to every AI inference, so any downstream system can confirm exactly what model ran and what it returned. The Decentralized Model Hub is an on-chain repository where creators can publish, monetize, and combine models without going through an intermediary. The third piece is developer tooling: SDKs and APIs that let engineers tap into verifiable inference without needing to understand the cryptographic systems underneath.
The practical application is straightforward. A company running Sybil detection or content generation on OpenGradient can share cryptographic proof of the results with its clients, letting them independently verify the output without seeing the underlying model or raw data.
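To make that flow concrete, here is a minimal sketch of the idea under stated assumptions: the function names and the HMAC-based "proof" below are illustrative stand-ins, not OpenGradient's actual SDK or attestation scheme, which is not detailed here. The point is only to show how a proof can commit to the exact model version, a hash of the input, and the output, so a downstream party can check the record without ever seeing the raw data.

```python
# Illustrative sketch only: these names and the HMAC-based "proof" are
# hypothetical stand-ins, not OpenGradient's published API or proof format.
import hashlib
import hmac
import json

NODE_KEY = b"node-attestation-key"  # placeholder for hardware-backed key material


def run_verifiable_inference(model_id: str, raw_input: str) -> dict:
    """Simulate a node running a model and attaching a proof that commits to
    the model version, a hash of the input, and the output it produced."""
    output = {"sybil_score": 0.93}  # stand-in for the model's actual result
    commitment = json.dumps(
        {
            "model": model_id,
            "input_hash": hashlib.sha256(raw_input.encode()).hexdigest(),
            "output": output,
        },
        sort_keys=True,
    )
    proof = hmac.new(NODE_KEY, commitment.encode(), hashlib.sha256).hexdigest()
    return {"commitment": commitment, "proof": proof}


def verify_record(record: dict) -> bool:
    """Recompute the proof over the published commitment, confirming which
    model ran and what it returned, without the raw input or model weights."""
    expected = hmac.new(NODE_KEY, record["commitment"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["proof"])


record = run_verifiable_inference("sybil-detector-v2", "wallet activity log ...")
assert verify_record(record)
```

In a production network the proof would be something a client can check with public information alone, such as a signature from an attested TEE or a zero-knowledge proof; the symmetric key here just keeps the sketch self-contained.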
The investors backing OpenGradient
The round was led by a16z crypto, the crypto-focused arm of Andreessen Horowitz. Coinbase Ventures and SV Angel co-invested, joined by Foresight Ventures, Pragma, SALT, Symbolic Capital, Canonical Crypto, Black Dragon, NEAR, Celestia, and Thanefield Capital.
The angel list reflects deep roots in the crypto and decentralized infrastructure space. Balaji Srinivasan, former CTO of Coinbase, took part alongside Illia Polosukhin, co-founder of NEAR Protocol, and Sandeep Nailwal, co-founder of Polygon. Bruno Faviero of Magna, Daniel Cheung and Ryan Watkins of Syncracy Capital, and Ekram Ahmed of Celestia also invested.
Original Creator: Ekaterina Pisareva
Original Link: https://justainews.com/companies/funding-news/a16z-backs-opengradient-in-9-5m-verifiable-ai-round/
Originally Posted: Wed, 15 Apr 2026 14:35:37 +0000