OpenLedger: Scaling Attributable, Payable Intelligence

TL;DR
- The future of AI isn’t “one model to rule them all” but thousands of domain-specific AIs.
- AI thrives on high-quality, domain-specific data, but experts who produce this data rarely get paid.
- OpenLedger is bringing AI into its “YouTube moment”—making specialized data monetizable for contributors.
- Payable AI is where AI model users compensate data providers whenever their contributions are used. OpenLedger builds on Hugging Face’s open-access philosophy but adds financial incentives and accountability.
- DataNets are decentralized repositories where contributors submit specialized, high-quality datasets for use by models. They are built on an EVM-compatible L2.
- Proof of Attribution (PoA) is OpenLedger’s secret sauce. It tracks and verifies how specific data influences AI outputs.
- Attribution in AI is notoriously difficult. OpenLedger is doing the heavy lifting to turn frontier research into something practical, with an approach that works for specialized models.
Let’s say every doctor, lawyer, or engineer followed the exact same training manual. Your heart surgeon, instead of perfecting bypass techniques, would be deep in rocket science equations.
A fraud investigator? Stuck in hours of art history lectures.
That’s what happens when we try to force a single general AI model to solve every problem.
If you ask me, the future won’t be about one AI to rule them (us?) all.
It’s about thousands of specialized models, each fine-tuned to its domain—biotech, cybersecurity, logistics. We don’t expect one human to be the best doctor, lawyer, and engineer. Why should we expect the same for AI?
Source: Epoch AI
Frontier labs have been chasing scale: bigger models, bigger datasets, more general intelligence. The theory was simple—train on enough data, and an AI can do everything. And to some extent, it worked. GPT-4, Claude, and Gemini can write code and summarize papers. But here’s the reality check: size alone doesn’t make an expert.
Current scaling laws suggest that more pre-training compute improves model performance, but diminishing returns are emerging on domain-specific tasks. There is growing demand for specialized systems optimized for speed and accuracy in narrow domains.
Businesses don’t want AI that thinks slowly but deeply—they want AI that’s fast and accurate, tailored to their needs.
OpenLedger is betting big on this specialized future.
OpenLedger → Specialised AI Models
But here’s the challenge: specialized AI requires specialized data.
A legal AI isn’t helpful if it’s trained on Wikipedia. A biotech AI won’t revolutionize medicine if its knowledge is just scraped from PubMed. The best AI needs high-quality, domain-specific data, sourced from real experts. And yet, the very people producing this knowledge—scientists, analysts, engineers, lawyers—are rarely compensated for the role their insights play in AI development.
That’s the broken system OpenLedger wants to fix.
Imagine a world where every time your research paper or financial model is used to train or fine-tune a specialised AI model, you get paid.
Not once, in a lump-sum sale of your data, but continuously, every time that data is accessed or used to generate an AI response.
This is OpenLedger’s vision: an economy where data contributors are directly rewarded, with full transparency into how their knowledge is being applied.
The process is simple:
- Anyone can contribute data.
- AI models use that data to enhance their intelligence.
- Contributors earn micropayments every time their data is used.
It’s a fundamental restructuring of how AI is built and sustained. OpenLedger is not here to compete with AI giants like OpenAI or infrastructure players like AWS.
Instead, it offers a decentralized, permissionless framework where data, compute, and AI models interact seamlessly—with transparency and financial incentives built in.
“Payable AI”
OpenLedger does this via Payable AI.
errr…. what’s that?
Before YouTube, creating video content was a dead end for most people. Unless you had Hollywood backing, there was no way to monetize.
That changed in 2005. YouTube gave anyone with a camera and an internet connection a global audience. By 2006, it was streaming 100 million views per day, proving user-generated content had massive demand.
Then came the real game-changer: the YouTube Partner Program (2007). Creators could now earn directly from ad revenue—and the results were staggering:
- $31.5B in ad revenue (2023).
- Over $70B paid out to creators (2021–2023).
- MrBeast earned $82M in 2023 alone.
YouTube built a creator economy where individuals, not just media giants, could profit.
Now, OpenLedger is doing the same for AI data — think of it as data’s YouTube moment.
From a business perspective, the YouTube comparison is especially apt:
- YouTube: Creators upload videos for free; YouTube runs ads, splits revenue with creators.
- OpenLedger: Contributors upload specialized data for free. AI apps pay usage fees for inferences, and the chain splits the revenue with data contributors, model creators, and stakers.
Right now, if you own a valuable dataset—say specialized legal documents, a high-quality code library, or IoT sensor data—your options are limited: sell it once or keep it private. OpenLedger introduces a third path: earn continuously as AI models use it.
This creates a flywheel effect:
- More valuable data enters the system.
- AI models become more specialized and useful.
- Higher demand increases payouts.
- More contributors join, further strengthening the ecosystem.
The scale of the opportunity is huge. PwC estimates that AI could contribute up to $15.7 trillion to the global economy by 2030, growing at a blistering 28% CAGR. And the biggest bottleneck to better AI models and apps? Obtaining high-quality, labeled data. Companies like OpenAI spend billions acquiring good datasets because they know data quality defines model performance.
Cracking the Data Attribution Problem
To make Payable AI work, OpenLedger has to solve the data attribution problem.
That’s figuring out which data points actually shaped an AI’s response. Some data is crucial for improving a model, while other data barely moves the needle. But can we actually track this influence?
Truth be told, I’ve been skeptical.
Attribution in AI—especially for large language models (LLMs)—is one of the thorniest problems in machine learning. These models process trillions of tokens across billions of parameters, making it nearly impossible to pinpoint which specific data influenced an output.
For years, researchers have been trying to measure data influence. The simplest method? Remove a data point, retrain the model, and compare the outputs.
The problem: you’d have to repeat that retraining for every data point you want to evaluate. For a model like GPT-4, that means burning 10–100x the original training compute—easily $10B+. A total non-starter.
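To make the cost concrete, here’s a toy leave-one-out loop on a tiny scikit-learn classifier. This is purely illustrative (it has nothing to do with OpenLedger’s stack or GPT-scale training), but it shows where the expense comes from: one full retrain per data point.

```python
# Toy leave-one-out influence: retrain a small model with one training point
# removed and see how much a test prediction shifts. One full retrain per
# data point is exactly what makes this a non-starter at GPT-4 scale.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, y_train, x_test = X[:150], y[:150], X[150:151]

def p_positive(model):
    return model.predict_proba(x_test)[0, 1]

baseline = p_positive(LogisticRegression(max_iter=1000).fit(X_train, y_train))

influence = []
for i in range(len(X_train)):                     # one retrain per data point
    X_loo = np.delete(X_train, i, axis=0)
    y_loo = np.delete(y_train, i)
    model_loo = LogisticRegression(max_iter=1000).fit(X_loo, y_loo)
    influence.append(baseline - p_positive(model_loo))

top = int(np.argmax(np.abs(influence)))
print(f"Most influential training point: #{top} (shifted p by {influence[top]:+.4f})")
```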

Instead, researchers have experimented with more efficient estimation methods:
Influence Functions use mathematical approximations to estimate how removing a single data point would affect model predictions—without retraining. They’ve been improving, but scaling them to massive models remains difficult.
More recently, LoGra introduced a gradient projection strategy that reduces computation by 6,500× and cuts GPU memory use by 5×—making influence tracking feasible on mid-sized models like LLaMA-3 8B. But at the 100B+ parameter scale, it’s still largely impractical.
Another method, Shapley values, borrowed from game theory, estimates a data point’s marginal contribution to model performance—but it requires training models on many subsets of the data, making it insanely expensive.
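For intuition on why the cost blows up: each Shapley value averages a data point’s marginal contribution over many random orderings, and every step of every ordering is another training run. A rough Monte-Carlo sketch on toy data (again, illustrative only, not OpenLedger’s method):

```python
# Toy truncated Monte-Carlo estimate of Shapley-style data values. Every
# marginal contribution below is another model fit, which is why exact
# Shapley values are hopeless for web-scale models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=120, n_features=8, random_state=1)
X_tr, y_tr, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

def val_accuracy(indices):
    if len(set(y_tr[indices])) < 2:               # need both classes to fit
        return 0.5
    model = LogisticRegression(max_iter=1000).fit(X_tr[indices], y_tr[indices])
    return model.score(X_val, y_val)

rng = np.random.default_rng(0)
n_points, n_permutations = len(X_tr), 25
shapley = np.zeros(n_points)

for _ in range(n_permutations):
    order = rng.permutation(n_points)
    prev = 0.5                                    # chance-level baseline
    for k in range(n_points):                     # add points one at a time
        curr = val_accuracy(order[: k + 1])
        shapley[order[k]] += (curr - prev) / n_permutations
        prev = curr

print("Most valuable training points:", np.argsort(shapley)[-5:][::-1])
```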
For huge, web-scale AI models trained on unstructured, heterogeneous data, tracking influence is a pipe dream.
But for specialized AI models trained on curated, domain-specific datasets, attribution is suddenly much more feasible. And the AI landscape is shifting toward LoRA-tuned and domain-specific models.
- A medical LLM trained on curated medical texts or a self-driving vision model trained on driving videos has thousands or millions of examples, not billions. That alone makes brute-force attribution techniques more viable.
- Narrower scope = clearer attribution. A medical Q&A model might directly rely on a textbook paragraph for an answer. With a generalist model, that same answer might be a mishmash of sources. In specialized models, the signal-to-noise ratio for attribution is much better.
A key paper that inspired OpenLedger’s protocol introduced a way to estimate data influence far more efficiently than traditional methods. The breakthrough? It’s optimized for LoRA-tuned models—AI adapters that fine-tune using small, targeted updates instead of retraining the entire model.
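The general trick behind this line of work is well understood: when only a small LoRA adapter is trained, you can approximate influence with gradient similarity over just those adapter weights instead of the full network. Here’s a toy sketch of that idea (made-up tensors and a simplified adapter, not OpenLedger’s implementation):

```python
# Sketch of gradient-similarity influence restricted to LoRA adapter weights:
# influence(train_i, test) ~ grad_adapter(loss_i) . grad_adapter(loss_test).
# The adapter has only a few thousand parameters, which is what keeps this
# tractable. Toy model and random data (not OpenLedger's implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)
d, rank, n_train = 64, 4, 32

base = nn.Linear(d, d, bias=False)                 # stand-in "frozen" base weight
for p in base.parameters():
    p.requires_grad_(False)
# Real LoRA zero-inits B; small random values here so the toy isn't degenerate.
lora_A = nn.Parameter(0.01 * torch.randn(rank, d))
lora_B = nn.Parameter(0.01 * torch.randn(d, rank))
head = nn.Linear(d, 1)
adapter_params = [lora_A, lora_B]                  # influence scored over these only

def forward(x):
    h = base(x) + x @ lora_A.t() @ lora_B.t()      # frozen path + low-rank update
    return head(h).squeeze(-1)

def adapter_grad(x, y):
    loss = nn.functional.mse_loss(forward(x), y)
    grads = torch.autograd.grad(loss, adapter_params)
    return torch.cat([g.reshape(-1) for g in grads])

X_train, y_train = torch.randn(n_train, d), torch.randn(n_train)
x_test, y_test = torch.randn(1, d), torch.randn(1)

g_test = adapter_grad(x_test, y_test)
scores = torch.stack([adapter_grad(X_train[i:i+1], y_train[i:i+1]) @ g_test
                      for i in range(n_train)])
print("Most influential training example:", int(scores.abs().argmax()))
```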
None of this makes attribution easy—even in specialized models, tracking influence is effort-intensive. But for the first time, it’s actually possible.
And once you can track influence, you can pay contributors fairly. That’s what makes Payable AI real.
Proof-of-Attribution
OpenLedger’s secret sauce is something they call Proof of Attribution (PoA). In essence, PoA does four key things:
- Tracks how data is contributed and used in AI models.
- Validates the utility of the data via on-chain verification.
- Rewards data contributors proportionally whenever a model that uses their data produces an inference.
- Enables transparent, verifiable compensation and explanation for how decisions are reached.
Every time an AI model is queried, the network retraces which data points contributed to the response and logs the attribution on-chain. This “influence mapping” might not be 100% exact, but it’s good enough to weigh how crucial each dataset was and ensure fair distribution of rewards.
This allows OpenLedger to distribute micropayments in real time to three parties (a simple sketch of the split follows this list):
- The model creator – For developing and fine-tuning the specialized AI.
- The data contributor – For providing valuable training data.
- The liquidity provider – For staking tokens to support the model’s operation.
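Here’s what that back-of-the-envelope split could look like in code. The percentages and influence weights are made up for illustration; OpenLedger hasn’t published its exact reward formula.

```python
# Back-of-the-envelope sketch of a proportional, attribution-weighted payout.
# Influence weights and split percentages are hypothetical placeholders.
from collections import defaultdict

def split_inference_fee(fee, influence, splits):
    """fee: amount paid for one inference.
    influence: {contributor: influence_weight} from attribution (e.g. PoA).
    splits: fraction of the fee going to each role."""
    payouts = defaultdict(float)
    payouts["model_creator"] += fee * splits["model_creator"]
    payouts["stakers"] += fee * splits["stakers"]

    data_pool = fee * splits["data_contributors"]
    total_influence = sum(influence.values()) or 1.0
    for contributor, weight in influence.items():
        payouts[contributor] += data_pool * weight / total_influence
    return dict(payouts)

fee = 0.02  # e.g. $0.02 per inference (hypothetical)
influence = {"alice_dataset": 0.6, "bob_dataset": 0.3, "carol_dataset": 0.1}
splits = {"model_creator": 0.40, "data_contributors": 0.45, "stakers": 0.15}
print(split_inference_fee(fee, influence, splits))
```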
This is the heart of the “Payable AI” concept. Instead of selling your dataset once, you receive recurring payouts for as long as your data is used by live AI models.
Hugging Face vs. OpenLedger: The Missing Incentive Layer
Hugging Face makes for an interesting comparison. It’s valued at $4.5B based on its last funding round, with revenue of $70M in 2023.
Hugging Face revolutionized open-source AI by making it easier for researchers to share and fine-tune models, but it never solved the problem of attribution and compensation. AI contributors still give away their work for free.
OpenLedger builds on Hugging Face’s open-access philosophy but adds financial incentives and accountability. While Hugging Face is great for exploration, OpenLedger is designed for real-world AI economies, where contributors, model builders, and enterprises are financially aligned.
That said, PoA has its limits. It works well for small, domain-specific models but remains impractical for large generalist LLMs. Attribution at 100B+ parameter scale is still an open challenge. Increasing use of LoRA reduces compute costs for fine-tuning, but it doesn’t solve attribution—just shrinks the problem.
Whether OpenLedger can scale attribution effectively remains to be seen—but if it does, it could change the economics of AI.
How OpenLedger Works: Under the Hood
We’ve established that paying contributors is great. Now let’s see how the system actually works.
The plumbing behind OpenLedger operates across a few key layers:
- Data Layer – Specialized data networks (DataNets) store domain-specific information.
- Model & RAG Layer – Houses specialized language models (SLMs) and RAG (Retrieval-Augmented Generation) systems, which fetch live data at inference time.
- App Layer – The end-consumer layer for actual AI apps (e.g., DeFi agents, Web3 AI assistants)
Where does the blockchain come in? It enables permissionless coordination, aligns incentives between the different parties, and makes the micropayments this design depends on practically possible.
DataNets
You can contribute individual datasets, but the real power lies in aggregating specialized datasets.
At the heart of OpenLedger’s system are DataNets—decentralized, verifiable data pools that provide high-quality, domain-specific datasets for training and enhancing AI models.
Technically, a “DataNet” on OpenLedger is essentially a smart contract (on its EVM-compatible Layer 2) that defines how data for a specific domain should be collected: Solidity code, legal documents, IoT sensor data, and so on.
DataNets function like “data DAOs”. Contributors submit domain‑specific data to these DataNets. Each DataNet has rules about what data is acceptable and how validators will check quality. Once verified, it’s stored in a distributed manner: hashes are stored on-chain, while the raw data is off-chain but pinned.
To maintain integrity, validators have to stake tokens, aligning incentives to keep the data reliable and relevant. Everything is tracked and attributed on-chain, so contributors earn continuous rewards whenever their data is used.
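OpenLedger hasn’t published its contract interfaces yet, but the hash-on-chain, data-off-chain pattern it describes is a familiar one. A minimal sketch of what a contribution record might look like (field names and the storage step are hypothetical):

```python
# Sketch of the "hash on-chain, raw data off-chain" pattern described above.
# Field names and the pinning step are hypothetical; OpenLedger's actual
# DataNet contracts and storage layer may differ.
import hashlib
import json
import time

def make_contribution(datanet_id: str, contributor: str, raw_data: bytes) -> dict:
    data_hash = hashlib.sha256(raw_data).hexdigest()
    # In practice the raw payload would be pinned to off-chain storage
    # (e.g. IPFS/Arweave) and only this record referenced on-chain.
    return {
        "datanet_id": datanet_id,
        "contributor": contributor,
        "data_hash": data_hash,          # what goes on-chain
        "bytes": len(raw_data),
        "submitted_at": int(time.time()),
        "status": "pending_validation",  # validators stake and verify quality
    }

record = make_contribution(
    "solidity-knowledge", "0xAb...cd",
    b"function withdraw() external { /* audited snippet */ }",
)
print(json.dumps(record, indent=2))
```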
In AI, quality trumps sheer volume, and DataNets ensure that intelligence is built on better data, not just more data.
The Solidity DataNet
Let’s say you’re a Solidity developer, using AI to write and debug smart contracts. It’s unreliable. General-purpose models are trained on messy, outdated data—half-baked GitHub snippets, vague best practices, and forum posts full of bad advice. They hallucinate code, miss security flaws, and waste gas.
Now, what if your AI actually understood Solidity? What if it could audit, optimize gas, and catch vulnerabilities before they hit production?
That’s exactly what the Ethereum developer community is building: a Solidity DataNet—a curated, on-chain repository of verified code snippets, security audits, and expert Q&A. Instead of relying on generic models, they’re fine-tuning specialized AI copilots on security-audited Solidity data, designed for gas-optimized, audit-ready contracts.
Here’s how it works:
Developers pool verified, high-signal data into a Solidity Knowledge DataNet. They stake tokens to contribute, validators ensure accuracy, and the dataset grows. This powers a fine-tuned open-source LLM (like Llama).
The result is a Solidity-native AI assistant that understands Ethereum’s execution environment. Every time devs or enterprises pay to use it, revenue flows back to model creators, data contributors and LP stakers who locked tokens to support the model’s launch.
Several other DataNets are being spun up, each tackling a different domain…
The Data Intelligence DataNet
Ex-Google DeepMind researcher Krishna Srinivasan is leading Data Bootstrap, a real-time, continuously updating DataNet. It’s essentially a next-gen Common Crawl.
Instead of scraping the web in static batches, it leverages community-run nodes for parallel web scraping. The DataNet can be used to keep AI agents fed with fresh, contextual information for sharper and more accurate responses.
DePIN DataNet
The team is working with several DePIN (Decentralized Physical Infrastructure Networks) projects to build a real-time, shared data layer, tapping into everything from weather prediction models to decentralized mapping systems and IoT networks.
Why does this matter? Because some types of apps lose value instantly if their data is not continuously refreshed.
- In weather prediction: if thousands of small sensors across a city feed data to a specialized weather model, the city gets hyper-local, minute-by-minute forecasting.
- In real-time mapping: drones, cars, or IoT devices collect geospatial data that feeds specialized maps. The model updates in real time, paying data owners.
If you have a large network of contributors streaming updates, your specialized model remains fresh. Everyone is incentivized to keep the dataset pristine and up-to-date.
The Web3 Trading DataNet
Say you’re deep in DeFi and want better trading signals. You spin up a Web3 Trading DataNet, pooling:
- On-chain historical data
- Off-chain sentiment (Twitter, Telegram feeds).
- Wallet analytics (whale and smart money movements).
- Macroeconomic news & market trends.
You stake tokens, attract contributors, and fine-tune a model built specifically for advanced trading intelligence. Then, someone—maybe even you—packages that model into an autonomous trading agent. The agent executes thousands of trades per day, each one triggering micropayments to data providers, model stakers, and the agent’s operators.
Alpha-as-a-Service, anyone?
The Healthcare DataNet
Hospitals can build private DataNets within secure enclaves (TEEs), ensuring privacy and accessibility for AI-driven medical diagnosis. These datasets combine anonymized patient records, clinical trial results, and real-world medical insights, creating a constantly improving intelligence layer for healthcare AI.
Once fine-tuned and rigorously validated, these AI models can be deployed across smaller clinics, telemedicine platforms, and research institutions, making advanced diagnostics widely accessible. Crucially, every query triggers micropayments—ensuring data owners are continuously rewarded—while keeping medical AI open to authorized institutions rather than locked away in proprietary silos.
Model & RAG Layer: OpenLedger’s Testnet
The testnet is already up and running with two functional products, giving you a hands-on way to see how the model and RAG layers work in action.
Live Product #1: Model Factory
Fine-tuning LLMs is notoriously complex, often requiring command-line tools, API integrations, and deep ML expertise. As a non-developer, I often get lost in the weeds and frustrated.
ModelFactory eliminates this friction with a fully GUI-based platform, allowing anyone to fine-tune models without writing a single line of code.
Instead of wrestling with data pipelines, you simply request datasets through OpenLedger, get approval from the providers, and link them directly to the ModelFactory interface. From there, you select a model (LLaMA, Mistral, DeepSeek, and more), configure hyperparameters via an intuitive dashboard, and kick off LoRA or QLoRA fine-tuning—all without touching a terminal.
Live training analytics provide real-time insights, and once fine-tuned, models can be tested in a built-in chat interface and shared within the OpenLedger ecosystem.
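For the curious, the recipe ModelFactory wraps in a GUI looks roughly like the standard open-source LoRA workflow below. This is a generic Hugging Face peft sketch with a placeholder model, dataset, and hyperparameters, not OpenLedger’s actual pipeline.

```python
# Generic LoRA fine-tuning sketch using Hugging Face transformers + peft,
# the kind of workflow ModelFactory abstracts behind a GUI. Model name,
# dataset file, and hyperparameters are placeholders, not OpenLedger defaults.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_id = "meta-llama/Llama-3.2-1B"            # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach a small LoRA adapter instead of updating all base weights.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()             # typically <1% of the base model

dataset = load_dataset("json", data_files="datanet_export.jsonl")["train"]
tokenized = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                             max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments("out-lora", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out-lora/adapter")      # only the adapter weights are saved
```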
What’s particularly interesting is that ModelFactory also has a RAG attribution feature. When a response is generated, you can see exactly which datasets were used in producing the response. Pretty cool.
Source: OpenLedger
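OpenLedger hasn’t documented how that attribution view is wired up under the hood, but the general pattern is easy to picture: every retrieved chunk carries the ID of the dataset it came from, and those IDs are returned (and can be logged) alongside the answer. A hand-rolled sketch of the idea:

```python
# Sketch of RAG with per-response dataset attribution (illustrative only,
# not ModelFactory's actual retrieval stack). Each chunk keeps the ID of the
# dataset it came from, so every answer can report which data was used.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    dataset_id: str
    contributor: str

def retrieve(query: str, index: list[Chunk], k: int = 2) -> list[Chunk]:
    # Toy lexical scoring; a real system would use embeddings + a vector DB.
    overlap = lambda c: len(set(query.lower().split()) & set(c.text.lower().split()))
    return sorted(index, key=overlap, reverse=True)[:k]

def answer_with_attribution(query: str, index: list[Chunk]) -> dict:
    chunks = retrieve(query, index)
    context = "\n".join(c.text for c in chunks)
    # An LLM call on (context, query) would go here; we only show attribution.
    return {
        "answer": f"[generated from {len(chunks)} retrieved chunks]",
        "attributed_datasets": sorted({c.dataset_id for c in chunks}),
        "contributors": sorted({c.contributor for c in chunks}),
    }

index = [
    Chunk("Use checks-effects-interactions to avoid reentrancy.", "solidity-kb", "alice"),
    Chunk("ERC-20 approve/transferFrom race conditions explained.", "solidity-kb", "bob"),
    Chunk("Hyper-local rainfall readings, station #42.", "weather-depin", "carol"),
]
print(answer_with_attribution("how do I avoid reentrancy in solidity?", index))
```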
ModelFactory’s LoRA tuning is a faster and more efficient way to fine-tune AI models compared to traditional methods like P-Tuning. It trains models up to 3.7 times faster, meaning you get results much quicker.
It also improves accuracy in tasks like text generation, measured by Rouge scores (which track how well AI-generated text matches human-written text).
In short: faster training, better results.
Live Product #2: OpenLoRA
OpenLoRA is a framework that lets developers run thousands of models on one GPU instead of one per model, drastically cutting costs and boosting flexibility.
Built for low-latency, high-throughput inference, OpenLoRA allows a single GPU to serve hundreds to thousands of LoRA (Low-Rank Adaptation) models without the usual memory bloat.
Instead of preloading every fine-tuned model, adapters are dynamically retrieved and merged on demand. A base model (like Llama 3 or Mistral) sits in memory, while OpenLoRA loads only the necessary fine-tuned adapter, keeping resource usage lean. Optimizations like FlashAttention, PagedAttention, and quantization ensure smooth performance, while real-time token streaming slashes response times.
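A rough analogue of this “one base model, many hot-swapped adapters” pattern can be built with the open-source peft multi-adapter API. The sketch below is not OpenLoRA’s serving code (which adds dynamic loading, batching, and the kernel-level optimizations above); the adapter paths and model name are placeholders.

```python
# Rough analogue of "one base model, many hot-swapped adapters" using the
# open-source peft multi-adapter API. Not OpenLoRA's actual serving stack;
# model ID and adapter paths are hypothetical placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"          # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16,
                                            device_map="auto")

# Keep one copy of the base weights; attach lightweight adapters on demand.
model = PeftModel.from_pretrained(base, "adapters/solidity-auditor",
                                  adapter_name="solidity")
model.load_adapter("adapters/defi-trader", adapter_name="trading")

def ask(adapter_name: str, prompt: str) -> str:
    model.set_adapter(adapter_name)            # switch specialization in place
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(ask("solidity", "Audit this withdraw() function for reentrancy:"))
print(ask("trading", "Summarize today's whale wallet flows:"))
```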
The results: sub-100ms model switching, lightning-fast inference, and massive cost savings.
It’s a leap toward making large-scale, fine-tuned AI both affordable and practical. With OpenLoRA, swarms of fine-tuned AI models can efficiently share GPU infrastructure, and AI agents can switch between specialized models in real-time.
This unlocks personalized AI at scale, allowing models to dynamically adapt to different users and tasks—all on the same hardware.
The OpenLedger Roadmap
OpenLedger’s testnet is more than a trial run. It’s rolling out in phases to pave the way for permissionless model-building on the mainnet.
Phase 1: Testnet Launch (Dec 2024)
The testnet lays the groundwork with:
- The Data Intelligence Layer: A real-time pipeline for aggregating, verifying, and processing internet data.
- 1M+ Node Downloads: So far, adoption has been strong, with nodes running on Android, Windows, Linux, and browser extensions in over 150 countries. Node runners contribute bandwidth and storage to scrape internet data, similar to Grass.
Note that this is incentivised with a points system that may lead to future airdrops.
Phase 2: Expanding the Dataset (Q1 2025) → We are here today
With the infrastructure in place, the next step is refining the data itself.
Today, the community tests, validates, and enhances OpenLedger’s dataset, while early builders spin up specialized DataNets, experimenting with model training and validation ahead of the mainnet launch.
And here’s the kicker: they’re also preparing the “Golden Dataset”.
Okayyyy… what’s that?
The Golden Dataset is a high-quality, structured, and curated dataset, refined by community input—designed to be the foundation for specialized AI. Unlike Common Crawl, which scrapes vast amounts of raw, unfiltered internet data, the Golden Dataset is carefully selected, cleaned, and optimized for AI training.
If executed well, the Golden Dataset will be a competitive advantage, lowering the barriers for domain-specific AI models and setting a new standard for high-quality, permissioned data.
Phase 3: Mainnet & Token Launch (Q2 2025)
By Q2 2025, OpenLedger aims to transition from testnet to mainnet, unlocking full network capabilities:
- Permissionless AI Model-Building – Anyone can stake tokens to fine-tune or deploy Specialized Language Models (SLMs).
- Token Generation Event (TGE) – Expected in early Q2 2025, enabling the native token’s full utility for staking, governance, data contribution, and rewards.
- Enterprise & Ecosystem Growth – Expansion of specialized DataNets, refinement of Proof of Attribution, and onboarding of large enterprises and data curation partners.
By mainnet, OpenLedger will have all the pieces in place for a self-sustaining AI economy, where data providers, model creators, and businesses collaborate to build the next generation of domain-specific intelligence.
Fundraising
So far, OpenLedger has raised $10.5 million in funding. They secured $8 million in seed funding in July 2024, led by Polychain Capital and Borderless Capital. Other backers include Finality Capital, Hash3, HashKey Capital, STIX, TRGC, Mask Network, MH Ventures, and WAGMI Ventures.
Equally notable, the round also attracted key figures in crypto and AI, including Balaji Srinivasan (former CTO of Coinbase), Sandeep Nailwal (Polygon), Sreeram Kannan and Vyas Krishnan (EigenLabs), Scott Moore (Gitcoin).
In recent months, they raised another $2.5 million from private investors (Echo + venture capital).
OPN Tokenomics
The full details of OpenLedger’s token design haven’t been released yet, but the core mechanics are taking shape.
The token’s value flows through a structured lifecycle. Developers launch models by staking OPN, liquidity providers back them, and the best-performing models earn attribution rewards from the treasury.
Once validated, these models become accessible via API, serving enterprises and Web3 applications. As demand grows, top models generate compounding network effects, benefiting both model creators and data contributors.
Every transaction, governance decision, and AI model deployment runs through the OPN token, creating an incentive-aligned system where contributors, developers, and users all have skin in the game.
OPN serves multiple functions, starting with AI model funding. Through Initial AI Offerings (IAOs), users can stake tokens to bootstrap new AI models. Governance is decentralized—OPN holders decide on key ecosystem upgrades, model funding, and treasury management, with the option to delegate votes to trusted representatives.
OPN is also the native gas token for OpenLedger’s Layer 2 (L2) network.
Meanwhile, staking enforces accountability—users must stake OPN to participate, with penalties for underperformance or malicious behavior. The more critical the AI service, the higher the required stake. OpenLedger envisions multiple “tiers” of token staking that unlock features progressively.
Stakers receive rewards and get priority access to new tools and beta releases. More importantly, you can’t spin up a model or DataNet without showing “skin in the game,” ensuring you’re a serious participant.
Some Thoughts…

1. Can Data Attribution actually work?
The boldest idea behind OpenLedger is making data attribution work at scale.
“Payable AI” is powerful. Hugging Face cracked open the door for open-source AI collaboration, but its monetization model leans on subscriptions and managed services. It lacks a true financial incentive system for contributors. OpenLedger, in contrast, aims to solve this by embedding attribution and payments directly into the protocol layer, creating a self-regulating data marketplace.
Still, data attribution is notoriously difficult, especially in AI. As we’ve seen, tracking contributions to large models is an open research challenge. Can we scale beyond LoRA-tuned models?
Here’s our take: emerging technologies often look clunky before they become inevitable. It takes time and R&D to make them robust, just like zero-knowledge proofs in their early days.
So far, the team has already implemented Proof of Attribution internally, demonstrating its effectiveness for fine-tuned models and RAG pipelines. As they approach mainnet, this will serve as foundational infrastructure for builders to extend and innovate upon.
If achieved, OpenLedger could redefine how AI models attribute and reward data contributions, spawning an entirely new category of AI-driven microeconomies.
Then there’s the demand question. Will enterprises actually pay for specialized data and models in a decentralized marketplace? Most companies aren’t used to buying AI this way. They might prefer direct partnerships with established AI firms.
OpenLedger is betting that both problems—attribution and demand—can be solved. Time will tell.
2. Specialization is Decentralised AI’s Best Shot Against Big AI Labs
Skeptics will argue that decentralized AI has no chance against OpenAI, Google, and Anthropic, who have virtually unlimited funding. But this assumes the future of AI is just a race for scale. That’s not necessarily the case.
The real battle is for domain-specific intelligence. While general-purpose AI models (GPT-4, Claude, Gemini) are impressive, they lack depth in niche fields.
No single company will be able to build, fine-tune, and maintain models for every industry or research area. It’s simply too expensive and fragmented.
OpenLedger’s value proposition lies in enabling global collaboration: independent researchers, specialists, and data owners could form micro-markets where specialized AI models thrive.
If OpenLedger builds an ecosystem where businesses and researchers find better domain-specific AI models than what Big Tech offers, it wins. But if its platform lacks usability, trust, or liquidity, contributors will have little reason to participate—and the whole system collapses under its own weight.
3. This is really, fk-ing important
Pardon my language.
When I scroll through social media, I see endless debates about AI replacing jobs, but almost no discussion on fairly compensating the people whose data built these models in the first place.
Every day, AI models are trained on human expertise, but the people behind the data rarely get paid.
“Payable AI” resonates with me as more than just a marketing tagline.
People could form co-ops to gather extremely rare or valuable data (e.g., geospatial data for farmland, curated policy documents for governments, etc.). They’d rely on a platform like OpenLedger to monetize it. Over time, that has the potential to lead to:
- Better data (since it’s in each contributor’s interest to maintain quality).
- Less hallucination (specialized models with domain-specific knowledge).
- Transparent auditing (for regulated industries).
Imagine:
- A horticulturist maintains a dataset on rare tropical plants. An AI trained on it helps farmers diagnose crop diseases. Every time it’s used, they get paid.
- A legal researcher compiles decades of case law. Instead of selling it once to a publishing giant, they continuously profit whenever the AI references specific briefs or annotated rulings.
This is how specialized domain experts become part of a broader AI ecosystem without surrendering all control to giant labs. It extends the idea of user-generated content to user-generated data, building specialized intelligence.
The next wave of AI innovation won’t be about building the biggest model—it will be about making the right models for the right jobs.
🌈 Research Level Alpha
I’m going to keep this short and sweet.
The alpha: participate in OpenLedger’s testnet by running a node and experimenting with fine-tuning their various models on different datasets. Epoch 2 is about to start. You’ll get a much better grasp of what they’re trying to accomplish.
Invite others to join as node operators and you’ll earn static rewards for successful referrals. Additional points are also awarded during specific events and milestones.
Conclusion
We’re in a moment where AI is on everyone’s lips, and the “crypto mania” is overshadowed by the excitement of generative models. But the synergy is potent:
- AI needs new ways to source, verify, and pay for data.
- Crypto provides automated micropayments and decentralized governance.
OpenLedger is building the infrastructure where specialized data fuels specialized models. If it works, it could be the app store for domain-specific AI, where data owners and model builders earn recurring payments.
Next time an AI helps draft a contract or makes a medical diagnosis, remember: the people behind the data could be finally getting their due.
Call me an optimist, but that’s the kind of equitable future that keeps me hooked on web3 and AI.
If OpenLedger succeeds, that future isn’t far off.
Cheers,
Teng Yan
Useful Links
Chain of Thought received a grant from OpenLedger for this initiative. All insights and analysis however are our own. We uphold strict standards of objectivity in all our viewpoints.
To learn more about our approach to sponsored Deep Dives, please see our note here.
This report is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investment choices.