Scaling Next-Gen AI Is Making It Riskier, Not Better

What to Know
- Global data center electricity demand will more than double by 2030, driven almost entirely by AI expansion
- In June 2025, the UK High Court warned lawyers after filings cited AI-fabricated case law; 18 of 45 citations in one case were entirely invented
- Training costs for frontier AI models are multiplying year-over-year, with single training runs projected to soon exceed $1 billion
- Neurosymbolic AI systems can deliver higher reasoning capability with far lower compute and energy requirements than large language models
AI scaling is making the technology riskier, not more reliable — and crypto is where you feel that first. Nowhere else does a fabricated output move real capital in seconds, trigger automated liquidations, or quietly corrupt a smart contract audit. Mohammed Marikar, co-founder at Neem Capital, argued this week that the industry's obsession with bigger models and faster hardware is producing systems that are dangerously fluent but structurally unreliable.
The Scaling Assumption Is Breaking Down
The bet on AI scaling was always borrowed from traditional tech. Add more compute, train on more data, watch costs fall and performance climb. That playbook worked for chips, for cloud storage, for streaming. It is not working for intelligence. The two things that actually scale with model size — fluency and pattern recognition — are not the things that matter most in high-stakes environments. Reasoning does not scale that way. Cause-and-effect logic, uncertainty calibration, explainable conclusions — none of those improve linearly with more parameters.
According to the IEA Energy and AI report, electricity demand from global data centers will more than double by 2030. In the US alone, data center power draw is projected to rise by well over 100 percent before the decade ends. That is not progress paying for itself — it is a sector spending trillions to make infrastructure bigger without making it meaningfully smarter. The capital expenditure is real. The reasoning improvements are not keeping pace.
Scaling is amplifying AI's weaknesses rather than solving them.
— Mohammed Marikar, co-founder at Neem Capital
What Does AI Scaling Mean for Crypto and DeFi?
Crypto is the fastest feedback loop for AI failure that exists. AI tools now monitor on-chain activity, analyze market sentiment, generate code for smart contracts, flag suspicious transactions, and automate trading decisions — often in the same pipeline. An error in any one of those steps does not just produce a bad output. It moves capital. A false signal in a sentiment analysis model can front-run a trade. A hallucination in a smart contract audit can leave a vulnerability undetected. A fabricated explanation in an AML flagging system wastes compliance resources on innocent activity while real threats slip through.
False positives in automated Anti-Money Laundering systems are already a documented problem — a direct consequence of deploying fluent but poorly reasoned models in compliance contexts. The faster these systems are integrated into trading, risk management, and on-chain governance, the harder it becomes to catch errors before they propagate. Decentralized systems are particularly exposed: once a bad decision executes on-chain, there is no rollback. That asymmetry — fast deployment, no undo — deserves more scrutiny than it's getting.
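Because there is no undo, the practical mitigation is to put a deterministic gate between model output and on-chain execution. The sketch below is a minimal illustration; the `Signal`, `passes_hard_rules`, and `gate` names are hypothetical rather than any vendor's API, and the point is simply that execution should require both a confidence threshold and rule checks that do not depend on how fluent the model sounds.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Hypothetical output of an AI trading or compliance model."""
    action: str        # e.g. "trade", "flag", "liquidate"
    confidence: float  # model's self-reported confidence, 0..1
    rationale: str     # free-text explanation from the model

def passes_hard_rules(signal: Signal) -> bool:
    """Deterministic checks that do not depend on how fluent the model sounds.
    These two rules are illustrative placeholders, not a real rule set."""
    if signal.action not in {"trade", "flag", "liquidate"}:
        return False
    return len(signal.rationale.strip()) > 0

def gate(signal: Signal, threshold: float = 0.9) -> bool:
    """Allow irreversible on-chain execution only when the model's confidence
    clears a threshold AND the symbolic rules pass; everything else is routed
    to human review, because once it executes on-chain there is no rollback."""
    if signal.confidence < threshold:
        return False
    return passes_hard_rules(signal)

# A fluent but low-confidence signal is held back instead of executed.
fabricated = Signal(action="liquidate", confidence=0.55, rationale="pattern match")
print(gate(fabricated))  # False -> route to human review, do not execute
```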
When AI Invents Case Law — And Lawyers Believe It
The legal system showed exactly how bad this gets. In June 2025, the UK High Court issued a formal warning to lawyers after discovering filings that cited AI-fabricated case law — references to precedents that simply never existed. In one case, 18 of 45 citations were entirely invented by an AI tool. The court warned that lawyers face severe penalties for submitting such filings.
That case is not an outlier. It is what happens when you deploy systems optimized for fluency into environments that require accuracy. Law, compliance, finance — these are domains where a wrong answer dressed confidently in correct-sounding language is worse than no answer at all. Marikar's argument is that the same risk is being embedded into every sector that uses AI for decisions, and throwing more compute at that problem does not fix it. It just makes the fluent hallucinations more convincing.
Is Neurosymbolic AI the Fix the Industry Needs?
The architectural alternative gaining traction is neurosymbolic AI — systems that combine neural network pattern recognition with structured symbolic reasoning. Rather than brute-force pattern matching on billions of parameters, these systems organize knowledge into interrelated concepts, apply explicit rules, and can show their reasoning step by step. Higher reliability per compute cycle, with a verification burden that humans can actually track.
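As a rough sketch of what pairing a neural scorer with explicit rules can look like, the example below stubs the neural component and keeps the symbolic layer as named, inspectable rules that leave a step-by-step trace. The scoring stub, the rule set, and the `evaluate` function are assumptions for illustration, not a description of any specific neurosymbolic platform.

```python
from typing import Callable

# Stand-in for the neural component: in a real system this would be a
# trained model scoring how suspicious a transaction looks (0..1).
def neural_score(tx: dict) -> float:
    return 0.8 if tx["amount"] > 100_000 else 0.1

# Symbolic layer: explicit, named rules, so every conclusion can be traced
# back to the rule that produced it.
Rule = tuple[str, Callable[[dict], bool]]
RULES: list[Rule] = [
    ("amount_exceeds_reporting_limit", lambda tx: tx["amount"] > 100_000),
    ("counterparty_is_sanctioned", lambda tx: tx["counterparty"] in {"0xSANCTIONED"}),
]

def evaluate(tx: dict) -> dict:
    """Combine the neural score with symbolic rules and return a
    step-by-step reasoning trace instead of a bare verdict."""
    trace = [f"neural_score={neural_score(tx):.2f}"]
    fired = [name for name, check in RULES if check(tx)]
    trace += [f"rule_fired:{name}" for name in fired]
    # The verdict requires at least one explicit rule, not just a fluent score.
    verdict = "flag" if fired else "clear"
    return {"verdict": verdict, "trace": trace}

print(evaluate({"amount": 250_000, "counterparty": "0xABC"}))
# {'verdict': 'flag', 'trace': ['neural_score=0.80', 'rule_fired:amount_exceeds_reporting_limit']}
```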
Emerging cognitive AI platforms are demonstrating that this kind of structured reasoning can run on local servers or even edge devices — meaning users keep control over their own knowledge rather than outsourcing it to distant, energy-hungry data centers. For crypto communities running their own auditable AI layer, that is not an abstract benefit. It means you do not have to trust that a centralized provider is not hallucinating mid-audit.
There is a real trade-off here. Neurosymbolic systems are harder to design and can underperform on open-ended tasks where LLMs genuinely excel. But for compliance, smart contract verification, on-chain risk monitoring — the structured approach wins cleanly. Reasoning that is reusable rather than rederived from scratch on every query makes inference cheaper and more predictable as usage scales.
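The reuse point is easy to see in miniature: when a conclusion is derived by deterministic rules, identical queries can be answered from a cache instead of being regenerated each time. The sketch below uses plain memoization; the `derive_risk_class` rule is hypothetical.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def derive_risk_class(counterparty: str, amount_bucket: int) -> str:
    """Hypothetical rule-based derivation. Because inputs map to outputs
    deterministically, the conclusion can be cached and reused across
    queries instead of being rederived on every call."""
    if amount_bucket >= 3:
        return "high"
    return "elevated" if counterparty.startswith("0xNEW") else "low"

# First call computes the conclusion; the identical second call is a lookup.
derive_risk_class("0xNEW_wallet", 2)
derive_risk_class("0xNEW_wallet", 2)
print(derive_risk_class.cache_info())  # hits=1, misses=1 -> the reasoning was reused
```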
Decentralized AI and the Question of Who Controls Intelligence
Marikar raises a second structural argument the crypto community should find familiar: the problem is not just how AI reasons, but who controls it. Some platforms are already exploring blockchain-based models where individuals and organizations contribute data, compute, and model weights directly — reducing concentration risk and aligning deployment with local needs instead of the priorities of a handful of centralized providers.
Training frontier AI models already costs extraordinary sums, and credible estimates put those costs multiplying year over year, with projections that a single training run could soon exceed $1 billion. That figure does not include inference — the cost of running the model every time it is queried, at scale, continuously. As usage grows, energy costs and infrastructure expenses compound without proportional gains in reliability.
Scaling has already delivered what it can — fluency and pattern recognition at remarkable breadth. What it exposed in the process is the hard ceiling on that approach. The industry keeps treating this as a compute problem when it is fundamentally an architecture problem. And in crypto, where bad AI output executes automatically and on-chain, the cost of getting that wrong is not abstract.
The question now is whether the industry keeps pushing scale or starts investing in architectures that make intelligence reliable before making it bigger.
— Mohammed Marikar, co-founder at Neem Capital