The Generative AI Timeline for the Last 6 Months — Did We Hit Exponential Self-Improving AI Growth?
Six months of AI releases, billion-dollar bets, and models writing their own code. The acceleration is real. But is it exponential?
Something Shifted in October 2025
The pace of AI changed. Not gradually. Abruptly.
Between October 2025 and March 2026, every major AI lab shipped multiple model generations. Releases that once took a year now took weeks. And the models themselves started writing the code for their successors.
This is the timeline. Then the question: are we actually seeing exponential self-improvement?
The Timeline
October-November 2025
Anthropic shipped Claude Opus 4.5 on November 24. It brought advanced multi-step reasoning, a 200K token context window, and native computer use. Meanwhile, Breaking Rust became the first AI artist to top a US Billboard chart. AI wasn't just writing code anymore. It was writing hits.
December 2025
OpenAI released its Codex update on December 18. MIT Technology Review published a major investigation on how AI was transforming software development. The shift was becoming impossible to ignore.
January 2026
Elon Musk declared "2026 is the year of the singularity." SpaceX and xAI announced a merger to automate space mission trajectories. NextBigFuture echoed the singularity claim. The hype machine was running hot.
February 2026
This is where it got intense.
Anthropic released Opus 4.6 (Feb 5) and Sonnet 4.6 (Feb 17). Both came with 1M token context windows. Opus 4.6 could sustain tasks for 14.5 hours autonomously.
Alibaba unveiled Qwen 3.5 with video analysis of up to 2 hours. NASA's Perseverance rover completed its first Mars drives planned by AI, using Claude vision-language models. Eli Lilly launched LillyPod, a supercomputer with over 9,000 petaflops.
OpenAI released a dramatically improved Codex. Less than two months after the December version.
March 2026
OpenAI shipped GPT-5.4 with a 1M token context window. It scored 75% on OSWorld-V, beating the human baseline of 72.4%. A computer using a computer better than a human using a computer.
Google dropped Gemini 3.1 Flash-Lite. Nvidia released Nemotron 3 open reasoning models. Meta revealed four generations of custom AI chips.
Yann LeCun's AMI Labs raised $1.03 billion at a $3.5 billion valuation. Europe's largest seed round ever. And OpenAI crossed $25 billion in annualized revenue.
Six months. That's all it took.
AI Is Writing Itself. Literally.
Here's the number that stops people: Claude Code is roughly 90% self-written. At Anthropic overall, 70-90% of code is AI-generated. OpenAI researchers report writing zero code themselves. Google sits at about 50%. Microsoft at 30%.
Industry-wide, 84% of developers use AI tools. AI writes 41% of all code. The developer's role has shifted from creating to judging.
Dario Amodei claims labs achieve "400% efficiency improvements per year." Same compute, 4x better models. OpenAI plans "hundreds of thousands of automated research interns" within nine months.
This looks like recursive self-improvement. The AI gets better, so the AI it builds gets better, so the next AI gets even better. A feedback loop.
But Is It Actually Exponential?
Not quite. Here's why.
Scaling laws show diminishing returns. Illustratively, a tenfold increase in compute might push accuracy from 90% to 99%. The next tenfold pushes it from 99% to only 99.9%. Each jump costs more and delivers less.
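That arithmetic follows a stylized power-law picture of scaling (the exponent here is an assumption for illustration, not a measured value): error shrinks as a power of compute, so equal accuracy gains require multiplicatively larger budgets.

```python
# Stylized scaling law: error = base_error * (compute / base_compute) ** -alpha
# With alpha = 1 (an illustrative assumption), every 10x in compute
# cuts error by 10x: 10% -> 1% -> 0.1%.
def accuracy(compute, base_error=0.10, base_compute=1.0, alpha=1.0):
    """Accuracy implied by the stylized power-law scaling curve."""
    return 1.0 - base_error * (compute / base_compute) ** -alpha

for c in (1, 10, 100):
    print(f"{c:>4}x compute -> {accuracy(c):.1%} accuracy")
# -> 90.0%, 99.0%, 99.9%: each decade of compute buys a smaller accuracy gain.
```

Note what this means in absolute terms: the first decade of compute buys 9 points of accuracy, the next buys 0.9. The loop exists, but each turn is pricier.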
Yann LeCun argues current architectures will never reach human-level intelligence. Fundamentally different approaches are needed. He put his money where his mouth is with that billion-dollar AMI Labs raise.
Demis Hassabis at DeepMind says one or two major breakthroughs are still missing. AI excels at verifiable tasks like coding and math. But scientific discovery and creative reasoning remain harder. Rapid learning from few examples is still a gap.
The ICLR 2026 workshop on Recursive Self-Improvement confirmed the field takes this seriously. But the academic consensus: we're seeing "assisted acceleration," not autonomous self-improvement. Humans still drive the hypotheses. AI executes faster.
A mathematical model from researchers on arXiv forecasts that the current AI wave peaked around 2024 and will fade by 2035-2040 without new breakthroughs.
The Singularity Scorecard
"We're there" camp:
- Elon Musk: "We have entered the singularity."
- Sam Altman: His team "now understands how to build AGI." OpenAI has shifted to superintelligence.
- Dario Amodei: 50% chance of minimal AGI by 2028. AI could surpass Nobel laureates on cognitive benchmarks by 2026.
"Not yet" camp:
- Yann LeCun: Current AI will never reach human-level intelligence.
- Demis Hassabis: 50% probability of AGI before 2030. But missing breakthroughs.
- Academic researchers: Current wave fades by 2035-2040 without new approaches.
The truth sits between these camps. We're seeing real recursive improvement in narrow domains. Coding is the clearest example. But broad, autonomous self-improvement that defines a true singularity? Not yet.
The Economic Reality
The market doesn't care about the singularity debate. Money is flowing.
The generative AI market hit $71.36 billion in 2025. Projections put it at $890.59 billion by 2032. That's a 43.4% compound annual growth rate.
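Those figures are internally consistent, which is worth checking: a quick compound-growth calculation (a sketch, using only the numbers quoted above) recovers the stated CAGR from the endpoints.

```python
# Sanity-check the projection: does ~43.4% annual growth take
# $71.36B (2025) to roughly $890.59B (2032)?
start, end = 71.36, 890.59   # billions of dollars
years = 2032 - 2025

# CAGR implied by the two endpoints
implied_cagr = (end / start) ** (1 / years) - 1

# Forward projection at the quoted 43.4% rate
projected = start * (1 + 0.434) ** years

print(f"implied CAGR: {implied_cagr:.1%}")     # ~43.4%, matching the quoted rate
print(f"2032 at 43.4%: ${projected:.2f}B")     # lands within ~$1B of the projection
```

The two numbers round-trip, so the projection is at least arithmetically honest, whatever one thinks of seven-year market forecasts.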
Computing infrastructure investment will hit $600 billion in 2026. Double what it was in 2024. For every $1 invested in generative AI, companies report an average return of $3.70.
But there's a cost. Oracle cut 20,000-30,000 jobs. Amazon cut 16,000. Atlassian cut 1,600. All explicitly linked to AI automation. Enterprise AI agent adoption is projected to jump from under 5% to 40% by year's end.
Meanwhile, 80% of US workers use unapproved AI systems. Over 80% of critical-infrastructure firms have deployed AI-generated code without understanding the security risks. And EU AI Act enforcement begins in August 2026, with penalties of up to €35 million.
What This Actually Means
The last six months proved AI self-improvement is real. Models write their own code. Each generation helps build the next. The pace of releases has compressed from years to weeks.
But "real" isn't the same as "exponential." The feedback loop exists. It's powerful. It's also bounded by physics, diminishing returns, and the continued need for human direction.
We didn't hit the singularity. We hit something else: a sustained, accelerating curve that's transforming every industry it touches. Whether that curve stays steep or flattens depends on breakthroughs no one can predict.
The only certainty? Six months from now, this article will already feel outdated.
Sources
- Dean Ball: On Recursive Self-Improvement - Deep analysis of AI labs automating research, evidence for and against exponential improvement
- CFR: How 2026 Could Decide the Future of AI - Policy, governance, and geopolitical analysis of AI in 2026
- Crescendo AI: Latest AI News and Breakthroughs 2026 - Comprehensive timeline of model releases and milestones Jan-Mar 2026
- Fortune: Top Engineers Say AI Writes 100% of Their Code - AI coding automation statistics at leading labs
- LaunchNinjas: AI Singularity by 2026? - Expert quotes and predictions on singularity timing
- 90+ Generative AI Statistics 2026 - Market size, adoption rates, and ROI data
- ICLR 2026 Workshop on Recursive Self-Improvement - Academic recognition of RSI as mainstream research area
- AI-Generated Code Statistics 2026 - Developer adoption and code generation metrics