What is AGI? Why the term should not enter your procurement decisions

TL;DR

AGI, artificial general intelligence, is the AI industry's term for systems that match or exceed human performance across a wide range of tasks rather than one narrow domain. It does not exist yet. OpenAI, Anthropic, DeepMind and Meta each use the term differently, in ways tied to their fundraising and policy positioning. For an SME owner the term matters in two ways only. It is a leading signal of where capability is heading, and it is vocabulary you need for investor calls. Beyond that, AGI should not enter your procurement decisions.

Key takeaways

- AGI is the label for AI systems that match or exceed human performance across a wide range of tasks. None exist yet, and there is no shared definition across the labs.
- OpenAI, Anthropic, DeepMind and Meta each frame AGI in subtly different ways. Sam Altman talks of a technical path. Dario Amodei prefers "powerful AI". Demis Hassabis says five to eight years. Yann LeCun says current architectures will not get there.
- Current frontier models score well on hard reasoning benchmarks like AIME and GPQA, and still fail on novel-task generalisation benchmarks like ARC-AGI. None of this proves AGI is here or imminent.
- Track AGI for two reasons. It signals where the labs are pouring talent, and it turns up in investor calls and board chatter where you need a credible one-paragraph view.
- Keep AGI out of your procurement decisions. Whether it arrives in 2027 or 2037 does not change whether you automate invoice processing this quarter.

A founder I work with was mid-pitch in a Series B round last month when the lead investor asked how the firm was hedging its automation roadmap “given AGI timelines are compressing, with most labs now putting it in the 2028 to 2032 window.” The room nodded along. The roadmap on screen ended in 2026. Nobody had defined the term. The founder gave a vague answer about watching the space carefully and lost the room for a beat.

The investor was not running a gotcha. They wanted to know whether the leadership team had a position on AI direction or was buying tools reactively. The right answer was a 90-second paragraph, and it was available. It just had not been prepared.

What is AGI?

AGI, artificial general intelligence, is the AI industry’s term for systems that match or exceed human performance across a wide range of tasks rather than one narrow domain. Today’s systems are specialists. ChatGPT writes, AlphaFold predicts protein structures, AlphaGo plays Go. AGI would be the unrestricted version that moves between finance, strategy and operations without retraining. It does not exist yet.

Researchers at Google DeepMind have proposed a five-level framework, from emerging to superhuman, as the most useful internal map of progress. Current frontier systems sit at level one, “emerging” AGI. No system yet meets the bar for level two, “competent” AGI, defined as matching the 50th percentile of skilled adults across a wide range of cognitive tasks. The framework matters because it lets you ask “what level are you claiming?” instead of arguing about definitions.

Why does the term have no shared definition?

Each lab uses AGI in a way that supports its fundraising and policy positioning, and the public statements diverge by more than a decade. Sam Altman has said OpenAI is “now confident we know how to build AGI”, talking about a technical path. Dario Amodei calls it “a marketing term” and prefers “powerful AI”. Demis Hassabis says five to eight years. Yann LeCun says current architectures will not get there.

These are not casual disagreements. They are different bets on what the next decade looks like, made by people with billions of dollars riding on each position. LeCun’s argument, in particular, is that the transformer architecture underneath today’s LLMs cannot reach general intelligence by scale alone. The 80,000 Hours review of expert forecasts shows median timelines compressing into the late 2020s and early 2030s, with very wide error bars on either side. Treat any single timeline claim with caution. Track the spread, not the loudest voice.

The benchmark picture is similarly mixed. AIME mathematics scores moved from 74.3% to 96.7% in a year. GPQA Diamond, a PhD-level science reasoning test, sits around 87.7% on top models versus roughly 34% for non-expert PhD holders. ARC-AGI, which measures generalisation to genuinely novel tasks, sees top models at around 55.5% while humans typically score above 80%. Current systems are getting better at hard reasoning inside known territory, and still failing on novel abstraction. None of this proves AGI is here or imminent.

Where will you actually meet AGI talk?

You will meet it in three specific places. The first is investor calls, where venture capital and private equity teams now routinely cite AGI timelines as part of a “technology risk” narrative. The second is board chatter, where peers are reading headlines about OpenAI claiming a path to AGI and quietly worrying they are missing something. The third is senior hiring, where good engineers want to work somewhere with a defensible view of AI direction.

The pattern across all three is the same. The audience is not testing your AGI prediction. They are testing whether you have a position. A vague answer signals you are not tracking the landscape. A panicked answer signals you are tracking it badly. A calm, brief, specific answer signals you have thought about it and the answer is not currently shifting your strategy.

You will also meet AGI talk in vendor pitch decks, where the term is mostly being used to imply that the vendor is building toward something that does not exist yet. The translation is usually “we are spending heavily on R&D, please factor that into your willingness to pay.” Worth noticing.

When should AGI enter your decisions, and when should you ignore it?

Track AGI when you are making a three-to-five-year technology bet, when you are pitching investors, or when you are hiring senior engineering talent. In each case the work is the same. Spend two hours building a one-paragraph view that you can deliver in 90 seconds, and rehearse it before the conversation where it might come up. The audience is testing whether you have a position, not your prediction.

The shape that holds up reads something like this. “We are agnostic on AGI timelines. We are betting on frontier model improvements every 12 to 18 months regardless. Our architecture is built for multi-vendor flexibility. If AGI arrives, we integrate it. If it does not, we are still moving 10x faster than competitors waiting for perfection.” That paragraph is enough.

Ignore AGI when you are deciding what to automate next quarter. Whether AGI arrives in 2027 or 2037 does not affect whether you should automate invoice processing in Q3. Ignore it when you are picking between foundation models for a production system. Pick on benchmarks for your specific use case, not on AGI timelines. Ignore it when you are deciding whether to hire an AI consultant or train your team in-house. That decision turns on capability gap and budget.

The 80% of companies investing in AI without seeing tangible value are not failing because AGI has not arrived. They are failing on process design, data quality and change management. The 20% winning are doing the unglamorous work of fixing those things with the systems that already exist. AGI timelines are 0% of that equation.

Which terms do you actually need?

“Powerful AI” is the framing Anthropic’s Dario Amodei prefers over AGI. The argument is that the business-relevant horizon is any system capable of reshaping the economy, which may arrive before or instead of full AGI in the strict sense. If a system can handle most remote work tasks autonomously, your labour costs and competitive dynamics shift regardless of what we call it.

Frontier model is the practical procurement category. Frontier models are the most capable systems available at any given time, which in May 2026 means GPT-5, Claude Opus 4.6 and Gemini 3 Pro. The UK AI Security Institute uses the term in its evaluations of model risk. This is the term that should enter your buying decisions, not AGI.

AI alignment is the research area concerned with making AI systems do what their designers intend, especially as capability increases. AGI hype quietly assumes alignment will be solved. Current systems already exhibit hallucination, prompt injection vulnerability and reward-hacking behaviours. Each is a mature alignment problem in miniature, and none is solved.

Autonomy is the term boards most often conflate with AGI. An AI agent that books your meetings or drafts your contracts is best understood as a current-generation language model wrapped in tool-calling infrastructure, not AGI. Many “AGI is here” arguments in the trade press are actually arguments about agent autonomy. Worth separating the two when the term comes up in your next board meeting.

The point of the vocabulary is to give you enough purchase that the next time an investor asks how you are hedging AGI timelines, you have a 90-second answer ready and the conversation stays on what you are actually building.

Sources

- Sam Altman (2025). Reflections, the OpenAI CEO blog post claiming a technical path to AGI. https://blog.samaltman.com/reflections
- Dario Amodei (2024). Machines of Loving Grace, the Anthropic CEO essay framing "powerful AI" as the business-relevant horizon. https://www.darioamodei.com/essay/machines-of-loving-grace
- Business Insider (2025). Anthropic CEO calls AGI a marketing term. https://www.businessinsider.com/anthropic-ceo-calls-agi-marketing-term-2025-1
- 80,000 Hours (2025). Shrinking AGI timelines, a review of expert forecasts. https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/
- Observer (2026). Demis Hassabis on AGI timelines and jagged intelligence. https://observer.com/2026/02/google-deepmind-ceo-demis-hassabis-ai-jagged-intelligence/
- ARC Prize (2024). ARC Prize 2024 technical report, the leading benchmark for novel-task generalisation. https://arcprize.org/media/arc-prize-2024-technical-report.pdf
- Jon Krohn (2024). The five levels of AGI, plain-English summary of the DeepMind framework. https://www.jonkrohn.com/posts/2024/1/12/the-five-levels-of-agi
- LessWrong (2024). What does Yann LeCun think about AGI, summary of the Meta chief AI scientist's position. https://www.lesswrong.com/posts/jKCDgjBXoTzfzeM4r/what-does-yann-lecun-think-about-agi-a-summary-of-his-talk
- Matthew Hopkins (2025). The 154 billion mistake, why 80% of companies get nothing from AI. https://matthopkins.com/business/the-154-billion-mistake-why-80-percent-of-companies-get-nothing-from-ai/
- UK AI Security Institute (2025). 2025 year in review, frontier model evaluations. https://www.aisi.gov.uk/blog/our-2025-year-in-review

Frequently asked questions

Does AGI exist today?

No. The most capable models in May 2026, including GPT-5, Claude Opus 4.6 and Gemini 3 Pro, are highly capable specialists. They score above human performance on many narrow benchmarks and still fail on novel-task generalisation. Researchers at Google DeepMind classify current frontier systems at "emerging" AGI under their five-level framework, the lowest rung. No lab claims to have shipped AGI as a product.

When will AGI arrive?

Nobody knows, and the major lab leaders disagree by more than a decade. Anthropic's Dario Amodei has talked about "powerful AI" arriving in two to three years. DeepMind's Demis Hassabis says five to eight. Meta's Yann LeCun says current transformer architectures will not get there at all. The 80,000 Hours review of expert forecasts shows median estimates compressing into the late 2020s and early 2030s, with very wide error bars on either side.

Should AGI timelines change my AI roadmap?

Not directly. The 20% of companies seeing real returns on AI investment are winning on process design, data quality and change management with current systems, not on bets about future capability. Whether AGI arrives in 2027 or 2037 does not change whether you should automate invoice processing this quarter. Track AGI for investor conversations. Build for the systems that already work.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
