A founder I work with was mid-pitch in a Series B round last month when the lead investor asked how the firm was hedging its automation roadmap “given AGI timelines are compressing, with most labs now putting it in the 2028 to 2032 window.” The room nodded along. The roadmap on screen ended in 2026. Nobody had defined the term. The founder gave a vague answer about watching the space carefully and lost the room for a beat.
The investor was not running a gotcha. They wanted to know whether the leadership team had a position on AI direction or was buying tools reactively. The right answer was a 90-second paragraph, and it was available. It had simply not been prepared.
What is AGI?
AGI, artificial general intelligence, is the AI industry’s term for systems that match or exceed human performance across a wide range of tasks rather than one narrow domain. Today’s systems are specialists. ChatGPT writes, AlphaFold predicts protein structures, AlphaGo plays Go. AGI would be the general-purpose counterpart, a single system that moves between finance, strategy and operations without retraining. It does not exist yet.
Researchers at Google DeepMind have proposed a five-level framework, from emerging to superhuman, which is the most useful internal map of progress available. Current frontier systems sit at level one, “emerging” AGI. No system yet meets the bar for level two, “competent” AGI, defined as matching at least the 50th percentile of skilled adults across a wide range of cognitive tasks. The framework matters because it lets you ask “what level are you claiming?” instead of arguing about definitions.
Why does the term have no shared definition?
Each lab uses AGI in a way that supports its fundraising and policy positioning, and their public statements diverge by more than a decade. Sam Altman has said OpenAI is “now confident we know how to build AGI”, a claim about a technical path. Dario Amodei calls it “a marketing term” and prefers “powerful AI”. Demis Hassabis says five to eight years. Yann LeCun says current architectures will not get there.
These are not casual disagreements. They are different bets on what the next decade looks like, made by people with billions of dollars riding on each position. LeCun’s argument, in particular, is that the transformer architecture underneath today’s LLMs cannot reach general intelligence by scale alone. The 80,000 Hours review of expert forecasts shows median timelines compressing into the late 2020s and early 2030s, with very wide error bars on either side. Treat any single timeline claim with caution. Track the spread, not the loudest voice.
The benchmark picture is similarly mixed. AIME mathematics scores moved from 74.3% to 96.7% in a year. GPQA Diamond, a PhD-level science reasoning test, sits around 87.7% on top models versus roughly 34% for non-expert PhD holders. ARC-AGI, which measures generalisation to genuinely novel tasks, sees top models at around 55.5% while humans typically score above 80%. Current systems are getting better at hard reasoning inside known territory, and still failing on novel abstraction. None of this proves AGI is here or imminent.
Where will you actually meet AGI talk?
You will meet it in three specific places. The first is investor calls, where venture capital and private equity teams now routinely cite AGI timelines as part of a “technology risk” narrative. The second is board chatter, where peers are reading headlines about OpenAI claiming a path to AGI and quietly worrying they are missing something. The third is senior hiring, where good engineers want to work somewhere with a defensible view of AI direction.
The pattern across all three is the same. The audience is not testing your AGI prediction. They are testing whether you have a position. A vague answer signals you are not tracking the landscape. A panicked answer signals you are tracking it badly. A calm, brief, specific answer signals you have thought about it and concluded it does not currently shift your strategy.
You will also meet AGI talk in vendor pitch decks, where the term is mostly being used to imply that the vendor is building toward something that does not exist yet. The translation is usually “we are spending heavily on R&D, please factor that into your willingness to pay.” Worth noticing.
When should AGI enter your decisions, and when should you ignore it?
Track AGI when you are making a three-to-five-year technology bet, when you are pitching investors, or when you are hiring senior engineering talent. In each case the work is the same: spend two hours building a one-paragraph view that you can deliver in 90 seconds, and rehearse it before the conversation where it might come up.
The shape that holds up reads something like this. “We are agnostic on AGI timelines. We are betting on frontier model improvements every 12 to 18 months regardless. Our architecture is built for multi-vendor flexibility. If AGI arrives, we integrate it. If it does not, we are still moving 10x faster than competitors waiting for perfection.” That paragraph is enough.
Ignore AGI when you are deciding what to automate next quarter. Whether AGI arrives in 2027 or 2037 does not affect whether you should automate invoice processing in Q3. Ignore it when you are picking between foundation models for a production system. Pick on benchmarks for your specific use case, not on AGI timelines. Ignore it when you are deciding whether to hire an AI consultant or train your team in-house. That decision turns on capability gap and budget.
The 80% of companies investing in AI without seeing tangible value are not failing because AGI has not arrived. They are failing on process design, data quality and change management. The 20% winning are doing the unglamorous work of fixing those things with the systems that already exist. AGI timelines are 0% of that equation.
Related concepts you will hear
“Powerful AI” is the framing Anthropic’s Dario Amodei prefers over AGI. The argument is that the business-relevant horizon is any system capable of reshaping the economy, which may arrive before or instead of full AGI in the strict sense. If a system can handle most remote work tasks autonomously, your labour costs and competitive dynamics shift regardless of what we call it.
Frontier model is the practical procurement category. Frontier models are the most capable systems available at any given time, which in May 2026 means GPT-5, Claude Opus 4.6 and Gemini 3 Pro. The UK AI Security Institute uses the term in its evaluations of model risk. This is the term that should enter your buying decisions, not AGI.
AI alignment is the research area concerned with making AI systems do what their designers intend, especially as capability increases. AGI hype quietly assumes alignment will be solved. Current systems already exhibit hallucination, prompt injection vulnerability and reward-hacking behaviours. Each is an alignment problem in miniature, and none is solved.
Autonomy is the term boards most often conflate with AGI. An AI agent that books your meetings or drafts your contracts is best understood as a current-generation language model wrapped in tool-calling infrastructure, not AGI. Many “AGI is here” arguments in the trade press are actually arguments about agent autonomy. Worth separating the two when the term comes up in your next board meeting.
The point of the vocabulary is to give you enough purchase that the next time an investor asks how you are hedging AGI timelines, you have a 90-second answer ready and the conversation stays on what you are actually building.