As generative AI tools become more powerful - earning gold-medal scores at the International Mathematical Olympiad, cracking seemingly unsolvable problems like protein folding - I've come to the following conclusion:
We already have Artificial General Intelligence.
This isn't just provocation. Let me explain.
While watching The Thinking Game - the documentary about the rise of Demis Hassabis and DeepMind (which I highly recommend - you can find it on my AI Resources page) - I was struck by this image: Demis, in the early days of DeepMind in London, comparing AI to AGI.
His definition was simple:
- AI: Human-level intelligence at one task
- AGI: Human-level intelligence at many tasks
Notice what he did not say. He didn't say "all economically valuable tasks." He didn't say "a majority of tasks." He said many tasks.
This was Demis Hassabis - arguably the #1 AI researcher in the world, now a Nobel Prize winner - articulating his own definition of AGI to DeepMind, the best AI research team ever assembled. This was before the hype explosion, before the corporate stakes, before AGI became a contractual trigger between Microsoft and OpenAI, to be determined by third-party experts.
By his original definition? We've smashed through AGI and barely noticed.
The Evidence Is in Our Daily Lives
Never mind the scientific breakthroughs AI has already enabled. Never mind the creative applications - music generation, image generation, video generation, the transformation of social media. Those are breathtaking in their own right.
But just from my own workflow this week:
- Teaching: Generative AI helped me prepare a lesson for my graduate-level MBA class
- Cooking: It helped me plan and cook a prime rib over Thanksgiving week - a tradition in my family where we cook beef instead of turkey (although we did have some turkey, because we needed it to make gravy)
- Negotiation: My wife used it to help negotiate the salary for a job she's starting at an international startup, with complex compensation implications
Three wildly different tasks. Human-level intelligence - or better - at each one.
That's not narrow AI. That's not "one task." That's AGI by the definition the field's leading researcher gave his own team.
The Goalposts Moved Without Us Noticing
Sam Altman has noted that we essentially blew past the Turing test and it barely made the news. He's predicted that reaching AGI will be similar: we'll freak out for about a week, then move on, and the world will continue.
I think we've already passed that point.
When companies and researchers talk about AGI now, they're really describing what should be called ASI - Artificial Superintelligence: the bar where every single economically valuable task that could be done by software or a robot actually is. We're not even close to that.
But AGI as originally conceived? Human-level intelligence at many tasks?
We're way past it.