
The "Average" Problem: Why AGI is a Moving Goalpost


The pursuit of Artificial General Intelligence (AGI), a concept I learned about this week, is often framed as a climb toward a distant mountain peak. We define it broadly: an AI that matches or exceeds the sum total of human capability, with the ability to reason like a scientist, create like an artist, and synthesize information like a polymath.

However, a quieter, more humbling realization is hidden in that definition.

If we shifted the goalposts and defined AGI as the average of human intelligence, the "intelligence explosion" wouldn't be a future event; it would be a retrospective one.

When we look at the "average" human baseline, we aren't looking at the ability to solve quantum equations or write symphonies. We are looking at a baseline that often struggles with media literacy, falls for basic phishing scams, and engages in circular logic on internet forums (without getting into what we are doing to the planet and with our political system 🤦‍♂️). By that metric, Large Language Models (LLMs) didn't just meet the average; they soared past it somewhere between GPT-3 and GPT-4.

The humor in this observation masks a technical truth: AI doesn’t need to be "God-like" to be transformative; it only needs to be slightly more competent than the median human at a specific task. We keep moving the goalpost toward the "Sum of Humanity" because if we admitted we’d already reached the "Average," we’d have to face the fact that our machines are already more "human" than we care to admit.

We aren't just building a mirror of our best selves; we are building a tool that proves the "average" was never as high as we thought it was.

Stay calm. We aren't being replaced by superintelligence yet; we're just being joined by a very fast version of ourselves.