AI Godfather Hinton Warns: The $4.8 Trillion AI Race Is a Car Without a Steering Wheel

When 78-year-old Geoffrey Hinton sat before the camera and addressed hundreds of delegates, the room fell silent for several seconds.

"They want a super-fast car with no steering wheel."

The 2024 Nobel laureate in Physics, known worldwide for over a decade as the "Godfather of AI," once again sounded the alarm for humanity at the Global Digital World Conference.

The elderly scientist spoke with a tone approaching pleading caution: "We don't know if humans can coexist with superintelligent AI."

"But we are building it."

Only 1% on Safety, 99% on the Accelerator

During his address, Hinton laid out a stark accounting. The global AI industry is expanding at an unprecedented pace in human history. According to UN Trade and Development data, the global AI market was valued at $189 billion in 2023 and is projected to soar to $4.8 trillion by 2033. In just one decade, humanity will have built an economic entity larger than Japan's entire GDP.

Nearly all of this investment is funneled into building larger models and deploying more computing power.

And safety? Hinton offered a single figure: roughly 1%. Only about 1% of global AI research and development spending is devoted to figuring out how to prevent these systems from causing harm.

Hinton's assessment: "It's crazy."

"The AI technology lobbying groups are spending huge sums on advertising to make everyone accept an analogy: AI is the accelerator, regulation is the brake. Their message is, don't hit the brakes, it will slow us down."

Hinton flatly rejected this framing. "Progress is the accelerator, yes. But regulation is not the brake. Regulation is the steering wheel. They want a car racing at top speed, but without a steering wheel."

Seated beside him, computational neuroscientist Terry Sejnowski immediately added: "Have you ever driven a car with no brakes? Going downhill, you'll find out just how bad it gets. But what's even worse is we don't even have a steering wheel."

Pedal to the floor, steering wheel removed—that, according to these two pioneers, is the actual state of the global AI race.

"AGI Is a Stupid Term"

When the moderator steered the conversation toward AGI and societal risk, Hinton shifted gears completely. Asked how to define AGI and what benchmarks would signal its arrival, he did not hold back.

"AGI is a stupid term."

His reasoning is straightforward: it assumes intelligence is one-dimensional, like a thermometer—the higher the number, the smarter the entity. "But intelligence is clearly highly multidimensional. There is no single point at which AI equals human capability. Its abilities relative to humans are jagged—vastly surpassing us in some dimensions, falling short in others."

He offered an example: ask any large language model the tax filing deadline in Slovenia or how to moisture-proof a porch, and it will give you a comprehensive answer. In general knowledge, AI left humans in the dust long ago. Yet on certain reasoning tasks, it still hasn't fully caught up. "So the term AGI is meaningless."

What term does carry meaning? For Hinton, it is "superintelligence." Its definition is clear: a system that surpasses human performance at nearly all intellectual tasks. And, he believes, it is on its way.

Then came the core question of the entire dialogue. A government official in attendance asked: When superintelligence arrives, will humans still be able to maintain meaningful control over the systems they created?

Hinton answered: "We don't know if we can coexist with superintelligent AI."

"But we are building it, so we still have considerable control right now. We should build carefully, in a way that allows us to continue to exist and coexist with it harmoniously." Among all known cases, he noted, there is only one example of a much smarter entity willingly granting freedom to a far less intelligent one: a mother and her infant. The mother genuinely cares about the baby.

The problem now is that we are at an urgent juncture in history. We must try to solve this problem, yet the resources devoted to it are pitifully small: perhaps only 1% of all work studies it, while the other 99% is focused on making AI smarter. It is sheer madness.

Tobacco, Asbestos, and the Blueprint for AI

Hinton categorized AI risks into three tiers.

The first is deliberate misuse: people intentionally using AI to cause harm—creating deepfakes to corrode democracy, engineering lethal viruses to trigger pandemics, or launching cyberattacks. This is the most immediate threat.

The second is profit-driven side effects: using AI to generate non-consensual explicit imagery, or recommendation algorithms that continuously push increasingly extreme content, ultimately fracturing society into groups that share no common language. "They are just making money, but the side effect is tearing society apart."

The third is the existential threat of autonomous AI takeover.

Hinton believes the third category might attract international cooperation because everyone is afraid. But the first two—particularly the first—will not. Nations will pay lip service to cooperation while actively attacking each other. That is far harder to resolve.

He drew a historical parallel: look at the history of tobacco and asbestos. Developed countries that produced them—Canada, for example—introduced regulations to protect their own citizens. But they continued selling these products to developing nations. "So we must genuinely worry: even if countries developing AI enact regulations in the right direction, they may still sell AI to other nations where it produces devastating consequences—even if those very applications are banned at home."

There is nothing new under the sun. The playbook of tobacco and asbestos is likely to be replayed.

The $4.8 Trillion Divide

Another fissure exposed at the conference was the crisis of distribution. Pedro Manuel Moreno, Acting Secretary-General of UN Trade and Development, pointed out bluntly at the concurrent Commission on Science and Technology for Development: the capacity to build and shape AI is concentrated in a tiny handful of economies and corporations.

Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union, offered a glaring comparison: AI adoption rates in developed nations are nearly double those in developing countries. "If this problem is not addressed, it will be the second great divergence."

Between the countries that create AI and those that can only consume it, the chasm is widening at visible speed. In a $4.8 trillion market, the infrastructure, investment, and talent are all clustered at a few points in the Global North, while the rest of the world is not even granted a seat at the rule-making table. The consequences are plain to see, and alarming.

Who Is Holding the Steering Wheel?

Zooming out, Hinton's dialogue at this conference is essentially a culmination of his whistleblowing efforts over the past three years.

In 2023, upon leaving Google, he said he regretted his life's work. In 2024, accepting his Nobel Prize, he used the podium to call for prioritizing AI safety. In 2025, he repeatedly emphasized the urgency of regulation across multiple forums. By 2026, his language has grown even more specific.

Yet another facet of Hinton was equally striking amid the technical discussions. This 78-year-old man, moments after discussing AI apocalypse risks, could seamlessly pivot to explaining why the restricted Boltzmann machine constitutes correct Bayesian inference, why current image-generation models use only half of the wake-sleep algorithm, and how combining generative and recognition models represents the correct path forward.

He inhabits two worlds simultaneously—one contemplating how to make AI more powerful, the other how to prevent humanity from being destroyed by that very power. These two threads run in parallel through his mind without contradiction.

This is perhaps why his warnings carry such weight. It is the voice of someone who built the thing, saying: I know what it can do, and therefore I know what to fear.

And that car Hinton described—the accelerator is now floored, a $4.8 trillion engine roaring. Whether there is a steering wheel depends on whether those in the driver's seat—governments, corporations, and scientists—are willing to reach for it in the next few years.

We now stand at an exceptionally critical point in time. The period before AI surpasses us in intelligence is the only window in which humanity can still set the rules of the game.

When Hinton resigned from Google three years ago and spoke his warnings, many dismissed it as alarmism. Three years later, he is still saying the same thing. Only now, many more people truly understand his concern.

And the car without a steering wheel is still accelerating.
