‘Godfather of AI’ Geoffrey Hinton: Short-term profits, not the AI endgame, are top of mind for tech companies

Elon Musk has a moonshot vision of life with AI: The technology will take all our jobs, while a “universal high income” will mean anyone can access a theoretical abundance of goods and services. If Musk’s lofty dream ever became reality, it would, of course, bring a profound existential reckoning.

“The question will really be one of meaning,” Musk said at the VivaTechnology conference in May 2024. “If a computer can do—and the robots can do—everything better than you… does your life have meaning?” 

But most industry leaders aren’t asking themselves this question about the endgame of AI, according to Nobel laureate and “godfather of AI” Geoffrey Hinton. When it comes to developing AI, Big Tech is less interested in the long-term consequences of the technology—and more concerned with quick results.

“For the owners of the companies, what’s driving the research is short-term profits,” Hinton, a professor emeritus of computer science at the University of Toronto, told Fortune.

And for the developers behind the technology, Hinton said, the focus is similarly on the work immediately in front of them, not on the final outcome of the research itself.

“Researchers are interested in solving problems that pique their curiosity. It’s not like we start off with the same goal of, what’s the future of humanity going to be?” Hinton said.

“We have these little goals of, how would you make it? Or, how should you make your computer able to recognize things in images? How would you make a computer able to generate convincing videos?” he added. “That’s really what’s driving the research.” 

Hinton has long warned about the dangers of AI developed without guardrails or deliberate direction, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.

In 2023—10 years after he sold his neural network company DNNresearch to Google—Hinton left his role at the tech giant, wanting to freely speak out about the dangers of the technology and fearing the inability to “prevent the bad actors from using it for bad things.”

Hinton’s AI big picture

For Hinton, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.

“There’s a big distinction between two different kinds of risk,” he said. “There’s the risk of bad actors misusing AI, and that’s already here. That’s already happening with things like fake videos and cyberattacks, and may happen very soon with viruses. And that’s very different from the risk of AI itself becoming a bad actor.”

Financial institutions like Ant International in Singapore, for example, have sounded the alarm about the proliferation of deepfakes increasing the threat of scams and fraud. Tianyi Zhang, general manager of risk management and cybersecurity at Ant International, told Fortune the company found that more than 70% of new enrollments in some markets were potential deepfake attempts.

“We’ve identified more than 150 types of deepfake attacks,” he said.

Beyond advocating for more regulation, Hinton said, addressing AI’s potential for misuse is an uphill battle because each problem with the technology requires a discrete solution. He envisions a provenance-like authentication of videos and images in the future that would combat the spread of deepfakes.

Just as printers began adding their names to their works after the advent of the printing press hundreds of years ago, media sources will similarly need to find a way to add signatures to their authentic works. But Hinton said such fixes can only go so far.

“That problem can probably be solved, but the solution to that problem doesn’t solve the other problems,” he said.

For the risk AI itself poses, Hinton believes tech companies need to fundamentally change how they view their relationship to AI. When AI achieves superintelligence, he said, it will not only surpass human capabilities, but have a strong desire to survive and gain additional control. The current framework around AI—that humans can control the technology—will therefore no longer be relevant. 

Hinton posits that AI models need to be imbued with a “maternal instinct” so they treat less-powerful humans with sympathy rather than a desire to control them.

Invoking ideals of traditional femininity, he said the only example he can cite of a more intelligent being falling under the sway of a less intelligent one is a baby controlling a mother.

“And so I think that’s a better model we could practice with superintelligent AI,” Hinton said. “They will be the mothers, and we will be the babies.”
