

The “Godfather of AI” quits Google, warns of dangers of misinformation



Dr. Geoffrey Hinton, who developed a neural network with two of his students at the University of Toronto in 2012, has left Google, citing concerns that the technology could flood the world with misinformation. Hinton, 75, stated that he left to speak out publicly about the hazards of Artificial Intelligence (AI) and that, in part, he regretted his contribution to it. He was hired by Google a decade ago to help build the company’s AI technology, and the approach he pioneered paved the way for modern systems such as ChatGPT.

Hinton’s ground-breaking work on neural networks underpins the AI systems now used in numerous products. He worked on Google’s AI development projects for a decade, but he has since expressed reservations about the technology and his role in its evolution.

Hinton’s decision to leave Google and speak out about the technology’s hazards comes at a time when a growing number of advocacy groups and tech insiders are concerned about the potential for AI-powered chatbots to spread disinformation and displace jobs. The surge of interest in ChatGPT late last year fuelled a renewed race among internet businesses to create and deploy similar AI tools in their products.

Hinton’s Journey to Becoming the “Godfather of AI”

Hinton was a computer science professor at Carnegie Mellon University in the 1980s, but he left for Canada because he was reluctant to accept Pentagon funding. At the time, the Defence Department financed the majority of AI research in the United States. Hinton is vehemently opposed to the deployment of AI on the battlefield, which he refers to as “robot soldiers.”

In 2012, Hinton and two of his Toronto students, Ilya Sutskever and Alex Krizhevsky, developed a neural network that could assess hundreds of images and teach itself to distinguish common objects such as flowers, pets, and vehicles.

The work caught Google’s attention, and the tech giant paid $44 million to acquire the startup Hinton and his two students had founded. Their methodology led to the development of increasingly sophisticated technologies, including new chatbots like ChatGPT and Google Bard. Sutskever went on to become chief scientist at OpenAI. Hinton and two other long-time collaborators were awarded the Turing Award, dubbed “the Nobel Prize of computing,” in 2018 for their work on neural networks.

Around the same time, Google, OpenAI, and other companies began constructing networks that could learn from enormous amounts of digitised text. Hinton thought it was a powerful method for machines to comprehend and generate language, but still inferior to how humans handle language.

What changed?

It wasn’t until last year, when Google and OpenAI built systems that used far larger amounts of data, that his perspective shifted. He still considered the systems inferior to the human brain in some respects, but in others he believed they were surpassing human intellect. He speculated that whatever was happening inside those systems was probably far better than what goes on in the brain.

He argues that as firms enhance their AI systems, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he remarked of AI technologies. “Take the difference and propagate it forwards. That’s terrifying.”

Hinton stated that Google had operated as a “proper steward” of the technology, taking care not to release anything that could cause harm. However, now that Microsoft has augmented its Bing search engine with a chatbot, posing a threat to Google’s core business, Google is racing to release similar technologies. According to Hinton, the tech titans are locked in a competition that may be impossible to stop.

His immediate concern is that the internet will be saturated with fake text, photographs, and videos. This will blur the line between reality and fabrication, leaving people unable to tell what is true anymore. He is also concerned that AI technology will eventually disrupt the job market. Chatbots like ChatGPT currently supplement human labour, but they could eventually replace paralegals, personal assistants, translators, and others who handle rote tasks.

He is concerned that future versions of the technology could pose a threat to humanity because they frequently learn unexpected behaviour from the massive volumes of data they analyse. This becomes a problem, he says, as individuals and businesses allow AI systems not only to generate their own computer code but also to run it. And he fears the day when genuinely autonomous weapons, such as killer robots, become a reality.



However, Hinton stated on Twitter that he was quitting Google not to disparage the company, but so that he could discuss the hazards of AI without having to consider how his comments would affect his employer. “Google has acted very responsibly,” he added.

Many other specialists, including several of his students and colleagues, believe this threat is purely speculative. However, Hinton believes that the competition between Google, Microsoft, and others will grow into a worldwide race that will not stop until some form of global regulation is implemented.

But, as he pointed out, that might be impossible. Unlike with nuclear weapons, there is no way of knowing whether businesses or countries are secretly developing the technology. The best hope, he believes, is for the world’s top scientists and technologists to work together on strategies to control the technology and limit the dangers it poses to the world. He even added that Google should not scale it up any further until it knows whether it can be controlled.