‘Godfather of AI’ quits Google to speak more freely on concerns

Geoffrey Hinton, a pioneer in the field of artificial intelligence, surprised many on Monday when he announced his resignation from Google, where he had spent the past decade working on AI.

Often called the “Godfather of AI,” the British-born Hinton, 75, earned that moniker for pioneering work that underpins many of today’s AI systems. In an interview with the New York Times, Hinton voiced deep concern about the pace at which companies such as OpenAI, with ChatGPT, and Google, with Bard, are pushing their products forward, warning that such rapid development could come at the expense of safety.

Hinton is worried enough, the Times reported, to say that a part of him regrets his past work in the field.

The report notes that generative AI tools are increasingly poised to replace human labor, and that the technology can also be used to create and spread misinformation.

Hinton worries that the internet, which supplies the training data for generative AI tools, could become flooded with false information, leading chatbots like ChatGPT and Bard to produce an endless stream of convincing but deceptive content.

He is also concerned that the companies releasing AI-powered tools to the public may not fully understand their implications, which would make it harder to stop malicious actors from exploiting the technology.

Regulation will be difficult, Hinton said, because companies and governments can develop the technology largely in secret. One way to address that, he added, is for leading scientists to collaborate on strategies for keeping the technology under control.

In a tweet on Monday, Hinton said he left Google so that he could discuss the risks of AI freely, without having to consider how his remarks might affect the company, suggesting he intends to speak out more as the technology advances.

Hinton went further in a recent CBS interview: asked whether AI could harm humanity, he replied, “That’s not beyond the realm of possibility.”

Following the announcement of Hinton’s departure, Google’s chief scientist, Jeff Dean, said in a statement, “We remain dedicated to a responsible approach to AI. We are consistently gaining insights into comprehending emerging risks while also boldly fostering innovation.”

Hinton is not the only expert to raise concerns about the recent wave of AI technology that has captured the world’s attention.

Even Sam Altman, the head of OpenAI, recently admitted to feeling “slightly uneasy” about the technology’s potential impact.

In March, an open letter signed by technology leaders and academics warned that the technology carries “significant risks for society and humanity.” Published by the Future of Life Institute and signed by notable figures including Elon Musk, the letter called for a six-month pause in development to allow safety protocols for these advanced tools to be designed and put in place. It argued that, if managed appropriately, humanity could indeed “embrace a prosperous future with AI.”