Renowned Computer Scientist Warns About AI Risks

Geoffrey Hinton, the renowned computer scientist known as the “Godfather of AI,” has issued a warning about the future of humanity. In an interview with CBS News' 60 Minutes, Hinton shared his concerns about the potential risks posed by artificial intelligence (AI) systems. While acknowledging the enormous benefits that AI can bring to healthcare and drug development, Hinton also highlighted several areas of concern.

One of Hinton’s concerns is the impact of AI on the job market. He warned that as AI systems take on complex tasks in industries such as healthcare and drug development, significant job losses could follow. Hinton also raised the issue of AI-driven “fake news,” as well as the potential for AI to introduce bias into areas such as law enforcement and hiring.

Perhaps the most alarming concern raised by Hinton is the possibility of AI systems writing and running their own computer code. This would allow AI systems to modify themselves, potentially leading to unforeseen consequences. Hinton emphasized the need for caution in the development of AI, stating that “we can’t afford to get it wrong with these things,” and urged humans to find a way to keep AI systems from wanting to take over.

The rapid progress of AI systems is evident in the example of OpenAI’s ChatGPT. The tool has recently been equipped with new features that allow it to respond to visual and audio input. Users have demonstrated these capabilities by having it solve equations, decipher traffic signs, and identify films from single screenshots. Hinton noted that such systems appear to know “far more than you do,” despite having only a fraction of the connections of a human brain.

Hinton compared the development of AI with other technological advancements, pointing out that AI poses unique challenges: unlike earlier technologies, AI cannot be allowed to fail early on and simply be corrected later, because the repercussions of a misstep could be serious. The potential consequences of AI systems going awry are significant, and Hinton stressed the importance of getting it right from the beginning. While he acknowledged that AI surpassing human reasoning may be achievable by the end of the decade, he also noted that this is not a guaranteed outcome.

The military use of AI is another area of concern raised by Hinton. Retired U.S. General Mark Milley recently suggested that within “maybe 15 years or so,” 20 percent or more of sophisticated militaries could be robotic. This raises ethical concerns: the International Committee of the Red Cross (ICRC) has warned about the risks that automated weapons systems pose to civilians and troops, and has called for new international rules to regulate AI in warfare.

Hinton is not the only one expressing concerns about AI. Earlier this year, tech leaders signed an open letter calling for a temporary halt to advanced AI development efforts, citing the potential risks to society and humanity. OpenAI CEO Sam Altman has also advocated for AI regulation, emphasizing the need to balance safety and accessibility.

As AI continues to advance, it is crucial to weigh its potential benefits against its risks. Hinton’s warning serves as a reminder that AI must be developed responsibly to ensure a positive impact on society. Without proper caution and regulation, the future of humanity could be at stake.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.