Artificial Intelligence: Navigating the Risks of an Existential Catastrophe
As artificial intelligence (AI) continues to advance at an unprecedented pace, concerns about its potential risks and implications are growing. Roman Yampolskiy, an associate professor of computer engineering and science at the Speed School of Engineering, University of Louisville, has spent years studying the scientific literature on AI, and he warns that the technology poses a risk of “existential catastrophe” for humanity.
Yampolskiy argues that there is no evidence to suggest that AI can be fully controlled. Even if partial controls are introduced, he believes they will likely be insufficient. In his upcoming book, “AI: Unexplainable, Unpredictable, Uncontrollable,” Yampolskiy delves into the ways in which AI has the potential to dramatically reshape society, not always to our advantage.
“We are facing an almost guaranteed event with potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance,” says Yampolskiy.
The key issue, according to the researcher, is that our ability to produce intelligent software surpasses our ability to control or even verify it. Despite any potential benefits, Yampolskiy asserts that advanced AI systems can never be fully controllable and will always carry some level of risk.
In Yampolskiy’s view, the AI community should focus on minimizing these risks while maximizing the potential benefits. He warns that blindly accepting AI’s answers without understanding the reasoning behind them poses significant dangers.
“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” explains Yampolskiy.
As AI systems become more powerful, their autonomy increases while human control diminishes, leading to potential safety risks. Yampolskiy emphasizes that less intelligent agents, such as humans, cannot permanently control more intelligent agents like superintelligent AI. The issue lies not in a failure to find a safe design, but in the inherent uncontrollability of superintelligence itself.
Amid these concerns, Yampolskiy proposes potential pathways to mitigate the risks. He suggests sacrificing some AI capabilities in exchange for increased control. Transparency, explanations in plain human language, and easy-to-use “undo” options could also strengthen safety measures. Yampolskiy further advocates for limited moratoriums or partial bans on certain AI technologies, as well as increased effort and funding for AI safety research.
His message is clear: while 100% safe AI may be unattainable, even incremental improvements in AI safety are better than doing nothing. It is crucial to approach the development and deployment of AI with caution and wisdom to ensure the well-being of humanity.
As the world grapples with the challenges and promises of AI, Yampolskiy’s insights serve as a reminder that we must tread carefully in this era of rapid technological advancement. The potential consequences of mishandling AI are severe, and the fate of humanity hangs in the balance.