At a gathering of government scientists and AI experts from across the globe, held in San Francisco this week, the future of AI technology and its safety measures amid changes in political leadership has been a key topic of discussion. The conference, hosted by the Biden administration, includes officials from countries and blocs such as Canada, Kenya, Singapore, the United Kingdom, and the European Union, and aims to address issues like detecting and combating the spread of AI-generated deepfakes that contribute to fraud and other harmful activities.
This meeting is significant as the first of its kind since world leaders came together at an AI summit in South Korea earlier this year. At that summit, leaders agreed to establish a network of publicly backed safety institutes to advance the research and testing of AI technology. In line with that commitment, President Biden signed an executive order last year and subsequently formed the AI Safety Institute at the National Institute of Standards and Technology.
However, with President-elect Donald Trump vowing to repeal Biden's AI policies when he returns to the White House for a second term, the future of the AI Safety Institute and its work is uncertain. While Trump's campaign platform called the executive order "dangerous" and criticized Biden's approach to AI development, he has not specified which provisions of the order he objects to or what he plans to do with the institute.
Despite the potential change in political leadership, industry groups representing companies such as Amazon, Google, Meta, and Microsoft have expressed support for Biden's AI safety approach. These groups have advocated for Congress to preserve the AI Safety Institute and codify its work into law. Experts expect the technical work carried out at the San Francisco conference to continue regardless of who is in charge, given the overlap in the goals of the two administrations.
Heather West, a senior fellow at the Center for European Policy Analysis, says there is no reason to expect a complete reversal of the AI Safety Institute's work. AI safety and regulation have been shared concerns among a range of stakeholders, and even during the first Trump administration there was recognition of the need for a stronger national AI strategy in line with other countries' efforts.
The emergence of ChatGPT in 2022, which sparked both public fascination and concerns about generative AI, has further underscored the importance of AI safety measures. It has also drawn attention to Elon Musk, the tech mogul and Trump adviser selected to co-lead a government cost-cutting commission. Musk has been vocal about the risks of AI and has sued OpenAI, the creator of ChatGPT, with which he has personal grievances.
The discussions in San Francisco underscore the international collaboration and commitment to ensuring AI safety. While the political landscape may shift, the focus on addressing AI risks and promoting responsible development remains consistent among experts and industry leaders. As the conference progresses, further insights and strategies are expected to emerge to safeguard against the potential misuse and harmful effects of AI technology.