In the spring of 2023, the UK government released a white paper outlining its approach to regulating artificial intelligence (AI). Rather than creating new legislation, the government decided to clarify existing laws that could be applied to AI. According to the paper, imposing new regulations on businesses could hinder AI innovation and the country’s ability to respond rapidly to technological advancements.
However, experts believe that beneath the surface of this pro-business stance is a subtle message: the UK wants to attract AI companies and is not yet ready to regulate AI. Rishi Sunak, the UK’s prime minister, recently expressed the desire to strengthen the country’s position as an AI leader, both in terms of innovation and safety oversight. Despite this, he stated that it is too soon for the government to legislate on AI, as more scrutiny of advanced models is needed.
The UK is organizing a global AI summit in November, focusing on “frontier AI,” which refers to the most advanced AI models. Documents released ahead of the summit outline numerous risks associated with AI, including AI-generated disinformation, job market disruption, potential election interference, erosion of social trust, exacerbation of global inequalities, and concerns about the development of bioweapons.
According to a spokesperson from the Department for Science, Innovation, and Technology, the summit will focus on frontier AI because it represents both the greatest risks and the vast potential of the future economy. However, critics argue that this emphasis on hypothetical existential risks overshadows the urgent need for meaningful regulations addressing current AI-related issues like surveillance, discrimination, and the spread of misinformation.
The UK’s approach to AI regulation seems to have been inspired by the US, where discussions with AI leaders have taken place in Congress, and voluntary AI safety commitments have been laid out by the White House. President Joe Biden recently issued an executive order to establish guardrails for the use of advanced AI systems by federal agencies. Nonetheless, meaningful regulation remains elusive.
The UK summit will feature tech executives and global leaders discussing how they adhere to voluntary AI safety commitments like those from the White House. In response, a group of experts organized a counter-summit to challenge this emphasis on tech leaders' perspectives, arguing that it allows big tech companies to advocate for self-regulation and voluntary commitments in place of binding rules.
Both the UK and the US are motivated by the desire to compete globally in the field of AI. The US is concerned about countries like China moving quickly to develop AI systems that could pose national security threats. The UK, led by Prime Minister Sunak, highlights the country’s expertise and talent in AI and aims to distinguish itself from the EU post-Brexit.
However, any radical deviation by the UK from EU regulations could disrupt existing scientific collaborations. The EU is in the final stages of developing its AI Act, which proposes a risk-based approach to legislating AI. Civil society groups have successfully pushed for transparency requirements regarding law enforcement’s use of high-risk AI. The EU, like the UK and US, needs to refocus its efforts on addressing existing AI harms rather than getting carried away by the hype surrounding futuristic AI models.
In conclusion, the UK’s emphasis on apocalyptic AI risks aims to attract AI companies and position the country as an AI leader. However, experts argue that this focus detracts from the urgent need for practical regulations addressing current AI-related issues. The US and EU face similar challenges and need to redirect their efforts toward creating meaningful legislation that can effectively mitigate the harms caused by AI.