In a groundbreaking development, four cyber attackers in China have been arrested for their involvement in a ransomware attack that used the popular chatbot ChatGPT. It is the country's first known criminal case involving the use of an artificial intelligence (AI)-powered chatbot in a cyberattack.
The attack was first reported by an unnamed company in Hangzhou, the capital of eastern China's Zhejiang province. After the company's systems were locked by ransomware, the perpetrators demanded 20,000 Tether, a stablecoin pegged to the US dollar, in exchange for restoring access.
In late November, police arrested two suspects in Beijing and two others in Inner Mongolia. According to reports, the suspects admitted to developing multiple versions of the ransomware, optimizing the malware with ChatGPT, scanning for vulnerabilities, infiltrating systems, planting the ransomware, and ultimately extorting the victims.
The report does not clarify whether the use of ChatGPT was included among the charges against the cyber attackers. ChatGPT currently exists in a legal gray area in China, as the government has moved to limit access to foreign generative AI products. Despite these restrictions, Chinese users have shown strong interest in ChatGPT and similar AI chatbots, and some bypass the blocks using virtual private networks (VPNs) and phone numbers registered in supported regions.
However, from a commercial standpoint, domestic companies face compliance risks when building or renting VPNs to access OpenAI's services, including ChatGPT and the text-to-image generator DALL-E, according to a report by law firm King & Wood Mallesons.
This incident adds to a growing number of legal cases tied to generative AI technologies. In February, Beijing police warned that AI chatbots like ChatGPT could be used to "commit crimes and spread rumors." In May, police in the northwestern province of Gansu detained an individual who allegedly used ChatGPT to generate fake news about a train crash, which drew significant attention online. And in August, Hong Kong police arrested six members of a fraud syndicate that used deepfake technology to doctor images of identification documents for loan scams targeting financial institutions.
These controversies are not limited to China. Overseas, similar concerns have arisen over the misuse of AI technologies. Earlier this year, the mayor of Hepburn Shire in Australia sent a legal notice to OpenAI after ChatGPT falsely implicated him in a bribery and corruption scandal. The US Federal Trade Commission has also warned about scammers impersonating individuals with AI-cloned voices, which require only a short audio clip of a person's voice.
Furthermore, individuals and organizations whose work has been used to train large language models are pushing back against what they see as mass infringement of intellectual property. In a case that will be closely watched for its legal implications, The New York Times recently filed a lawsuit against OpenAI and its biggest backer, Microsoft, alleging that the companies trained their powerful models on millions of its articles without permission.
As the prevalence of generative AI models like ChatGPT continues to grow, the legal and ethical challenges surrounding their use will undoubtedly persist. It is crucial for policymakers, technology companies, and society at large to navigate these complexities and strike a balance between innovation and responsible use.