Artificial intelligence (AI) has revolutionized many aspects of our lives, from improving medical diagnoses to broadening access to information. But every advance brings potential for misuse. Criminals are increasingly exploiting AI tools such as ChatGPT and DALL·E to carry out scams and hacking attempts, putting ordinary citizens at risk. The UK government’s Generative AI Framework and the National Cyber Security Centre have both acknowledged the risks that AI and online threats pose. It’s crucial to understand these risks and take steps to protect ourselves.
One way criminals exploit AI is by using large language models (LLMs) such as ChatGPT to craft convincing scam and phishing messages tailored to individual targets. Personal details, such as names, genders and job titles, can even be extracted from LLMs themselves and worked into these messages. LLMs also make phishing scalable: thousands of people can be targeted at once, each in their native language. Analysis of underground hacking communities has already revealed criminals using AI chatbots, including ChatGPT, for fraud, information theft and even ransomware creation.
Beyond misusing mainstream AI tools, criminals have built entire malicious LLM variants. WormGPT and FraudGPT, for example, can create malware, find security vulnerabilities, support hacking campaigns and help compromise electronic devices. Love-GPT, another variant, has been used in romance scams to generate fake dating profiles that deceive unsuspecting victims across various dating apps.
The use of LLMs like ChatGPT also raises concerns about privacy and trust. As more people use these tools, personal and confidential corporate information will inevitably be shared with them. LLMs typically incorporate whatever users type into their future training datasets, and a compromised model could leak that confidential information to others. Research has already shown that ChatGPT can be made to expose parts of its training data, a significant privacy risk. Concerns like these have eroded trust in the technology, prompting companies including Apple, Amazon and JPMorgan Chase to ban or restrict the use of ChatGPT as a precaution.
To stay safe from AI-powered cybercriminals, here are some tips:
- Be cautious with messages, videos, pictures and phone calls that appear legitimate; they may have been generated by AI tools. Verify them with a second, trusted source before acting.
- Avoid sharing sensitive or private information with ChatGPT and similar LLMs (see the sketch after this list for one way to scrub obvious details before pasting text into a chatbot).
- Remember that AI tools are not perfect and may give inaccurate answers. Be especially cautious when relying on them for medical diagnoses or in professional settings.
- Check with your employer before using AI technologies in your job, as there may be specific rules or prohibitions in place.
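For readers comfortable with a little code, here is a minimal sketch of the second tip in practice: removing obvious personal details from text before pasting it into a chatbot. The `scrub` function and its regex patterns are illustrative assumptions, not a complete privacy tool; real personal data takes far more forms than these patterns catch.

```python
import re

# Illustrative patterns only (an assumption for this sketch): real personal
# data takes many more forms (names, addresses, account numbers) that
# simple regexes will miss.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

message = "Hi, I'm Jo (jo.bloggs@example.com, +44 7700 900123). Can you rewrite my CV?"
print(scrub(message))
# -> Hi, I'm Jo ([EMAIL], [PHONE]). Can you rewrite my CV?
```

Even with a filter like this, the safest habit is simply not to paste anything into a chatbot that you wouldn’t post publicly.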
As technology continues to advance, it’s essential to take sensible precautions to safeguard ourselves against known and future threats.