Singapore Enhances Security of AI Systems

Singapore is taking proactive measures to enhance the security of artificial intelligence (AI) systems. The country’s Cyber Security Agency (CSA) is preparing to release its draft Technical Guidelines for Securing AI Systems for public consultation. These guidelines, which are voluntary, aim to provide practical measures that organizations can adopt to mitigate the potential risks associated with AI systems.

Janil Puthucheary, Singapore’s senior minister of state for the Ministry of Communications and Information, emphasized the importance of ensuring that AI tools are safe and secure against malicious threats. He acknowledged that the rapid proliferation and deployment of AI across domains have significantly changed the threat landscape. Puthucheary highlighted adversarial machine learning, in which attackers manipulate a model’s inputs or training data to compromise its behavior. As an example, he cited how researchers at security vendor McAfee tricked Mobileye’s camera-based system into misreading a subtly altered speed limit sign.
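
To make the idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial machine learning attacks: the attacker nudges each input pixel in the direction that most increases the model’s loss. The tiny PyTorch model and random “image” are illustrative stand-ins, not McAfee’s actual method or Mobileye’s system.

```python
# Minimal FGSM sketch, assuming a PyTorch classifier; the model and "image"
# below are illustrative stand-ins, not any real road-sign recognition system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: maps a 3x32x32 "road sign" image to 10 class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # benign input, pixels in [0, 1]
label = torch.tensor([3])                             # its assumed true class

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM: push every pixel a small step in the direction that increases the loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

In practice, attackers keep such perturbations small enough that the altered input still looks legitimate to a human observer, which is what made the modified speed limit sign effective.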

To address these emerging risks, the Government Technology Agency (GovTech), which serves as the Singapore government’s CIO office, is developing capabilities to simulate potential attacks on AI systems. By doing so, it aims to identify vulnerabilities and implement appropriate safeguards to bolster security. Puthucheary added that industry and the wider community must also play their part in safeguarding AI systems from threats.
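
As one hedged illustration of what attack simulation can involve, the sketch below probes a trained classifier with increasingly perturbed test inputs and records how quickly its accuracy degrades; the scikit-learn model and synthetic data are assumptions for illustration only, not GovTech’s actual tooling.

```python
# Simple robustness probe: measure how a classifier's accuracy drops as test
# inputs are perturbed. Model and data are synthetic, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_scale in (0.0, 0.5, 1.0, 2.0):
    # Simulated attack: add noise to the test inputs and re-score the model.
    perturbed = X_test + rng.normal(scale=noise_scale, size=X_test.shape)
    accuracy = model.score(perturbed, y_test)
    print(f"noise scale {noise_scale:.1f}: accuracy {accuracy:.3f}")
```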

The minister noted that AI not only introduces new risks but also remains exposed to long-standing ones. He cited data theft and leakage as classic cyber threats to which AI systems are vulnerable: the growing adoption of AI expands the attack surface through which data can be compromised or exposed. Puthucheary also warned that generative AI tools such as WormGPT are being used to create increasingly sophisticated phishing lures and malware that can evade detection by traditional security systems.

However, Puthucheary also highlighted the potential for AI to enhance cyber defense. AI-powered security tools can detect anomalies and enable swift autonomous action to mitigate potential threats. By leveraging machine learning, security professionals can identify risks faster and with greater precision.
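
The sketch below shows the kind of anomaly detection such tools build on, assuming scikit-learn’s IsolationForest and made-up network telemetry (kilobytes sent and failed logins per hour); the features and contamination setting are illustrative, not any specific vendor’s approach.

```python
# Anomaly detection sketch with IsolationForest on made-up network telemetry;
# the features and contamination setting are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: modest byte counts, few failed logins.
normal = np.column_stack([
    rng.normal(500, 50, size=200),   # KB sent per hour
    rng.poisson(1, size=200),        # failed logins per hour
])

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# New observations: two resemble the baseline, one looks like data
# exfiltration combined with a brute-force attempt.
new_events = np.array([
    [510, 0],
    [495, 2],
    [5000, 40],
])

# predict() returns 1 for inliers and -1 for anomalies worth triaging.
print(detector.predict(new_events))
```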

In response to the evolving AI landscape, the Association of Information Security Professionals (AiSP) is establishing an AI special interest group. This group will facilitate the exchange of insights and developments among its members, driving the technical competence and interests of Singapore’s cybersecurity community.

Singapore’s focus on AI security aligns with global efforts in this domain. In April, the US National Security Agency’s AI Security Center issued a best-practices guide titled “Deploying AI Systems Securely.” This document, developed jointly with the US Cybersecurity and Infrastructure Security Agency (CISA), aims to enhance the integrity and availability of AI systems and provide mitigations for known vulnerabilities.

As AI continues to proliferate and impact various sectors, it is crucial to prioritize the security of these systems. Singapore’s proactive approach in developing technical guidelines and simulating potential attacks reflects a commitment to staying ahead of emerging threats. By fostering collaboration between industry, government, and cybersecurity professionals, Singapore aims to create a safer AI landscape that can benefit society at large.


Written By

Jiri Bílek
