SINGAPORE – Rogue chatbots that spew lies or racial slurs may be just the beginning, as maliciously coded free chatbot models blindly used by businesses could unintentionally expose sensitive data or result in a security breach.
In new guidelines published on Oct 15, Singapore’s Cyber Security Agency (CSA) pointed out these dangers amid the artificial intelligence (AI) gold rush, and urged businesses to test what they plan to install rigorously and regularly.
This is especially crucial for firms that deploy chatbots used by the public, or those linked to confidential customer data.
Frequent system tests can help weed out threats like prompt injection attacks, where text is crafted to manipulate a chatbot into revealing sensitive information from linked systems, according to the newly published Guidelines on Securing AI Systems .
The guidelines aim to help businesses identify and mitigate the risks of AI to deploy them securely. The more AI systems are linked to business operations, the more they should be secured.
Announcing the guidelines at the annual Singapore International Cyber Week (SICW) at the Sands Expo and Convention Centre on Oct 15, Senior Minister and Coordinating Minister for National Security Teo Chee Hean said the manual gives organisations an opportunity to prepare for AI-related cyber-security risks while the technology continues to develop.
Mr Teo said in his opening address that managing the risks that come with emerging technology like AI is an important step to build trust in the digital domain. He urged the audience to learn lessons from the rapid rise of the internet.
“When the internet first emerged, there was a belief that the ready access to information would lead to a flowering of ideas and the flourishing of debate. But the internet is no longer seen as an unmitigated good,” he said, adding that there is widespread recognition that it has become a source of disinformation, division and danger.
“Countries now recognise the need to go beyond protecting digital system to also protecting their own societies,” he said. “We should not repeat these mistakes with new technologies that are now emerging.”
The ninth edition of the conference is being held between Oct 14 and 17 and features keynotes and discussion panels by policymakers, tech professionals and experts.
AI owners are expected to oversee the security of AI systems from development, deployment to disposal, according to CSA’s guidelines, which do not address the misuse of AI in cyber attacks or disinformation.
In a statement released on Oct 15, CSA said: “While AI offers significant benefits for the economy and society… AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system.”
Organisations using AI systems should consider more frequent risk assessments than with conventional systems to ensure tighter auditing of machine learning systems.