Singapore Consults on Cybersecurity Guidelines for AI Systems
Wednesday, August 14, 2024

Singapore has published and is inviting public feedback on two proposed sets of guidelines for securing AI systems.

The first is the Guidelines on Securing AI Systems, intended to help system owners secure AI throughout its life cycle. These guidelines are meant to provide principles to raise awareness of adversarial attacks and other threats that could compromise AI system security, and guide the implementation of security controls to protect AI against potential risks.

The second is the Companion Guide for Securing AI Systems, intended as a community-driven resource to support system owners; the Cybersecurity Agency of Singapore (Agency) will work closely with AI and cybersecurity practitioners to develop it.

Noting that AI “offers significant benefits for the economy and society”, including driving “efficiency and innovation across various sectors, including commerce, healthcare, transportation, and cybersecurity”, the Agency also stressed that AI systems must “behave as intended”, and that the outcomes must be “safe, secure, and responsible”. Such objectives are put at risk when AI systems are vulnerable to adversarial attacks and other cybersecurity risks.

Accordingly, the Agency notes that AI should be secure by design and secure by default.

This companion guide offers the following framework for tailoring a systematic defence plan:

  1. Carry out a risk assessment, focusing on security risks related to AI systems.
  2. Prioritise which risks to address, based on risk level, impact and available resources.
  3. Identify relevant actions and control measures to secure the AI system. These should cover the following stages of the AI system life cycle:
    a. Planning and design
    b. Development
    c. Deployment
    d. Operations and maintenance
    e. End of life
  4. Evaluate the residual risk after implementing security measures for the AI system, to inform decisions about accepting or addressing residual risks.
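The four steps above can be sketched as a minimal workflow. This is an illustrative sketch only: the guidelines do not prescribe any implementation, and the risk names, scoring scale, and residual-risk recalculation below are all hypothetical.

```python
# Hypothetical sketch of the four-step defence plan: assess risks,
# prioritise them, attach stage-appropriate controls, record residual risk.
from dataclasses import dataclass, field
from typing import Optional

# Stage (e) through (a) of the AI system life cycle named in the guide
LIFE_CYCLE_STAGES = [
    "planning_and_design",
    "development",
    "deployment",
    "operations_and_maintenance",
    "end_of_life",
]

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- hypothetical scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- hypothetical scale
    stage: str       # life-cycle stage the risk is tied to
    controls: list = field(default_factory=list)
    residual_level: Optional[int] = None

    @property
    def level(self) -> int:
        # Step 1: risk level as likelihood x impact (one common convention)
        return self.likelihood * self.impact

def prioritise(risks):
    """Step 2: rank risks by level, highest first."""
    return sorted(risks, key=lambda r: r.level, reverse=True)

# Step 1: risk assessment output (entirely made-up findings)
risks = [
    Risk("training-data poisoning", 3, 5, "development"),
    Risk("prompt injection", 4, 4, "operations_and_maintenance"),
    Risk("model theft at decommissioning", 2, 3, "end_of_life"),
]

# Steps 3 and 4: attach a control per risk, then record residual risk
for r in prioritise(risks):
    r.controls.append(f"apply control for {r.stage}")
    r.residual_level = max(1, r.level // 2)  # placeholder recalculation
```

The residual levels would then feed the accept-or-address decision in step 4; in practice each control would map to concrete measures from the companion guide rather than a placeholder string.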

While neither set of guidelines will be mandatory or prescriptive, together they curate practical measures and controls drawn from industry use cases, along with advice on more novel risks, such as adversarial machine learning threats, from resources including the MITRE ATLAS database and the OWASP Top 10 for Machine Learning and Generative AI. They will undoubtedly be a useful reference for system owners navigating this developing and crucial space.

The consultation closes on 15 September.
