On November 27, 2023, the UK government announced the first global guidelines for the secure development of AI technology (the “Guidelines”), which were developed by the UK National Cyber Security Centre (“NCSC”) and the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”), in cooperation with industry experts and other international agencies and ministries. The Guidelines have been endorsed by a further 15 countries, including Australia, Canada, Japan, Nigeria, and certain EU countries (full list here).
The Guidelines aim to raise the cybersecurity levels of AI and to help ensure that the technology is designed, developed, and deployed securely. They target “providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces,” and follow a “secure-by-design” approach to AI. The Guidelines “are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.” Each section contains considerations and mitigations intended to help reduce the overall risk to an organisational AI system development process.