Australia: Regulating AI – Emerging Issues
Friday, June 16, 2023

Amid global calls for tailored regulation of artificial intelligence tools, the Australian Federal Government has released a discussion paper on the safe and responsible use of AI.  The Government is consulting on what safeguards are needed to ensure Australia has an appropriate regulatory and governance framework.

While the advances in generative AI (such as ChatGPT, Bard, etc.) have garnered much attention recently, the discussion paper also focusses on a range of other AI tools, including the use of AI for automated decision making.  However, the Government has flagged that the rapid acceleration of tools like ChatGPT has prompted it to consider what further action is required now to manage the potential risks, while continuing to encourage uptake of innovative technologies.

Notably, the discussion paper does not address the possible copyright issues (considered previously in our post) associated with generative AI systems.  Instead, it focusses on the use of these tools in a responsible way.  However, any coordinated government-wide response to AI cannot ignore the potential impact on content creators, or the potential uncertainty for users who may be exposed to copyright infringement claims.  OpenAI, creator of ChatGPT, has previously proposed a mechanism for rewarding content creators when their data is used by generative AI; however, no detail has yet emerged as to how such a system would operate.

Within the financial services and fintech sectors, automation and technology have long been used to deliver efficiencies and assist with regulatory compliance.  The rapid advancements in AI tools have the potential to greatly expand such automation.

A number of the changes contemplated in the discussion paper build on options being explored in the Government’s ongoing review of the Privacy Act.  For instance, the Privacy Act review included proposals to require businesses to disclose when they use personal information in automated decision-making systems and, when requested, to inform consumers how automated decisions are made, following similar requirements under the GDPR in Europe.  Some of the proposals from the Privacy Act review will pose regulatory challenges.  Notably, the review proposes introducing a right for individuals to request erasure of their personal information from an entity’s data stores.  The black-box nature of AI models makes complete erasure inherently difficult: once personal information has been used to train a model, its influence is encoded in the model’s parameters and cannot simply be located and deleted.
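
The toy sketch below is our own illustration (it does not come from the discussion paper) of why erasure is hard in practice: deleting the source record leaves the already-trained model untouched, and exact removal generally requires retraining the model without that record.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# "Erasing" the first record from the data store does not touch the
# already-trained model: its weights still encode information derived
# from that record.  Exact removal means retraining without it.
X_kept, y_kept = X[1:], y[1:]
retrained = LogisticRegression().fit(X_kept, y_kept)

print("original weights: ", model.coef_)      # unchanged by the "erasure"
print("retrained weights:", retrained.coef_)  # what true removal requires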

A key focus of the discussion paper is transparency.  Large language models (LLMs) and other machine learning systems are often opaque, relying on the datasets they have been trained on to deliver outcomes in ways which are difficult to analyse from the outside.  The Government is exploring the extent to which businesses using AI tools should be required to publicly disclose their training datasets and how decisions are made, as well as allowing consumers affected by AI-powered decisions to request detailed information about the rationale.

This approach is consistent with research carried out by the Office of the Australian Information Commissioner, which found that more than three-quarters of Australians believe they should be told when AI is used to make a decision impacting them, and should also be told of the factors (and relative weightings) which led to the decision.  Whether this will be technically possible remains to be seen.
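
For simple models, this kind of disclosure is at least conceivable.  The sketch below is hypothetical (the feature names and data are invented) and assumes a linear credit-scoring model, where per-decision factor weightings can be read directly from the coefficients; for LLMs and other deep networks, no comparably clean decomposition exists.

import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "years_employed", "existing_debt"]  # hypothetical

# Invented training data: 1 = application approved.
X = np.array([[80, 5, 10], [20, 1, 40], [60, 3, 5], [30, 0, 50]], dtype=float)
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Per-factor contribution (coefficient * value) to one decision,
    largest influence first -- the 'relative weightings' idea."""
    contributions = model.coef_[0] * applicant
    return sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))

for factor, weight in explain(np.array([25.0, 1.0, 45.0])):
    print(f"{factor}: {weight:+.2f}")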

The discussion paper also highlights the need for supervision and oversight of AI systems as they are rolled out.  For AI use cases that have an enduring impact on individuals, such as the use of AI in hiring and employee evaluation, the discussion paper contemplates frequent internal monitoring of the tool, as well as training for relevant personnel.  In this regard, the discussion paper highlights a principle from the voluntary “AI Ethics Framework” which was adopted by the Australian Government in 2019, in line with OECD recommendations.  The relevant principle is that “people responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled”.  It is likely that any regulatory responses to AI will be guided by this principle.
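
The discussion paper does not prescribe what such internal monitoring should look like.  One minimal sketch, under our own assumptions (batched decision logs recording a relevant group attribute, and an illustrative threshold), is a periodic check that a hiring tool’s selection rates have not drifted apart across groups, with failures escalated for human review.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: [{"group": "A", "selected": True}, ...]"""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += d["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def flag_for_review(decisions, ratio_floor=0.8):
    """Flag if any group's selection rate falls below ratio_floor of the
    highest group's rate (a 'four-fifths'-style screening heuristic;
    the threshold is illustrative, not a regulatory requirement)."""
    rates = selection_rates(decisions)
    return min(rates.values()) < ratio_floor * max(rates.values())

log = [{"group": "A", "selected": True}] * 8 \
    + [{"group": "A", "selected": False}] * 2 \
    + [{"group": "B", "selected": True}] * 5 \
    + [{"group": "B", "selected": False}] * 5
print(flag_for_review(log))  # True: group B's rate (0.5) trails group A's (0.8)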

For ASIC and APRA regulated entities, for the time being at least, any use of AI-enabled tools is likely to be treated in a similar way to the use of other tools.  The assumption will be that the regulated entity is responsible for the AI system and its outcomes, as well as being responsible for carrying out proper due diligence at the outset and proper supervision and monitoring on an ongoing basis.  Implementing appropriate guardrails and quality assurance testing will be vital.
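
By way of a hedged illustration only (the model_score placeholder and the thresholds below are hypothetical, not drawn from any regulatory guidance), such guardrails might wrap the AI model so that invalid outputs are blocked and low-confidence cases are routed to a human reviewer rather than decided automatically.

def model_score(application):
    # Placeholder for a vendor or in-house AI model returning a
    # probability-like score in [0, 1].
    return 0.5

def guarded_decision(application):
    score = model_score(application)
    # Guardrail 1: block out-of-range model output outright.
    if not 0.0 <= score <= 1.0:
        return "ESCALATE: invalid model output"
    # Guardrail 2: route low-confidence cases to a human reviewer
    # instead of deciding automatically.
    if 0.4 <= score <= 0.6:
        return "HUMAN REVIEW"
    return "APPROVE" if score > 0.6 else "DECLINE"

print(guarded_decision({"income": 50_000}))  # -> HUMAN REVIEW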

Consultation closes on 26 July 2023, and submissions can be made to DigitalEconomy@industry.gov.au.
