Amid the rapid acceleration of tools like ChatGPT and global calls for tailored regulation of artificial intelligence, the Australian Federal Government has released a discussion paper on the safe and responsible use of AI. The Government is consulting on what safeguards are needed to ensure Australia has an appropriate regulatory and governance framework to manage the potential risks, while continuing to encourage uptake of innovative technologies.
A key focus of the discussion paper is transparency. Large language models (LLMs) like ChatGPT and other machine learning algorithms are often opaque, relying on the datasets they have been trained on to deliver outcomes in ways that are difficult to analyse from the outside. The Government is exploring the extent to which businesses using AI tools should be required to publicly disclose their training datasets and how decisions are made, as well as to allow consumers affected by AI-powered decisions to request detailed information about the rationale. This builds on proposals from the Attorney-General's review of the Privacy Act to require businesses to disclose when they use personal information in automated decision-making systems and, on request, to inform consumers how automated decisions are made, following similar requirements under the GDPR in Europe.
The discussion paper also highlights the need for supervision and oversight of AI systems as they are rolled out. For AI use cases that have an enduring impact on individuals, such as the use of AI in hiring and employee evaluation, the discussion paper contemplates frequent internal monitoring of the tool, as well as training for relevant personnel.
Consultation closes on 26 July 2023, and submissions can be made to DigitalEconomy@industry.gov.au.