On 31 January 2024, ASIC Chair Joe Longo outlined ASIC’s position on Australia’s AI regulatory landscape: while existing laws provide some coverage, they fall short of effectively addressing the risks associated with AI.
Longo acknowledged that Australia already has general laws which would apply to AI, such as those governing privacy, online safety, corporations, intellectual property, and anti-discrimination. He noted that directors’ duties under the Corporations Act also apply to companies using AI.
Additionally, ASIC is already scrutinising the use of AI in financial services, alongside its recent focus on cybersecurity compliance by Australian financial services licence (AFSL) holders.
However, ASIC considers that gaps remain in Australia’s regulatory landscape for AI. One of ASIC’s key concerns is algorithmic bias that perpetuates discrimination when AI is used in decision-making. Such bias can show up, for example, in credit scoring, where models trained on limited or unrepresentative data may favour certain groups of applicants over others.
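To make that mechanism concrete, the hypothetical Python sketch below (not part of ASIC’s remarks) trains a simple credit-approval model on skewed data in which one group of applicants is under-represented, then compares approval rates across groups. The group labels, features, and income thresholds are all invented for illustration.

```python
# Illustrative sketch only: synthetic data and a basic fairness check.
# Group names, features, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_applicants(n, group, income_mean):
    """Synthetic applicants: income plus a 'postcode' feature that proxies for group."""
    income = rng.normal(income_mean, 10, n)
    postcode = np.full(n, 1.0 if group == "A" else 0.0)
    # Historical approvals depend mostly on recorded income, so the group with
    # lower recorded incomes inherits lower approval labels.
    approved = (income + rng.normal(0, 5, n) > 55).astype(int)
    return np.column_stack([income, postcode]), approved, np.full(n, group)

# Limited, skewed training data: group B is under-represented.
Xa, ya, _ = make_applicants(900, "A", income_mean=60)
Xb, yb, _ = make_applicants(100, "B", income_mean=50)
X_train, y_train = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Balanced test set to compare outcomes across groups.
Xa_t, _, ga_t = make_applicants(500, "A", income_mean=60)
Xb_t, _, gb_t = make_applicants(500, "B", income_mean=50)
X_test = np.vstack([Xa_t, Xb_t])
groups = np.concatenate([ga_t, gb_t])
pred = model.predict(X_test)

rate_a = pred[groups == "A"].mean()
rate_b = pred[groups == "B"].mean()
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"disparate impact ratio (B/A): {rate_b / rate_a:.2f}")  # well below 1.0 signals potential bias
```

A disparate impact ratio well below 1.0 is the kind of signal that would prompt a lender, or a regulator such as ASIC, to examine the training data and model design more closely.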
A major challenge for ASIC and other regulators is the opacity of the technology (at least in its current form), which makes it difficult to identify and address algorithmic issues.
Moreover, the rapid uptake and development of AI demands agile and prompt regulatory responses. AI experts have advocated for greater transparency and accountability in AI decision-making through regulatory frameworks that prioritise openness, fairness, and ethical standards.
Although Longo believes AI requires greater regulation, he considers it may also be a useful tool for ASIC in upholding the safety and integrity of the financial system for the benefit of consumers.