AI and Your Obligations as an Australian Financial Services Licensee
Tuesday, November 19, 2024

On 29 October 2024, the Australian Securities and Investments Commission (ASIC) published REP 798 Beware the gap: Governance arrangements in the face of AI innovation. The report details ASIC's findings from a review of how artificial intelligence (AI) is being used and adopted by financial services and credit licensees.

In REP 798, ASIC warned that licensees are adopting AI technologies faster than they are updating their risk and compliance frameworks. This lag creates significant risks, including potential harm to consumers. For instance, ASIC raised concerns about an AI model used by one licensee to generate credit risk scores, describing it as a "black box": the model lacked transparency, making it impossible to explain which variables influenced an applicant's score or how they affected the final outcome.
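
To make the "black box" concern concrete, the sketch below is a deliberately simplified, hypothetical scoring model written in Python; it is not drawn from REP 798 or from any licensee's actual system, and all variable names and coefficients are invented for illustration. An interpretable model of this kind can report the contribution each variable makes to an applicant's score, which is exactly the explanation ASIC found missing.

    # Hypothetical, simplified credit-scoring sketch; not any real licensee model.
    # It illustrates the transparency ASIC found lacking: every variable's
    # contribution to the final score can be reported and explained.
    import math

    # Illustrative coefficients an interpretable model might learn.
    WEIGHTS = {
        "income_to_debt_ratio": 1.8,
        "years_at_current_address": 0.4,
        "missed_payments_last_year": -2.1,
    }
    BIAS = -0.5

    def score_with_explanation(applicant):
        """Return a 0-1 score plus each variable's contribution to it."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        logit = BIAS + sum(contributions.values())
        score = 1 / (1 + math.exp(-logit))  # logistic squash to 0..1
        return score, contributions

    score, reasons = score_with_explanation({
        "income_to_debt_ratio": 2.0,
        "years_at_current_address": 3.0,
        "missed_payments_last_year": 1.0,
    })
    print(f"score = {score:.2f}")
    for variable, contribution in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {variable}: {contribution:+.2f}")

With a model structured this way, a licensee can answer the questions ASIC posed: which variables influenced an applicant's score and by how much. Genuinely opaque models, by contrast, require additional explainability tooling and governance to reach the same standard.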

In the report, ASIC emphasised that licensees planning to use AI need to stay abreast of the rapidly evolving technological landscape, and it highlighted the importance of prioritising governance, risk management, and regulatory compliance when implementing new tools.

Below, we summarise how licensees' adoption and use of AI fits within the existing regulatory framework, together with the industry best practices that will help ensure compliance with both current and future regulatory requirements in Australia.

Existing Regulatory Framework

In REP 798, ASIC reminded financial services businesses to consider and comply with their existing regulatory obligations when adopting and using AI. It pointed out that the current regulatory framework for financial services and credit licensees is technology-neutral, meaning it applies equally to AI and non-AI systems and processes.

These existing obligations are as follows: 

  • General obligations: Use of AI must comply with the general obligation to provide financial or credit services "efficiently, honestly and fairly". ASIC highlighted that AI models can treat consumers unfairly and can produce outcomes or decisions that are difficult to explain.
  • Unconscionable conduct: The use of AI must not lead to actions that are unconscionable towards consumers. ASIC provided an example where AI could unfairly exploit consumer vulnerabilities or behavioural biases. 
  • Misleading and deceptive conduct: Representations regarding the use of AI, model performance, and outputs must be factual and accurate. This obligation includes ensuring that any AI-generated representations are not false or misleading.
  • Directors' duties: Directors must recognise that their duty to exercise their powers with the care and diligence a reasonable person would use in similar circumstances extends to the adoption, deployment, and use of AI. They should keep this responsibility in mind when relying on AI-generated information to fulfil their duties, and should consider the reasonably foreseeable risks that may arise from its use.

Best Practices – ASIC

ASIC has also set out the best practices it has observed from licensees.

  • Review and documentation: Licensees should identify and update their governance and compliance measures as AI risks and challenges evolve. These measures need to be documented, monitored, and regularly reviewed. ASIC noted that some licensees did not take a proactive approach, failing to update their governance arrangements in line with their increasing use of AI. To prevent potential consumer harm, licensees must consistently review and update their arrangements so that governance does not lag behind AI adoption.
  • AI governance arrangements: Licensees should establish a clear overarching AI strategy that aligns with their desired outcomes and objectives, while also considering their skills, capabilities, and technological infrastructure. In ASIC's view, best practice includes forming a specialist executive-level committee with defined responsibility and authority over AI governance, along with regular reporting to the board or committee on AI-related risks. Incorporating the eight Australian AI Ethics Principles into AI policies and procedures is also considered a hallmark of best practice.
  • Technological and human resources: Licensees should assess whether they have sufficient human capital with the necessary skills and experience to understand and implement AI solutions. They must also ensure they have adequate technological resources to maintain data integrity, protect confidential information, and meet their operational needs.
  • Risk management systems: Licensees should evaluate how the use or increased adoption of AI alters their risk profile and risk management obligations. They should determine whether these changes necessitate adjustments to their risk management frameworks. 
  • AI third-party providers: Licensees should ensure they have appropriate measures in place to select suitable AI service providers, monitor their performance, and manage their actions throughout the entire AI lifecycle. While many licensees quickly relied on third parties for their AI models, they often overlooked the associated risks. ASIC noted that best practices include applying the same governing principles and expectations to models developed by third parties as those used for internally developed models.

Alongside REP 798, ASIC provided 11 questions for licensees to review and consider to ensure their AI innovation is balanced against the regulatory obligations and best practices above.

These are as follows: 

  1. Taking stock: Where is AI currently being used in your business? 
  2. Strategy: What is your strategy for AI, now and in the future?
  3. Fairness: How will you provide services efficiently, honestly and fairly when using AI? 
  4. Accountability: Who is accountable for AI use and outcomes in your business? 
  5. Risks: How will you identify and manage risks to consumers and regulatory risks from AI? 
  6. Alignment: Are your governance arrangements leading or lagging your use of AI? 
  7. Policies: Have you translated your AI strategy into clear expectations for staff? 
  8. Oversight: What human oversight does your AI use require, and how will you monitor it? 
  9. Third parties: How do you manage the challenges of using models developed by third parties? 
  10. Regulatory reform: Are you engaging with the regulatory reform proposals on AI?
  11. Resources: Do you have the technological and human resources to manage AI?

Best Practices – Voluntary AI Safety Standard

Separately, the Australian Government has introduced the Voluntary AI Safety Standard, which outlines 10 guardrails for the development and deployment of AI by Australian organisations. These guardrails were developed in response to feedback from Australian companies that expressed a need for clear guidance and consistency in implementing AI.

The guardrails are as follows:

  1. Regulatory compliance: Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance. This is designed to create the foundation for an organisation's use of AI. 
  2. Risk management: Establish and implement a risk management process to identify and mitigate risks. Complete a risk assessment considering the full range of potential harms, and consider this on an ongoing basis. 
  3. Data integrity: Protect AI systems and implement data governance measures to manage data quality and provenance. Focus on the unique characteristics of the AI system proposed or implemented.
  4. Testing: Test AI models and systems to evaluate model performance and monitor the system once deployed.
  5. Ensure control: Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle, reducing the potential for unintended consequences and harms. 
  6. Create trust with users: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. Instil confidence and trust in users by proper disclosure of the use of AI. 
  7. Establish processes: Create processes that allow users, organisations, people, and society impacted by AI systems to challenge the use of AI and to contest decisions, outcomes, or interactions that involve AI.
  8. Be transparent: Be transparent with other organisations across the AI supply chain about data, models, and systems to help them effectively address risks.
  9. Maintain records: Keep and maintain records that allow third parties to assess compliance with the guardrails (a minimal record sketch follows this list).
  10. Engage stakeholders: Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion, and fairness. Conduct a stakeholder impact assessment to identify any potential bias or ethical prejudices in AI deployment.
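
As a concrete illustration of guardrails 3 and 9 (and of ASIC's "taking stock" and "accountability" questions), the sketch below shows one way a licensee might structure an entry in an internal AI register. It is a hypothetical Python example of our own; the field names are illustrative and are not prescribed by the Voluntary AI Safety Standard or by ASIC.

    # Hypothetical AI-register entry. Field names are illustrative only; they
    # are not prescribed by the Voluntary AI Safety Standard or by ASIC.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AISystemRecord:
        """One row in a licensee's internal AI register (guardrails 3 and 9)."""
        system_name: str
        business_use: str                    # where AI is used ("Taking stock")
        accountable_owner: str               # named role or individual ("Accountability")
        third_party_provider: Optional[str]  # None if developed in-house
        data_provenance: str                 # data sources and quality notes (guardrail 3)
        last_risk_assessment: date           # reviewed on an ongoing basis (guardrail 2)
        human_oversight: str                 # how human intervention works (guardrail 5)
        user_disclosure: str                 # AI-use disclosure to end-users (guardrail 6)

    # Example entry, with entirely fictional details.
    register = [
        AISystemRecord(
            system_name="credit-risk-scoring-v2",
            business_use="Consumer credit applications",
            accountable_owner="Head of Credit Risk",
            third_party_provider="Example Vendor Pty Ltd",
            data_provenance="Internal loan book 2015-2023; vendor bureau data",
            last_risk_assessment=date(2024, 10, 1),
            human_oversight="Analyst reviews all declines before notification",
            user_disclosure="AI-assisted decision notice in application terms",
        ),
    ]

A register of this kind gives an executive-level AI committee a single artefact to review and report on, and gives third parties something concrete against which to assess compliance with the guardrails.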

The Government is also consulting on mandatory guardrails for AI in "high-risk" cases, which will largely build on the existing voluntary guardrails. The definition of "high-risk" AI is still under consideration. In a proposals paper, the Government suggested that "high-risk" could include two broad categories:

  1. Intended and foreseeable use risks: This includes any adverse impacts on individual human rights or health and safety.
  2. General purpose risks: This refers to AI that can be used for various purposes and adapted for different applications.

Key Takeaways

Licensees who are currently using or planning to use AI must ensure they are familiar with the existing regulatory framework for developing and deploying AI in Australia. They should strive to adhere to the best practices outlined by ASIC in REP 798 and embrace the Government's 10 voluntary guardrails. We anticipate that these best practices and guardrails will be incorporated into future legislation and will influence how regulators such as ASIC enforce current regulatory requirements.

Implementing an appropriate risk and governance framework is just the first step. Licensees should proceed cautiously, conduct thorough due diligence, and establish effective monitoring to ensure their policies adequately address the ongoing risks and challenges associated with AI. However, licensees with robust technology platforms and a strong track record in risk management are well-positioned to experiment with AI and should feel confident in moving forward with this technology. 
