Open to AI-deas - Regulating Artificial Intelligence in Australia

Hogan Lovells

On 1 June 2023, the Federal Government released the Safe and Responsible Use of Artificial Intelligence in Australia Discussion Paper, which sets out a number of potential mechanisms and regulatory approaches for governing AI in Australia. Interested organisations have one more month to have their say on how the development of AI should be handled in Australia: consultation on the discussion paper closes on 26 July 2023.


In light of the rapid advancement of artificial intelligence, including the rise of generative AI technologies such as ChatGPT, the Department of Industry, Science and Resources is currently seeking industry input into how best to implement appropriate governance mechanisms and regulatory responses to ensure AI is used safely and responsibly. The consultation is intended to ensure Australia is able to reap the benefits of AI, while supporting responsible AI practices and maintaining community trust and confidence.

The consultation follows an announcement that the Federal Government has identified artificial intelligence as a priority technology and committed AU$41.2 million to support the responsible deployment of AI in the national economy. The discussion paper also follows a previous round of consultation in 2022 on ‘Positioning Australia as a leader in digital economy regulation (automated decision making and AI regulation): issues paper’, and a paper published by the National Science and Technology Council earlier this year, ‘Rapid Response Information: Generative AI’, which addresses questions regarding the opportunities and risks of large language models and multimodal foundation models.


Current state of regulation

The discussion paper sets out a summary of the current regulatory landscape both domestically and internationally.

Currently, there is no law that specifically deals with AI in Australia. Instead, depending on its use, AI may be captured under existing laws (such as, for example, privacy and consumer laws), and through sector-specific regulations in industries such as therapeutic goods, food, financial services, motor vehicles and airline safety. Additionally, there are a number of voluntary frameworks in place, including the national AI Ethics Framework, which was released in 2019 to help guide businesses to responsibly design, develop and implement AI.  

The discussion paper makes several observations about the regulatory approaches taken by different jurisdictions with regard to AI. At this early stage in AI regulation, approaches have diverged: some countries prefer voluntary frameworks (for example, Singapore), while others are moving towards hard regulation (for example, the European Union (EU) has recently voted to pass the draft text of its AI Act).

The discussion paper also observes a growing international trend towards a risk-based approach to AI regulation, particularly in the EU, United States, Canada and New Zealand. It contains an overview of these approaches, and a number of consultation questions regarding the potential implementation of a similar risk-based approach to AI in Australia.


Key issues and risks with AI

While AI technologies evidently provide significant economic and social benefits, with AI estimated to add between AU$1.1 trillion and AU$4 trillion to the Australian economy by the early 2030s, the discussion paper identifies a number of key challenges and risk areas associated with the use of AI, including the potential for AI to cause harm by:

  • generating deep fakes to influence democratic processes or cause other deceit;

  • creating misinformation and disinformation;

  • encouraging people to self-harm;

  • containing inaccuracies and unwanted bias, or generating erroneous outputs;

  • containing algorithmic bias (i.e. the AI learns from datasets that contain biased information), which can lead to outcomes such as racial, socio-economic and gender discrimination where AI is used to make decisions (for example, AI recruitment algorithms prioritising male over female candidates).

Interestingly, the discussion paper raises the possibility of banning high-risk AI applications, and is seeking further input on this matter. High-risk activities that were specifically called out as potentially warranting a ban were social scoring and facial recognition technology.

Additionally, the discussion paper notes that there are technical elements of AI that need to be de-risked, including system accountability and transparency, and ensuring the validity and reliability of the data used to train AI models for their intended purpose. In relation to transparency, the discussion paper also flags that the use of AI for automated decision-making (ADM) should be disclosed to individuals and consumers, so that they can understand the risks of engaging with those technologies and can challenge and seek review of decisions made by AI and ADM. Businesses that deal with ADM should note that ADM is also being addressed in the Attorney-General's Privacy Act Review.


Consultation

The discussion paper contains 20 consultation questions, which broadly cover the following topics:

  1. whether the definitions of terms associated with AI proposed in the discussion paper are appropriate (for example, ‘generative AI’, ‘machine learning’, ‘large language model’);

  2. whether there are any potential gaps in approaches to AI in Australia, and whether there are any approaches from overseas jurisdictions that are relevant to Australia;

  3. whether there are any target sectors or areas for AI technology that may require a different approach;

  4. whether banning certain AI technologies will impact Australia’s tech sector and trade and exports with other countries;

  5. whether changes are required to Australian conformity infrastructure to mitigate AI risks; and

  6. whether industry supports a risk-based approach to addressing AI risk.


Next steps

The consultation on the discussion paper is now open and closes on 26 July 2023.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
