AI regulation gets underway

09 October 2024

3 min read

#Data & Privacy, #Corporate & Commercial Law

The Commonwealth government has released its latest initiative to regulate the development and use of artificial intelligence (AI) tools. The package comprises two parts: a voluntary regime and a proposed mandatory regime that will apply to so-called “high-risk” AI uses once the basis of its application has been resolved.

In each case, a series of guardrails or safety standards has been set out. The two sets are almost identical, the main difference being the final safety standard: the voluntary scheme requires engagement with the organisation’s stakeholders, whereas the mandatory guardrails propose assessment and certification.

In summary, the 10 guardrails are:

  1. establish an accountability process and arrangements to ensure regulatory compliance
  2. establish a risk management process to identify and minimise risks
  3. establish systems to manage the quality of data involved
  4. test AI tools to assess performance, both before and following implementation
  5. ensure there is a human in the loop to allow oversight and interventions as required
  6. keep users informed regarding the use of AI and AI-generated content
  7. establish protocols for those affected by AI to challenge the outcomes
  8. ensure transparency with other parties to facilitate their risk assessment
  9. maintain records to allow assessment of compliance
  10. assess performance and certify compliance.

The approach to implementing the mandatory guardrails will include determining what amounts to “high risk” AI. The government has identified two potential categories:

  1. The first category has regard to the intended and foreseeable uses of the AI. Consideration is currently being given to whether risk should be determined by way of lists of uses (as in the EU), or by way of principles (where the organisation makes its own assessment having regard to issues such as the likely impact on users’ legal rights, safety or reputation).
  2. The second category is “general purpose” AI, namely AI that can be used for a variety of purposes or integrated into other products. The issue here is whether such AI should be regarded as high risk by default, given the possibility of its use in unforeseen situations.

Consideration is also being given to how the mandatory guardrails should be implemented: by means of a dedicated AI Act, or by supplementing existing regulatory requirements across various regimes (such as privacy, child safety and consumer protection). A third alternative is to have a general framework sit across existing regulation, with any inconsistencies to be resolved in due course.

While the legislation giving effect to the mandatory guardrails will likely not be in place until at least next year, the Voluntary Safety Standards already apply for all AI scenarios (not just high-risk ones). It is therefore prudent for any organisation to look at the arrangements it needs to have in place to manage its use of AI now. This will not only position it well for any mandatory requirements when they come into effect but also demonstrate to its customers and other stakeholders that it is taking AI seriously and working to manage any risks proactively.

If you have any questions or would like to discuss how your organisation can prepare and uplift current privacy policies and AI-related practices, please get in touch with our team below.

Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.