Privacy and AI – new guidance from OAIC

07 November 2024

2 min read

#Data & Privacy, #Technology, Media & Telecommunications, #Corporate & Commercial Law

Published by:

Nastassya Naude

The Office of the Australian Information Commissioner (OAIC) has released new guidance to help organisations and developers comply with their privacy obligations when using AI.

The guidance is divided into two parts – the first targets developers who are in the initial stages of developing and fine-tuning their AI models, while the second is directed at organisations that deploy AI in their business operations.

AI practices that could breach the Australian Privacy Principles

The guidance sets out ways in which AI development, deployment and use can amount to breaches of the Australian Privacy Principles (APPs). For example:

  • failing to understand the risks associated with using AI could put your organisation at risk of breaching its obligations under APP 1 (to manage personal information in an open and transparent way)
  • AI products that generate or infer personal information do not collect it directly from the individuals concerned, as required by APP 3. This principle states that personal information should be collected directly from the individual unless it is unreasonable or impracticable to do so. To comply with APP 3, organisations must demonstrate that such collection is indeed unreasonable or impracticable and that it is reasonably necessary for their functions or activities
  • data sets, some of which include personal information, are used to train AI. When an individual provides an organisation with their personal information, they may not expect it to be used to train an AI model, which could breach APP 6
  • it may be difficult for an individual to understand how AI products work and where the data used by those products will be transferred, which risks breaching APPs 5 and 6
  • personal information can become outdated. An organisation should ensure that the personal information it collects, uses and discloses is accurate, relevant and up to date (as required by APP 10). This can be difficult where the information has been used to train an AI model, as extracting the relevant record may be impossible. Organisations should also make clear to end users that data generated by AI products may be inaccurate.

While there is an obvious need for organisations to ensure compliance with their legislative obligations, including compliance with the APPs, it is also prudent to assume that contractors may access and use AI products when supplying goods or services to your entity. Organisations should consider whether their standard contract terms appropriately address privacy risks that arise from AI use by contractors.

Access the OAIC's guidance for developers here and guidance for businesses using AI here.

If you have any questions about the guidance or require assistance with privacy complaints and breach allegations, advice on AI usage in your business, reviewing commercial contract terms, drafting Privacy Impact Assessments or general privacy advice, please get in touch with our team below.

Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.
