18 February 2025
5 min read
#Data & Privacy, #Technology, Media & Communications
Published by:
Rapid advancements in AI and high-profile privacy breaches are driving a wave of government legislative reform. Following our earlier articles on the regulation of AI tool development and use, and the new Cyber Security Act, we provide an update on the latest developments.
Since releasing the proposed mandatory guardrails and the voluntary safety standard in September 2024, the government has been consulting on issues that require further work, including how to determine what constitutes a high-risk setting for AI deployment and how to effectively legislate the proposed reforms.
The government also announced that its expert group plans to develop a National AI Capability Plan (Plan) once consultation is complete. The outputs are expected to draw on EU models, and to be released by the end of 2025. However, there have been recent calls to bring forward the Plan’s release, which would allow its resourcing to be included in the budget planning for the next financial year.
Overseas, President Trump moved away from the responsible AI safety rules that had been introduced by the previous administration and called for AI development to be a key focus for the US. The recent release of the Chinese generative AI product, DeepSeek, caused market turmoil, signalling that the US’s lead in AI could be quickly eroded by cheaper alternatives. This development also prompted the Australian Government to ban the app on its staff’s devices.
On 11 February 2025, the Commonwealth's House of Representatives Standing Committee on Employment, Education and Training delivered its report, 'The Future of Work', in which it called for AI products used by employers in recruitment, remuneration and training to be classified as "high risk", thereby attracting the proposed mandatory guardrails.
Meanwhile, the Attorney-General's Department is accepting submissions on automated decision-making (ADM) in government, following recommendations from the Robodebt Royal Commission to review the legislative framework allowing for such decision-making. While ADM will not necessarily involve the use of AI tools, there will likely be overlaps, and the review is intended to complement and align with the government's broader AI reform program. The focus on transparency and safeguards (at the pre-implementation stage, at the systems level, and at the decision stage and beyond) is consistent with the principles proposed in the AI guardrails. The government is expected to provide an update on its Plan soon, amid growing calls from the technology sector for greater certainty as soon as possible.
On the cyber security front, the government is receiving submissions on the subordinate legislation to the Cyber Security Act and the Security of Critical Infrastructure Act 2018. There are currently six proposed rules under the legislative package passed late last year, covering a range of operational matters.
Submissions closed on 14 February 2025, and the rules are expected to be finalised and brought into effect in the months following that process.
A recent report reviewing Australia's online safety laws has recommended introducing a "digital duty of care" requiring large social media platforms to take reasonable steps to actively protect users from harmful content. Under this duty, platforms would be required to conduct regular risk assessments to identify harmful content on their platforms, be transparent about the results, and respond to user complaints regarding such content.
Users would also have a right to submit complaints to a regulator, potentially similar to the EU’s Digital Services Coordinator, which could take action if the platform fails to address the issue. There could also be significant fines and other penalties for non-compliance.
The proposed duty is broadly similar to the one announced by the government in November 2024 (which it plans to pursue if re-elected in 2025), and would likely draw on the examples recently established in the EU and the UK. If implemented, the duty is also seen as an alternative approach to combating online misinformation and disinformation, particularly after the withdrawal of the bill that was specifically aimed at such conduct.
Some commentators have also advocated for this duty as a preferable alternative to the government's recent legislation that will introduce an effective ban on children under 16 accessing social media platforms, which is due to be implemented by December 2025. Further work will be needed to define the types of harm that would attract the duty. The report also suggests separating the Online Safety Act regime from the existing national classification scheme for computer games and film, to provide more flexibility in dealing with harms such as eating disorders.
Last but not least, the regulatory environment for the privacy of personal information continues to evolve. Recent developments include the legislation effecting major changes to the Commonwealth Privacy Act passed in December 2024, new guidance for developers and businesses on privacy issues arising from the use of AI, and the decision regarding the use of facial recognition technology by Bunnings released in November 2024.
We will continue to see many developments in AI, cyber security and privacy as we settle into 2025. If you would like more information about the above, please get in touch with our team below.
Disclaimer
The information in this article is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, we do not guarantee that the information in this article is accurate at the date it is received or that it will continue to be accurate in the future.