Balancing innovation with managing risks: the current debate
Some of the complexity surrounding AI regulation in Australia stems from competing views on its application. On one side, employers argue that AI creates opportunities for increased productivity, efficiency and innovation. On the other, unions point to risks ranging from potential discrimination in hiring to concerns over workplace surveillance and job displacement.
AI was a hot topic leading into the Productivity Roundtables in August 2025, a summit hosted by the Australian Government in which unions, businesses, and others came together to discuss ways to improve productivity. During the discussions, union representatives pushed for a strong regulatory framework and greater worker voice in the adoption of AI in the workplace. If union proposals are adopted, they have the potential to impede employers’ abilities to implement productivity measures in their workplaces.
As the union movement continues to push for stronger reforms, on 2 December 2025, the Albanese Government released its National AI Plan, which retreated from the Government’s first-term commitment to introduce “mandatory guardrails” and instead directs regulators to report any gaps in existing legislation to the newly formed AI Safety Institute.
While employer representatives are heralding the supposed “light-touch” approach in the National AI Plan, the rhetoric from the Government suggests there is appetite for union-backed reforms to the Fair Work framework.
___
Existing regulatory framework
Despite claims from the union movement to the contrary, Australia’s existing workplace laws already provide a foundation of protections that are relevant to the use of AI and ADM.
Unfair dismissal laws, anti-discrimination statutes, adverse action provisions and work health and safety (WHS) legislation all play a role in safeguarding employees. For example, even if an algorithm makes a decision to terminate employment without human oversight, the employer remains liable under unfair dismissal laws. The Fair Work Commission (FWC) would still require a valid reason for dismissal and would assess whether the process was fair and reasonable.
Discrimination law presents a more nuanced challenge. While intent is not required for a finding of discrimination, the law typically contemplates actions taken by a person. This raises questions about liability when a decision is made solely by an algorithm, such as where an employer may rely on ADM technology to vet prospective employees, in circumstances where candidates may inadvertently be rejected for discriminatory reasons.
While some may point to this as a “regulatory gap”, the general protections provisions under the Fair Work Act 2009 (Cth) (‘FW Act’) arguably capture circumstances where a prospective employee has been rejected for discriminatory reasons, regardless of whether a human or an algorithm made the decision. In practice, an employer would find it difficult to overcome the hurdle of a reverse burden of proof if relying exclusively on ADM technologies for recruitment purposes.
Workplace surveillance is governed by a patchwork of state and territory laws, as well as WHS obligations. While these laws may be in need of modernisation to keep pace with technological change, they do provide a level of protection against unreasonable monitoring and data collection.
Consultation requirements are another area of focus. Most employees are covered by modern awards or enterprise agreements that mandate consultation when major changes, such as the introduction of new technology, are likely to have a significant effect on employees. These obligations are broad enough to encompass AI and ADM, ensuring that employees and their representatives are involved in discussions about technological change. Recent committee reports and union submissions argue that consultation duties are sometimes “obviated by employers” and may lack transparency in practice, creating uncertainty over whether AI deployment constitutes a major change triggering formal consultation. While there is little evidence that this is the case, the argument is quickly gaining support within the Federal Cabinet.
___
Where to from here?
Recent developments indicate that specific regulation of AI in the workplace is not just a possibility but is already here. Notably, the introduction of a statutory Digital Labour Platform Deactivation Code for gig economy platforms and proposed amendments to the Workers Compensation Act 1987 (NSW) signal a move towards greater oversight of automated systems.
The New South Wales workers compensation changes, currently before Parliament, are particularly novel and may provide a blueprint for similar laws in other jurisdictions. They seek to link work health and safety risks with workplace surveillance and “discriminatory” decision-making, providing union officials with specific entry rights to inspect “digital work systems” to investigate suspected breaches of the law.
These reforms purportedly aim to ensure human oversight in key decisions, prevent unreasonable performance metrics and surveillance, and grant unions increased powers.
At the Federal level, unions, led by the Australian Council of Trade Unions (ACTU), are advocating for mandatory “AI Implementation Agreements” that would require employers to consult with staff before introducing new AI technologies. These agreements would guarantee job security, skills development, retraining, and transparency over technology use.
Most recently, we saw Microsoft Australia and the ACTU announce an agreement to “develop a framework to elevate the voices and expertise of working people in the introduction of AI and other emerging technologies into Australian workplaces”. The agreement, which is a first in Australia, is grounded in three core objectives: information sharing with union leaders and workers, worker voice in technology development, and collaboration on public policy and skills.
Additional union proposals include a right for workers to refuse to use AI in certain circumstances, mandated training, reforms to surveillance laws, and expanded bargaining rights related to AI adoption.
While the Australian Government appears to be moving away from a dedicated AI Act, recent supportive comments from key ministers indicate that employers should be prepared for more targeted legislative changes giving workers and unions greater voice in the adoption of AI in the workplace.
___
Practical steps for employers
In this evolving landscape, employers should take proactive steps to manage both legal compliance and workforce relations. In practice, organisations should:
- Ensure human oversight: maintain human involvement in significant employment decisions made using AI or ADM, particularly in hiring, firing, promotion and performance management.
- Conduct AI risk assessments: evaluate the potential bias, privacy, WHS and discrimination risks before implementation.
- Consult with employees: engage in timely and meaningful consultation with employees and their representatives when introducing new technologies, in line with existing modern award or agreement obligations.
- Develop clear policies: establish and communicate clear policies on the use of AI, workplace surveillance and data handling to ensure transparency and build trust.
- Invest in skills development: provide upskilling and retraining opportunities to help employees use AI safely and effectively, adapt to technological change and maintain workforce capability.
- Monitor legal developments: track reforms at federal and state levels and emerging best practices to ensure ongoing compliance and readiness for future reforms.
___
Takeaway for employers
Artificial intelligence is no longer a distant concept – it is already shaping workplace regulation in Australia. While existing laws offer protections, the regulatory landscape is evolving quickly, with unions calling for stronger safeguards and the government signalling targeted reforms.
With government, unions, and business groups all weighing in on the future of AI regulation, it is crucial for employers to understand both the current legal framework and the likely direction of future reforms. Employers who act now will be better prepared to meet upcoming reforms and maintain the trust needed for successful AI adoption.
*Kingston Reid
**Ius Laboris