Tech Council of Australia chair Scott Farquhar

Scott Farquhar’s AI copyright conundrum

Scott Farquhar’s recent address to the National Press Club introduced compelling ideas for the future of Australia’s tech industry in the age of AI, but some elements need scrutiny.

Speaking as chair of the Tech Council of Australia—which represents both local startups and global tech giants operating in the Australian market—he outlined opportunities ranging from digital embassies and expanded API access to government services, to the potential for low-cost data centres powered by abundant green energy.

Much of that vision was inspiring. But one claim in particular deserves a closer look.

One of the more contentious claims was his suggestion that Australian creatives and businesses should relinquish copyright protections when their intellectual property is used to train AI large language models. Farquhar argued that insisting on copyright claims could leave Australia lagging behind other nations, and advocated for relaxing local copyright law in line with jurisdictions like the United States.

In essence, he presented a binary choice: either forgo your copyright claim, or risk undermining the future of Australia’s AI industry. The alternative—preventing AI companies from crawling your site for content—was framed as equally problematic.

But there is a third path: requiring AI developers to pay a reasonable fee for using intellectual property. Big tech will still access the Australian content it needs. The sky won’t fall in, and the local AI sector won’t collapse.

Farquhar isn’t alone in this view. The Productivity Commission and others have echoed similar positions. While the Albanese government has so far resisted efforts to dilute copyright protections, its initial response in the previous parliament was underwhelming—convening a reference group and commissioning limited research, with little tangible progress.

Yet reform is urgently needed. Current copyright law was not designed with AI in mind, and updating it is both reasonable and necessary. The most constructive path forward would be to introduce specific provisions in the Copyright Act that address how AI models use intellectual property.

Big tech firms have long argued that training AI models—particularly large language models—is fundamentally different from piracy or unauthorized reproduction, and doesn’t affect creators’ income. Former UK Deputy Prime Minister Nick Clegg, speaking after his tenure at Meta, claimed that even requiring permission from creators would “instantly kill” the AI industry in a country.

US courts have largely accepted the distinction between training and reproduction. However, a recent judgment against AI developer Anthropic homed in on the use of pirated material—highlighting the issue of acquiring content illegally, rather than the boundaries of copyright law itself.

Australian courts may take a different view, given our stronger codified fair dealing provisions. But rather than leaving the issue to litigation, the government should proactively amend the Copyright Act to clarify AI’s use of IP.

A tiered payment schedule could be introduced: one for works used solely to train models and then discarded, another for content that’s retained and reproduced in some form. This would remove ambiguity and reduce the risk of drawn-out legal battles.

AI developers would still gain access to the content they need—while fairly compensating creators. 

Farquhar also proposed the creation of “digital embassies”—data centres on Australian soil that operate under the laws of client countries. This could appeal to nations across Asia, offering a secure and sovereign data solution. But it raises important questions: Could foreign governments sidestep Australian copyright law when training LLMs here? Would Australian law enforcement be hampered if such centres were used for espionage?

Despite these concerns, the concept is worth exploring.

Farquhar also emphasized the importance of developing AI to safeguard national sovereignty. This, he argued, doesn’t necessarily mean building LLMs from scratch. Not everyone accepts that view: genuine independence may require going much further back in the AI supply chain, to the foundation models, training data and hardware on which local systems depend.

Bias in AI is now a pressing issue. In the US, President Trump has called for the removal of climate change, diversity, equity, and inclusion content from federal AI models. Another White House executive order promotes the global adoption of US-developed AI.

Whether Australian LLMs built on US foundations will inherit those biases remains to be seen. But it’s reasonable to expect that Australian models should reflect Australian values—especially where bias is concerned.

There’s an even darker dimension. Authoritarian regimes have long understood the power of controlling populations through media and social platforms. AI amplifies that control.

It’s a path humanity must not take.

Originally published by iTWire, 11 August 2025
