UK lawmakers back licensing‑first approach, adding pressure to global AI copyright standards

News · March 7, 2026 · Artifice Prime

AI developers must obtain licenses for copyrighted material before using it to train models, a committee of the House of Lords, the UK Parliament’s upper chamber, said Thursday.

The committee called the approach “licensing-first,” meaning no training on protected works without prior permission and payment, regardless of how the material is sourced.

The committee has demanded that the government enshrine the licensing-first principle in policy, make disclosure of AI training data a statutory obligation, and permanently reject a proposed copyright exception that would allow developers to train on protected works without consent.

“Watering down the protections in our existing copyright regime to lure the biggest US tech companies is a race to the bottom that does not serve UK interests,” Baroness Keeley, chair of the Communications and Digital Committee, said in a statement. “We should not sacrifice our creative industries for AI jam tomorrow.”

The committee set out the recommendations in a report titled “AI, Copyright and the Creative Industries,” published following an inquiry launched in November 2025.

Under the Data (Use and Access) Act 2025, the government is required to publish its policy response by March 18.

What licensing-first requires

A licensing-first regime cannot function without transparency, the committee said. The report called for a statutory obligation requiring AI developers to disclose what data their models were trained on, backed by open technical standards for rights reservation, data provenance, and labelling of AI-generated content.

“Without those foundations,” the report said, “rights holders have no reliable way to establish whether their work has been used.”

Karthi P, senior analyst at Everest Group, said standards such as C2PA (Coalition for Content Provenance and Authenticity) are being adopted by device manufacturers, generative AI providers, and platform players to attach content credentials and enable traceability, a kind of provenance infrastructure that a licensing-first regime would require. “The challenge is scaling this across decades of legacy content and a highly fragmented creator economy,” he said. “That infrastructure exists in pockets, but it is not yet industrialised end to end.”

The EU has already moved in this direction. Under Article 53 of the EU AI Act, providers of general-purpose AI models must publish a detailed summary of their training content, a requirement that came into force in August 2025. Non-compliance carries fines of up to $17.3 million (€15 million) or 3% of global annual revenue. The UK committee said comparable requirements are needed in the UK.
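As a rough illustration of how that penalty structure scales with company size, the sketch below assumes the Act's usual "whichever is higher" rule applies, so the 3% revenue cap dominates for large providers while the €15 million floor binds for smaller ones:

```python
EU_AI_ACT_FLAT_CAP_EUR = 15_000_000  # €15 million flat ceiling
EU_AI_ACT_REVENUE_RATE = 0.03        # 3% of global annual revenue

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of the fine, assuming the higher of the two caps applies."""
    return max(EU_AI_ACT_FLAT_CAP_EUR,
               EU_AI_ACT_REVENUE_RATE * global_annual_revenue_eur)

# A provider with €1 billion in global revenue faces up to €30 million,
# since 3% of revenue exceeds the €15 million floor; a provider with
# €100 million in revenue is still exposed to the full €15 million.
print(max_fine_eur(1_000_000_000))  # 30000000.0
print(max_fine_eur(100_000_000))   # 15000000.0
```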

The exception that would undermine it

The licensing-first framework depends on one critical precondition: the permanent rejection of a proposed copyright exception that the committee said would undermine it before it takes hold. That exception is known as a text and data mining (TDM) exception, a legal carve-out that would allow AI developers to train models on copyrighted works for commercial purposes without seeking permission, subject to an opt-out mechanism for rights holders.

“The Government should, in the next year, publish a final decision on its approach to AI and copyright. In the meantime, it should set out clearly that it will not introduce a new TDM exception with an opt-out mechanism, as initially proposed in its consultation on AI and copyright,” the committee said in the report.

The government had previously backed such an exception before abandoning it under pressure in 2025. The committee called on the government to make that reversal permanent.

The report argued that an opt-out model places the burden on creators to police an industry where they cannot even confirm whether their work has been used.

A global problem without a settled answer

The UK is not alone in grappling with this question. In the US, more than 50 copyright lawsuits are pending in federal courts against AI developers, including OpenAI, Anthropic, and Google, filed by publishers, authors, and entertainment companies. Courts have reached divided conclusions on whether training on copyrighted material constitutes fair use, according to the Copyright Alliance.

The US Copyright Office concluded in May 2025 that a voluntary licensing market for AI training data is already taking shape, identifying lost licensing revenue and market dilution as the primary areas of harm from unlicensed training.

The committee added that the UK is well-positioned to lead a licensed AI training data market given the scale of its creative output.

The vendor dependency question

The committee recommended the government prioritise the development of sovereign AI models — domestically governed systems built with transparency and copyright compliance as design requirements — to reduce dependence on AI platforms whose training data practices cannot be independently verified.

For enterprise technology buyers, Karthi said, the shift does not mean procurement moves away from performance. “In content-heavy use cases such as marketing, media production, and customer engagement, buyers will increasingly look at a provider’s training data posture alongside model capability,” he said.

Enterprises scaling AI programmes will need to revisit procurement playbooks regularly to balance innovation velocity with defensible data practices, he added. For AI vendors and the enterprises that buy from them, the government’s response will set the terms. “Trusted data foundations will be just as important as model performance in determining sustainable AI adoption,” Karthi added.

Original Link:https://www.computerworld.com/article/4141726/uk-lawmakers-back-licensing%e2%80%91first-approach-adding-pressure-to-global-ai-copyright-standards.html
Originally Posted: Fri, 06 Mar 2026 12:07:40 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux Sys Admin. They have an interest in Artificial Intelligence, its use as a tool to further humankind, and its impact on society.
