Zero-trust data governance needed to protect AI models from slop

News | January 27, 2026 | Artifice Prime

Organizations need to be less trustful of data given how much of it is AI-generated, according to new research from Gartner.

As more enterprises jump on board the generative AI train — a recent Gartner survey found 84% expect to spend more on it this year — the risk grows that future large language models (LLMs) will be trained on outputs from previous models, increasing the danger of so-called model collapse.
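The model-collapse dynamic can be illustrated with a toy simulation (a deliberately simplified sketch, not Gartner's analysis): treat each "generation" of training data as a bootstrap resample of the previous generation's outputs, and watch diversity shrink.

```python
import random

random.seed(0)

# Toy model-collapse illustration: each generation "trains" on samples
# drawn from the previous generation's outputs. Here the model is just
# the empirical distribution of a word list; resampling with replacement
# steadily drops rare items, so the distinct-word count only shrinks.
corpus = [f"word{i}" for i in range(1000)]
diversity = []
for generation in range(10):
    diversity.append(len(set(corpus)))
    corpus = [random.choice(corpus) for _ in range(len(corpus))]

print(diversity)  # distinct-word count per generation, non-increasing
```

Real LLM training is vastly more complex, but the same pressure applies: when outputs recirculate as inputs, the tails of the distribution erode first.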

To avoid this, Gartner recommends companies make changes to manage the risk of unverified data: appointing an AI governance leader to work closely with data and analytics teams; improving collaboration between departments through cross-functional groups that include representatives from cybersecurity, data, and analytics; and updating existing security and data management policies to address risks from AI-generated data.

Gartner predicts that by 2028, 50% of organizations will have adopted a zero-trust posture for data governance as a result of this tidal wave of unverified AI-generated data.

“Organizations can no longer implicitly trust data or assume it was human generated,” Gartner managing VP Wan Fui Chan said in a statement. “As AI-generated data becomes pervasive and indistinguishable from human-created data, a zero-trust posture establishing authentication and verification measures is essential to safeguard business and financial outcomes.”
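The "authentication and verification measures" Chan describes could take many forms. As a purely illustrative sketch (the signing key, record format, and function names below are assumptions for the example, not any real standard or product), a zero-trust ingestion gate might refuse any record whose provenance tag fails to verify:

```python
import hashlib
import hmac

# Hypothetical shared key held by trusted data producers; in practice this
# would be per-source asymmetric keys or content-credential signatures.
SIGNING_KEY = b"example-shared-provenance-key"

def sign_record(text: str, source: str) -> str:
    """Issue an HMAC provenance tag for a record from a trusted source."""
    payload = f"{source}:{text}".encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def admit_record(text: str, source: str, tag: str) -> bool:
    """Zero-trust check: admit the record only if its tag verifies exactly."""
    expected = sign_record(text, source)
    return hmac.compare_digest(expected, tag)

# A signed record passes; a tampered or mis-attributed one is rejected.
tag = sign_record("Quarterly revenue rose 4%.", "finance-db")
print(admit_record("Quarterly revenue rose 4%.", "finance-db", tag))
print(admit_record("Quarterly revenue rose 400%.", "finance-db", tag))
```

The point of the sketch is the default stance: data with no verifiable provenance is rejected rather than assumed human-generated.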

What makes matters even trickier to handle, said Chan, is that there will be different approaches to AI from governments. “Requirements may differ significantly across geographies, with some jurisdictions seeking to enforce stricter controls on AI-generated content, while others may adopt a more flexible approach,” he said.

Perhaps the clearest recent example of AI causing data governance problems came when Deloitte Australia had to refund part of a government contract fee after AI-generated errors, including non-existent legal citations, were found in its final report.

This article first appeared on CIO.

Original Link:https://www.infoworld.com/article/4122179/zero-trust-data-governance-needed-to-protect-ai-models-from-slop-2.html
Originally Posted: Mon, 26 Jan 2026 16:08:34 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They have an interest in artificial intelligence, its use as a tool to further humankind, and its impact on society.
