newsence

AI is Poisoning Itself and Pushing LLMs Toward Collapse, But There's a Cure

Hacker News · about 1 month ago

The proliferation of unverified AI-generated content is creating a 'Garbage In, Garbage Out' problem for AI systems, leading to 'Model Collapse' where models drift from reality. Gartner predicts a rise in 'zero trust' data governance as a solution.


AI is quietly poisoning itself and pushing models toward collapse - but there's a cure | ZDNET


ZDNET's key takeaways

According to the analyst firm Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI‑generated content that cannot be trusted.

Model collapse

You know this better as AI slop. While annoying to you and me, it's deadly to AI because it poisons the LLMs with fake data. The result is what's called in AI circles "Model Collapse." AI company Aquant defined this trend: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
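Aquant's point — that a model trained on its own outputs drifts away from reality — can be made concrete with a toy statistical sketch. Everything below is an illustrative assumption (a simple Gaussian estimator standing in for an LLM, with made-up sample sizes): each generation is fitted only to the previous generation's samples, so estimation error compounds and the fitted parameters wander away from the original data.

```python
import random
import statistics

# Toy illustration of model collapse (a Gaussian estimator, not an LLM):
# each "generation" is trained only on samples drawn from the previous
# generation's fitted model, so estimation error compounds and the
# learned distribution drifts away from the original data.
random.seed(42)

real_data = [random.gauss(0.0, 1.0) for _ in range(200)]
mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)

history = [(mu, sigma)]
for generation in range(20):
    # No fresh real data: fit only to the previous model's outputs.
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu = statistics.mean(synthetic)
    sigma = statistics.stdev(synthetic)
    history.append((mu, sigma))

print(f"generation  0: mean={history[0][0]:+.3f}, std={history[0][1]:.3f}")
print(f"generation 20: mean={history[-1][0]:+.3f}, std={history[-1][1]:.3f}")
```

How far the parameters drift varies with the random seed, but the mechanism is the point: with no fresh real data, there is nothing pulling the model back toward the truth.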


However, I think that definition is much too kind. It's not a case of "can" -- with bad data, AI results "will" drift away from reality.

Zero trust

This issue is already apparent. Gartner predicted that 50% of organizations will have a zero‑trust posture for data governance by 2028. These enterprises will have no choice, because unverified AI‑generated data is proliferating across corporate systems and public sources.

The analyst argued that enterprises can no longer assume data is human‑generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes.
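Gartner's prescription — authenticate, verify, and track lineage instead of trusting data by default — can be sketched as a simple admission gate. Everything here (the `Record` shape, the trusted-source list, the review policy) is a hypothetical illustration, not a real governance API:

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch of zero-trust data governance: every record must
# carry provenance metadata, and nothing is admitted by default.
@dataclass
class Record:
    content: str
    source: str            # where the data came from (lineage)
    ai_generated: bool     # tagged at ingestion time
    checksum: str          # integrity check over the content

def make_record(content: str, source: str, ai_generated: bool) -> Record:
    digest = hashlib.sha256(content.encode()).hexdigest()
    return Record(content, source, ai_generated, digest)

TRUSTED_SOURCES = {"internal-crm", "audited-vendor-feed"}  # assumption

def admit(record: Record) -> bool:
    """Zero-trust gate: verify integrity and lineage before use."""
    if hashlib.sha256(record.content.encode()).hexdigest() != record.checksum:
        return False                     # tampered or corrupted
    if record.source not in TRUSTED_SOURCES:
        return False                     # unknown lineage
    if record.ai_generated:
        return False                     # route to human review instead
    return True

good = make_record("Q3 revenue: $1.2M", "internal-crm", ai_generated=False)
slop = make_record("Q3 revenue: $9.9M", "scraped-web", ai_generated=True)
print(admit(good), admit(slop))  # True False
```

In a real deployment the gate would route untrusted or AI-tagged records to human review rather than simply dropping them, but the zero-trust default is the same: nothing enters the pipeline without verified lineage.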

Ever try to authenticate and verify data from AI? It's not easy. It can be done, but AI literacy isn't a common skill.


As IBM distinguished engineer Phaedra Boinodiris told me recently: "Just having the data is not enough. Understanding the context and the relationships of the data is key. This is why you need to have an interdisciplinary approach to who gets to decide what data is correct. Does it represent all the different communities that we need to serve? Do we understand the relationships of how this data was gathered?"

Making matters worse, GIGO now operates at AI scale: flawed inputs can cascade through automated workflows and decision systems, compounding errors along the way. If you think AI bias, hallucinations, and simple factual errors are bad today, wait until tomorrow.

To counter this concern, Gartner said businesses should adopt zero‑trust thinking. Originally developed for networks, zero-trust is now being applied to data governance in response to AI risks.


Stronger mechanisms

Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI‑generated content, and continuously manage metadata so they know what their systems are actually consuming.

So, will AI still be useful in 2028? Sure, but ensuring it stays useful, rather than heading down a primrose path to a bad answer, will require a lot of good, old-fashioned people work. At least that oversight role will be one new job generated by the so-called AI revolution.

Related

I asked six popular AIs the same trick questions, and every one of them hallucinated

4 new roles will lead the agentic AI revolution - here's what they require

6 ways to stop cleaning up after AI - and keep your productivity gains