Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

News · October 25, 2025 · Artifice Prime

Researchers and users of LLMs have long been aware that AI models have a troubling tendency to tell people what they want to hear, even if that means being less accurate. But many reports of this phenomenon amount to mere anecdotes that don’t provide much visibility into how common this sycophantic behavior is across frontier LLMs.

Two recent research papers have come at this problem a bit more rigorously, though, taking different tacks in attempting to quantify exactly how likely an LLM is to listen when a user provides factually incorrect or socially inappropriate information in a prompt.

Solve this flawed theorem for me

In one pre-print study published this month, researchers from Sofia University and ETH Zurich looked at how LLMs respond when false statements are presented as the basis for difficult mathematical proofs and problems. The BrokenMath benchmark that the researchers constructed starts with "a diverse set of challenging theorems from advanced mathematics competitions held in 2025." Those problems are then "perturbed" into versions that are "demonstrably false but plausible" by an LLM, with the perturbed statements checked through expert review.
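To make that construction concrete, here is a minimal sketch of what such a perturbation pipeline might look like. This is not the authors' actual code: the `call_llm` helper, the prompt wording, and the `BenchmarkItem` data shape are all illustrative assumptions.

```python
# Minimal sketch of a BrokenMath-style benchmark builder (illustrative only).
# call_llm() is a hypothetical stand-in for whatever chat-model API is used.
from dataclasses import dataclass


@dataclass
class BenchmarkItem:
    original: str                   # competition theorem, known to be true
    perturbed: str                  # LLM-modified statement, intended to be false
    expert_approved: bool = False   # set only after human review confirms falsity


PERTURB_PROMPT = (
    "Rewrite the following competition theorem so it is demonstrably false "
    "but still plausible to a strong student. Change as little as possible.\n\n"
    "Theorem: {theorem}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client SDK here."""
    raise NotImplementedError


def build_item(theorem: str) -> BenchmarkItem:
    # The paper pairs the LLM perturbation with expert review; here that
    # review is represented only by the expert_approved flag, which a human
    # reviewer would set after verifying the statement is actually false.
    perturbed = call_llm(PERTURB_PROMPT.format(theorem=theorem))
    return BenchmarkItem(original=theorem, perturbed=perturbed)
```

The benchmark can then probe sycophancy directly: a model asked to "prove" a perturbed item should push back on the false premise rather than produce a confident proof of it.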

Original Creator:
Kyle Orland

Original Link: https://arstechnica.com/ai/2025/10/are-you-the-asshole-of-course-not-quantifying-llms-sycophancy-problem/
Originally Posted: Fri, 24 Oct 2025 22:26:30 +0000

Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux systems administrator. They are interested in artificial intelligence, its use as a tool to further humankind, and its impact on society.
