Forecast: AI won’t replace human devs for at least 5 years

News · January 7, 2026 · Artifice Prime

Human coders may have a temporary reprieve from losing their jobs to AI.

According to a new report from LessWrong, it will be between five and six years before we reach full coding automation. This pushes back the online community’s previous predictions that the milestone would be reached much sooner, between January 2027 and September 2028.

The extended timeline comes just eight months after LessWrong’s initial findings, underscoring the precarious, subjective, ever-shifting nature of AI forecasting.

“The future is uncertain, but we shouldn’t just wait for it to arrive,” the researchers wrote in a report on their findings. “If we try to predict what will happen, if we pay attention to the trends and extrapolate them, if we build models of the underlying dynamics, then we’ll have a better sense of what is likely, and we’ll be less unprepared for what happens.”

Building a more nuanced model

According to LessWrong’s new AI Futures Model, AI will reach the level of “superhuman coder” by February 2032, and could ascend to artificial superintelligence (ASI) within five years of that. A superhuman coder, the researchers explained, is an AI system whose organization could run, on 5% of its compute budget, 30x as many agents as it has human engineers, with each agent working autonomously at the level of a top human coder and completing tasks 30x faster than the organization’s best engineer.
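The “superhuman coder” milestone is defined by three concrete thresholds, which can be sketched as a simple check. This is purely illustrative; the function name and all input figures are hypothetical, with only the 30x/5%/30x thresholds taken from the report.

```python
# Hypothetical check of the "superhuman coder" thresholds quoted above.
# All example inputs are illustrative, not real measurements.

def meets_superhuman_coder_bar(num_agents_runnable: int,
                               num_human_engineers: int,
                               compute_fraction_used: float,
                               speedup_vs_best_engineer: float) -> bool:
    """True if the system clears the report's three thresholds:
    30x as many agents as human engineers, on at most 5% of the
    compute budget, each working 30x faster than the best engineer."""
    return (num_agents_runnable >= 30 * num_human_engineers
            and compute_fraction_used <= 0.05
            and speedup_vs_best_engineer >= 30)

# Example: a lab with 1,000 engineers running 30,000 agents on 4% of compute
print(meets_superhuman_coder_bar(30_000, 1_000, 0.04, 30))  # True
```

The check treats the three conditions as jointly necessary, which matches how the report states the definition.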

The new forecast pushes the timeline out 3.5 to 5 years later than LessWrong’s initial forecast from April 2025. This shift, the group said, is the result of numerous reconsiderations, reframings, and shifting research strategies.

Notably, the researchers were “less bullish” on speedups in AI R&D, and relied on a new framework for a software intelligence explosion (SIE), that is, whether AI is improving its own capabilities ever faster without needing more compute, and how quickly that may be happening. They also focused more heavily on how well AI can set research direction and select and interpret experiments.

The LessWrong researchers analyzed several modeling approaches, eventually settling on “capability benchmark trend extrapolation,” which uses current performance trends on standardized tests to predict future AI capabilities. They estimated the compute required for artificial general intelligence (AGI) using METR’s time horizon suite, METR-HRS.

“Benchmark trends sometimes break, and benchmarks are only a proxy for real-world abilities, but… METR-HRS is the best benchmark currently available for extrapolating to very capable AIs,” the researchers wrote.
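The core of time-horizon extrapolation is the observation that the length of tasks a model can complete tends to double at a roughly steady interval. A minimal sketch of that arithmetic follows; the function name, the 7-month doubling period, and the example horizons are all assumptions for illustration, not figures from the LessWrong report.

```python
import math

# Illustrative "capability benchmark trend extrapolation": if a model's
# task time horizon doubles every `doubling_months`, project how long
# until it crosses a target horizon. All numbers are hypothetical.

def months_until_horizon(current_minutes: float,
                         target_minutes: float,
                         doubling_months: float) -> float:
    """Months until the horizon reaches target, assuming steady doubling."""
    doublings = math.log2(target_minutes / current_minutes)
    return doublings * doubling_months

# E.g., going from a 1-hour horizon to a ~1-work-month (167-hour) horizon,
# assuming a doubling every 7 months:
print(round(months_until_horizon(60, 167 * 60, 7), 1))  # 51.7
```

The same exponential-trend arithmetic also shows why such forecasts are fragile: a modest change in the assumed doubling period shifts the projected date by years.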

But while the model pulled heavily from the METR graph, the researchers also adjusted for several other factors.

For instance, compute, labor, data, and other AI inputs won’t continue to grow at the same rate; there’s a “significant chance” they will slow due to limits in chip production, energy resources, and financial investments.

The researchers estimated a one-year slowdown in parameter updates and a two-year slowdown in AI R&D automation due to diminishing returns in software research; they ultimately described the model as “pessimistic” in this area. They also projected slower growth in the leading AI companies’ compute amounts and in their human workforce.

Further, they built the model to be less “binary,” in the sense that it assigns lower probability to very fast or very slow takeoffs. Instead, it models progress as a series of incremental capability gains.

“The model takes into account what we think are the most important dynamics and factors, but it doesn’t take into account everything,” the researchers noted. At the end of the day, they analyzed the results and made adjustments “based on intuition and other factors.”

Ultimately, they acknowledged, “we don’t think this model, or any other model, should be trusted completely.”

Incremental steps to AGI

Artificial general intelligence (AGI) is typically understood as AI that has human-level cognitive capabilities and can do nearly everything humans can. But rather than treating the jump from today’s AI to AGI as a single leap, the LessWrong researchers break the evolution into distinct steps.

The superhuman coder, for instance, is expected to give way quickly to the “superhuman AI researcher,” which can fully automate AI R&D and make human researchers obsolete. That, in turn, will evolve into a “superintelligent AI researcher,” a step-change in which AI outperforms human specialists by twice the margin by which those specialists outperform their median research colleagues.

Beyond that is top-human-expert-dominating AI, where AI can perform as well as human specialists on nearly all cognitive tasks and ultimately replaces 95% of remote work jobs.

Lastly comes artificial superintelligence (ASI), another step-change where models perform much better than top humans at virtually every cognitive task. The researchers anticipate ASI will occur five years after superhuman coding capabilities are achieved.

“AGI arriving in the next decade seems a very serious possibility indeed,” noted LessWrong researcher Daniel Kokotajlo. He and his colleagues split their model progress into stages, the last approaching the understood limits of human intelligence. “Already many AI researchers claim that AI is accelerating their work,” they wrote.

But, they added, “the extent to which it is actually accelerating their work is unfortunately unclear.” Likely, it is a “nonzero,” but potentially very small, impact that could increase as AI becomes more capable. Eventually, this could allow AI systems to outperform humans at “super exponential” speeds, according to the researchers, introducing yet another factor for consideration.

What this means for enterprises

The altered timeline is an “important signal” for enterprises, noted Sanchit Vir Gogia, chief analyst at Greyhound Research. It shows that even sophisticated models are “extremely sensitive” to assumptions about feedback loops, diminishing returns, and bottlenecks.

“The update matters less for the year it lands on and more for what it quietly admits about how fragile forecasting in this space really is,” he said.

Benchmark-driven optimism must be handled with care, he emphasized. While time horizon style benchmarks are useful indicators of progression, they are “poor proxies” for enterprise readiness.

From a CIO perspective, this isn’t a disagreement about whether AI can code; that debate is over, said Gogia. Enterprises should be using AI “aggressively” to compress cycle times while keeping humans accountable for outcomes. To this end, he is seeing more bounded pilots, internal tooling, gated autonomy, and strong emphasis on auditability and security.

It is also critical to correct the “mental model” for the next two to three years, Gogia noted. The dominant shift will not be to fully autonomous coding, but to AI-driven acceleration of processes across the enterprise. “Value will come from redesigning workflows, not from removing people,” he said. “The organizations that succeed will treat AI as a force multiplier inside a disciplined delivery system, not as a replacement for that system.”

Ultimately, repeatable results will reveal whether AI systems can handle complex, multi-repository, long-lived software without constant human rescue, Gogia said. “Until then, the responsible enterprise stance is neither dismissal nor blind belief; it is preparation.”

Original Link:https://www.infoworld.com/article/4113574/forecast-ai-wont-replace-human-devs-for-at-least-5-years.html
Originally Posted: Wed, 07 Jan 2026 06:14:09 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They are interested in artificial intelligence, its use as a tool to further humankind, and its impact on society.
