Tech Trends 2026 update: Thinking outside the LLM box
The rate at which AI models improve seems to have slowed in the last year, but there are still several promising avenues to get more out of today’s AI.
Ever since publicly available large language models (LLMs) went mainstream in late 2022, we’ve heard one refrain from the tech world: If AI has improved this much this fast, imagine where it will be next year, five years from now, 10 years from now. A lot of people seemed to assume that progress would be exponential, forever.
The problem with this idea is that rapid rates of improvement always eventually slow. As we discuss in this year’s Tech Trends report, we’re seeing early indications that performance improvements in LLMs may be starting to plateau.
There are a few reasons why performance gains may be stagnating. For most of the last couple of years, AI developers assumed that performance improvement was simply a matter of scale: Build bigger data centers to train bigger models and performance will improve. But as the footprint of data centers has ballooned over the past year or two, we haven’t seen substantially better new models. Scale alone is likely not enough to coax out major AI gains.1
Training data may be another limiting factor, which we also discussed in the same Tech Trends chapter. At this point, the foundational models from the biggest AI players have trained on essentially all the publicly available data on the internet. So model developers are now training their models on AI-generated content. This content is typically lower quality than human-generated content, and models that use too much synthetic data may start edging toward model collapse, a situation where outputs actually degrade over time.2
Despite these challenges, we’re starting to see new approaches that could propel AI forward in the year ahead. For example, a group of researchers published a paper last year showing that LLMs can be incentivized to create better outputs through reinforcement learning,3 a type of machine learning in which a digital agent is given a set of rewards and then acts in an environment to maximize those rewards. The approach has been around for decades and was seen as a leading AI technique until the current LLM renaissance. But developers seem to be refocusing on reinforcement learning as a potential way to direct LLMs to perform better and more autonomously in real-world environments.
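The reward-maximizing loop described above can be sketched in a few lines. Below is a minimal tabular Q-learning example on a toy environment (an agent walking along a line toward a rewarded cell); the environment, reward values, and hyperparameters are all illustrative, and this is far simpler than the RL actually used to fine-tune LLMs.

```python
import random

# Toy environment: the agent starts at cell 0 on a line of 5 cells and
# earns a reward of +1 for reaching cell 4, which ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: the agent's running estimate of each action's long-term value.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Nudge the estimate toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy moves right (toward the reward) in every state.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(GOAL)}
print(policy)
```

The key idea carries over to LLMs: nothing tells the agent which action is correct at each step; it discovers good behavior purely from reward feedback.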
Another area where researchers are working to improve LLM performance is memory. LLMs generally don’t remember any details of a conversation on their own; instead, the previous prompts and responses are re-sent to the model with each new prompt. Over longer conversations, they often lose important details. Tech companies are currently working on different approaches to make LLMs more aware of context across many user prompts and even across different sessions.4,5
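The mechanics behind this forgetting can be illustrated with a short sketch. Because the model itself is stateless, a chat application resends prior turns with every request, and once the history exceeds the context budget, the oldest turns get dropped. The class, word-based budget, and message format below are all hypothetical simplifications, not any vendor’s actual API.

```python
# Illustrative only: real systems count tokens, not words, and use far
# larger budgets and smarter trimming strategies.
CONTEXT_BUDGET = 50  # pretend context budget, measured in words

class Conversation:
    def __init__(self):
        self.turns = []  # list of (role, text) pairs

    def add(self, role, text):
        self.turns.append((role, text))

    def build_prompt(self):
        """Assemble what gets sent to the model: newest turns kept,
        oldest dropped once the word budget is exceeded."""
        kept, used = [], 0
        for role, text in reversed(self.turns):
            words = len(text.split())
            if used + words > CONTEXT_BUDGET:
                break  # everything older than this point is forgotten
            kept.append((role, text))
            used += words
        return list(reversed(kept))

convo = Conversation()
convo.add("user", "My project codename is BLUEBIRD.")
for i in range(10):
    convo.add("user", f"Unrelated follow-up question number {i} with several extra words")

prompt = convo.build_prompt()
# The early turn containing the codename no longer fits in the budget:
print(any("BLUEBIRD" in text for _, text in prompt))  # False
```

This is why a detail mentioned early in a long chat can silently vanish: it was never stored in the model, only in the resent history, and the history got trimmed.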
Further out, we may see LLMs incorporate recursive self-improvement. Recursive self-improvement enables AI models to rewrite their own code to improve their capabilities and progress toward a goal without requiring developers to specify how to achieve the goal. In the past, experts viewed this concept as a futuristic component necessary for artificial general intelligence (AGI), not necessarily as a realistic focus for current development. But people are beginning to talk about it as a near-term possibility. Leaders at most major AI companies have said recently they are working on developing models that can improve themselves.6
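The propose-evaluate-keep loop at the heart of this idea can be shown with a toy sketch: a program holds its own "policy" as source code, proposes edited variants, keeps any variant that scores better against the goal, and repeats. In a real system an LLM would propose the code edits; here the mutation is a random parameter tweak, and every name and value is illustrative.

```python
import random

GOAL = 42  # the target the rewritten code should eventually output

# The program's current "code": a function body it is allowed to rewrite.
policy_src = "def policy():\n    return 10\n"

def score(src):
    """Compile a candidate source string and measure distance from the goal."""
    ns = {}
    exec(src, ns)
    return -abs(ns["policy"]() - GOAL)  # higher is better

def propose_edit(src):
    """Rewrite the return value -- a stand-in for an LLM editing its own code."""
    current = int(src.split("return ")[1])
    return f"def policy():\n    return {current + random.choice([-3, -1, 1, 3])}\n"

random.seed(1)
for _ in range(500):  # self-improvement iterations
    candidate = propose_edit(policy_src)
    if score(candidate) > score(policy_src):  # keep only strict improvements
        policy_src = candidate

ns = {}
exec(policy_src, ns)
print(ns["policy"]())  # the rewritten code converges on 42
```

The toy version makes the open problem visible, too: the loop only works because the goal is easy to score. Defining a reliable score for "more capable model" is much harder, which is part of why recursive self-improvement has stayed on the research horizon.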
Technical details aside, most enterprises can likely still get a lot more out of the AI tools they currently use. As we discuss in this year’s Tech Trends report, companies that are redesigning their processes to take advantage of the unique capabilities of AI are seeing the most ROI. There’s still plenty of process-improvement runway left, and in the short term, that will likely matter more than model performance against benchmarks.
Nonetheless, businesses that are redesigning their operations around AI will be happy to know that the tools we have today are most likely not close to their end state. Even if the largest LLMs are reaching a plateau (a claim that is still contested), it doesn’t mean the AI we have today is the AI we’ll have tomorrow. In fact, slowing performance gains in LLMs appear to be prompting developers to look elsewhere, which may ultimately lead to more impactful innovations than we would see if everyone were simply trying to train the biggest LLM.
— Ed Burns | Journalist | Office of the CTO
This article contains general information only and Deloitte is not, by means of this article, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This article is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this article.
As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.
Copyright © 2026 Deloitte Development LLC. All rights reserved.





