Echoes of the AI Winter
Last updated: Feb 1, 2026
I’ve been a systems programmer for most of my career and have worked in several development paradigms. As a systems- and backend-focused developer, I have had little exposure to AI systems, but I have worked with symbolic programming languages, rule-based systems, and graph databases. I have seen many trends in technology come and go, with the abundant availability of venture capital amplifying those that promise maximum disruption and, consequently, profit.
It may be educational to look back at the first commercial AI hype cycle, which occurred in the 1980s. Lisp-based, special-purpose computers emerged, providing application development environments in which sophisticated symbolic and rule-based reasoning systems could be built. In that era, computers were generally owned by large organizations and accessed through alphanumeric terminals. In contrast, Lisp machines, with their graphical environments and interactive reasoning capabilities, offered a much more polished experience.
As with any new technology, the people who created and used Lisp machines were quick to extrapolate: in the future, these machines would either augment humans as a kind of brain extension, or become intelligent themselves and solve problems better than humans could. After all, computers were already known to compute better than humans, so they should eventually surpass us in other areas normally associated with human intelligence as well.
Starting in the early 1980s, AI research received massive amounts of funding in both the United States and Japan. Rule-based expert systems were believed to eventually become “intelligent”, or at least to augment humans in complex decision making and analysis. Military and civilian uses were envisioned, and huge projects were started in anticipation of these systems, still in their infancy, becoming mature enough to be deployed in mission-critical applications.
One of the big bets of the era was on hardware. Several vendors licensed the Lisp machine technology and design from MIT to commercialize it. It was believed that general-purpose hardware would not be capable enough to run AI software, so a whole family of systems was created that was really only useful for AI work. At the same time, however, advances in semiconductor manufacturing and integration made general-purpose microprocessors cheaper and faster at an enormous pace. Economies of scale kicked in, and towards the end of the 1980s, Lisp machines fell behind in performance and eventually disappeared from the market entirely, taking with them the companies that built them and the hundreds of millions of dollars spent on their development and commercialization.
What followed became known as the (second) AI winter: the companies that had made big promises and failed to deliver folded, the systems disappeared from the market, and it took decades before AI could be used as a marketing buzzword again. That is where we are today.
History does not repeat itself, they say, but it is already clear that the current AI boom, built on the availability of large language models, also lives on overpromises and oversimplification. As in the 1980s, contemporary systems can perform tasks that previously required human intelligence, and as back then, it is easy to extrapolate from what these systems do today to what they might do in the future. But it is only easy if the technical details are glossed over and the effort and organization required to turn ideas into reality are ignored.
Unlike in the 1980s, however, the current AI boom is built on investments that reach the scale of the GDP of whole countries. While some of the things current AI systems can do are impressive, it is hard to imagine that these systems are actually going to change how societies work in a meaningful and sustainable way.
Are our economies really going to become fully dependent on these data centers? Are these systems not going to be obsolete in a couple of years? Have we really exhausted the field of hardware and software research, and is now the time to rebuild everything on what we have achieved?
I believe that “no” is the answer to these questions, and that the current AI hype cycle will come to an end like all technology hype cycles do. If we look back at the recent past, we’ve already seen VR fail (again) and the blockchain fail, and we’ll see AI fail again. This is just how things are, but this time the repercussions will be more notable because of the scale of the investments that have been made.
LLMs are great tools, and programming won’t ever be the same. Maybe they are great tools only because they were developed in this particular way. We will never know for sure.
P.S.: No LLM was used to compose this article, but ChatGPT suggested the title.