Everyone is asking whether the rapid improvements we are seeing in AI are here to stay or will soon stagnate. In our opinion the answer is both yes and no. No, in the sense that for major technologies nearly all specific predictions turn out to be false.
Bill Gates reportedly said in 1981 that 640KB of RAM was sufficient for anybody. On the surface he appears to have been completely wrong, but he was also right in many ways. For the majority of computer users today, there is a certain amount of memory that is sufficient for 90% of their use-cases for many years. That number has been 512MB, 2GB, 4GB, and in modern times around 16GB. But for certain extreme cases even 1TB of RAM is insufficient.
Gates did not predict the number correctly, but he was right in identifying that there is indeed a number which ought to be sufficient for nearly all people.
Ways in which AI will not live up to the hype
In a new technological frontier, startup founders and innovators have to constantly chase investor money, competing with other founders and innovators for the same funds. One way to beat the competition is to promise extraordinary things.
Some of the extraordinary things being promised: AGI is around the corner, AI will destroy all jobs, AI will eliminate the need to work, AI will become so powerful that it will rule us, and so on.
Amazon was supposed to destroy the corner stores. Uber was supposed to decimate the car industry, Tesla was supposed to bankrupt Toyota, and Airbnb was supposed to make Marriott part of history. None of that really happened, and yet Amazon, Tesla, Airbnb, and Uber are incredibly successful, here to stay, and still growing.
Unlike many, I do not think LLMs are going to keep getting better beyond a certain point. Language is a very useful window into human intelligence, but there is a limit to how much intelligence language itself can express. Language is not the building block of human intelligence; intuition is.
At some point we are going to run out of good data to train on. LLMs, however great, will ultimately be limited by that data, and the data they produce will never be "new" in the way a human mind's output can be.
Human minds can produce new data because the raw sensory data they have access to is far larger.
In the ancient Hindu scripture the Rigveda, the sage Dirghatamas (one who stares deep into darkness) contributed only two verses, but those two verses turned out to be the seeds of a massive philosophical tradition which continues to be influential even today. In one of these verses he describes two birds: one is eating a fruit, while the other watches the first bird eat and contemplates. What Dirghatamas highlighted here is that living an experience and observing that experience as an impartial observer are two different things. The fruit represents material pleasure; the observing bird does not get that pleasure, yet it is able to contemplate the pleasure's existence and effects, because the bird itself might be aware of the pleasures of eating the fruit.
Human beings often oscillate between these two bird-states. We enjoy a wide variety of experiences ourselves, and we also observe the experiences of others. There is also the dimension of time: watching a sunrise for the first time as a child is very different from watching it as an old man. The same sun invokes different feelings in our hearts and minds.
Such vividness of experience leads to the creation of original thoughts even in ordinary minds.
An LLM-driven approach, even when the models are truly multi-modal (trained on text, images, and videos), might still miss the concept of time necessary to emulate the full human experience, not to mention the inherently non-deterministic nature of human feelings at any particular point in time.
While AI can achieve a lot through "training", the nature of existing data will keep it from becoming AGI.
Ways in which AI hype will endure
While I believe AGI won't become a reality soon, the advances in AI will have a significant impact on us. We will enter an era of unprecedented productivity gains, massive economic impact, and faster evolution of human ideas.
To benefit greatly, we don't need to invent AGI or an AI that is 100 times better than today's AI. It's okay if AI doesn't completely take over human work.
It's clear that AI, as it is today, has immense value. It can greatly improve human productivity. Students can learn faster with AI, teachers can teach better, writers can write better, and coders can code better.
Even this "better than before" effect in every field has the potential for a significant positive economic impact that improves human welfare.
We will see AI models getting better and better, eventually hitting some limits that they will struggle to overcome. But at the same time, this will lead to innovations in other fields, having the same impact on human life as if AI had managed to break through those limits.
Eventually, we will end up with a world where we have a better understanding of intelligence, the mind, and its limits.
Conclusion
I will not bet on specific AI hypes becoming reality. The specifics are likely to be wrong, but the overall impact will probably be very significant. When the internet was new, people predicted wild things like massive centralized world governments. That did not happen, but we did end up with a global society and a global economy.
Things will get better than we could predict, but not by taking the path we predict they will take.