Have Generative Pretrained Transformer Models reached their limit?
- J1 Lee
- Apr 3
- 2 min read
The recent release of GPT-4.5 has been lackluster. Available only to ChatGPT Pro subscribers ($200 per month), the model is the largest and most expensive OpenAI has produced. However, its improvements over GPT-4o, the previous model, have been barely noticeable.
The model costs thirty times more per input token and fifteen times more per output token than its predecessor. In performance benchmarks, it scored 85.1% on a general knowledge test, a 3.6% improvement over GPT-4o; however, on the AIME (a math competition benchmark), it was weaker.
So, what is GPT-4.5’s selling point?
OpenAI described GPT-4.5 as having better “vibes”: the capacity for creative thinking and for sounding or acting more human. OpenAI CEO Sam Altman mentioned in a tweet that “it is a different kind of intelligence and there is a magic to it that I haven’t felt before.” There is, however, clear subjectivity in judging whether a large language model sounds human. The objective metric OpenAI used to measure “vibes” was hallucinations: errors in which a model confidently outputs nonsense. A clear example would be the Google AI model stating that 1,000,000,000 humans could not defeat a lion. The new model had a significantly lower hallucination rate on simple questions and answers.
Has GPT reached its limit?
Each previous model showed clear progress over the last; GPT-4.5, however, seems to mark a plateau for OpenAI. OpenAI has marketed the new model not as a successor to GPT-4o but as an alternative with good “vibes,” and it has stated that the next model, GPT-5, will take the role of the successor. OpenAI says it is buying more GPUs to train that next model, but the lackluster progress shown in GPT-4.5 suggests that it is becoming harder for these models to improve.

There are two main theories about the development of technology. One is the technological singularity theory: that technology keeps developing at an exponential rate, eventually far surpassing human intellect. The other is that the law of diminishing returns applies to technological development: each new improvement takes more effort to yield the same gain. The latter seems closer to reality, as GPT-4.5 makes only minute improvements over its predecessor. However, we must wait and watch GPT-5 unfold to see whether this trend continues.