JP Global

For the whole of 2025, and even some time before that, I’ve sat on the sidelines watching the AI hype swell around me. Every single day I drown in an exponentially increasing amount of AI slop, a new Gemini 3.0 or GPT-5 release, or a new “state-of-the-art” research paper touted as the silver bullet that’ll finally bring “real AI intelligence” to life. Meanwhile I’ve patiently been waiting and espousing, in my opinionated way, that the Achilles’ heel of all Transformers is memory (or how long-term memory is implemented and used, if I’m with fellow tech heads), and that Google have the answer, or at least an answer: Titans (https://arxiv.org/abs/2501.00663).
As the days rolled into weeks and the weeks rolled into months since last December, all I heard was crickets. Whenever I met anyone from Google I would eagerly ask for the latest update on Titans, and they would look at me blankly, as if I were speaking an alien language (all except Andy, who at least gave me some assurance that Google are still working on this internally). Meanwhile, what we got was a shinier version of the same old thing: vast context windows and flashy demos, but the same old underlying Achilles’ heel of stateless, forgetful, brittle LLMs being used in ever more business-critical ways. I’ve been saying for a few years now that we hit the limit of Transformers, and the house of memory cards has been stacking up ever since. Today there is light at the end of the tunnel: the Titans architecture is moving forwards with a framework, MIRAS (https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/).
All year I’ve suffered talk of “intelligent agents”, while my brain daydreams images of monkeys using probability models to bash out artificially intelligent words. It sounds nice to the layman, and even most professionals in the industry, who should know better, drank the Kool-Aid by the litre.
Yesterday, Google finally delivered me a small but significant Christmas present. The Titans + MIRAS research paper doesn’t treat memory as a sledgehammer bolted on after the fact (whatever RAG system you have wired up), but as a foundational architectural principle.
Instead of stacking bigger attention windows or piling on more parameters (a trillion parameters! Really!!), this approach says: “Let’s give these systems something that resembles memory, not just bigger RAM.” Surprise-driven consolidation, retention gates and an evolving internal state aren’t just buzzwords; they’re structural mechanisms that transform how Transformers work (pun intended).
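To make that less abstract, here is a minimal sketch in Python, loosely following the update rule described in the Titans paper: the memory’s parameters are nudged by the gradient of an associative loss (the “surprise”), momentum carries past surprise forward, and a retention gate decides how much old memory survives. The class name, the single linear-map memory and the fixed hyperparameter values are my illustrative assumptions; the actual architecture uses a deep memory module with learned, data-dependent gates.

```python
import numpy as np

# A minimal sketch of a surprise-driven memory update with a retention
# gate, loosely in the spirit of Titans. Not the paper's implementation:
# the memory here is one linear map and the gates are fixed constants.

class NeuralMemory:
    def __init__(self, dim, eta=0.9, theta=0.1, alpha=0.01):
        self.M = np.zeros((dim, dim))  # evolving internal state
        self.S = np.zeros((dim, dim))  # momentum of accumulated surprise
        self.eta, self.theta, self.alpha = eta, theta, alpha

    def update(self, k, v):
        # "Surprise" = gradient of the associative loss ||M k - v||^2
        # with respect to M (constant factor absorbed into theta).
        err = self.M @ k - v
        grad = np.outer(err, k)
        # Past surprise decays by eta; the new surprise is folded in.
        self.S = self.eta * self.S - self.theta * grad
        # Retention gate: alpha controls how much old memory is forgotten.
        self.M = (1.0 - self.alpha) * self.M + self.S

    def recall(self, q):
        return self.M @ q  # read the memory with a query vector
```

The point of the sketch is the write path: the memory is updated by learning at test time, token by token, rather than by appending ever more tokens to a context window.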
It’s a subtle shift. Maybe too subtle for the hype-hungry masses. But for anyone who cares about durable, real-world AI systems that act persistently, reason over time and learn cumulatively, it’s a sign that the industry might finally be growing up (just need this damn bubble to pop now so the grown-ups can talk again!).
The joining of MIRAS and Titans is more than an incremental improvement: it rethinks the fundamental architecture. Instead of “bigger context windows” we can have persistent memory, and with it no more hallucinations (or at least a significant reduction in them). Instead of energy-intensive “bigger models” we can have structured state and continuity for accurate non-determinism (though I have some thoughts on the Eigenvector problem, which is still not solved: https://www.jpglobal.biz/post/llms-are-a-trap-beyond-the-hype).
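And as a hedged sketch of what “structured state and continuity” buys over a bigger window: below, a long stream is consolidated chunk by chunk into the toy NeuralMemory from the previous sketch, so a recall at any point conditions on everything seen so far at constant cost, instead of re-attending over an ever-growing context. The chunk structure and the key/value/query split are my assumptions for illustration.

```python
import numpy as np

# Assumes the NeuralMemory sketch above. Each chunk is an illustrative
# (keys, values, queries) triple of arrays of shape (n, dim).

def process_stream(chunks, dim=64):
    memory = NeuralMemory(dim)          # persistent state: O(dim^2), not O(tokens)
    outputs = []
    for keys, values, queries in chunks:
        for k, v in zip(keys, values):  # consolidate the chunk into memory
            memory.update(k, v)
        # Recall conditions on the whole history so far, at constant cost.
        outputs.append(np.stack([memory.recall(q) for q in queries]))
    return outputs
```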
Now, this isn’t just a hype piece. This snake oil actually comes with the bitter pill of truth: memory comes at a cost (complexity, instability and unpredictability). It will force us to confront hard questions about reliability, alignment, interpretability and truthfulness, though those are questions we should be confronting anyway. Once you give a system memory, you give it the ability to hold on to mistakes. To accumulate biases. To hallucinate not just once, but over decades. So while this could be the panacea, it could also make the problem worse. Much, much worse!
If we embrace this path, we have to do so with discipline. With architecture that supports transparency. With guardrails that treat memory not as a feature, but as a serious responsibility.
I still maintain that GenAI has a long way to go, but for me Titans maybe, just maybe, could move past the hype and back to the fundamentals of good quality AI slop.
And I, for one, am watching closely for the coming of the Titans!
