What comes after text – how can we store information more efficiently in the AI era?

Introduction

As artificial intelligence becomes the primary interface through which humans access knowledge, it’s time to rethink how we store information in the first place.

Currently, writing is our most reliable method of recording knowledge. But right now, information typically flows like this:

  1. A human feeds an idea into an LLM, the LLM expands it into a longer article, and the article is published online.
  2. Another human has an AI summarize and present the content of that article.

The result? The article itself is never read directly. This is something a frontend-less web needs to acknowledge and address.

The deeper layer

But in the future, this problem reaches one layer deeper. Why do we need to store that article in between if it is never meant to be read directly?

The current model looks like this:

  1. A human asks AI to write an article and stores it online.
  2. Another human wants to know something about this topic and asks an AI directly.
  3. An AI crawls the web, reads the article, interprets it, and paraphrases or synthesizes an answer with different wording than the original article.

The result? Nobody ever read that article in the middle. It was just a middleman used to transmit information. This roundabout process might just be a holdover from a world that no longer exists.

We want the information stored so that it can be accessed efficiently and used by the AI. Written text is designed for humans, not for machines. AIs don’t need sentences to store and process ideas. They operate on vectors, weights, and graphs.

Why should we ask AI to write and store text only for another AI to later decode it and turn it back into meaning? Why not store the encoded piece of information?

But how?

Native AI Knowledge Structures

We can store…

Information as encoded knowledge graphs such as

(Marie Curie) --[won]--> (Nobel Prize in Physics)
(Marie Curie) --[field]--> (Chemistry)
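As a minimal sketch of this idea (the data structure and function names here are illustrative, not a proposed standard), such triples could be stored and queried directly, with no prose article in between:

```python
# Illustrative sketch: knowledge stored as (subject, relation, object) triples
# and queried by pattern matching — no article, no parsing, just facts.

triples = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "field", "Chemistry"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        (s, r, o)
        for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

# What did Marie Curie win?
print(query(subject="Marie Curie", relation="won"))
```

An AI answering a question would match a pattern against the graph instead of crawling and re-interpreting an article.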

Relations and meaning through semantic embeddings such as (simplified to 5 dimensions)

[0.24, -0.13, 0.78, 0.01, -0.56]

And procedural “how to” knowledge and executable information in a format built for execution rather than reading.

And maybe we need to go one layer deeper and directly store the tokens or vectors. Would that be more efficient? I still need to figure that one out.

But What About Human Access?

This doesn’t mean we abandon text altogether. Humans still need interpretable versions of information. But rather than storing every piece of knowledge only in text, we might store core insights in AI-native formats and generate human-readable text on demand – for everyone in a slightly different form that fits their style of reading, or that answers their questions more directly.

Text becomes a user interface, not a storage format.
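That inversion can be made concrete with a toy sketch: the sentences below never exist in storage, only the triples do, and text is rendered at read time. The relation names and templates are my own illustrative assumptions:

```python
# Toy sketch: only facts are stored; text is generated at read time.
# Relations and templates are illustrative assumptions, not a proposed standard.

facts = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "field", "Chemistry"),
]

templates = {
    "won": "{s} won the {o}.",
    "field": "{s} worked in the field of {o}.",
}

def render(subject):
    """Generate a human-readable summary on demand from stored triples."""
    return " ".join(
        templates[r].format(s=s, o=o) for (s, r, o) in facts if s == subject
    )

print(render("Marie Curie"))
```

Swapping the templates – or replacing them with an LLM – changes the presentation for each reader without touching the stored knowledge.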

And where is that stored?

That’s what I haven’t quite figured out. It needs to be decentralized and secure. In my article about the frontend-less web, I explored the possibility of everyone still having their own server, but that still requires AI to crawl the web, which isn’t as efficient as it should be.

We need something like a global knowledge cloud where the entirety of information is stored and to which every LLM has access. Nobody would own this knowledge cloud, just as nobody manages a cryptocurrency network. It would be updated in real time – without anyone needing to sift through pages of obsolete blog posts or ad-laden articles.

Let’s sum it up

I strongly believe that the AI era demands a rethink of how we store information. Instead of writing, publishing, indexing, reading, interpreting, and paraphrasing, we can imagine a more direct pipeline.

We can build a more fluid, responsive knowledge ecosystem – built not just for AI, but with it. Around it. And around humans.

And that’s the first step towards a true agentic era.

As always, I’d love to hear from you!