If you’ve spent time chatting with large language models (LLMs) like ChatGPT, Claude, or Gemini, you’ve probably noticed a common pattern: the conversation flows linearly. You ask a question, the model answers. You follow up, it continues. The thing is, thought and research processes don’t work like that.
We need the ability to branch off: ask a question about a specific part of the LLM's answer and get a response without being automatically scrolled to the very bottom of the conversation, forced to scroll back up to where we were.
Conversations with AI shouldn’t be a single thread, but a branching tree. I should be able to select a part of the answer that I don’t understand, pose a question, and get an answer in a split screen next to the main, linear thread.
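To make the tree idea concrete, here is a minimal sketch of how a branching conversation might be modeled. The names (`ConversationNode`, `branch`) are hypothetical, invented for illustration; no chat product exposes this structure today.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationNode:
    """One question/answer exchange in a branching conversation tree."""
    question: str
    answer: str = ""
    children: list["ConversationNode"] = field(default_factory=list)

    def branch(self, selected_text: str, follow_up: str) -> "ConversationNode":
        """Open a side thread anchored to a selected span of this answer."""
        child = ConversationNode(question=f"[re: {selected_text!r}] {follow_up}")
        self.children.append(child)
        return child

# The main thread stays linear...
root = ConversationNode("What is backpropagation?",
                        "Backpropagation computes gradients layer by layer...")
# ...while a side question branches off one specific phrase,
# leaving the main thread untouched.
side = root.branch("computes gradients", "How exactly are the gradients computed?")
```

The point of the sketch: the side thread lives on the node it questions, so the main thread never moves, and closing the split screen simply means navigating back to the parent node.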
I currently branch off by having multiple ChatGPT tabs open at the same time, but that’s not how it should be.
Extending the idea
And theoretically, you could take this further and allow as many nested sub-conversations as you want, something like a Reddit comment thread. Humans think in branches, not lines.
I get how this could get confusing though, so let’s stay with the split-screen idea.
Looking at you, Claude and Perplexity…