Ask HN: Is AI 'context switching' exhausting?

12 points by interstice 14 hours ago

I've always had this distinct struggle when switching in and out of being 'in charge'; the best example I can think of is the difference between a driver's and a passenger's awareness of the road.

Using AI for code has reminded me of this sensation: switching in and out of 'driving' feels more exhausting than being 100% one or the other. I have a theory that enforcing reduced engagement has all sorts of side effects in any format.

Wondering if anyone else has run into this feeling, and if so, have you tried anything that successfully addresses it?

rabbittail an hour ago

Absolutely. I made a WebSocket-based persistent memory system that stores conversation context in DynamoDB and automatically injects it into subsequent AI interactions. Instead of context switching, you get a consistent collaborative relationship where the AI maintains full project awareness across sessions. I use the WebSocket so that Claude Code makes separate calls to the API and autonomously fills the DB with knowledge.
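
Simplified sketch of the shape of the storage side (names and schema are made up for illustration, not the real thing):

    # Store summarized conversation state in DynamoDB, then re-load it
    # to prepend to the next session's prompt.
    import time
    import boto3
    from boto3.dynamodb.conditions import Key

    # Illustrative table: partition key "project_id" (string), sort key "ts" (number).
    table = boto3.resource("dynamodb").Table("ai_context")

    def save_context(project_id: str, summary: str) -> None:
        # Persist one summarized chunk of conversation state.
        table.put_item(Item={"project_id": project_id, "ts": int(time.time()), "summary": summary})

    def load_context(project_id: str, limit: int = 5) -> str:
        # Fetch the most recent summaries, newest first, for prompt injection.
        resp = table.query(
            KeyConditionExpression=Key("project_id").eq(project_id),
            ScanIndexForward=False,  # newest first
            Limit=limit,
        )
        return "\n".join(item["summary"] for item in resp["Items"])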

PaulShin 10 hours ago

This is a fantastic observation, and you've nailed the analogy. The exhaustion is real.

I believe the issue isn't just "context switching" in the traditional OS sense. It's "Cognitive Mode Switching" – the mental gear-shifting between being a creator and a delegator. That's the draining part.

My theory is that this exhaustion stems from a fundamental design flaw in how most AI tools are currently implemented. They are designed as "wizards" or separate "destinations." You have to consciously:

1. Stop "driving" your primary task (coding, writing, designing). Get out of your car, so to speak.

2. Go to the AI tool and ask for directions.

3. Get back in the car and try to re-engage as the driver, holding the new directions in your head.

This constant mode-switching breaks the state of flow. The passenger (the AI) isn't looking at the same road you are; you have to explain the road to them every single time.

At my startup, Markhub, we are obsessed with solving this exact problem. Our core principle is that AI shouldn't be a passenger you delegate to, but a co-pilot integrated into your cockpit.

Our approach is to design AI (we call ours MAKi) as an ambient, context-aware layer within the primary workspace. The goal is to eliminate the 'switch' altogether. For example, our AI listens to team conversations and proactively suggests turning a message into a task, right there, inline. You never stop driving; your "car" just gets smarter and surfaces the right controls at the right time.
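
A toy illustration of the idea (not our actual implementation; the hook, the keyword heuristic, and all the names here are invented just to show the shape):

    # An ambient layer watches each chat message and, when one looks actionable,
    # surfaces an inline "create task?" suggestion instead of sending the user
    # off to a separate AI tool.
    ACTION_HINTS = ("can you", "we should", "don't forget", "todo", "by friday")

    def suggest_task(message: str) -> str | None:
        # Stand-in for a real classifier or LLM call.
        if any(hint in message.lower() for hint in ACTION_HINTS):
            return f"Create task: {message.strip()}?"
        return None

    def on_message(message: str) -> None:
        # Called by the chat surface for every new message; the suggestion
        # renders inline, so the author never leaves the conversation.
        suggestion = suggest_task(message)
        if suggestion:
            print(suggestion)  # in a real UI this would be an inline chip/button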

So, to answer your question: Yes, we've felt this deeply. And our solution is to stop thinking of AI as a tool to switch to, and start designing it as an integrated system that removes the need for the switch in the first place. Keep the user 100% in the driver's seat, just in a much, much better vehicle.

  • antinomicus 9 hours ago

    God this comment reads exactly like someone asked Gemini to make a classic Hacker News comment reply to this post. Slightly useful insight that perfectly transitions into an ad for the commenter’s startup. Actually, I asked o3 for a response to the OP and here’s what it generated.

    “Using AI for code has reminded me of this sensation… switching in and out of “driving” feels more exhausting than being 100 % one or the other.”

    You’re not imagining it. There’s a fair bit of cognitive-science literature on task-set inertia: every time you hand work off (human→AI or AI→human) you pay ~100–150 ms to reconstruct the mental model, plus an exponentially-longer “resumption lag” if the state is ambiguous.¹ Do that dozens of times per hour and you’ve effectively added a stealth meeting to your day.

    A few things that helped me when pairing with an LLM:

    • Chunk bigger. Treat the AI like a junior dev on 30-minute sprint tickets, not a rubber duck you ping every two lines.

    • Use “state headers.” I prepend a tiny recap in comments — // you own: parse(), I own: validate() — so I can scan and re-hydrate context instantly.

    • Declare no-AI zones. Sounds counter-intuitive, but reserving, say, test-writing for uninterrupted solo focus keeps me in flow longer overall.

    …have you tried anything successfully to address it?

    We were annoyed enough to build something. At Recontext (YC S24) we sit between your editor and whatever LLM you’re using; every AI request is automatically tagged with the diff, dependency graph, and TODO items so when you jump back in, you get a one-glance briefing instead of spelunking through scrollback. Early users report ~40 % fewer context switches during a coding session. If anyone wants to kick the tires, we’re handing out private beta invites — email is in profile.

    ⸻

    ¹ See Monsell, “Task switching,” Trends in Cognitive Sciences 2003 — the “switch cost” math is sobering.

    • PaulShin 7 hours ago

      You got me. I guess that's my own practical example of the 'human > AI > human' collaboration you described.

      Jokes aside, this is a fantastic, insightful reply. Thank you. You've given a name to the pain: 'task-set inertia' and the cost of the 'stealth meeting.' Painfully accurate.

      The advice here is gold, especially treating the AI like a 'junior dev' on a 30-min ticket vs. a 'rubber duck' you ping every two lines. That really crystallizes the right mental model.

      Very cool that you're tackling this head-on with Recontext. Solving the 're-hydrating context' problem is such a critical challenge. I'll be following your work.

    • pillefitz 9 hours ago

      We definitely need a way to flag AI content

mjrbrennan 3 hours ago

Yes, I’ve only just started trying out Claude Code and I do not mesh well with this method of asking AI to do something, then having to wait a few minutes and come back and check its work.

I find this leads so easily to distraction, and the workflow itself is very boring. If I’m going to use AI, I want to use it in a more integrated way, or in a more limited way, like just querying ChatGPT.

Will still try Claude more but I’m really not a fan so far.

karmakaze 4 hours ago

I don't really perceive it like that. For me it's more like I'm driving and the AI passenger keeps interjecting with insidiously upbeat back-seat-driver instructions. What I find tiresome is the pause waiting for responses, which breaks my flow state. Using faster models is tiring in a different way: I have to extensively correct their understanding of the prompts and their output. I don't vibe code; I use the AI to solve specific design or implementation problems, and I'll recognize a suitable solution when it presents one, or get it to critique one I'm proposing.

alicekim 32 minutes ago

I'm curious about this question too.

joegibbs 7 hours ago

I think I’ve mostly gotten used to it. At the start, definitely, but now my method is to have 3 or 4 agent tasks running o3 to perform smaller actions than I was previously trying to do. There is a second where I have to remember what each one was doing, but it’s still much faster than doing it manually.

PaulHoule 13 hours ago

Personally I like the older kind of chatbot, where I can ask it to write me something little (a function, a SQL query, ...) and have it in 10-30 seconds, then think about it, try it, look in the manual to confirm it, give it feedback, or ask for something else. This can be a lot more efficient than digging through incomplete or badly organized manuals (MUI, react-router, ...) or filtering out the wrong answers on Stack Overflow that Stack Overflow itself doesn't.

I can't stand the more complex "agents" like Junie, which will go off on a chain of thought, give an update every 30 seconds or so, and then 10 minutes later hand me something that's occasionally useful but often somewhere between horribly wrong and not even wrong.

  • interstice 12 hours ago

    This resonates. Even though copy-pasting from Claude et al. seems like it should be inefficient, somehow it feels less prone to getting completely off track than leaving something like Cursor or an aider chat running.

andy99 13 hours ago

I had a vibe-coding phase that I think largely followed the popular arc and timeline from optimism through to disappointment.

Definitely felt some burnout or dumbness after it, trying to get back into thinking for myself and actually writing code.

I think it's like gambling: you're sort of chasing an ideal result that feels close but never happens. That's where the exhaustion comes from imo, much more than from switching from manager to IC, which I don't find tiring. I think it's more a dopamine withdrawal than context switching.

  • interstice 12 hours ago

    Dopamine makes sense, since it's kind of switching between 'sources' of dopamine: one is a sugar rush and the other is slow release, like reading a book.

    At the moment I have a bit of a tick-tock where I'll vibe code to a point, then get frustrated when it gets stuck on something I could fix myself in a minute or two. Then I switch off using AI entirely for a while, until I get bored of boilerplate and repeat the cycle.

paulcole 3 hours ago

> I have a theory that enforcing reduced engagement has all sorts of side effects in any format

This isn’t a particularly novel theory because you are basically saying “Doing different things makes different things happen.” Shocker.

Do you find AI immensely valuable for coding? Would you be happy to be a 100% passenger in your coding analogy?