Failover to Human Intelligence

written by Max Chernyak on 11 Aug, 25

There’s no denying that AI is getting very capable, but one thing keeps bothering me: what happens if something goes wrong?

Right now, self-driving cars still require human monitoring and intervention (outside of specially-designated areas). Isn’t the same true of any sufficiently complex system, where you might need to intervene quickly if AI fails to resolve an issue? Worth considering, right?

You might say — so what? AI-written code is arguably better, often with more comments and docs, so humans would understand it faster anyway. And that may be true, but with human-written code you can usually find the human who wrote it and ask them questions. If AI gets mixed up in too much context, and can neither fix nor successfully explain what’s going on, there might be nobody else familiar with the codebase. Are we saying that we should try to maintain some level of familiarity just in case?

You might say — well, we already have AIs capable of storing large permanent context, so they will know the codebase better than any human would. At some point AI will just become strictly better in every way. And that may be true, but I will keep asking the same question:

Can we forgo human intervention? What if AI servers are down? Can we ever completely rely on a technology to care for itself?

If the answer is a “no”, even a tiny “no”, then doesn’t it kind of negate the entire “full AI takeover” narrative? As we start to unpack this chain of thought back down from “AI perfection” to “but human intervention might still be needed in rare cases”, aren’t we inevitably back at the question: What should we do to help humans intervene?

Once you start answering this question, you might find yourself back at square one.

It’s like one of those little “snags” that seems insignificant at first, but as you drill down on it, it turns out to have far bigger implications than you realized.

You might say — well, most projects out there are not that critical. That’s true, but what if it grows into a bigger, more critical one later?

Even the tiniest possibility of needing human intervention leads me to think that it’s always going to be better for software developers to work together with AI, and never simply be replaced by it. Otherwise, failover is going to fail in the situation where it’s needed most.
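To make the idea concrete, here’s a minimal sketch of what “failover to human intelligence” might look like in code. It’s purely illustrative: `resolve_with_ai` and `escalate_to_human` are hypothetical stand-ins for whatever a real system would use, and the point is only the shape of the fallback, not any particular API.

```python
# Hypothetical failover path: try the AI first, fall back to a human on-call.
# These function names are illustrative placeholders, not real library calls.

class AIUnavailableError(Exception):
    """Raised when the AI service is down or times out."""

class AIUnresolvedError(Exception):
    """Raised when the AI responds but cannot resolve the incident."""

def resolve_with_ai(incident: str) -> str:
    # Placeholder: call your AI service here.
    raise AIUnavailableError("AI servers are down")

def escalate_to_human(incident: str) -> str:
    # Placeholder: page an on-call engineer who still knows the codebase.
    return f"human on-call is investigating: {incident}"

def handle_incident(incident: str) -> str:
    try:
        return resolve_with_ai(incident)
    except (AIUnavailableError, AIUnresolvedError):
        # Failover to human intelligence: this branch only works if a human
        # has stayed familiar enough with the system to actually intervene.
        return escalate_to_human(incident)

if __name__ == "__main__":
    print(handle_incident("payment queue is stuck"))
```

The except branch is the whole argument in miniature: it exists, so someone has to be able to execute it.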

Where am I wrong?