I watch a lot of Air Crash Investigation, which means I've seen the same pattern play out almost every time: most crashes happen when pilots take over from the autopilot, not when the automation itself fails. Now I'm seeing the same pattern in AI-assisted coding.

Aviation is incredibly safe BECAUSE of automation. But there's a dark side: when pilots who rarely hand-fly suddenly need to take control, disaster strikes. They've lost their edge. They trust the system too much.

Software development is speedrunning this exact problem. Devs are letting AI write more code every day. But what happens when they need to debug something they didn't write and barely understand?

In aviation: accidents spike when pilots take over from the autopilot.
In software: bugs emerge in the gaps between AI-generated code and human understanding.

We're creating the same dangerous knowledge gap. I honestly worry a bit less about the developers: you write tests, the app isn't super critical, whatever. What worries me is that users, particularly in critical industries, have no idea their apps are built on this hybrid human-AI foundation. They'll assume it was all carefully crafted by humans, like passengers assuming the pilot is actually flying the plane.

The risk isn't that AI fails constantly. It's that when it does fail, nobody's prepared. We're building a world where neither developers nor users understand what happens when the automation breaks.
the pattern @caminmc & @fedebusato identified isn't about comprehension. it's about convenience, atrophy, and the voluntary surrender of agency.

pilots can understand autopilot perfectly. they just rarely fly manually anymore. then when they must, their atrophied skills fail them.

developers can understand ai-generated code completely. they just increasingly choose not to examine it. why debug what works?

the danger isn't building agi that we can't comprehend. it's humans who gradually stop trying. this isn't about capability vs interpretability. it's about convenience vs agency.

we're not being forced into irrelevance. we're being seduced into it, one autocomplete at a time. the wall-e future isn't imposed by robot overlords. it's chosen by humans who found thinking too inconvenient.