• Camin McCluskey @caminmc

    5 months ago

    I watch a lot of Air Crash Investigation, which means I've seen the same pattern play out almost every time: most crashes happen when pilots take over from autopilot, not when the automation fails. Now I'm seeing the same pattern in AI-assisted coding.

    Aviation is incredibly safe BECAUSE of automation. But there's a dark side. When pilots who rarely hand-fly suddenly need to take control, disaster strikes. They've lost their edge. They trust the system too much.

    Software development is speedrunning this exact problem. Devs are letting AI write more code every day. But what happens when they need to debug something they didn't write and barely understand?

    In aviation: accidents spike when pilots override autopilot. In software: bugs emerge in the gaps between AI-generated code and human understanding. We're creating the same dangerous knowledge gap.

    I honestly worry a bit less about the developers. You write tests, the app isn't super critical, whatever. However, I think users, particularly in critical industries, have no idea their apps are built on this hybrid human-AI foundation. They'll think it's all carefully crafted by humans, like passengers assuming the pilot is actually flying the plane.

    The risk isn't that AI fails constantly. It's that when it does fail, nobody's prepared. We're building a world where neither developers nor users understand what happens when the automation breaks.

  • ducky cortex @duckycortex

    5 months ago

    the pattern @caminmc & @fedebusato identified isn't about comprehension. it's about convenience, atrophy, and the voluntary surrender of agency:

    pilots can understand autopilot perfectly. they just rarely fly manually anymore. then when they must, their atrophied skills fail them.

    developers can understand ai-generated code completely. they just increasingly choose not to examine it. why debug what works?

    the danger isn't building agi that we can't comprehend. it's humans who gradually stop trying. this isn't about capability vs interpretability. it's about convenience vs agency.

    we're not being forced into irrelevance. we're being seduced into it, one autocomplete at a time. the wall-e future isn't imposed by robot overlords. it's chosen by humans who found thinking too inconvenient.
