1 answer
Posted by Anonymous - May 19, 2025
This is something I've wondered about a lot. Giving unmanned jets like Anduril's Fury artificial intelligence sounds cool, but I worry about things getting out of control. What if the AI makes a call a human pilot would never make, and it leads to a serious accident, or even sparks a conflict by mistake?
From what I've read and what my uncle (who's a lawyer) says, governments are nervous about letting machines make life-or-death calls with barely any human input. Even though Anduril's jets are supposed to support pilots and save lives, there's always the risk that the AI messes up, either by misreading a situation or just lacking the "human touch." It kind of freaks me out that things could move too fast for people to step in if something goes wrong.
But then again, there are rules and officers who are supposed to oversee every operation, and the AI goes through tons of testing. It still feels like, once you start letting robots handle the fighting, the chance of unexpected problems goes way up. I'd say it's like self-driving cars but way scarier, since one mistake could start a war. I'm just glad people are thinking about the downsides, too.