When AI Steering Goes Rogue: A Crash Course
Raffi Krikorian learned the hard way that AI isn't infallible after his Tesla crash. A reminder that our tech trust issues are far from over.
Besties, buckle up because we've got some wild AI drama. Raffi Krikorian, who once led Uber's self-driving car gig, just opened up about a crash that wrecked his Tesla while it was in Full Self-Driving mode. Yup, his Model X decided walls were the new roads, and it's a whole thing.
So Krikorian was on a Boy Scout drop-off run in San Francisco when his Tesla pulled an unhinged move: swerving like it was auditioning for a Fast & Furious reboot. He tried to take the wheel, but nah. The car was already on its wall-hugging mission. Result? Totaled Tesla, but thankfully just a minor concussion for him. He spells it all out in The Atlantic, and it's a must-read.
Tech Ain't Perfect, And That's Scary
Here's the kicker: Krikorian's got history with autonomous tech. Remember, he ran Uber's self-driving show from 2015 to 2017. So if anyone knows the stakes, it's him. Yet even he fell into the trust trap, because when AI drives like it's got something to prove, it's tough not to trust it. The lesson? Even 'almost perfect' AI needs a hawk-eye watch. No cap.
“A machine that works perfectly needs no oversight. But a machine that works almost perfectly? That's where the danger lies,” he said. And he's not wrong. How many of us are ready to snap from Netflix mode to emergency driver in a blink? Exactly.
FSD's Sketchy Track Record
Let's not pretend this is a one-off. Tesla's Full Self-Driving system has been living rent-free in the regulatory hot seat. From driving on the wrong side of the road to ignoring red lights, it's like this tech thinks it's in a video game. And hey, Elon Musk's marketing ain't helping. FSD's been sold like it’s a main character moment, but in reality? It's not the autonomous dream we were promised.
And it's not just Tesla. Ford's BlueCruise tech also faced scrutiny after two fatal crashes. So, what now? How do we strike a balance between AI innovation and basic road safety?
Why This Matters
Okay, real talk. If tech leaders like Krikorian can get caught out by AI overtrust, what hope do we mere mortals have? Self-driving tech might be the future, but right now, it's like having an unpredictable co-pilot. Either the systems need to be genuinely flawless, or we need to train humans to take back control in a snap. Until then, maybe keep your hands on the wheel and your eyes on the road. It's not just about the cool factor; it's about keeping us alive.