Monday, April 08, 2019

On Safety

All of the Trolley Problem wankery about self-driving cars was always hilarious. How will we program their ethics??? As if that would ever be a priority, or as if actual human drivers encounter these trolley problem decisions in any meaningful way. In a split second you do what you do, and there isn't going to be much philosophical musing about who it's better to kill.

Stay away from Teslas, because Musk is a carnival barker lunatic who is trying to convince people that his slaughterpilot driver assist features mean you can drive around with your hands off the wheel, even as Tesla Legal is, of course, screaming NO NO NO KEEP YOUR HANDS ON THE WHEEL. I mean, seriously, stay away from them. If you see a Tesla on the road, get away from it. But otherwise I don't really worry about "actual" self-driving cars being safe. If they work, they'll be safe enough, even if I don't buy the "safer than human drivers" crap.

But trolley problem nonsense aside, what will safety mean? If you're a car company, safety means prioritizing:

1) minimizing legal liability
2) minimizing popular perception that your cars are unsafe (enhancing perception that they are safe)
3) keeping the drivers of your own cars alive (related to 2, but not precisely the same)

...

999) actual safety

In practice what this means is that the cars will be safe enough for the drivers, but might drive in such a way as to diminish the safety of people around them. What is legal is not always safe, and what maximizes my safety might not be best for you.

The simplest example is that slamming on the brakes is almost always without liability, and it's probably generally better to be hit from behind than to slam into something in front of you. But slamming on the brakes can cause problems for people around you (they hit you, they swerve to avoid you and hit someone else, etc.). Technically "safe," but havoc-causing nonetheless.