Driving home today, I heard some interesting talk about self-flying planes on both the Michael Smerconish show and CNN.
The impetus was the recent crash of Germanwings Flight 9525, which was commandeered and flown into the ground by an apparently suicidal copilot.
Smerconish asked listeners if they'd be OK with flying on a pilotless plane. Meaning, the "pilot" would be a computer endowed with artificial intelligence. I thought to myself, Sure, why not?
This will surely come to pass eventually. It's just a matter of time.
Self-driving cars are already being tested, and regular cars are beginning to be equipped with rudimentary self-driving technology, such as systems that apply the brakes when a potential collision with the car ahead is detected.
I agreed with Smerconish when he said that people in the future will look back at these 2015 discussions of self-flying planes and marvel, "How could they have doubted that this would be a good thing?"
What's evident is that the Germanwings tragedy has exposed the fragility of human rationality. Not that this was ever in question, of course. But until recently there hasn't been a viable alternative to the Homo sapiens brain for activities demanding complex decision-making.
With the rise of artificial intelligence, the situation is changing.
Listening to the radio today, I was struck by how rapidly humans are being forced to consider the pros and cons of relying on artificial intelligence to do things that computers can handle better than we can.
Looking back, we probably should have seen the current debate over cockpit security coming. The idea that hijackings can be stopped by strengthening the cockpit door is based on an assumption that the "dangerous people" are in the passenger area.
Now, after several pilots have deliberately crashed planes and killed everybody on board, it is clear that the more basic problem is the human brain -- which works in mysterious ways, some life-affirming and some life-denying.
Preventing passengers from getting through a locked cockpit door has cleared the way for psychologically damaged pilots to seize the controls and crash the plane. Whatever is done with cockpit security, the weaknesses of the human psyche will produce problematic side effects.
So again, my attitude is: Why not move toward self-flying planes?
As I heard an advocate for this say, a plane's artificial intelligence will perform better in stressful situations, since it won't feel stress. It can also be programmed to respond more quickly and appropriately to unusual events.
A skeptic, though, worried that a computer couldn't make a good decision about whether to take off in dicey weather. He wanted a human pilot outside the plane to be able to override the artificial intelligence in certain situations.
In the beginning stages of self-flying planes, that probably makes sense. But once it is clear that computers are better pilots than human beings, worries about the capability of aviation artificial intelligence won't take long to recede.
We're standing on the edge of The Age of Robotics. Recently I heard a military expert say that within a couple of decades, wars will be fought by machines, not people.
So if artificial intelligence soon will be demonstrably better at killing than we are, doesn't it make sense that artificial intelligence also will be better at saving lives?
(Assuming the Three Laws of Robotics are followed, other than in special situations such as war.)