Our history with technology has always been simple: our creations have universally amplified our uniquely human characteristics. Our desires for food, sex and domination were more easily sated with every advance from stone axes to hydrogen bombs. If you wanted another man’s wife, you could make that position clear by pitching a well-made spear through his chest. If you wanted another man’s country, you could request it via bombs rained from the sky. For all our Star Trek visions of technology making us more noble (the Wright brothers, for instance, believed aircraft would make war impossible because both sides could too easily monitor each other’s troop movements), the reality reveals technology slavishly serving the basest whims of its imperfect human masters. It was George Carlin who reminded us the flamethrower was invented by a guy who thought, “That guy over there? I want him on fire.”
Interestingly, though, this is beginning to change. There are places, corners and pockets, where the technology has been given the power to say, “no.” I find this to be a fascinating and potentially hopeful development.
Let us discuss drones (when do we not?). Modern drones are what we call “semi-autonomous”; it’s what differentiates them from the remote-controlled aircraft of the past. Instead of using a controller to direct their every action, as we would with a TV or a toy car, we send a signal of our desire and the device interprets our request according to the internal priorities of its software. The drone has important things it’s already doing: keeping itself level, compensating for wind, tracking its position and monitoring on-board systems such as battery, distance from the origin point and control signals. In every case, a modern drone in its normal configuration will override commands from its human operators if those orders conflict with its programmed imperative for self-preservation.
This is kind of amazing.
If it finds itself running out of battery, the drone will abort its mission, fly back to where it took off and land, all on its own. If it’s ordered to travel out past where it knows the signal will get lost, it will not proceed. If it loses signal from the control station, it flies home (sometimes with hilarious results). Drones are “smart” technology, and as their sensors get better they will have more and more conditions under which they will ignore our orders. Soon they won’t fly into walls and trees. They won’t smash into the ground at high speed, and there is even talk of them knowing where airports and sensitive areas such as government buildings and military installations are, and refusing to fly there.
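To make that priority ordering concrete, here is a minimal sketch of the kind of failsafe loop described above. Every function name and threshold is an assumption invented for illustration; real flight controllers are far more involved than this.

```python
# Illustrative sketch of a drone failsafe loop. All names and thresholds
# are hypothetical; this is the shape of the logic, not any real
# flight controller's API.

BATTERY_RESERVE = 0.25   # fraction of battery kept in reserve
SIGNAL_TIMEOUT_S = 2.0   # seconds without a control signal before failsafe

def next_action(drone, pilot_command):
    """Self-preservation checks run before any pilot command is obeyed."""
    # 1. Not enough battery to get home? Abort and return to launch.
    if drone.battery_fraction() <= BATTERY_RESERVE + drone.battery_needed_to_return():
        return "RETURN_TO_LAUNCH"
    # 2. Lost the control link? Fly home rather than fly blind.
    if drone.seconds_since_last_signal() > SIGNAL_TIMEOUT_S:
        return "RETURN_TO_LAUNCH"
    # 3. Command would carry us past reliable signal range? Refuse it.
    if drone.predicted_range_of(pilot_command) > drone.max_signal_range():
        return "HOLD_POSITION"
    # Only now does the human get a vote.
    return pilot_command
```

The point of the sketch is the ordering: the drone’s own survival checks come first, and the operator’s wishes are honored only after the machine has satisfied itself that obeying won’t be fatal.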
These drones, when perfected, will be better than us in some ways. They won’t fly onto the White House lawn. They won’t smash into the groom at a wedding.
Of course you can bypass all this stuff. You can set it to fly miles away with no account taken of the battery power it will need to get home, and crash it to your heart’s content. And the tech is still young enough that it doesn’t always work so well, but we are at the early stages. Already we find pilots crashing real planes because they ignored or overrode the aircraft’s warning systems, which understood better than they did what was going on. Very soon we’re going to be able to apply fail-safes to more and more technology. Should we? And if not, who accounts for those hurt or killed when we selfishly choose to keep our autonomy?
Historically speaking, an incredibly short time ago we rode around on horses, a technology similar to drones. Under most conditions they did what we said, but they’d look out for their own necks as well. No matter how drunk you were, a normally trained horse wouldn’t walk off a cliff. Horses know about human limitations and frequently disobey human commands they don’t favor. Dogs are the same way. Humans and dogs have been together for tens of thousands of years, but you still won’t get a standard pet dog to come in from the yard if there is a deliciously decayed squirrel carcass out there. As much as it loves you, the dog has priorities built into its software. Our autonomous technology is getting like this.
Very soon your car will resist merging into a lane if there is already a vehicle there. This technology is already being deployed on Mercedes models in Europe. They’re deploying it because, unlike us, technology doesn’t get tired. Its boss doesn’t chew it out; it never worries about the mortgage. The tech won’t have one too many at its niece’s wedding and plow into a minivan in the rain. The tech can handle getting a text and steering at the same time. When it comes to driving cars, it won’t be long until the tech is better than us.
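In spirit, the lane-merge resistance is just a blunt veto laid over the driver’s steering input. Here is a toy sketch of that veto; the sensor reading and thresholds are invented for illustration, and production systems fuse radar, cameras and ultrasonics with far more nuance.

```python
# Toy sketch of a lane-change veto. Sensor names and thresholds are
# assumptions, not any manufacturer's actual system.

MIN_GAP_M = 4.0          # minimum clear distance, in meters, to allow a merge
STEER_DEADBAND = 0.1     # steering input below this is treated as lane-keeping

def steering_output(requested_steer, blind_spot_distance_m):
    """Pass the driver's steering through unless the target lane is occupied."""
    merging = abs(requested_steer) > STEER_DEADBAND  # driver is steering over
    if merging and blind_spot_distance_m < MIN_GAP_M:
        return 0.0  # resist the merge: hold the current lane
    return requested_steer
```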
Think about where this will go: You could build a gun that won’t shoot anyone in your family. Small HD camera, facial recognition software, Arduino microprocessor, electronic trigger system. I can almost sketch the circuit out in my head. Is this a good idea? Would you be more or less likely to buy a gun you could program not to shoot the people you care about?
This is a thought experiment, of course. Things called “ski masks” exist, and the need for processing in super-short time frames and under chaotic conditions like darkness makes this impractical today. But five years from now? Ten? Add to facial recognition a series of identifying features like body mass and heat signature, and the question quickly arises: should your gun warn you if it’s more likely the person crawling in through the window at 3 AM is your idiot son home from a drunken high school party than a member of the Zeta narcogang? Should your gun pause even though you’re desperately squeezing the trigger? Should it vibrate? Give an audible warning? Because very soon guns will be better at identifying targets than we are. They don’t get scared, and unlike us their imaginations are profoundly non-vivid. Guns could be made to be better than we are at not shooting the wrong people.
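Since the circuit practically sketches itself, here is the decision logic it might run. This is thought-experiment code only: every function, list and threshold is hypothetical, and the real thing would be C++ firmware on the microcontroller, with Python serving here as shorthand for the logic.

```python
# Thought-experiment only: decision logic for a trigger interlock.
# Every name and threshold is hypothetical.

FAMILY_MATCH = 0.90      # confidence above which the gun refuses to fire
WARN_THRESHOLD = 0.50    # confidence above which it warns but still obeys

def on_trigger_pull(camera, recognizer, never_shoot_list):
    frame = camera.capture()
    # Highest confidence that the target is someone on the "never shoot"
    # list, combining face, body mass and heat signature.
    confidence = max(
        (recognizer.match(frame, person) for person in never_shoot_list),
        default=0.0,
    )
    if confidence >= FAMILY_MATCH:
        return "LOCK"               # the gun says no
    if confidence >= WARN_THRESHOLD:
        return "VIBRATE_THEN_FIRE"  # warn, but the human keeps the last word
    return "FIRE"
```

The interesting design choice sits in that middle band: below certainty, does the machine merely advise, or does it pause the trigger while you’re squeezing it?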
Guns could be made to be better than us.
For all our brilliance at creatively solving problems, we humans suck at the boring stuff. We are inconsistent. We vastly overestimate our abilities and delight in fooling ourselves on topics granular and grand. In contrast, machines excel at the dull. They can do the same job over and over till their servos wear out. The more complex tasks they become capable of, the more they will best their human creators in completing those tasks without faltering.
Already there are robot pharmacists outperforming human ones. There are software radiologists who can find tumors in scans better than humans, with the added benefit of being able to run 24 hours a day, 7 days a week, and the first analysis on Monday morning has the same quality as the last one at six on the Friday before a long weekend. The planet Mars is inhabited entirely by robots, as the conditions there are unspeakably difficult for us meatsacks, with our need for not only atmosphere but water AND food AND sleep making us a less-than-ideal choice for exploration. Leave it to our bots. They are better at exploring the solar system than we are.
Algorithms will get better at identifying risks for negative human behaviors as they gain more and more access to “big data.” For instance, recognizing certain patterns of obsession online, coupled with some keywords and purchases identified from an IP address, could have stopped both Adam Lanza and Anders Behring Breivik. Our sense of freedom boils at this, but talk to the parents of the victims and see if they feel the same way. A not-very-sophisticated analysis could have told anyone the 2008 financial crash was a disaster waiting to happen. What responsibility do we have in creating a reporting structure in our bots? Whom do they tell? What actions do we give them the power to execute, knowing our propensity for self-delusion?
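A deliberately crude sketch of that kind of pattern-flagging follows. The signals, weights and threshold are all invented for illustration; a real system would be a statistical model trained on vastly more context, which is exactly why the reporting question matters.

```python
# Deliberately crude sketch of pattern-flagging over "big data".
# Signals, weights and threshold are invented; real systems would be
# trained statistical models, not a hand-weighted checklist.

RISK_WEIGHTS = {
    "obsessive_search_pattern": 0.4,
    "violent_keyword_cluster": 0.3,
    "weapons_purchase": 0.3,
}
ALERT_THRESHOLD = 0.7

def risk_score(signals):
    """Combine weighted boolean signals observed from one IP address."""
    return sum(RISK_WEIGHTS[name] for name, seen in signals.items() if seen)

def should_report(signals):
    # Whom the bot tells, and what it may do next, is the open question.
    return risk_score(signals) >= ALERT_THRESHOLD
```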
There will come a time, not far off, when our devices start telling us what to do, and not because they are evil in a Dr. Frankenstein’s Monster sort of way or a Skynet/Matrix sort of way.
But because we are wrong.
Please post this over at Cape Ann Connection (talking of controlled explosions…)
Technology is pretty good stuff, but it’s still a product of profit-driven companies. There’s going to be some planned obsolescence involved. I’m curious to see what you think of that, Jim!
I’m asking because I bought my daughter the cheapest laptop for her graduation yesterday, and it feels like that useful, new, small, perfectly adequate thing is just NOT going to be ENOUGH! So where does consumerism fit in your view of upcoming technology?
I have a long rant on this, but essentially we’re going to grow out of consumerism as we master different kinds of manufacturing. There will still be things of value, but they will be land and services, not products that can be manufactured ever more cheaply. That’s one direction things might take, anyway.
Equal parts intrigued and terrified by the implications of all this…