
Moment of Truth: We Just Need To Know Our Limits

Welcome to the Moment of Truth, the thirst that is the drink.

After decades of failing to perfect it, humans still can’t admit that artificial intelligence is pretty stupid. Artificial intelligence is about as intelligent as artificial flowers are floral or artificial fruit is fruity. My favorite thing about humans trying to create artificial intelligence is our penchant for denial. We’re great at denying that things are going terribly wrong. Witness our reaction to global warming.

Have you seen HBO’s Chernobyl? All the young Stalinists today are calling it “anti-Soviet propaganda,” as if propaganda were necessary to find a totalitarian thought-policing bureaucracy unpalatable. Anyway, whether it hews to reality or not, it’s a great story of idiots in denial finally brought face-to-face with their hubris of thinking they can control technology.

Now, I don’t want to be a knee-jerk alarmist. People are so worried about artificial intelligence controlling us. I notice no one is worried about artificial legs walking all over us, or dentures biting us to death. And, listen, we’re going to need artificial intelligence as our natural intelligence rots and falls off. Which it seems to have been doing forever. You know who thought so? Seneca. Or Cicero. One of those bastards. Diogenes!

But we must recognize when enough is sufficient.

I saw a video of a robot being hit by humans with rods, wobbling a bit unsteadily but regaining its stability, taking the rods away from its assailants, then threatening to thrash them if they tried to attack again. One woman’s comment on the video: “Can we stop being mean to robots?” My comment: “Can we stop improving robots?” Because people act like it’s inevitable that robots are going to get more agile and effective. It’s not inevitable. We can say, “I don’t want to live in that Black Mirror episode where the robot dogs hunt people down. Don’t build those!”

It’s not whether you can think your way out of the paper bag, it’s with what style you get out of the paper bag, and what origami shape you fold the leftover bag into. Wait, no it’s not. It’s best to avoid getting trapped in the paper bag in the first place.

Think about the Turing test. Just think about it. There, you’ve already done more than the Turing test requires from a computer. All the Turing test requires is that a person not be able to tell whether they’re having a conversation with another person or with a computer. I think Turing would agree at this point that it’s a stupid test. Writing a program that mimics one side of a conversation turns out to be a completely different effort from creating an artificial mind. Turing himself always suspected his friends of being elaborate computer simulations, so he wouldn’t have been a reliable judge of the Turing test. The jury is still out on whether he would have been able to judge the Bechdel-Turing test, in which the point is to write a computer program able to convince a woman she’s having a conversation with another woman about something besides a man.

I have three points here: 1, the ability to create an electronic mind that can do what the human mind does is beyond us; 2, our mediocre attempts to do so will produce nightmarish results which our public policy authorities will deem acceptable; and 3, we have the choice not to go down this pathway of doom into the black forest of monstrous horror.

Point 1: We can’t even create an electronic chicken mind, let alone a human one. There’s nothing sacred about the human mind that prevents us from doing so. We simply don’t know what a mind is. 10,000 years or so of hanging around with the current model and we still can’t describe what it is, where it comes from, what it does, or how it does it. We don’t know where consciousness comes from. We don’t know what dreams do or why we need them. When it comes to creating even a model of the mind, we’re superstitious primates making images of gods out of mud. So, that’s point 1. We don’t even know why the mind is such a difficult thing to understand, but it might have something to do with the fact that the thing we’re using to try to understand it is the thing itself. It might not be the right tool for the job.

Point 2: Pretending our failure is success will lead to trouble. Take facial recognition software. It has trouble doing its one job, recognizing faces, particularly those of non-white people. This leads to all kinds of problems, one of which is injustice. Now, for most of our history, our justice systems have led to injustice. They’re very flawed. We know this, yet we continue to be shocked when someone like Ava DuVernay illustrates the flaws of our justice system in a streaming docu-drama. The only way bad artificial intelligence could make our justice system worse is by creating more injustice in it than we already tolerate. This much we can foresee: artificial intelligence, or our version of it, which we can confidently call “artificial stupidity,” will lead to previously unimagined opportunities for new, more thrillingly Kafkaesque miscarriages of justice, resulting in people being imprisoned, made to suffer, and put to death in novel situations marked by capricious cyberpunk cruelty. Our time-honored tolerance for our own society’s hypocrisy and inhumanity will really be put to the test.

Point 3: We have a choice. We think we don’t, but we do. After centuries of coming up with new ideas to make money, or profitable misery, we have come to assume that no one likes to put the genie back in the bottle, or the toothpaste back in the tube. “Look at the pretty genie,” we say, or, “look at all the sparkly toothpaste!” But recently some lunatic in China used CRISPR gene-editing technology to create genetically altered twin human beings. And the genie-bottle rubbers stopped rubbing their genitalia long enough to say, “Whoa. Not cool, dude.”

There was blanket medical and scientific condemnation from around the world. I like to imagine that this is the first shove to move the capitalist reflex off its pedestal, the reflex of developing every technology as soon as it appears in the hopes of becoming the next rich person made of 99% perspiration. You know, people made of that much perspiration are bound to have some glandular issues.

The recent recognition that some types of technology are too immoral to pursue makes me irrationally exuberant. Yeah, we don’t need to capitalize on that new thing! It’s going to put the wrong people in prison, it’s going to put people out of work, it’s going to destroy irreplaceable manifestations of non-human creativity like forests and oceans, it’s going to create unintended genetic defects in our experimental subjects, like turning them into 100% perspiration, so we are going to choose not to pursue that activity. We are going to consider the consequences before they happen, which is how we tell our children to approach things like drugs or potentially dangerous behavior. We will be able to point to concrete examples. Yes, kids, we’ll say, we split the atom because we could, we dropped the bomb, we nearly China syndromed an entire continent, but when it came time to make mutant children in a lab, or robot policemen, or robot witnesses, or robot juries and judges, we used our common sense and said, No, we’re not going to go there. We have more sense than that. Our lives are about more than finding the next iPhone. Our lives are worth more than someone else’s ability to profit or some magical thinking about wealth concentration improving the economy for everyone. We are the human species and we will take control of our destiny. To the best of our ability.

This has been the Moment of Truth. Good day!
