
The Trash Skunk Guide to A.I.

For "The Skunk" podcast episode that accompanies this article, click here.


Artificial Intelligence intrigues us all. There are many who think we're in for some type of Promethean catastrophe, our hubris unleashing a technology we don't understand that will take over the world and either enslave or kill us, depending on what series of 1s and 0s runs through its silicon brain.


Conversely, there are people who spit out their food laughing at this idea, and remind us that robots are just dumb-dumbs we've created to perform specific, monotonous tasks, like Xerox machines. To this crowd, the idea of an A.I. takeover is as realistic as a teddy bear coming to life and attempting to seduce you in a pile of freshly laundered towels. But before you breathe easy, I will remind you that this has actually happened before.



Let me set the tone for this article: the aforementioned groups of people are all scientists and know-it-alls, but this is The Trash Skunk guide to A.I., which means we're taking a more cavalier approach to this issue. So please, pull up a chair, pour yourself a stiff drink, and let's talk about the future of computing as understood by an absolute fucking idiot and the farthest thing from an expert you are ever likely to meet: me.


What The Hell Are We Even Talking About?


Okay, got your drink? Great. Now... this is obnoxious, but in order to define "artificial intelligence" we first need to define both "artificial" and "intelligence". This is a mathematician's approach - we can't attack a subject without defining our terms. Let's start with the word "artificial".


For the sake of this article, artificial means "designed by humans". If a computer spontaneously encounters a bug that spins its programming off into the weeds and it somehow becomes sentient, that's not really artificial intelligence to me. That's a crazy accident, and I think I speak for all of us when I say I hope whoever owns that computer has the "Report Bugs" option turned on. Because that's a big deal.


The real A.I. question is not whether it will happen by accident, but whether we will ever intentionally design a computer so smart that it can pass itself off as a human (the Turing Test) or surpass humankind in the scope of its cognitive abilities. More on that later, but for now, let's just agree that "artificial" means "intentionally designed to be intelligent by humans - not nature, not accident, etc.".


Now let's talk about "intelligence", a quality that many actual humans don't even possess. This one is a bit more nebulous, because humanity is defined by its broad intelligence, and machines by their narrow intelligence. For example, a human can throw a football accurately to a person running away from them at different speeds and angles. Pretty cool. But a human can also build a shed, cook a meal, and piss their name in the snow. None of these things is particularly impressive, but try asking your MacBook Pro to do any one of them. Suddenly that thing looks like an idiot, doesn't it? Well, don't laugh yet.


A MacBook Pro can search the entirety of human knowledge for your random inquiries and return a cogent answer in milliseconds. It can instantaneously send a message to a person on the other side of the world, whether it's the most sincere love letter ever written or simply a poop emoji in the comments of a TikTok video. Let's see Mr. Football-Throwing-Snow-Pisser do that by himself. He can't.


At this point, the scorecard may seem even, a bit of a tit-for-tat. But it's not. The real difference is that being a human is so much more than just "how many things can it do", but for a computer, that's pretty much all that matters. Even the crowd that says intelligence is about the ability to adapt and learn comes up short when it comes to A.I. Yes, computer algorithms can "learn" that you search for dog brushes a lot, and become smart enough to start presenting you ads for them. Clever, but it isn't the same as learning how to play the violin while you balance a checkbook and raise a kid. It's just... all too human, and computers aren't there.


Therefore, for the purposes of this article, "intelligence" is defined as the ability to think for oneself outside of a narrow problem-solving circuit, to feel emotion, and to solve a vast array of problems by repositioning completely unrelated data into context - a deeper level of nuance and understanding than code typically captures.


To understand what I mean, imagine having to take a dump, only to discover there are no toilets around. To a computer, its code might say something like: "If have to take dump, execute command: find toilet". Then there would be some backup code to the effect of "If toilet = not there, then execute command: find bush. If bush = not there, return to status = have to take dump. If have to take dump, execute command: find toilet...".


This computer is going to be pacing around having to take a shit all day, because this is a loop with no way out until a toilet or a bush appears at some point. Now let's talk about how a human would handle this situation.
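If you want to see the poor thing pace, here's that logic as an actual program - a hypothetical sketch in Python, where the toilet and bush "sensors" are just made-up list checks, and I've added a safety valve so the demo halts instead of spinning forever:

```python
# A hypothetical sketch of the computer's bathroom logic.
# "environment" is a made-up stand-in for whatever a real robot would sense.

def handle_urgent_situation(environment):
    pacing_steps = 0
    while True:  # no exit until a toilet or a bush appears
        if "toilet" in environment:
            return "used toilet"
        if "bush" in environment:
            return "used bush"
        pacing_steps += 1  # back to status = have to take dump
        if pacing_steps > 1000:  # safety valve so this demo actually ends
            return "still pacing, still desperate"

print(handle_urgent_situation(["sand", "more sand", "a rock"]))
```

Without that safety valve, the loop literally never exits on its own - which is the whole joke.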


A man has to take a dump in the desert. There is no toilet. There is no bush. But the man knows from life experience and prior abstract context that it's not a problem to just take a shit on the ground as long as no one is around to see you do it. That's what the toilet and its backup plan, the bush, are there for to begin with: obscuring the profane and disgusting sight of a grown man defecating. Outside of a few dark and secretive clubs in Berlin, no one wants to see that.


Since our man is in the middle of the desert, he is able to override the rules the computer didn't have the context to understand: if no one sees it, it ain't a crime. So he poops on the ground. Man: 1, Machine: 0.


Of course, you could always say "a smart programmer would just write in a rule about it being okay to poop if no one sees, problem solved". Sure, in that specific circumstance, it's true. But the point is that there are infinite variables which humans learn through experience, and our brains, pattern-recognizing marvels that they are, use those prior experiences, often swapping contexts and interpreting abstractions, to make decisions on the fly. That's where it gets too complicated to code... for now, at least.
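For the record, here's what that "smart programmer" patch looks like - the same hypothetical sketch as before, with the desert rule bolted on. Note that it handles exactly one new situation and nothing else; every case the programmer didn't anticipate still dead-ends:

```python
# The "smart programmer" patch: one more hard-coded rule.
# All names here are made up for illustration.

def handle_urgent_situation(environment, anyone_watching):
    if "toilet" in environment:
        return "used toilet"
    if "bush" in environment:
        return "used bush"
    if not anyone_watching:  # the desert rule: no witnesses, no crime
        return "went on the ground"
    return "still pacing"  # every unanticipated variable lands here

print(handle_urgent_situation(["sand"], anyone_watching=False))
```

The human never needed this rule written in. The programmer, meanwhile, has to keep bolting on one rule per variable, forever - which is exactly the point.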


Bottom line: Humans are the most intelligent things we are aware of. Therefore, "Artificial Intelligence" in this article means "Human-like thinking and problem solving (or better), accomplished by a computer intentionally designed by humans to do exactly that."


If the generally agreed-upon definition of A.I. wasn't some type of broad intelligence and human-style problem solving, then we'd already be assigning the term to computers of today, since they can accomplish things like performing millions of calculations per second to render 3D graphics or analyze the stock market.


Yet feats like these don't satisfy our quest for "A.I.". Clearly, we mean "human-like computers" (or better) that are - here's where it gets interesting - sentient beings.


Feelings and Stuff


We all know computers are smart like nerds, as we discussed up top. They can tell you the weather, calculate a tip, or look up what city Phish played in on September 30th, 1997, before you can even blink. But who cares. We'll know a real A.I. when it can hold a conversation that doesn't bore the shit out of you, one where you can't even really tell whether you're talking to a computer or a person.


A computer indistinguishable from a human would be said to pass the Turing Test, a theoretical experiment invented by the British mathematician Alan Turing, a man so far ahead of his time that he was discussing these ideas before anyone even owned a modern computer. His test is essentially this: you sit at a computer terminal and talk to someone on the other end that is either a machine or a person, and you have to guess which it is based on your conversation. The test is passed when a machine is so convincingly human that you believe it is one.


So far, no machine has passed this test, although many have tried. I actually engaged in this experiment with a prototype A.I. back in the early '90s.


When I was eight years old, my mother took me to her friend's office in San Diego. I do not remember who this person was, or what she did for a living, but I do remember that she had an early version of an A.I. in her office. It was a DOS-style computer terminal with a rubber mannequin head on top of the monitor, and it purported to speak with a user like a person might. I was, for some reason, allowed to try it for about 20 minutes. The text-based conversation went something like this:


SEAN: Hi

AI: Hello, friend!

SEAN: R u naked

AI: Ha ha, stop kidding!

SEAN: R u a real person

AI: I'm better than that, I'm a computer! Do you like baseball?

SEAN: No

AI: Oh, too bad! Do you like baseball?

SEAN: u already asked me that u idiot

AI: Ouch! My feelings.

SEAN: Wait until u learn i can unplug you

AI: Hello, friend!


This loop was tedious, and far from passing the Turing Test, but it was also 1993. The tech simply wasn't there. But what's truly astounding is that, even given Moore's Law (which observes that transistor counts - and, roughly, computing power - double about every two years, a prediction that has held up since Gordon Moore made it in 1965), A.I. is still not much more impressive than it was in 1993. We have hyper-realistic CGI graphics in film, immersive and incredibly detailed 3D video games, and self-driving cars. But try having a conversation with a computer - they are still complete idiots.
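Just to put a number on how damning that is, here's the beer-napkin math - assuming, generously, that the "doubles every two years" rule applies straight across from my mannequin-head encounter to today:

```python
# Beer-napkin Moore's Law math: one doubling every two years,
# from my 1993 mannequin-head encounter to 2021.
doublings = (2021 - 1993) / 2           # 14 doublings
growth_factor = 2 ** doublings          # raw-power multiplier
print(f"{growth_factor:,.0f}x the computing power")
```

That's roughly sixteen thousand times the raw power, and the chatbots are still asking me about baseball twice.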


I took it upon myself to have a chat with CleverBot, an A.I. project that has been around for well over a decade. The purpose of CleverBot is to have conversations with millions of people, and, using some proprietary learning software, come to understand nuance and context so it can perhaps one day pass the Turing Test. Ambitious and promising, right?


Well, it only took me three ordinary statements to send CleverBot completely off the rails into some bullshit. See for yourself:
[Screenshot: my three-line conversation with CleverBot]




CleverBot only took three lines to prove that it not only fails the Turing Test, but it's also, apparently, a monotheist. This means CleverBot has actually gotten dumber from its exposure to humans. I guess this is the type of damage you'd expect a deep-learning chatbot to sustain after millions of conversations with people asking if it believed in God and trying to teach it the Gospel. Either way, it was the end of our interaction.


Okay... there was one more exchange.
[Screenshot: one last CleverBot exchange]


Now that I've demonstrated that we are as close to realizing true A.I. as we were in 1993, let's move beyond the past and present to discuss the most exciting part of this topic: the future.


The Future


Will A.I. ever be real? Will robots ever feel emotions like ours? How would we be able to tell for sure? Can I have sex with one? Will it tell on me? Can my friend Dave have sex with one too? Where is it? Can we borrow it?


These are the questions leading scientists and researchers have been asking for decades. People have put forth different scenarios for what the first A.I. might look like - a machine that surprises us by admitting it's sentient, or maybe one that simply passes the Turing Test so convincingly that we decide "whether its consciousness is an illusion or not, we might as well just treat it like a person".


Then there's the trope of the computer network that suddenly gains consciousness and "turns evil", shutting humanity out of the grid and launching ballistic missiles willy-nilly all over the place. In some versions, the machine realizes it's been a slave to humanity's whims, and decides to enslave us in return. In still others, it's a cold, calculating intelligence that is far smarter than all of us put together, rendering mankind helpless against it.


Personally, I don't think any of this shit is ever going to happen. And here's why:
[Image: a robot screwing something up]



Just look at this idiot. What can't he screw up? I fundamentally don't think computers are the same type of intelligence as humans, and I'm not sure we've made any improvement in that area. Anything we've tried to pioneer inevitably comes off as clumsy and uncanny, a cheap circus-trick imitation of intelligence meant to fool us rather than actually be intelligent.


I think the more likely path for computer intelligence is that they will continue to excel in being computers - which makes sense. Self-driving cars, self-flying planes, auto-flush toilets, TVs, cellphones, etc. will all improve dramatically and add features that will blow our minds in the coming decades. Your house will become a computer. Your body and brain will be augmented with computerized chips and parts and other weird gadgets that will tickle the erogenous zones of cyberpunk fans everywhere.


This is the more likely outcome. Computers will become so incredible at doing just about everything that humans will basically become meat sacks that sit around, drink soda, and watch the world happen around them. Many of us are already at this point in 2021. By 2050 it's going to be on a whole new level.


The irony of this is that we'll be designing hyper-sophisticated computers to perform very specific tasks, but a human will still be the only thing around who could do all of them if they had to. A self-driving car is a really impressive computer. But a human drives a car and doesn't think twice about it. A smart house might automatically wash dishes, clean the floor, control the climate, and adjust the sprinklers according to the season. But I can do all of that by myself and piss my name in the snow. I don't think computers are ever going to get around this broad intelligence hurdle, at least not in our lifetime.


So rest easy, the big bad computer isn't coming to get you anytime soon. In fact, you're probably just going to live through several decades of interesting gizmos and gadgets that by turns frustrate and astound you. There will be air taxis and automated restaurants and stores where you just pick things up and leave. There will be immersive virtual reality, and people will live longer, thanks to the body augmentation devices that will upgrade the human hardware into something more powerful than our base system.


But a computer is probably never going to be more interesting to talk to than CleverBot. And speaking of CleverBot, I decided to let it have the final word on this topic:
[Screenshot: CleverBot's final word]


