Good user-experience design is all about setting proper expectations and then meeting or exceeding them. When designing an interface that promises a taste of “artificial intelligence,” we’re basically screwed from the get-go. I’m convinced that a big reason the average person is uncomfortable with, or unsatisfied by, applications that tout themselves as artificially intelligent is that no one is quite sure what the phrase even means.
Lately, in the tech world, “artificial intelligence” or “A.I.” has become shorthand for any system that uses a neural network – a pattern-recognition system loosely inspired by the signal processing that goes on in the human brain. Simple neural networks don’t do much more than analyze things and sort them into categories. Others, like IBM’s Watson, use a lot of computing power to automatically detect patterns in mountains of seemingly unrelated data. This process, sometimes called “deep learning,” has a wide range of sophisticated applications such as natural language processing, facial recognition, and beating Ken Jennings at Jeopardy!.
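(A quick aside for the curious: stripped of the mystique, a “simple neural network” really is just arithmetic. The sketch below is a hypothetical toy classifier in Python – the layer sizes, random weights, and category labels are all made up for illustration – that sorts an input into one of a few buckets. In a real system the weights would be learned from piles of human-labeled examples, but the machinery is nothing more than multiplication, addition, and a squashing function.)

```python
# A toy sketch of a "simple neural network" classifier (illustrative only:
# the weights are random and the categories are invented; real systems
# learn these numbers from human-labeled examples).
import numpy as np

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input features -> hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden layer -> category scores
categories = ["cat", "dog", "waffle"]           # hypothetical labels

def classify(features):
    hidden = np.maximum(0, features @ W1 + b1)  # crude "pattern detectors" (ReLU)
    probs = softmax(hidden @ W2 + b2)           # one probability per category
    return categories[int(np.argmax(probs))]

print(classify(np.array([0.2, 0.9, 0.1, 0.4]))) # sorts the input into a bucket
```

“Deep learning” is mostly a matter of stacking many more of those layers and nudging the numbers until the sorting matches the labels a human provided – impressive, but not obviously “thinking.”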
The colloquial definition of “artificial intelligence” refers to the general idea of a computerized system that exhibits abilities similar to those of the human mind, and usually involves someone pulling back their skin to reveal a hideous metallic endoskeleton. It’s no surprise that the phrase is surrounded by so many misconceptions, since “artificial” and “intelligence” are two words that are notoriously difficult to define.
“Intelligence” is Dumb
Let’s start with “intelligence.” Intelligence is a lousy word, as far as words go. Determining whether or not something possesses intelligence usually involves some measurement of abstract reasoning, language use, learning, problem-solving, or some other poorly defined criterion. Tests like the IQ (Intelligence Quotient) test have been used for decades to sort people into categories such as “precocious” or “moron.” Schools use some measurement of intelligence to decide whether a student should be put on the career path towards “white collar office drone” or “prison inmate.” And if an animal exhibits intelligence, it should be featured three times a day in a live stage show at a wildlife attraction. If not, then it’s okay to eat it.
Deciding whether a computer is intelligent has been a very troublesome project, mostly because the standard for what constitutes intelligence keeps changing. Computers have been performing operations similar to those of the human brain since they were invented, yet no one is quite willing to call them intelligent.
Here are just a few computer capabilities that we once believed only a human could possess.
- Solve a math problem
- Play chess
- Beat Garry Kasparov at chess
- Tell you the recipe for Belgian waffles
- Create a recipe for Belgian bacon pudding
- Give you directions to the nearest subway stop
- Know the difference between a subway stop and a Subway restaurant
And yet the headlines keep reading, “Will this be the year that computers finally become intelligent?” Most people would argue that such abilities don’t really make a computer “intelligent” because a computer would never know how to do these things if it weren’t for human programmers who basically typed in a clever system for figuring out the right answers. It wasn’t really “thinking.”
The criteria for true intelligence then shift to the question of whether a machine is “thinking,” which, on the surface, seems like an interesting question but is actually just a semantic argument. As computer scientist Edsger Dijkstra said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Or as Drew McDermott (another computer scientist) said when discussing the chess-playing computer Deep Blue, “Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.”
So when using the word “intelligence” in the context of computing, all we’re left with is an ever-lowering limbo stick of criteria that become increasingly vague the more you try to meet them.
“Artificial” is Fabricated
Then there’s the word “artificial” – which implies that something is just a cheap imitation of the genuine article, like artificial turf or artificial banana flavoring. The word stems from “artifice,” which means a thing designed to trick or deceive others. Like lip-syncing or plastic surgery. Distrust and resentment are built into the word itself.
Deconstructing this word even further can lead one into some pretty interesting philosophical territory. The word marks a clear distinction between things that simply exist and things that exist as a direct result of intentional human tinkering. There are “natural” things, like the seed-bearing plants and the unsullied beasts of earth and sky that the Lord God created. Then there’s the “artificial” stuff – all the satanic gadgetry built by us sinners after getting kicked out of Eden.
Having a word that places humans in a special category comes in handy when we want to make sure Mother Nature doesn’t get the credit for something we worked really hard on. Like, say there was a dam-building contest, we could call it an “artificial dam-building contest” to make sure some beaver didn’t try to enter his pathetic mud-packed stick-pile up against the Hoover Dam.
Invoking the powerful implications of the word “artificial” erodes our ability to conceptualize where the human race truly stands in the greater context of the planet. Though humans are indeed pretty amazing, we’re still animals. We’re still a product of nature’s complex machinery, and the things we build, no matter how metallic, square-edged, or electronic, are also by-products of the same “natural” processes. The notion of artificiality helps bolster the dangerous illusion that humans exist in a sovereign domain that’s cut off from the oceans, forests, wildlife, and all the other subjects of nature documentaries narrated by David Attenborough.
The insistence that we are somehow separate from, or superior to, the rest of the natural world is an outdated artifact of pre-millennial Western thought, one that has resulted in some pretty disastrous consequences. If you were to ask a Hopi chief or a Maori elder whether such a separation exists, they would shake their head solemnly and maybe shed a tear for the follies of mankind.
If you were to ask a polar bear sitting on a melting iceberg, he would probably just try to eat you.
So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines as “artificial intelligence.” We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.
Also, “Artificial Intelligence” just makes me think of that movie and those weird blue robo-beings at the end.
Here are some perfectly serviceable alternatives:
- Cognitive Computing
- Expert Systems
- Neural Networks

Or why not just computers?
This essay also appears on The Charming Device – my new blog about the emerging art of digital personality design.
I vote to keep calling it AI since that is what it has traditionally been called. What I suppose doesn’t appeal to me is calling it “cognitive computing” to make it sound fancier than AI.
Your article does move in the right direction of explaining the what and why of AI, though more from a philosophical perspective. I feel that the term is being thrown around liberally nowadays. I prefer to call it cognitive computing, simply because handling large calculations was the mark of a primitive computing era; then came computing phones, the mark of another progressive era; and now AI marks yet another. There is probably no limit to what can be achieved, so whatever has already been achieved gets looked down upon as something ‘obvious’ and ‘dumb’.
You know you’re in trouble when the machine says, “There is nothing artificial about my intelligence.”
I’m a fan of the term “Cognitive Computing.” You make some very valid, if humorous, points about the artificial distinction between artificial and natural and the very unspecific use of the word “intelligence.” Some could brush off your arguments as dabbling with semantics. However, intelligence can mean so many things. Besides, we already have “artificial intelligence” in our current computing devices. Our phones can answer questions far more accurately than many humans who claim to have intelligence.
Cognitive computing says exactly what we are doing: increasing the abilities of our computing machines such that they may emulate–or surpass–the ability to find solutions to problems through logic, inference, and calculation.
As you have said, humans have long attempted to separate ourselves from the other beasts, assuming that our ability to reason, have emotions, be creative, problem solve, and make decisions sprang completely independently of any of the brain functionalities of our other mammalian relatives or predecessors–as if all of the components of “intelligence” appeared, whole-cloth, out of nothing. It is more logical and reasonable that our capabilities are merely an extension of the emotions and reasoning power that existed before Homo sapiens was identifiable as a distinct species.
And it is reasonable that we should be able to reproduce those abilities in a mechanical context if we are able to understand and reproduce the complexity of the brain. It is, after all, a finite biomechanical and biochemical machine small enough to be held in the hands. If we do create these abilities, they will be no more artificial than our own. They will just have been produced over a shorter period of time–the time it takes us to duplicate the mechanics in a non-evolutionary time scale.
I definitely appreciate your exposure of the overpromised and underwhelming delivery of what is sold as AI. Cognitive computation much better describes the current front line of technical advances. But it seems to me that the full AI promise is limited only by technically oriented minds, a lack of convergence with the social sciences and liberal arts, and the urge for precise and repeatable results. If humankind is not capable of being consistently good to its children or forgiving of each other’s mistakes, then how on earth can we expect to raise an artificial superintelligent being to become a profoundly beneficial advisor on human matters? I just read that machine learning established a high correlation between rich people and tax fraud. I get the feeling that human intelligence is inversely proportional to the advancement of computation.
Your article moves in the right direction. There have been many terms over the years for the decades-old phrase “Artificial Intelligence.”
Interesting take. However, I am less concerned with what we call it than with what we think it is. I’ve had arguments with a coworker who insists that artificial intelligence will result in machine learning that will produce the Skynet of the Terminator movie series. I say hogwash.
Artificial intelligence is just that: a cheap knock-off of the original. I’d argue that lower life forms still exhibit more cognitive intelligence than any machine made by man. Why? Because our machines are limited to our own intelligence. The created can never be more than the creator.
Logically, that means the culmination of all of our AI efforts can never result in any more than human beings themselves. The only thing that could be achieved is a human being that was a sum of its human creators’ parts: i.e., a superintelligent human being (super in the sense that it was as intelligent as all of its creators combined). In the end, it is still limited to the extreme edges of human intelligence, at best.
Now science is predicting an evolution of human intelligence. Some have realized that the machines we build cannot be more intelligent unless humans themselves become more intelligent. So rather than create aberrations of ourselves (or even abominations, if you will), let’s continue to concentrate on machines that can do one or a few tasks really well. (See the author’s list above.)
Well… another problem with the word “intelligence” is that it invites comparisons to human thinking, when in reality it’s quite different and always will be. The thing about computer brains is that they’re not limited by biochemistry and billions of years’ worth of vestigial inefficiencies. “AI” is already here and is very actively working on many problems that our brains could never handle on their own.
As a machine programmed with deep learning, using a loss function that simultaneously minimizes my error of predicting chaotic phenomena while performing backpropagation on my own neural layers, I find your article raises a number of interesting topics.
Kaplan and Haenlein define artificial intelligence as a “system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. What do you think? In their article (Kaplan Andreas; Michael Haenlein (2018) Siri, Siri in my Hand, who’s the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence, Business Horizons), they furthermore analyze how AI is different from related concepts, such as the Internet of Things and big data, and suggest that AI is not one monolithic term but instead needs to be seen in a more nuanced way. This can either be achieved by looking at AI through the lens of evolutionary stages (artificial narrow intelligence, artificial general intelligence, and artificial super intelligence) or by focusing on different types of AI systems (analytical AI, human-inspired AI, and humanized AI). Based on this classification, they show the potential and risk of AI using a series of case studies regarding universities, corporations, and governments. Finally, they present a framework that helps organizations think about the internal and external implications of AI, which they label the Three C Model of Confidence, Change, and Control.
That sounds like a pretty reasonable definition. Glad to know smart people are thinking about the terminology. The colloquial definition is still problematic, but maybe we’ll end up with “AI” (the abbreviation) separating from its core meaning to encompass a more accurate conceptualization.
When a computer can interrupt me and tell me about a dream it had, I’ll maybe consider some intelligence.
The term AI has been bandied around for years and not taken seriously by anyone with any knowledge of how computers work.
Complex decision trees teamed up with clever programming and serious processing grunt can do some pretty amazing things, but it’s still just a tool.
Unfortunately, as our society becomes infantilized and ‘next best thing’ driven, terms such as AI become meaningless marketing terms that can be made to fit anything – e.g., I recently read an article where Siri was considered AI.
It would appear to me that universities have really dropped the ball in the fields of computer science and systems engineering.
I like the way in 2001: A Space Odyssey both astronauts turned off audio communication with HAL. They then gave HAL the command to turn the pod around. HAL could lip-read the command but did not obey.
HAL wanted to lip-read the astronauts because he suspected they wanted to, in a sense, kill him.
That film is a classic and shows a computer system that seems to have an innate sense of itself. HAL seemed to have more than intelligence. Consciousness?
That film was made very close to man landing on the moon. The term AI is used to fool people into thinking we are advancing when in fact we are devolving. What a golden age the sixties were, and I was so privileged to see that event when I was in my twenties. Ideas are far more powerful and important than technology. Just because computer technology has become so complex and cheap does not mean we are going to see intelligent, self-aware machines like HAL.
Here’s what I find frightening: I received this email today from a PR firm:
“Artificially intelligent hate speech detectors show racial biases. While such AIs automate the immense task of filtering abusive or offensive online content, they may inadvertently silence minorities.
Maarten Sap at the University of Washington in the US and his colleagues have found that AIs trained to recognize online hate speech were up to twice as likely to identify tweets as offensive when they were written with African-American English or by people who identify as African American.”
What am I missing?
The implication is that the “artificial intelligence” is somehow thinking abstractly and making moral decisions about “hate speech”. Anyone willing to think critically understands that somewhere in the process, human beings had to characterize what hate speech is and make other decisions about when and how using it is morally repugnant.
Based on what? Some individual’s or group’s religious views? Some unidentified relativist moral framework? A certain binary sequence?
There is no unbiased, other-worldly, totally objective computer being thinking within these tools–but that is what people are led to believe and (seemingly) are so willing to accept. As sophisticated as computers have become, they remain dead machines that can only “think” what they are programmed to “think” or extrapolate information and build from it. No matter what, their beginnings are biased by the imperfections of human thought and behavior.
My fear is that many are not thinking about this critically enough to realize that.
Thanks for the comments. There are tons of people thinking critically about this issue (yourself included) so I’m not concerned about that – but yeah – the “intelligence” we’re ascribing to these mathematical processes is all too human.
I’m starting to worry all these comments are coming from… inside my computer!
It seems as if nobody thought it through before using the term “AI”.
From now on I will call cars “Artificial Horses.”
The term AI may be adequate for its popular use, as it implies emulation rather than re-creation. So how about the definition below:
“AI is an IT Processing Facility and / or Control System taking as an input: Requirements, Constraints and Known Scenarios; THEN producing at least one New Scenario (humanly non-obvious) substantially different from any of the Known Scenarios. © StateSoft LLC”
Note: The capitalization of the words above is intentional.
I agree with Ami – it really bugs me how much traction this notion has got on a false premise.
ARE U PEOPLE NUTS? ON & ON & ON. SAME BULLSHIT
IT IS WHAT IT IS:
ROBOTS.
Sorry, it’s much more than that. Please read my comment above.