In the sixty-five years since John McCarthy coined the term “artificial intelligence,” one of the most surprising discoveries in the field has been that tasks people find easy, tasks that seem as if they should be easy for a computer, turn out to be very, very difficult for machines.
The average adult can, with relative ease, perceive a doorknob, reach out and grasp it, turn it, and open the door. The same task remains incredibly hard to engineer for even the most sophisticated robotic arms, even with cutting-edge deep learning approaches.
By contrast, and equally surprising, machines have surpassed scientists’ expectations at tasks that are hard for humans. After roughly five hundred years of human refinement of the game of chess, DeepMind’s AlphaZero program needed only a couple of years of development to reach the point where it could defeat all human grandmasters, and to beat them at an even older game, Go.
That divide between the easy and the hard typically defines the debate over what’s known as “human-level” AI, also called “artificial general intelligence,” or AGI: the quest to make a machine the equal of a human. Many people think the divide means that AGI won’t be achieved for decades, if ever.
AI scientist Melanie Mitchell has written, “AI is harder than we think because we are largely unconscious of the complexity of our own thought processes.”
But what if the very definition of “human-level” intelligence is changing? What if AI is no longer measured against the quality of human thought and action in the real world, but rather against the all-too-predictable behavior of spending all day staring into a smartphone?
Increasingly, humans spend their time doing stuff that a machine could do better. One of the many achievements of modern software is to occupy people’s time with easy tasks: the busy-work of social media, such as posting, commenting, “liking,” and Snapping.
People spend hours each day typing missives of 280 characters into Twitter. They incessantly click the like button on images they see on Instagram. At every crosswalk, they blithely stroll into oncoming traffic while scrolling. They spend hours compiling lists of stuff to buy on Amazon that they don’t really need. Humans have blown through untold hours replaying the same level of an Xbox game to reach the high score. God knows how long we’ve all spent imbibing Netflix videos in epic couch-potato sessions.
And all this scrolling and clicking is propped up by an entire sub-structure of software beneath the veneer of Web sites. For example, Apache Pinot, a database program built to answer queries at very high speed, powers the “Who has viewed my profile?” feature on LinkedIn. As one of Pinot’s creators, Kishore Gopalakrishna, has said, the software is designed to return answers in a split-second for people who check dozens of times a day to see just who has viewed their profile.
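To make that concrete, here is a minimal sketch of what such a lookup could look like against Pinot’s standard REST query endpoint. The table and column names (profileViews, viewer_id, and so on) are hypothetical, invented for illustration; they are not LinkedIn’s actual schema.

```python
# Minimal sketch: a "who viewed my profile?"-style lookup against
# Apache Pinot's broker REST endpoint (POST /query/sql with a JSON
# body). Table and column names are hypothetical, for illustration.
import requests

PINOT_BROKER = "http://localhost:8099/query/sql"  # Pinot's default broker port

def recent_profile_viewers(profile_id: str, limit: int = 20) -> list:
    sql = (
        "SELECT viewer_id, view_time "
        "FROM profileViews "                      # hypothetical table
        f"WHERE profile_id = '{profile_id}' "
        "ORDER BY view_time DESC "
        f"LIMIT {limit}"
    )
    resp = requests.post(PINOT_BROKER, json={"sql": sql}, timeout=2)
    resp.raise_for_status()
    # Pinot returns query results under "resultTable" -> "rows"
    return resp.json()["resultTable"]["rows"]

# Checking "who viewed me?" dozens of times a day amounts to
# re-running this same split-second query over and over.
print(recent_profile_viewers("user-123"))
```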
Pinot, and related middleware such as Kafka, the open-source streaming-data program, are built to feed a habit of constantly clicking, liking, typing a bit, tweeting, typing some more, pinning, scrolling, etc. Habits become addictions when there is a reward for repetition, and the feedback loop of modern Web platforms provides that reward by responding to human clicks with more and more opportunities to click some more. Pinot and Kafka mean that human online activity is an endless process of pressing buttons, much like the classic lab rat pushing the lever for a pellet.
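The loop itself is easy to sketch in code. Using the kafka-python client, for instance, every tap can be published as an event, and a consumer on the other side can answer at once with fresh things to tap on; the topic name and payloads below are invented for illustration.

```python
# Sketch of the click -> event -> more-content feedback loop, using
# the kafka-python client. Topic name and payloads are hypothetical.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Every tap on a like button becomes an event in a stream.
def record_click(user_id: str, item_id: str) -> None:
    producer.send("user-clicks", {"user": user_id, "item": item_id})

# Downstream, a consumer turns each click into the reward: more
# recommendations, hence more things to click.
consumer = KafkaConsumer(
    "user-clicks",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for event in consumer:
    click = event.value
    # ...look up related items and push them into the user's feed,
    # closing the lever-for-a-pellet loop described above.
    print(f"user {click['user']} clicked {click['item']}; queueing more")
```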
Humans have become superb at pushing the lever, but the bad news is that all of these tasks could still be done far more easily, and perhaps better, by a machine.
While a first-rank natural-language processing program such as GPT-3 cannot engage in a long-form discussion of philosophical topics, it is more than adequate for spontaneously generating short-form posts such as tweets on a given topic. Automating the posting of images to Instagram is probably just as easy for AI.
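As a rough sketch of how little is involved, the completion-style API that OpenAI exposed for GPT-3 reduces tweet-writing to a single call; the prompt, engine name, and sampling parameters below are illustrative choices, not a recipe.

```python
# Minimal sketch of tweet generation with GPT-3, via the
# completion-style API OpenAI exposed for it. The prompt and
# parameters here are illustrative, not a recipe.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_tweet(topic: str) -> str:
    prompt = f"Write a short, punchy tweet (under 280 characters) about {topic}:"
    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 engine name
        prompt=prompt,
        max_tokens=60,      # tweets are short
        temperature=0.9,    # encourage variety
    )
    return response.choices[0].text.strip()

print(draft_tweet("Monday mornings"))
```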
Humans are able to call to mind memes to post on social networks with a kind of intuitive grace, recalling the perfect GIF image to suit the moment. While there is something impressive in that, machines, via brute-force search, could arguably come up with novel meme choices that are closer to optimal.
Certainly the act of checking who has viewed one’s profile on LinkedIn, or any other act that boils down to consuming signals via an API from a program such as Pinot, could be automated to run far more efficiently than a human does it.
It’s true that humans still do plenty of other stuff, like go to the gym and nurture their mammalian young. Increasingly, though, they spend their time at the gym taking breaks to stare into a phone, and their young are parked in front of a screen. Natural human activities have become processes punctuated by a screen.
The upshot is that modern society has arrived at a strange place. A lot of human behavior nowadays happens in and around the computer, where human faculties lag the best computer programs.
Jensen Huang, the co-founder and CEO of Nvidia, the company that dominates chips for AI processing, has framed the turn of history in stark terms. Humans are too slow, Huang has said:
There are only a few billion of us, and it takes nine months to gestate, and then it takes years to raise them, and then when you finally get them to a point where they’re somewhat intelligent, they poke at these things called smartphones, and every time you poke at them, it creates queries in the cloud.
Which is all really slow, undesirably so. The solution is clear, as Huang frames it:
Well, in the future, it won’t take nine months to build a new intelligent being, and it won’t take years to raise them; it literally takes an hour to manufacture a BMW car, it takes seconds to download the artificial intelligence, and it’s right on the Internet within seconds after that. There’ll be trillions of these things.
Those things that Huang is talking about are something after Homo sapiens. The paradigm to which Huang is pointing is that of the automaton. An automaton, devoid of human weaknesses, is very capable of handling tasks that can be well-defined, or “well-scoped” in engineering terms. While driving a car isn’t yet within AI’s grasp, driving a Web site, meaning clicking and “liking” relentlessly, is doable right now.
For much of society today, with its constant emphasis on optimizing within a constrained set of digital rules, resembles a video game: an effort to reach an optimal score. And who better than a machine to know how to win at the game of playing against a machine?
The same program that beat humans at chess and Go, AlphaZero, led in 2019 to a new program, MuZero, which is able to achieve strong results playing Atari video games such as Ms. Pac-Man. It’s a short trip from Ms. Pac-Man to excelling at posting memes.
The divide between the easy stuff and the hard stuff that humans and computers do still exists, and perhaps it will never be conquered by AI. But human thought is increasingly sub-optimal for navigating a digital world. Long before AGI happens, the world of humans will probably arrive at a place where most activities are far better carried out by machines.
At that point, humans become redundant. Should they then be replaced? The only reason not to replace them would be if humans still matter as entities that are not a means to an end but an end in themselves. That’s not a technology question; it is the essential question of humanity.
In a world more and more obsessed with speed, efficiency, and optimal outcomes above all else, the answer to that question is anything but certain.