Kevin Kelly’s The INEVITABLE — Technology Is Compelled to Get Smarter
You don’t need a magic crystal ball anymore to predict the future. Instead, an hour in a curved hall stocked with a couple hundred people in chairs facing a raised platform becomes a stage for revelations and future shocks. That’s because we are all sharing the room with Kevin Kelly, Wired’s founding executive editor and resident maverick, and, on this night, the compleat wizard of tomorrow.
Not a god, just a prophet, Kevin Kelly explores the future in real time.
Kevin Kelly knows what he knows, he knows what he doesn’t know, and he comes to both these levels of certainty because over the last few decades, he has searched out what he didn’t know he didn’t know and moved it into consciousness. A former hippie, Kevin didn’t let himself get lost in the mind-altering world of hallucinogens back in the day. A brain crackling with curiosity led him to the Whole Earth Review and the world of persistent discovery.
Nearly 30 years ago, Kelly explored new domains in much the same way as the sailors on the Nina, Pinta, and Santa Maria gazed out from their ship’s deck, across the sapphire waves lapping the shore and onto the beach of an island no one had seen or heard of. In 1989, Kevin slid down the rabbit hole with Jaron Lanier and reported that “for the first time someone created an instant fantasy world and crawled into it. I went in after him.” That rabbit hole was a virtual kingdom constructed of ether and wires, and, while drugs never hooked Kevin Kelly, that virtual reality and the computing behind it did.
The INEVITABLE — Published by Viking, June 2016

The INEVITABLE, Kelly’s newest book, is a page-turning tour of what’s possible, what’s likely, and what is absolutely inevitable. Technology is now a part of our cultural DNA. I can remember back to the era of rotary-dial phones and black-and-white TVs, but my students can’t. The evolution of the telephone is an apt illustration of how one can live in the midst of a cultural hurricane and never feel it.
- 1949 — the first phone to combine a ringer and a handset
- 1956 — the first transatlantic phone cable
- 1963 — the touch-tone phone is introduced
- 1973 — the first portable cell phone call is made (it weighed over 2 lbs.)
- 1993 — the first smartphone, called Simon, arrived from IBM and BellSouth
- 2007 — the iPhone was introduced by Steve Jobs
Less than sixty years, start to finish:
If you conjure a timeline from the black rotary phones of the ’50s, the Princess phone of the ’60s, the first cell phone weighing in at over two pounds, and Steve Jobs’s contribution, the iPhone, Kelly’s point is made: technology becomes normal within months of its introduction. Artificial Intelligence, that murky field of technology where the machines take over the planet, is already here. If you don’t believe it, look at your key card the next time you stay in a hotel. Slide it into the slot and voilà, the door opens. AI is here, now. It just doesn’t quite feel like AI, because Artificial Intelligence ought to feel strange and other-worldly.
Yet, because it is human nature to adapt and adjust, that sense of the strange evaporates almost on contact. GPS is AI. Siri on our iPhones is AI. The websites we visit daily have features of AI, particularly for reviewing products and purchasing. Banks have installed AI algorithms capable of spotting credit card fraud in 40–60 milliseconds. Hundreds of thousands of humans have disappeared from manufacturing employment since the introduction of robots. AI has made them obsolete.
“The realm of the born — all that is nature — and the realm of the made — all that is humanly constructed — are becoming one,” wrote Kelly in his 1994 book, Out of Control. “Machines are becoming biological and the biological is becoming engineered.”
“Technology has a bias,” asserts Kelly, even though technology relies on the physical reality of wires, chips, radio waves, and other components. The bias he speaks of demands innovation, improvement, and optimization. The long-term trends of these variants are inevitable, but the specifics are not. Once voice could be recorded and sound could be transmitted over wires, telephones were inevitable. The iPhone, specifically, was not. Therein lies the distinction governing technology: something will definitely be created, but what it will look like is unknown.
Looking back at the telephones mentioned above, consider what occurred in each incarnation. The device got smarter and took on an increasing number of tasks. The word for that process is cognification — to make smarter. In the modern world and its future, technology will become smarter. It will perform those functions at which humans are not skilled or which are time-intensive and laborious.
Consider the first supercomputers. They performed calculations that would take a team of humans decades, if not a lifetime, of constant adding, subtracting, multiplying, and dividing. In fact, the most recent generation of supercomputers has gotten smarter by being faster, enabling them to perform 10 quadrillion calculations per second. That’s a 10 followed by fifteen zeros in basic arithmetic. In terms of time, even a single quadrillion seconds is almost 32 million years. Very smart indeed.
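The time claim above is easy to verify with a few lines of arithmetic (a quick sketch; the numbers are mine, not from the book):

```python
# How long is a quadrillion seconds, expressed in years?
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million seconds

one_quadrillion = 10**15  # 1 followed by 15 zeros
years = one_quadrillion / SECONDS_PER_YEAR
print(f"One quadrillion seconds ≈ {years / 1e6:.1f} million years")
# → One quadrillion seconds ≈ 31.7 million years
```

Ten quadrillion seconds, the amount of work such a machine does every second times a quadrillion, would be roughly 317 million years of human counting.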
The internet has gotten smarter as well since it began to bloom as long ago as 1962, when an MIT professor, one J.C.R. Licklider, first promoted the idea of a galactic network. He saw the possibility of hundreds, even thousands, of computers linked to each other in a way that allowed users to share information and access data. From that kernel arose the networking world we live in today, one so much smarter than Licklider’s burgeoning notion that he might not recognize it.
From numerical vastness to informational black holes, technology has given us far more than any one person needs. Yet, in spite of the faint echoes from a fading chorus of concerned philosophers and psychologists mumbling “Enough!”, the call of the committed, curious, and consuming is “More!”.
Where sits humanity amid the wires? Kelly likens human intelligence to a series of notes played by different orchestral instruments. Over here, amid the violins and joined in a duet with the percussionists, we find the motor cortex, where action notes are sent to muscles. Buried a bit deeper, the hippocampus and its range of spatial clarity is reflected in the downbeat of the conductor and the clarifying tone of the trumpet. All humans have these notes and neural centers, but in differing degrees of impact based on use. We even share certain notes with the animal kingdom, which is how our species comes to relate to so many of theirs.
Ironically, it is this very versatility, this very human ability to blend emotion and logic, that proves a disadvantage in relation to technology, according to The INEVITABLE. AI, driven by impassive, unemotional algorithms, is not subject to the vagaries of human highs and lows. It simply does what it is programmed to do, over and over and over again. Automatic pilot, for example, is not just a switch that human pilots turn on when it is convenient. Automatic pilot is a system that takes action, adjusts for conditions, and monitors the process.
Passengers boarding a plane and flying to New Jersey, perhaps for a family reunion or a business meeting, feel comfortable and confident in their personal safety due to the skill of the captain and co-pilot. They do not know that the pilot is most active at the beginning and end of the flight, for takeoff and landing. Pilots actually fly the plane for as little as 8 or 9 minutes, says Kelly; the rest of the time they manage the flight in tandem with the technology, the automatic pilot. Capt. Chesley “Sully” Sullenberger, in an interview with CNN, cautions passengers against thinking that the pilot is purposeless during those long hours in the sky.
“In the old days, you had your hand on the wheel and you pushed the nose down and adjusted the power accordingly,” he explains. “Now, you’ve got to hold a different set of buttons and dials and switches, but in the end, you’re still doing the same thing — you’re still flying the plane.”
“None of it is easy,” he said. “In a lot of ways, it’s more difficult because airplanes are so much more complex now.”
That complexity, seen as a gap in function, is where technology fits. Driverless cars speeding down the highways will become not vehicles of the future but vehicles in fact. A driverless car can see ahead, to the sides, and behind, far exceeding human vision in 360-degree coverage and analytical power. In addition, the AI driver will not get tired, have to stop to pee, or get into a fight with a spouse via cell phone. It won’t speed, putting the lives of many in danger. Nor will a driverless car ignore transmitted warnings of traffic, an accident ahead, or weather issues.
In short, driverless cars, with their enhanced cognification, will be safer and more efficient — in the long run. Don’t expect perfection if you drive one of these devices off the showroom floor in the next year or so. Comparing the development of driverless cars today to what they will inevitably become in the future is like trying to discuss communication via smoke signals and Skype in the same breath. Getting smarter is an ongoing process, and technology, once unleashed, drives humans to improve on what they have created.
“But what about the downside?” queried one member of Kevin Kelly’s book-lecture audience. It is then that he shares a humbling thought: “There are no experts in AI.” The field is too new and too instantly transformative to have wise ones who know it all. Technology itself occupies so much human and natural terrain that even the most knowledgeable understand only pieces of the whole.
Ironically, the people doing the inventing, innovating, and designing are not the people who will research and report on the impact, the long-term advantages or disadvantages, and what will come next. Thus, while MRIs can slice the mind into a million photographic slivers, scientists don’t yet know how to remedy the injuries or flaws that may be found. For example, it was not coders, techies, or developers who first learned that too much screen time for children may lead to issues with their ability to focus or the truncation of short-term memory.
Yet, Kelly points out that the remedy for any of these drawbacks is not to ban the devices; it is to design smarter ways to use them. We need to cognify our culture in order to make the best possible use of the inevitable tsunami of technology that is only a point-one release away.