Why Elon Musk and Stephen Hawking Are Afraid of Artificial Intelligence

If you have been paying attention to the news lately, you may have seen something that surprised you: Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and others have publicly voiced fears about Artificial Intelligence–existential fears.

Cue Sci-Fi references.

These otherwise unabashed proponents of tech have become a new vanguard of sorts: a voice in the wilderness urging caution. Like most people, you may be tempted to write this fear off as the byproduct of living in a tech echo chamber for too long, and you wouldn't be completely wrong; after all, many of us were around for past tech scares like Y2K, which were enormously overblown. Still, there is a reason we should all be afraid, or if not afraid, at least a little wary.

The Foundation

These fears are not new. They predate Otto Binder's short story "I, Robot" (published in 1939) and can trace their roots to literary works like Mary Shelley's Frankenstein and even older Greco-Roman myths.

All these stories share a similar strain of anxiety, rooted in our perpetual fear of creating something more powerful than ourselves (a god complex of sorts). The difference between the fear of A.I. today and the fear of Frankenstein's monster is that today's fears are grounded in a computational principle called Moore's Law.

Moore's Law was first articulated by Gordon Moore in 1965 (and revised in 1975), and it has thus far been remarkably accurate in predicting the future of computing. It states that the number of transistors on a computer chip doubles roughly every two years. The simplified version of Moore's Law is:

Computational power will double every two years.

There is a theoretical limit to Moore's Law (transistors cannot shrink much past the size of an atom, though some argue they could go even smaller), and there are signs the doubling of computational power has slowed, though it has been kept largely intact by packing multiple processors into a single computer. What this means for you is that if you were to buy a computer today, two years from now you could buy a computer for the same price that is roughly twice as powerful.
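To make the simplified rule concrete, here is a minimal Python sketch of repeated doubling. The starting year, the starting power of one "unit," and the sample years are arbitrary assumptions used only for illustration.

```python
# A minimal sketch of the simplified Moore's Law: computational power
# doubles every two years. The starting point (1 "unit" of power in 2015)
# is an arbitrary assumption used only for illustration.

def projected_power(start_year, end_year, start_power=1.0, doubling_years=2):
    """Power at end_year, assuming one doubling every `doubling_years`."""
    doublings = (end_year - start_year) / doubling_years
    return start_power * 2 ** doublings

for year in (2015, 2017, 2025, 2035):
    print(year, projected_power(2015, year))
# 2015 -> 1.0, 2017 -> 2.0, 2025 -> 32.0, 2035 -> 1024.0
```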

Now, initially this doesn't sound too frightening. Computers get smaller and faster; we all know that, because we've witnessed it on our desktops and in our handhelds. But what we often fail to take into account is the rate at which this is occurring. Moore's Law describes an exponential increase rather than a linear one. A mathematical refresher from middle school:

A linear increase adds the same amount over and over again.

2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2 + 2

or

2 + 2 = 4 + 2 = 6 + 2 = 8 + 2 = 10 + 2 = 12 + 2 = 14 + 2 = 16 + 2 = 18 + 2 = 20

The increase is slow but steady. Moore's Law, however, is not linear; it is an exponential doubling.

2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 * 2

or

2 * 2 = 4 * 2 = 8 * 2 = 16 * 2 = 32 *  2 = 64 * 2 = 128 * 2 = 256 * 2 = 512 * 2 = 1024

commonly written as

2^10

The difference between an exponential doubling (Moore's Law) and a linear increase is the difference between 20 and 1,024 after twenty years, between 40 and 1,048,576 after another twenty years, and between 60 and 1,073,741,824 after twenty years more. And remember: this gap doesn't just persist, it widens with every twenty-year period.
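The same comparison in a few lines of Python, reproducing the numbers above (ten, twenty, and thirty steps corresponding to twenty, forty, and sixty years):

```python
# Linear growth adds a fixed amount each step; exponential growth multiplies
# by a fixed factor. With one doubling every two years, ten steps of Moore's
# Law correspond to twenty years.

def linear(steps, step_size=2):
    return step_size * steps       # 2 + 2 + ... + 2

def exponential(steps, factor=2):
    return factor ** steps         # 2 * 2 * ... * 2

for steps in (10, 20, 30):         # 20, 40, and 60 years of doublings
    print(f"{steps * 2} years: {linear(steps)} vs {exponential(steps):,}")
# 20 years: 20 vs 1,024
# 40 years: 40 vs 1,048,576
# 60 years: 60 vs 1,073,741,824
```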

Once again, you are probably unperturbed, since exponential relationships are something you covered in middle school. So allow me to attempt some perspective on how powerful computers were, how powerful they are, and how powerful they will become.

Computational Power Then, Now, and Tomorrow 

When we refer to computational power, what are we talking about? What does that mean?

The simplest way to measure computing power is in instructions per second (IPS). Your processor sends out electronic pulses every second, and with each pulse the computer can pull data and perform calculations. Hertz refers to the number of pulses per second. If you have a 4.4 gigahertz (GHz) processor, it sends out roughly 4.4 billion pulses per second.

Processors for general-purpose computers are usually measured in MIPS (millions of instructions per second) and Hertz (Hz). Supercomputer processors are measured in FLOPS, or floating-point operations per second. Here we're going to try to keep things in MIPS and Hz for comparison.
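As a very rough illustration of the clock-speed example above (real processors execute a variable number of instructions per cycle across many cores, so this is intuition, not a benchmark):

```python
# Back-of-the-envelope only: a 4.4 GHz clock ticks about 4.4 billion times
# per second. Real chips execute a variable number of instructions per tick,
# so this is intuition rather than a benchmark.

GHZ = 1e9  # one gigahertz = one billion cycles (pulses) per second

def pulses_per_second(clock_ghz):
    return clock_ghz * GHZ

print(f"{pulses_per_second(4.4):,.0f} pulses per second")  # 4,400,000,000
```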

A timeline:

  • 1951 – One of the first commercial computers, the UNIVAC I, could run at .002 MIPS (two thousand instructions per second) at .00225 GHz.
  • 1971 – Twenty years later, the Intel 4004 could run at .092 MIPS at .00074 GHz.
  • 1991 – Another twenty years later, the Namco System 21 Galaxian could run at 1,660.386 MIPS at .04 GHz on 96 cores.
  • 2011 – Another twenty years later, the Tianhe-1A supercomputer could run at 2,670,000,000 MIPS at 2.93 GHz on 186,368 cores.

In a nutshell, the average laptop in 2015 has more computational power than entire governments did 40 years ago.
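Plugging the timeline's first and last entries into the doubling model gives a feel for the pace. A quick sketch, using only the MIPS figures quoted above:

```python
import math

# How many doublings separate the UNIVAC I (1951) from the Tianhe-1A (2011)?
# Uses only the MIPS figures quoted in the timeline above.
univac_mips = 0.002
tianhe1a_mips = 2_670_000_000

growth = tianhe1a_mips / univac_mips
doublings = math.log2(growth)
print(f"growth factor: {growth:.3g}")                        # ~1.3 trillion
print(f"doublings: {doublings:.1f}")                         # ~40.3
print(f"implied doubling time: {60 / doublings:.2f} years")  # ~1.49 years
```

An implied doubling time of roughly a year and a half over those sixty years is, if anything, faster than the two years Moore's Law predicts.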

But why does this matter?

The Tipping Point and the Technological Singularity

All this talk of processors and computational power matters because of a single comparison: how much computational power does the human brain have?

Here is a scale of computing:

  • Hecto-scale computing (10^2)
  • Kilo-scale computing (10^3)
  • Mega-scale computing (10^6)
  • Giga-scale computing (10^9)
  • Tera-scale computing (10^12)
  • Peta-scale computing (10^15)

The scale keeps going (exa, zetta, and beyond), but we'll stop at peta for now.

Megabytes, gigabytes, and terabytes are all words you've heard. The prefixes (mega, giga, tera, and so on) apply not only to memory (computing data is stored in memory as bits, short for binary digits, i.e., zeros and ones, and one "byte" is 8 bits) but also to computational power.

Rough estimates put the computational power necessary to simulate the human brain in the peta scale, at about 36.8 x 10^15 FLOPS (a measure analogous to MIPS). Your brain packs enormous computational power into a very small space.

So the question becomes: How close are we to mimicking the human brain in computational ability?

The Tianhe-2's Linpack performance hit 33.86 x 10^15 FLOPS in June of 2013.

They’re here.

Don’t panic yet. That rough estimate of the human brain’s computational power may be low, and there’s a difference between having the computational power of the human brain and having legitimate Artificial Intelligence.
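To put the two figures quoted above side by side (the 36.8 x 10^15 FLOPS brain estimate against the Tianhe-2's 33.86 x 10^15 FLOPS Linpack result):

```python
# Compare the rough brain estimate with the Tianhe-2's measured Linpack
# performance. Both numbers are the ones quoted above; the brain figure in
# particular is only a rough estimate.
brain_flops = 36.8e15     # estimated FLOPS to simulate a human brain
tianhe2_flops = 33.86e15  # Tianhe-2 Linpack result, June 2013

print(f"Tianhe-2 sits at {tianhe2_flops / brain_flops:.0%} of the estimate")  # 92%
print("one more doubling closes the gap:", tianhe2_flops * 2 >= brain_flops)  # True
```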

Limited Artificial Intelligence is already all around us: your car automatically adjusts the temperature, social media sites suggest friends, robots perform specific tasks on an assembly line. A.I. as we know it is almost exclusively task-oriented. We have produced some general-purpose A.I. that may be useful in the future, but for the time being they remain mostly limited. Where A.I. becomes legitimate, and a real existential threat, is when it reaches something called the Technological Singularity.

The Technological Singularity refers to an A.I. able to implement recursive learning strategies and self-improvement. An A.I. that can learn without human intervention and improve on itself is scary for the same reason as before: exponential growth. An A.I. that reaches the technological singularity could leave the human race behind within weeks.

Here’s a rough timeline of how the A.I. would progress:

  • Birth – begins with the knowledge of a child. Clumsy, rash, eager to learn, and much faster at learning than a human.
  • Days later – exhibits the equivalent intelligence of an adult human.
  • Hours later – has more knowledge and intelligence than the entire human race combined, solving problems we aren't even aware exist.
  • Minutes later – unknown.

The Technological Singularity is so frightening because the moment the singularity is reached, we become bystanders. We are no longer involved in the process; all we can do is watch and hope. The A.I. has become something unto itself, exponentially more powerful than a human and more intelligent than anything we know of.
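Nobody has a real model of a self-improving A.I., so treat the following as a toy sketch only: it shows why recursion plus exponential growth compresses the timeline so brutally. Every number in it is an arbitrary assumption.

```python
# Toy model only: each "generation" of the A.I. doubles its own capability,
# and each improvement cycle takes half as long as the last because a more
# capable system works faster. Every number here is an arbitrary assumption.

capability = 1.0    # start at, say, child level
cycle_days = 10.0   # assume the first self-improvement takes ten days
elapsed = 0.0

for generation in range(1, 11):
    elapsed += cycle_days
    capability *= 2       # assumption: each cycle doubles capability
    cycle_days /= 2       # assumption: the next cycle takes half as long
    print(f"gen {generation:2d}: day {elapsed:6.2f}, capability x{capability:,.0f}")

# After ten generations the model is still short of day 20, capability is about
# 1,000x the starting point, and the cycles have shrunk from days to minutes.
```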

As I said, we aren't there yet, and we may not be for a while. But remember that supercomputer with enough computational power to mimic the human brain? If Moore's Law holds up, people like Ray Kurzweil estimate that the average desktop computer will have that much computing power by 2029, a mere 14 years from now.
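The arithmetic behind projections like Kurzweil's is just repeated doubling, but the answer is very sensitive to what you assume about today's desktops and about the brain. A hedged sketch where both desktop figures are placeholder assumptions, not numbers from this article:

```python
import math

def years_until(target_flops, current_flops, doubling_years=2):
    """Years of Moore's Law-style doubling needed to reach target_flops."""
    doublings = math.log2(target_flops / current_flops)
    return math.ceil(doublings) * doubling_years

brain_estimate = 36.8e15  # the peta-scale estimate quoted earlier

# Both starting points below are placeholder assumptions; the projection
# shifts by more than a decade depending on which you pick.
print(years_until(brain_estimate, 1e13))  # 24 years if a desktop managed 10 teraFLOPS
print(years_until(brain_estimate, 1e11))  # 38 years if it managed 100 gigaFLOPS
```

Different assumptions about the brain and about hardware move the date around, but under continued doubling the gap closes within a few decades either way.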

The reason very intelligent people are worried now is because by the time we realize it’s time to panic, it will be too late.
