Human Intelligence and Beyond: Ray Kurzweil on the future of technology and humanity

For those of you who enjoy a little futurism or secretly wish that science fiction would become a reality, have a look at Lev Grossman’s interview with Ray Kurzweil in the latest edition of TIME magazine.
Author, inventor, entrepreneur, and futurist Ray Kurzweil discusses his predictions for the future with Grossman, most of which center on a formidable-sounding event known as the Singularity: simply put, a time when technology becomes so advanced that the nature of humanity becomes qualitatively and fundamentally different, and impossible to predict (for the long version, hop on over to Wikipedia, though the discussion page may be more telling than the article itself!). Kurzweil thinks:

We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he’s not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.

So this guy must be crazy, right? Perhaps. But he isn’t so crazy that he hasn’t authored a number of highly successful books, been honored with nineteen (and counting) honorary doctorates, and made a whole lot of money on ventures as diverse as the first text-to-speech tool and a new generation of synthesizers that effectively mimic a wide range of orchestral instruments. Many of his ideas have been pretty on-the-mark in the past, but does that mean his predictions for the future will be accurate?

There is no doubt that technology advances quickly, exponentially quickly: if you rewind 35 years and consider how far we have come, it seems likely that 2045 will at least be wildly different from 2011. But human-level intelligence? Is that possible?
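(To make “exponentially quickly” a bit more concrete, here is a rough back-of-the-envelope sketch in Python. The two-year doubling period is an illustrative, Moore’s-law-style assumption of mine, not a figure from the interview.)

```python
# Back-of-the-envelope: how much does computing capacity grow if it
# doubles every two years?  The doubling period is an illustrative
# assumption, not a number taken from the TIME interview.

DOUBLING_PERIOD_YEARS = 2.0

def growth_factor(years: float) -> float:
    """Factor by which capacity grows over the given span of years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(f"1976 -> 2011 (35 years): ~{growth_factor(35):,.0f}x")
print(f"2011 -> 2045 (34 years): ~{growth_factor(34):,.0f}x")
```

Even on those crude assumptions, 2011 to 2045 is another factor of roughly a hundred thousand in raw computing power, which is why “wildly different” seems like a safe bet even if the details are anyone’s guess.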

The first thing we have to consider is that human brains are pretty different from computers in terms of the stuff that makes them tick: maybe a supercomputer would need tissue and firing synapses to think in a way that is qualitatively similar to a human. Experts are divided on this claim, but it may be a moot point: after all, who is to say that computers have to be shiny metal boxes working in zeros and ones?

The question that seems more central is the idea of training the system. Humans think, reason, and experience the way we do because our brains take years and years to learn about the world. And this learning doesn’t occur after the system is “built”: just consider the fact that by the time a baby becomes an elementary school student with a skill set of language, reading, and growing social and cultural knowledge, his or her brain and body aren’t even the same size anymore!

So maybe the key to human intelligence is building a computer or a robot that grows as it learns. Of course, this assumes that the ultimate goal is human intelligence, which may be rather silly: after all, we already have human intelligence. Extant technology already surpasses human intelligence in certain ways, especially when it comes to memory storage and mathematical computation (we’ll see what this means for Ken Jennings tomorrow!), although in other realms it has a lot of catching up to do: maybe Kurzweil thinks that the computers of 2045 will outsmart us in terms of social intelligence. Right now, humans still have computers beat when it comes to customer service, and the day that a robot is a more effective car salesman, cult leader, or concierge seems a long way away.

So what does it mean when Grossman says that Kurzweil thinks “in that year… the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today”? How does one “sum” human intelligence, anyway? One gets the idea that Kurzweil is not speaking in terms of literal equations, though I imagine that if anyone would be motivated to develop a mathematical model of all human intelligence, this would probably be the guy.
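(For the curious, Kurzweil’s own accounting tends to be in raw hardware terms rather than anything psychological. Here is a minimal sketch of that style of arithmetic in Python; the roughly 10^16 calculations-per-second-per-brain and 10^10-people figures are round, Kurzweil-style estimates I am supplying for illustration, not numbers from the interview.)

```python
# Toy arithmetic for "summing" human intelligence in hardware terms.
# Both constants are round illustrative numbers, not quotes from the
# interview: Kurzweil-style estimates put a brain at ~1e16 calculations
# per second, and there are roughly 1e10 people (rounded up).

CPS_PER_BRAIN = 1e16        # assumed calculations per second per human brain
HUMAN_POPULATION = 1e10     # assumed world population, rounded up

all_human_cps = CPS_PER_BRAIN * HUMAN_POPULATION   # ~1e26 calc/s for everyone
predicted_ai_cps = 1e9 * all_human_cps             # "a billion times" -> ~1e35

print(f"All human brains combined: ~{all_human_cps:.0e} calc/s")
print(f"Kurzweil's 2045 figure:    ~{predicted_ai_cps:.0e} calc/s")
```

Whether calculations per second is a sensible stand-in for “intelligence” is, of course, exactly what the paragraph above is questioning.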

Intrigued? Check out Kurzweil’s most recent book, The Singularity is Near: When Humans Transcend Biology.
He has another on the way that promises to explore the issues raised here, titled How the Mind Works and How to Build One.

Or, for several more perspectives on how minds might work (remember, the jury is still out!) and further discussion of artificial intelligence, I highly recommend Paul Churchland’s Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind. I find it to be a great introductory text, although some of the discussion is already a bit dated; if any fellow mind enthusiasts care to recommend more recent texts, I would be glad to hear them!