AI Is Religion, Not Science

A few weeks ago, I posted an explanation of why it’s highly unlikely that anybody will anytime soon replicate the human brain on a digital computer. In that post, I explained that a single human brain is an analog computer that’s enormously more complex than any digital computer or combination of digital computers.

In this post, I’ll explain why AI is essentially a set of religious beliefs rather than scientific concepts.

AI as “Intelligent Design”

Wikipedia defines “intelligent design” as “the proposition that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”

The entire concept of “intelligent design” is based upon the concept of “irreducible complexity”–that certain biological structures are so complex that they could not have evolved naturally.

No reputable scientist believes that “intelligent design” is science, because attributing the complexity of biological structures to a designer simply raises the question of who designed the designer.

More importantly, as Richard Dawkins explains in The God Delusion, “far from pointing to a designer, the illusion of design in the living world is explained with far greater economy and with devastating elegance by Darwinian natural selection.”

In other words, natural selection is not some kind of “second best” way to create biological complexity, where the presence of a designer would make things easier or more likely.

Instead, natural selection is not just the only way that something as complex as the human brain has ever come into existence; it is also the most efficient and elegant way for something that complex to come into existence.

The notion that something as complex as the human brain could be designed using components entirely different from the components of the human brain falls into the same fallacy as “intelligent design.”  It assumes that intentional design can “do the job” faster and better than natural selection.

AI Through “Natural Selection”

Some AI theorists believe that it may be possible to evolve AI from a seminal “singularity” in a process similar to the way that organic life evolved.  However, that concept simply restates “intelligent design” in another form.

The human brain evolved from single-celled organisms over billions of years in response to an environment that was infinitely more complex, in total, than anything within it, including the human brain.

So unless you’re planning to have your “singularity” take billions of years to evolve, you’d need to replicate the environment that led natural selection to create the human brain and “speed up the cycles.”

However, designing a model of that environment is the same problem as designing the human brain, only vastly more complex.  Anything you could “design” would be absurdly simple compared to the real world.

Rather than solving the problem of “intelligent design” as a vehicle to create AI, the “let’s evolve AI” approach simply makes the same mistake. Rather than designing the brain, you’re now trying to design the environment that created the brain, which is even more unlikely.

AI Through “Mind Transference”

Finally, there’s the concept, currently being popularized by Ray Kurzweil, that people will be able to transfer their minds from their organic, analog brains into mechanical, digital brains.

The idea, of course, is that the mind is like a software program that can be loaded and replicated from one computer to another.  However, the idea that “the mind” exists independently of “the brain” is a religious concept rather than a scientific one.

The entire concept of “life after death” assumes a mind/brain dichotomy.  And while that dichotomy shows up frequently in genre fiction in the form of ghosts, mind-transference, and so forth, it is not a scientific concept.

The scientific consensus is that the mind is not software running on your brain, but rather that your mind and your brain are the same thing.  When your brain stops working, your mind ceases to exist. In other words, there is no “software” to transfer.

Even if you were somehow able to create an exact copy of your brain, the result would no more be “you” than two identically-manufactured automobiles are the same automobile.

But with AI we’re not even talking about an exact copy. Instead, we’re talking about emulating an insanely complex analog brain (which evolved through natural selection) on a comparatively simple digital computer (which was designed by a human being).

So, once again, you’re back in the land of “intelligent design.”

AI as “Inevitable in 20 Years”

The final way that AI is religious rather than scientific is in its prophetic character.  AI proponents keep predicting that the “singularity” will be achieved two or three decades in the future, indeed that such a breakthrough is “inevitable.”

Scientists–real ones, at least–do not generally make time-based predictions that are dependent upon breakthroughs that haven’t yet taken place. Scientists either base their predictions on the likely outcome of research in progress, or they discuss possibilities without stating a time-frame.

For example, a scientist might predict that a certain type of gene therapy is likely to be possible within ten years, since research into genetics and DNA has been progressing at a rapid pace.

However, while a scientist might speculate that faster-than-light space travel is possible, he or she would never predict that it will be a reality within a certain number of decades.

AI proponents, however, do exactly that, even though there have been no breakthroughs in the field for decades. Even the most sophisticated AI programs are just refinements of the same basic algorithms that have been in place since the 1970s.

However, that hasn’t stopped AI proponents from regularly predicting that machines that can think like humans are only a decade or two away.  Here’s a quick summary:

  • 1950: Alan Turing predicts that computers will pass the Turing Test by “the end of the century.”
  • 1970: A Life Magazine article entitled “Meet Shakey, The First Electronic Person” quotes several distinguished computer scientists saying that within three to fifteen years “we will have a machine with the general intelligence of a human being.”
  • 1983: Authors Edward Feigenbaum and Pamela McCorduck in The Fifth Generation predict that Japan will create intelligent machines within ten years.
  • 2002: Handspring co-founder Jeff Hawkins predicts that AI will be a “huge industry” by 2020, and MIT scientist Rodney Brooks predicts machines will have “emotions, desires, fears, loves, and pride” by 2022.
  • 2005: Ray Kurzweil predicts that “mind uploading” will become successful and perfected by the end of the 2030s.

Do you notice how the threshold keeps getting pushed out as the promised breakthrough never seems to happen?

If that seems familiar, it may be because that’s the way that some fundamentalist churches (like the Jehovah’s Witnesses) behave when they predict the end of the world.  They set a date and then, when the date gets close (or passes), they simply move the date farther into the future.

So, next time you talk with somebody who is “absolutely convinced” that “The Singularity is Near,” listen to the tonality.  If you’re sensitive to these things, you’ll hear the voice of somebody for whom faith carries more weight than facts.

