Singularity is Not So Scary

by Melissa Karnaze

Compared to a machine, the human body is feeble because it is organic. And as many apocalyptic science fiction tales warn us, a superior AI race could wipe us out with no difficulty, and possibly no care, depending on the state of AI-human relations. Sounds scary, huh?

Following this same logic…

Compared to an augmented human, an unenhanced one has a lower chance of surviving as the fittest. Sound familiar?

Transport yourself to the future, where cybernetic limbs are only the beginning of cyborgization. Do you start to wonder where the eugenicists are hiding out?

The Singularity Looms Near

The Singularity refers to the projected future when artificial and organic intelligence become increasingly merged as one—as the cyborg or some type of man-machine symbiosis. Some say that people who resist the merge with the technologies of tomorrow will perish as a subhuman race, but the debate over who will be a subrace of whom oversteps what we actually know.

The Singularity signifies a co-existence between two starkly different species: man and machine. But because the machines haven’t been born yet (at least not on this planet, and not the self-conscious ones being referred to), all we have to work from are our preconceived models of who and how they will be.

Robots Imagined Now and Then

The term “robotics” was first used by Isaac Asimov in his 1942 science fiction short story, “Runaround.” The visionary Asimov pictured robots of the future with positronic brains, bound by the Three Laws of Robotics. Today, when we think of a robot, we think metal box man, robo-walk, and monotone, based on the early sci-fi renditions. But we’re imagining this.

In the October 1982 issue of Isaac Asimov’s Science Fiction Magazine, Asimov writes in his editorial: “…the robots now in existence are by no means the positronic robots of my fiction. They are to my fictional robots rather as a slide-rule is to a pocket-computer.”

The main reason they are simplified, he explains, is because they are specialized… “so specialized that they are fit only for the most limited tasks and have absolutely nothing in the way of ‘brains.’” But, if robots enter the home, he goes on to say, they will be required to do a wide range of tasks, like cleaning dishes, serving food, and taking out the trash.

“Even among life-forms,” says Asimov, “extreme specialization obviates the need for intelligence, since the more constricted the capabilities, the less the need for judgment or any capacity for making shrewd decisions.”

Robots Thinking and Feeling

Our imagined robots are non-human, in that they don’t feel—but maybe robots of the future will have some analogue of the emotional thinking that is uniquely human.

They might not have an endocrine system that can implement feelings in the way we are used to experiencing them, but they will need to be able to make meaning of information (which is what emotions are all about) in order to possess intelligence—to have the broader capabilities that Asimov points to.

In the first chapter of his book, The Emotion Machine, artificial intelligence expert and MIT professor Marvin Minsky defines emotion simply as: a thinking mode which recruits specific neural functions that are optimal for a specific set of tasks. So humans are highly resourceful because they have different modes of thinking (different moods and emotional states) which are each in their own right the most effective for achieving certain goals. (Think love and social bonds, anger and perseverance, fear and protection, etc.)

According to this perspective, artificial sentience would require different thinking modes, because it’s those very thinking modes that allow for high-level intelligence faculties such as creativity, innovation, and resilience, just to name a few—and not to mention all the stuff needed to be a house servant.
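Minsky’s idea of emotions as task-matched thinking modes can be pictured as a simple dispatch between strategies. The toy sketch below is purely illustrative—the mode names and task categories are my own hypothetical examples, not anything from Minsky’s book:

```python
# A toy illustration of Minsky's "emotions as thinking modes" idea:
# each mode is a different cognitive strategy, and the agent picks
# the mode best suited to the task at hand. All names are hypothetical.

MODES = {
    "threat":      "fear",        # cautious, fast, protective responses
    "obstacle":    "anger",       # persistent, goal-locked responses
    "social_bond": "affection",   # cooperative, trust-building responses
}

def select_mode(task_type, default="curiosity"):
    """Return the thinking mode best matched to a task type."""
    return MODES.get(task_type, default)

print(select_mode("threat"))    # fear
print(select_mode("puzzle"))    # no special mode: falls back to curiosity
```

The point of the sketch is only that resourcefulness comes from having multiple modes and a way to switch among them, not from any single mode being smart on its own.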

It’s probable that the spiritual machines of the future are not going to be starkly non-human and unsympathetic to the human way of life. Instead, they will already be merging with us, in that they will develop faculties of emotion—or some variation on the evolutionary heritage of our ancestors—which I believe to be necessary for self-consciousness. They will strive to become more human, because human means flexible thinking modes, adaptation, cognitive plasticity, and resilience.

So What’s to Fear?

Machine intelligence may take over most of our human tasks when the Singularity nears (as if this isn’t already the case), but it’s not going to undermine the importance of emotional intelligence, which is integral to any intelligence at all.

Sure, our power in the future will be challenged, but we humans are not a lesser half; we hold a key to what consciousness and intelligence are, because we are already endowed with them at birth.

Anyone trying to pin the Singularity down as a racial or sub-racial issue is missing the point. It won’t be a race or a competition, but co-evolution. As I see it, Singularity is not so scary.*

*That is, if we as a species conduct ourselves maturely enough to respect the proposed future sentient robot race, rather than foolishly try to enslave or exploit it.


Elea December 8, 2008 at 2:14 am

The question really is how much you want to depend on technical aid. I don’t think that any of this is doing us much good. And when it comes to cyborgization… there is probably no human part at all that would be worth keeping… This could actually turn out to be quite funny, insofar as scientists who experiment on themselves notice that they are better off the more they replace, until they finally realize that there is absolutely no need for any part of them to exist at all.

Melissa December 8, 2008 at 3:37 pm

Thanks for your comment Elea.

Well, so far, the human brain is quite worth keeping. :p As in, there is no replica to date…which is more feasible to do with other body parts.

