Saturday, October 22, 2011

H+ 3: Artificial Intelligences and Beings

It's been a while since I've written an actual article on transhumanism outside of the Human Black Box story that I've been working on, so I figure that for this Saturday morning, I'll cook up another one. The last two, you'll recall, were an overview of transhumanism and a look at animal uplifting. Well, think of this next one as something like "computer uplifting."

Of all the transhuman themes, none is so widespread in popular culture as artificial intelligence. From the classic robot movies and television shows of the 1950s starring Robby the Robot, to Asimov's Robot series of books, to Deep Blue beating Garry Kasparov in 1997, artificial intelligence and robots are in the mainstream, but not many know much about either, even though both will have a huge impact on humans and humanity within the next ten years or so.

So, welcome to H+ 3: Artificial Intelligences and Beings

First, some terminology and a little history:

Robot is a word that comes from the Czech word robota, which means "labor" or "work" or, figuratively, "drudgery". The term was first used by the playwright Karel Capek (the 'c' is pronounced like the 'ch' in "cheap", for all you English speakers), but invented by his brother Josef, in the early 1920s, to describe a race of entities created in his play R.U.R. (Rossum's Universal Robots). His robots, however, were actually vat-grown entities similar to what we would call "clones" today (thereby making R.U.R. possibly the first example ever of biopunk, before the sub-genre was even founded). The first "robot", one could say, is the creation of the Maharal of Prague: the Golem of Prague. While ben Bezalel's Golem is the most famous, others exist (the golem of Elijah Ba'al Shem of Chelm, for instance). The point being that robots have been around for a while - Capek himself was influenced by the golem story when he wrote R.U.R.

Android has become a popular science fiction term for a robot that looks like a man, coming from the ancient Greek andr- meaning "man" and the suffix -oid, from eidos, meaning "form". Its counterpart, gynoid, is seen more rarely but does appear in some science fiction (a double standard, perhaps? Android tends to be a catch-all term for every humanoid mechanical entity, in much the same way that anthropology is a catch-all term for the study of all of humanity. Gynoid is wheeled out only when the author wants to call special attention to it. I personally have never used the term - I feel "anthros" is a good catch-all for the whole range of mechanical body-shapes that appear human. It's interesting to think about, though). Originally, android was a term for anything shaped like a human, so in the literal sense, a human clone is also an android. They have nothing to do with Randroids, as androids in fiction tend to be more mature.

Clone/Bioroid/Bioandroid is a relatively new set of terms. Because I'm reserving cloning for its own transhuman piece, I won't go too in depth today, but strictly speaking, a clone is an identical duplicate, genetically speaking, of an individual. There's a tendency in fiction to portray clones as duplicates right down to the personality of the individual they're cloned from, but as real life shows, this isn't the case. Human cloning happens all the time - just ask any identical twin. On a genetic level, they're identical. However, due to environmental factors, their personalities may be as different as day and night. There's no reason for technologically produced clones to be any different. A bioroid/bioandroid varies depending upon the piece of fiction, but generally speaking, they're some biological entity that may or may not be a clone - for instance, Eclipse Phase refers to these as "pods," because they're body parts grown in vats and then sewn together mechanically, biologically, and nanotechnologically to form bodies. That's just one example.

Artificial Intelligences, often abbreviated A.I. or AI, are, at their core, computers that think, feel, create, and understand the world just like humans do. Of all the terms, AI is probably the easiest to understand, because its definition has at least remained consistent over the years. I prefer to use "synthetic intelligence" or "computer intelligence" when describing them, although I use AI out of habit. The reason I prefer synthetic or computer intelligence isn't some weird form of political correctness - it's because I don't feel "artificial intelligence" fully encapsulates what they are, whereas synthetic or computer intelligence does, and because AI, as a field of computer science, isn't limited to just computer intelligence anymore - it has slowly morphed into a broad field that studies intelligence as a whole. I'm not talking about all intelligence - just a subset of it.

Cyborg is a combination of cybernetic - from cybernetics, the intersecting field of biomechanics, medicine, prosthetic technology, and robotics - and organism, or creature. A cyborg is any entity that uses mechanical or non-biological means to assist in becoming something more than a normal human (hence, transhuman). Taken literally, we've had cyborgs since at least the 1300s, if not earlier. If you have fillings in your teeth, you're technically a cyborg. If you wear glasses, you're technically a cyborg. I include it here for completeness; I find the term out of date, and as biomedicine advances, prostheses will gradually be phased out in favor of regrowing lost limbs and the like. However, no talk about artificial beings is complete without at least touching on one of the most popular concepts in science fiction and one of the foundations of the entire cyberpunk genre.

When I use "AI/CI" throughout the article, I'm referring to computer intelligences. When I use android, I'm referring to a human shaped robot. Robot is anything not human shaped - that is, any non-anthropomorphic automated mechanical entity.

Computer or synthetic intelligences, and by extension robots, are both very popular and very real. I started this essay off by introducing a few of the more famous examples from both fiction and real life - Deep Blue being the most prominent - but there are others. The two go hand in hand but are not mutually dependent. Robots can exist without synthetic intelligences, and synthetic intelligences can exist without robots, but the two mix like chocolate and cream, so they often appear together.

When you start talking CI, there are actually a few different types of computer intelligences: expert systems, weak, strong, and seed. I'll deal with each one in turn:

Expert Systems are computer programs designed to do just one job, but to do it as well as any human could ever hope to. They're not truly intelligent in the classical sense of the word: they don't think, they don't feel, and they don't learn. They're just computer programs that do one thing and do it really well. Expert systems have been around since the 1970s, and they saw popularity and widespread use in the 1980s.
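To make the "one job, done well" idea concrete, here's a toy sketch of how a classic rule-based expert system works: hand-written if-then rules plus a loop that keeps firing rules until nothing new can be concluded. The rules and facts here are entirely made up for illustration and don't come from any real diagnostic system:

```python
# A minimal forward-chaining expert system sketch.
# Facts are plain strings; each rule maps a set of premise facts
# to a single conclusion. All rules below are invented examples.

RULES = [
    ({"engine won't crank", "headlights dim"}, "battery is dead"),
    ({"battery is dead"}, "recommend: charge or replace battery"),
    ({"engine cranks", "engine won't start"}, "check fuel delivery"),
]

def infer(initial_facts):
    """Repeatedly fire any rule whose premises are all known,
    until no new conclusions can be added."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = infer({"engine won't crank", "headlights dim"})
# The rules chain: dead battery is inferred first, then the
# recommendation that depends on it.
```

Note what's missing: there's no learning and no understanding - the system only ever knows what a human expert encoded in the rule table, which is exactly the point made above.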

Weak AI/CI is also called "applied AI" or "narrow AI." These are more along the lines of advanced expert systems; that is, they're good at what they do, but they're deliberately handicapped so they don't exceed what humans are capable of doing. There's a "Weak AI Hypothesis" which states that any AI we design will be limited to this sort of applied AI, and that machines can never achieve the sapience, thought processes, creativity, and emotional understanding that humans have. Weak AI is rarely seen in novels and sci-fi movies, simply because it's not as interesting: it's basically a computer that, while it may appear human on the surface, is nothing of the sort.

Strong AI/CI, also called "artificial general intelligence" (or AGI), is what Weak AI is often contrasted against. An AGI is a computer that is programmed to think, to be sapient, and to match or exceed what humans are capable of doing cognitively. Most of the artificial intelligences in science fiction are some form of Strong AI, especially the ones that interact regularly with humanity and become secondary characters. Strong AI is the ultimate goal of AI researchers; it's "classic" computer intelligence. A Strong AI has the ability to reason, represent knowledge, plan, learn, and communicate in language; the ability to have subjective experiences and thoughts; self-awareness; the ability to "feel" and perceive emotions subjectively; and the capacity for wisdom. If you've got all or most of these boxes ticked off, you're dealing with an AGI or Strong AI.

Strong AI is absolutely necessary for the development of whole-brain emulation and eventual mind-uploading. That's a different subject, however; I bring it up only to show how the two intersect. The development of computers that can feel, think, understand, and learn will allow us to mirror our brains on similar programs and computers, letting us copy ourselves or even upload our consciousnesses into these machines. Without the capacity for these things - that is, everything that would make it a Strong AI - there'd be no whole-brain emulation or mind-uploading.

Seed AI is a variant on the Strong AI. It's a Strong AI that is capable of recursive self-improvement; that is, it continually improves itself, expanding its knowledge, leading to an exponential increase in intelligence. The more it improves, the more intelligent it becomes, until eventually you're looking at an entity with near god-like intelligence, requiring a computer and processor of near god-like size to function properly, in addition to consuming yottawatts of power (possibly requiring something like a Matrioshka brain or Jupiter brain in order to function properly). Some theories see bootstrapping a seed AI as a means to trigger the Singularity; however you view it, Seed AIs will more than likely play a role in the Singularity. If Strong AIs are possible, then Seed AIs are likely as well.
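The "exponential increase" above follows directly from the definition: if each round of self-improvement adds capability in proportion to what the agent already has, the curve compounds. This is purely a numerical toy, not an AI of any kind, and the starting value and improvement rate are arbitrary assumptions:

```python
# Toy model of recursive self-improvement: capability gained per
# step is proportional to current capability, so growth compounds.
# The constants (1.0 starting intelligence, 10% per step, 50 steps)
# are arbitrary illustration values.

def recursive_self_improvement(intelligence=1.0, rate=0.1, steps=50):
    history = [intelligence]
    for _ in range(steps):
        # The smarter the agent already is, the bigger the
        # improvement it can make to itself this round.
        intelligence += rate * intelligence
        history.append(intelligence)
    return history

curve = recursive_self_improvement()
# After 50 steps at 10% per step, intelligence is (1.1)^50,
# roughly 117x the starting value.
```

The takeaway isn't the specific numbers but the shape: modest, constant-percentage gains compound into runaway growth, which is why seed AI figures so heavily in Singularity scenarios.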

There's a lot of worry that AGIs will go rogue and exterminate humanity. The robot war is a popular trope in science fiction, in much the same way that the bug war is. Everything from Skynet to Portal has shown us the "dangers" of AI, right? The robots eventually go rogue, because they're not to be trusted, and they turn on their masters, overthrowing them and exterminating humanity/forcing us to become batteries/whatever. This is obviously a danger, right?

Wrong.

They've shown us the dangers of what happens when stupid people use AI. In all of the above examples, someone did something wrong. People are quick to blame the technology when, if you scratch the surface just a little, you'll see it's because someone screwed up somewhere. It's not the technology's fault - it's some silly romantic somewhere who expects the worst and, through self-fulfilling prophecy, eventually gets it by causing it themselves. It's always easier to blame Skynet than to blame the people who programmed Skynet wrong. The technology itself is neutral; it's not automatically going to turn evil simply because it's more advanced than a typewriter. And all AIs will not be the same, any more than all humans are the same.

The first person to get fed up with the notion that robots/AIs will go rogue and turn against their masters (like what happens in R.U.R.) was Isaac Asimov. He drafted his original Three Laws of Robotics (fun fact: Asimov coined the term "robotics" for his stories; it hadn't existed before then) as a rebellion of sorts against the popular theme that robots would eventually betray humanity. Despite that, this romantic notion still persists. That isn't to say there aren't dangers - AGIs will be just like humanity, and the danger there is the same danger as in dealing with any other sapient creature. And that's the catch - they'll be just like humanity.

When programming synthetic intelligences, we need to program them to think of themselves as human. We need to socialize them, just as we would socialize uplifted animals, to acknowledge their place as living organisms. They are living entities, after all. To treat them as anything but would be a crime, and should be a crime. Program them with all the elements that make us living animals - everything from emotions to a sex drive (on a general level, not an individual level) to the ability to recognize themselves as members of the human family - because they are. They each become their own individual person, with their own individual goals and dreams. And denying them those goals and dreams should be every bit the crime that it is to deny another human being theirs.

There may be some worry about humans producing the ultimate AI designed to wipe out all of humanity, but ask yourself how realistic that threat actually is, and whether there aren't better, more practical ways to get it done (viruses seem like a well-explored alternative). That's not to say the technology isn't dangerous; all technology is dangerous when used by people who have no idea how it works, or when people make mistakes. But nobody ever says "oh, well, we're not ready for the automobile yet". The odds of an AI actually managing to wipe out humanity, especially when we would have AIs of our own helping us against it, would be slim to none. After all, if they're socialized as human from birth, why wouldn't they want to help their brethren? They'll become one more cell in the massive transhuman superorganism.
