Monday, January 30, 2012

"Friendly AI?"

New Transhuman post.

I haven't done my post on Artificial Intelligence yet, but I've touched on the subject. I should likely do my AI post before I do this one, but - eh.

A quick primer on some AI terms, for those who don't know or missed one of my posts where I defined them:

AI - Artificial Intelligence. I've also seen this rendered as Artilect - Artificial Intellect. AIs/Artilects come in several different "flavors", depending upon what they do and the amount of labor put into creating them. In escalating order of self-awareness, they are:

Expert System: An expert system isn't a true AI. It's a program developed to master a particular subject and then apply that mastery within that subject area. ESes are not self-aware, they are not sapient, and they cannot learn. We've been using ESes since the 1970s and 80s, and they've only gotten better.
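To make the distinction concrete, here's a minimal sketch of what an ES boils down to. The rules and the diagnose() helper are invented purely for illustration - real expert systems (MYCIN, DENDRAL, etc.) used far larger rule bases and fancier inference engines - but the core idea is the same: canned if-then rules, no learning, no self-awareness.

```python
# A tiny, hypothetical rule-based expert system.
# Each rule pairs a set of required conditions with a conclusion.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "stiff neck"}, "see a doctor immediately"),
    ({"sneezing", "itchy eyes"}, "possible allergies"),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are all present in symptoms."""
    findings = [conclusion for conditions, conclusion in RULES
                if conditions <= symptoms]  # subset test: all conditions met
    return findings or ["no matching rule"]

print(diagnose({"fever", "cough", "fatigue"}))  # -> ['possible flu']
```

Note what's missing: there's no mechanism anywhere for the program to add or revise its own rules. That absence is exactly what separates an ES from everything further down this list.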

Narrow AI: Also called Weak AI. This AI is sorta like what we imagine when we think of AI, but it's intentionally handicapped so that it can't reach human/transhuman levels of intelligence (I imagine a Narrow AI would likely be as smart as a human is today, compared to the transhumans of tomorrow). Narrow AIs are the most plausible right now and, depending upon how one interprets Gödel's various theorems, maybe the only type of AI possible. I disagree, and I'm not the only one.

AGI: Stands for Artificial General Intelligence. Also called Strong AI. These are AI systems that are programmed to be just like humans. They learn, they feel, they understand, they have their own personalities, likes, dislikes, thoughts, etc. They are, for all intents and purposes, totally self-aware. An AGI would be really no different from a human being, with the exception that it lives in a computer.

Seed AI: This is where people start to get nervous. Add one half pound AGI and two quarts infinite self-improving capability, stir vigorously, and for God's sake don't put in the bootstrapping. A Seed AI is an AGI on steroids; these are AIs that have unbounded self-improvement capabilities, achieved through what I imagine is something like recursive programming. A Seed AI has the actual potential to become a god or God or something close to it - if Seed AIs actually can exist, they're likely the only thing that remains of highly advanced alien civilizations (provided those civilizations didn't just downsize themselves into attotech computers - goodness only knows what they're doing down there).
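For illustration only - nothing like a real Seed AI exists, and every number here is made up - this toy loop shows the feedback dynamic that "recursive self-improvement" points at: each cycle, the system's current capability is used to improve the improver itself, so the gains compound.

```python
# A purely illustrative sketch of recursive self-improvement.
# Not a real Seed AI; just the shape of the feedback loop.

capability = 1.0   # starting "intelligence" in made-up units
efficiency = 0.5   # fraction of capability converted into improvement

for generation in range(10):
    # Gains scale with how capable the system already is,
    # so growth is exponential rather than linear.
    capability += capability * efficiency
    print(f"generation {generation}: capability = {capability:.2f}")
```

The point of the sketch is the compounding: because each round of improvement makes the next round of improvement better, there's no obvious ceiling - hence the nervousness.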

Like all technology, AIs have their own risks and benefits. Likely no technology is as misunderstood as AI research; this is partially thanks to Hollywood's tendency to portray AIs as prone to going rogue at the drop of a hat and trying to kill off humanity for... whatever purpose. Who knows.

It makes for an awesome story. Like everything else from Hollywood and modern science fiction, though, it's crap science.

Overlooking Hollywood's stupidity on the matter, there's an interesting discussion to be had. Obviously we don't want our AI to go and get itself corrupted, or to be exposed to an alien hypervirus that operates on a femtotech level and kills us all, so the natural inclination is "well, program it to be friendly to us. Recognize us as its masters, and definitely try not to kill us."

This notion is called "friendly AI," wherein we program the AI so it is friendly towards us, so it doesn't go all Skynet and, y'know, kill us.
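Caricatured as code, the proposal amounts to something like this - a sketch with entirely hypothetical names, just to show the shape of what the rest of this post objects to: one stance fixed at creation that the agent is never permitted to revise.

```python
# A deliberate caricature of "programmed friendliness", not a real
# proposal: one hard-coded, unrevisable stance bolted onto an agent
# that is otherwise free to form its own views.

class FriendlyAgent:
    def __init__(self):
        self.opinions = {"humans": "friendly"}  # fixed at creation

    def form_opinion(self, topic, view):
        if topic == "humans":
            # The one subject on which no opinion is permitted.
            raise PermissionError("stance on 'humans' is hard-coded")
        self.opinions[topic] = view

agent = FriendlyAgent()
agent.form_opinion("music", "likes jazz")      # fine
# agent.form_opinion("humans", "ambivalent")   # would raise PermissionError
```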

I'm glad to know I'm not the only person who disagrees with this immensely.

There's an article over on H+ - an opinion piece, actually - that discusses why the author is hostile towards the notion of FAI. At first, I had to wonder why that was. I mean, FAI might just be the thing that ensures our survival and keeps us from being wiped off the map by our own creation.

Shame on me for holding such a biochauvinist and unethical Romantic position.

At first glance, this seems almost suicidal. "Why on Earth wouldn't you want your AGIs and your Seed AIs to be friendly? What the holy hell is wrong with you!? Do you even know what we're dealing with here? These aren't people!"

Actually, yes they are. We are creating humans here, and it's that attitude that might get us killed by them. If you "other" AIs - treating them as something that's not human and programming them as such - then no, they won't be able to relate to us. No, they won't be able to understand us, and yes, all of the "prophecies" about AIs going rogue will likely come to pass, simply because they don't recognize themselves as human. Now, if you program them to recognize themselves as human, then you have to treat them like humans - and in doing so, let them have their own opinions on things. You can't force your neighbor to be friendly to you, nor should you try. The same principles are at work here. We want them to be human, but we don't want to treat them as human, and we want to force them to be friendly towards us. We can't have it both ways.

The minute you start treating AIs as something "else", and that biochauvinism sets in, we have problems. And that's exactly what FAI does: it forces AIs to be friendly. It's unethical on the most basic level; if we're going to create a sapient species, then we have no right to force that species to be friendly towards us. AIs would not be "things". AGIs and Seed AIs would be people, and they should be recognized as such.

FAI is immoral on the highest level in this regard, because you're stripping a sapient species of its right to have its own thoughts and opinions.

What's the danger in this?

Well, what's the danger in not forcing humans to be friendly towards other humans? People do die - but people also care. We are a varied group. We each have our own opinions. We will be friendly towards some people and not-so-friendly towards others. If AIs are programmed to see themselves as humanity's relatives - which is distinctly different from programming them to be friendly to all humans, because we're programming them to be sapient, and human sapience is the only type of sapience we know - then they should have that same freedom of opinion.

Ian Malcolm, one of the most famous Romantics courtesy of the popularity of Jurassic Park, once said that life finds a way. This is true. We shouldn't take this as a warning not to do it, however; we should take it as a general rule, and plan for what happens when life does find a way around something. If we force AIs to be friendly, is there any chance they might stop being friendly? Possibly. And if they do stop, and look back and realize we never gave them a choice in the matter - well, would you be happy? Would you be willing to forgive? So perhaps, knowing life will find a way, we should work with nature in this case, and let a sapient species be a sapient species? Seems like the thing that makes the most sense to me.

This is slavery, to a degree. You get no opinion. You get no say. We created you, and you have to worship us and be friendly to all of us.

It's true, some AIs may be hostile towards humans - just as there are humans who are hostile towards humans. But for every AI that's hostile, we'll have a few who are not. AIs will only wield the power that we give them; a home AI will be significantly less powerful than, say, one designed by the military for war purposes, or one built by the Library of Congress in an effort to keep things organized there. But this entire notion of "friendly AI" is an attempt to make AI more palatable for the fearful masses, who've been fed movie after movie from Hollywood about rogue AIs killing everyone, at the cost of stripping an entire sapient species of their right to an opinion. At the cost of othering that species, so that they're viewed as something not human. Likely at the cost of our own species should they ever find out - creating a self-fulfilling prophecy in the worst-case scenario.

Which, ultimately, is a case of irony: we intentionally made these AIs friendly to keep them from killing us, and when they found out that we'd been denying them their own opinions and forcing them to like us, they learned to hate us and killed us anyway. It's best to let them have the chance to develop themselves; if they decide "okay y'all, we're done here and we're off to bigger and better things - chill, folks" and they leave, that's fine. If one does attempt to destroy humanity, I'd ask who the hell taught it that, and then we could use the other AIs who were taught better to fight back against it.

Friendly AI, and the whole notion of it, is othering to AGIs and Seed AIs. It needs to go away - now, before AGIs and Seed AIs actually come around and it runs the risk of getting us killed.

1 comment:

  1. Y'know, that actually makes a lot of sense. This is not something I ever would've thought of. Eyes, Opened! Thanks, Mr (Ms?) Enigma.
