New Transhuman post.
I haven't done my post on Artificial Intelligence yet, but I've touched on the subject. I should likely do my AI post before I do this one, but - eh.
A quick primer on some AI terms, for those who don't know or missed one of my posts where I defined them:
AI - Artificial Intelligence. I've also seen this rendered as Artilect - Artificial Intellect. AIs/Artilects come in several different "flavors", depending upon what they do and the amount of labor put into creating them. In escalating order of self-awareness, they are:
Expert System: An expert system isn't a true AI. It's a program built to master one particular subject and then apply that mastery within that subject area, typically by working through a hand-coded base of rules. ESes are not self-aware, they are not sapient, and they cannot learn. We've been using ESes since the 1970s and '80s, and they've only gotten better.
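To make the "mastery without learning" point concrete, here's a toy sketch of the classic rule-based approach (forward chaining). The facts and rules are invented for illustration; real expert systems like MYCIN or XCON had thousands of rules, but the principle is the same: facts go in, hand-written rules fire, conclusions come out, and nothing is ever learned.

```python
# Toy forward-chaining "expert system": hand-coded rules, zero learning.
# Every rule and fact name here is made up for illustration.

RULES = [
    # (facts required to fire, conclusion added when it fires)
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_chest_xray"),
]

def infer(facts):
    """Fire rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # A rule fires when all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

Note that the system's entire "expertise" lives in `RULES` - change the domain and a human has to rewrite them, which is exactly why an ES isn't a true AI.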
Narrow AI: Also called Weak AI. This AI is sorta like what we imagine when we think of AI, but it's intentionally handicapped so that it can't reach human/transhuman levels of intelligence (I imagine a Narrow AI would likely be about as smart as a human is today, compared to the transhumans of tomorrow). Narrow AIs are the most plausible right now and, depending upon how one interprets Gödel's various theorems, maybe the only type of AI possible. I disagree, and I'm not the only one.
AGIs: Stands for Artificial General Intelligence. Also called Strong AI. These are AI systems that are programmed to be just like humans. They learn, they feel, they understand, they have their own personalities, likes, dislikes, thoughts, etc. They are, for all intents and purposes, totally self-aware. An AGI would really be no different from a human being, with the exception that it lives in a computer.
Seed AI: This is where people start to get nervous. Add one half pound of AGI and two quarts of infinite self-improvement capability, stir vigorously, and for God's sake don't put in the bootstrapping. A Seed AI is an AGI on steroids; these are AIs with infinite self-improvement capabilities, achieved through what I imagine is something like recursive programming - the AI improves its own code, and each improvement makes it better at improving itself. A Seed AI has the actual potential to become a god, or God, or something close to it - if Seed AIs actually can exist, they're likely the only thing that remains of highly advanced alien civilizations (provided those civilizations didn't just downsize themselves into attotech computers - goodness only knows what they're doing down there).
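The recursive part is easier to see in a cartoon than in prose. This sketch is pure illustration - every number in it is made up, and nothing here resembles how a real Seed AI would work - but it shows why the loop compounds: the system first improves the *improver*, then applies the improved improver to itself, so each generation grows faster than the last.

```python
# Cartoon of the Seed AI bootstrap loop. All quantities are invented;
# the point is the compounding structure, not the values.

def seed_loop(capability=1.0, efficiency=0.1, steps=5):
    """Return capability after each round of recursive self-improvement."""
    history = [capability]
    for _ in range(steps):
        # Step 1: use current skill to improve the improvement process itself.
        efficiency *= (1 + 0.5 * efficiency)
        # Step 2: apply the now-better improver to the system's capability.
        capability *= (1 + efficiency)
        history.append(capability)
    return history

print(seed_loop())
```

Run it and the growth *rate* itself rises every step - that accelerating feedback is the whole reason Seed AIs make people nervous, and why "stir vigorously but don't bootstrap" is the punchline.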
Like all technology, AIs have their own risks and benefits. No technology is likely as misunderstood as AI research; this is partially thanks to Hollywood's tendency to display AIs as prone to go rogue at the drop of a hat and try to kill off humanity for... whatever purpose. Who knows.
It makes for an awesome story. It's crap science, though, like everything else from Hollywood and modern science fiction.
Overlooking Hollywood's stupidity on the matter, there's an interesting discussion to be had. Obviously we don't want our AI to go and get itself corrupted, or to be exposed to an alien hypervirus that operates on a femtotech level and kill us all, so the natural inclination is: "well, program it to be friendly to us. Recognize us as its masters, and definitely try not to kill us."
This notion is called "friendly AI," wherein we program the AI so it is friendly towards us, so it doesn't go all Skynet and, y'know, kill us.
I'm glad to know I'm not the only person who disagrees with this immensely.