Thursday, March 21, 2013

Schools of the Singularity

I was reading around the internet the other day and came across this article by Eliezer Yudkowsky. Now, I wasn't at all familiar with Yudkowsky. Kurzweil I've heard of (and do not support; as near as I can tell, Kurzweil has some major issues, the largest being that he's a computer guy with zero knowledge of neurobiology or neurochemistry who seems to think that neural networks are easily constructed and work like standard computer processors. I'm taking a class on the brain, synapses, and neurons at the end of the month, so I'll get a close look at just how realistic that is. I can tell you right now, though, that it's not realistic at all), and Kurzweil is referenced in the article I read. I've since gone on to read a number of articles by Yudkowsky, and I like the thinking (especially the article about transhumanism as simplified humanism, which is a viewpoint I've been championing in my own little corner of the internet for nearly three years now). That's not what I'm looking at today, though - today I'm looking at the singularity.

I've touched on the Singularity before, but I don't recall ever going into depth about what I think of it, so I plan to do that here. First, though, I'd like to take a look at the three different schools of the Singularity that Yudkowsky describes. So follow me down the rabbit hole into a long-needed post about transhumanism.

Yudkowsky breaks the Singularity up into three separate schools of thought, distinguished by what events lead up to it, how it happens, and what happens after it (or how we might be able to tell what happens after it). All three deal with the technological singularity, a.k.a. the Nerd Rapture, so let's take a look at each one.

Accelerating Change:
  • Core claim: Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.
  • Strong claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.
  • Advocates: Ray Kurzweil, Alvin Toffler(?), John Smart
The accelerated future can certainly seem like the most plausible of the futures. In my life, I've seen the rise and fall of CDs, MP3 players, cassette tapes (anyone remember these? I do), and VHS tapes (anyone remember these? I still remember what it was like to program a VCR). Technology seems to be improving faster than linearly, and rapidly at that. I remember corded phones - not cordless home phones, corded home phones. I grew up without seeing a cellphone or a computer until I was at least in my teens. I didn't get my first iPod until I was in my mid-20s (I bought it myself; it was one of the first major purchases I made. I still have it, too; it's 4-5 years old, but it's still working, so I see no reason to change it). I remember a time when cars had less wiring than the STS Enterprise. None of this is a reflection on "better days" in the past; it's merely an illustration of just how much has changed in my relatively short lifespan of around 27 years.

I look at my grandparents, who are both in their 80s. They went from slide rules and pocket calculators to pocket computers and cellphones with more processing power than the top-of-the-line computers of their day, back in the 1940s. From vacuum tubes to microprocessors to solid-state drives. They went from a time when it was a big deal to fly from one city to the next to a time when we're landing SUV-sized vehicles on Mars with incredible precision.

Do you know what else I see?

I see it slowing down. I see technology becoming less and less about innovation and more and more about protecting copyrights and protecting vested interests. This isn't to say that progress has stopped completely, but it is slowing greatly. I see a population of people who, while smarter than their ancestors were even 50 years ago, are not better educated, who don't have the opportunities for self-improvement, and who are being pushed further and further into poverty by our aristocratic overlords. The progress is still there, but it's consolidating, like my student loans need to be doing. It's not moving as quickly as it was. Go back and read the bit about my iPod: I bought it 4-5 years ago, and I can compare it to a newer iPod and not see much of a difference. There are newer apps, but the core of it has not changed. The equivalent gap in the 1990s would be 1990 to 1995 - right around the time the VCR started to phase out and home computers started to catch on. Granted, the technology had been around for a while by then - for home computers it was nearly 20 years, since the first ones appeared in the 70s, if not earlier - but if technology were really accelerating exponentially, wouldn't those gaps be getting smaller? It took 20 years for the home computer to become popular; you'd expect newer devices to need far less time than that. Like I said, there is still innovation and invention, but it's not going as fast as the Accelerating Future says it is. And the Accelerating Future also doesn't take into consideration that computers have a maximum processing power courtesy of being made of silicon; until we find something that can replace silicon, we won't be seeing AI overlords or digitally immortal human beings anytime soon.
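As an aside, here's how sensitive that "predict with fair precision" strong claim is to the growth rate you assume. This is a toy sketch of my own, with made-up numbers (it is not Kurzweil's actual model): under a smooth doubling law, small changes in the assumed doubling time move the predicted arrival of a threshold by decades.

```python
# Toy illustration (my own made-up numbers, not Kurzweil's actual model):
# if capability doubles every T years, how long until it improves a millionfold?
import math

def years_until(threshold_ratio, doubling_time_years):
    """Years needed for a quantity to grow by `threshold_ratio`,
    assuming it doubles every `doubling_time_years`."""
    return doubling_time_years * math.log2(threshold_ratio)

for doubling_time in (1.5, 2.0, 3.0, 5.0):
    years = years_until(1_000_000, doubling_time)
    print(f"doubling every {doubling_time} yr -> millionfold in ~{years:.0f} yr")
```

A millionfold jump is about 20 doublings, so the predicted date swings from roughly 30 years out to roughly 100 years out depending entirely on which doubling time you assume - which is my problem with the "fair precision" part of the strong claim.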

I like the Accelerating Future. It's positive, and we need more positivity in this world. It's optimistic, and it proposes that things do get better - and they do. Violence is actually on the decline. Religion is on the decline, too. We're pushing back against the wealthy, and we're becoming more aware of our place in society and of the interlocking gears that can turn everyday interactions into a social minefield for some people. We are making strides towards defeating aging and defeating death, and we're making strides towards morphological freedom, and I sincerely hope that I'll see at least the ability to reverse aging happen in my lifetime. I want to go out on my own terms, not feeble, invalid, and needing someone to change my adult diaper. I think just about everyone feels that way. I don't want to have to live under the threat of getting an untreatable cancer and then suffering in agony as a result. I'd hate to think that I died naturally as part of the last generation to do so before we figured this stuff out, but even the most conservative science fiction sees longevity treatments before the century is out. I'll be 100 in 2085; hopefully I'll live to see that, and live beyond it.

But the Accelerating Future isn't without its issues, and the optimism and positivity I was lauding it for is the issue here. It puts a pair of blinders on us, so we don't see that there are forces at work - both left (anti-vaxxers, anti-GMOers) and right (creationists, global warming denialists) - that are attempting to undermine scientific progress, in addition to the fact that scientific progress doesn't necessarily feed into itself. Newton was required to build relativity, but relativity did not come around until much later. Following relativity, we had the rapid birth of quantum theory, which diversified into plenty of weird theories that are still competing with one another, and even though we have the elusive Higgs boson, there's no sign that we've made much progress at all - the completion of the Standard Model only exposed numerous problems with the Standard Model, and it may end up being pitched entirely for something far, far weirder (my personal favorite is unparticle theory). Technological progress is not linear; it comes in growth spurts after periods of brainy people scratching their heads. True, those growth spurts have had less and less time between them, but they're still growth spurts following periods of inactivity. The future is accelerating. But it's not accelerating as fast as Kurzweil and company think it is.

The accelerating future also overlooks social impacts on technology. Recall that I brought up anti-GMO people. These are individuals who don't understand the science; they run off to GreenHealthPlusTotallyNotBiasedGuys.com for their information and regurgitate it. Really not helping matters is that the current face of transgenic food is Monsanto, who might as well have "A Subsidiary of Omni Consumer Products" plastered on the front doors of all their businesses. They are the archetype for unethical business practices, and there's very little government oversight since, you know, money buys politicians more easily than votes do, and Monsanto has a lot of money. Thus, the two get connected (if the research were public, there would be no patenting of artificial genomes. It's not, though, so genomes invented by Monsanto are patented). Monsanto is doing serious damage to the image of biotechnology, which science fiction has repeatedly attacked and skewered (Hello - Tyranids*, Yuuzhan Vong, the Republicans, the Bene Tleilax** - biotech has been shorthand for "evil alien civilization" since science fiction became science fiction, especially since it always looks so gross and icky). Newer technologies are going to experience social push-back, not only from those who don't understand them but from those who have a vested interest in making sure their corporate rivals don't succeed. The technology will be beyond the reach of the poor and the underclasses. My point here is that the technology will improve more slowly because there will be social resistance against it, either through manipulation of public perception or because it gets tied to the real-life cousin of Weyland-Yutani. The accelerating future, then, won't.

Let's take a look at the others:
  • Event Horizon:
    • Core claim: For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.
    • Strong claim: To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.
    • Advocates: Vernor Vinge
I've heard this called the "Vingian Singularity" before, which just sounds all sorts of awesome. The concept isn't that bad, either, when you think about it: the future is almost certainly going to be weirder than we think it is, by virtue of the fact that science fiction is bound to something that resembles truth, while the future is not. Brain-computer interfaces exist now; we're working on ones that help paraplegic and quadriplegic people use exoskeletons so they can function like fully mobile individuals. We're working on nootropics right now, and one of the benefits of uplifting animals (that is, surgically altering them so that they're sapient just like we are) is that we get a firmer grasp on the nature of consciousness, so we can expand on that and apply it to our own consciousness. Humans today are smarter than humans even several decades ago; we have to be. We're under constant attack from information, and our consciousnesses - which are nothing more than an illusion created by the brain during the processing of this information - are adapting to the information overload by parsing information faster and better. As we inundate ourselves with more information, we'll see an improvement in our ability to handle that information, better memory recollection, and less linear thinking.

Now here's my problem with this type of singularity: define human intelligence.

What is human intelligence? It seems like such an easy question, until you start to think about it. Well, it's our smarts. But what does that mean? What does it mean to be intelligent? I'm a firm proponent of Gardner's theory of multiple intelligences: everyone is intelligent in their own way, in their own area, and some people are more intelligent than their peers in some areas while less intelligent in others. If we go this route, are we using the mean of all the different intelligences to gauge human intelligence? If that's the case, we've already got superhuman intelligences, since some people are above the mean. Would a superhuman intelligence be stronger in linguistic-verbal and logical-mathematical intelligence but weaker in naturalistic intelligence? What would its learning style be? What is intelligence?

What does human intelligence mean? Now answer this - what does superhuman intelligence mean? What could an inhumanly intelligent thing think of? The best I can come up with is that it just thinks faster, has a more structured consciousness with better perception of the environment around it, and is a lot more fluid in how it uses ideas. But even then, there are people who will, inevitably, measure up to it. It might be faster, it might be more adaptable, and it might be more creative. But is that superhuman intelligence? Furthermore, we'll still be able to understand what it's talking about; it's bound by the same laws that we are. It won't be breaking the laws of physics anytime soon, so while it may cook up new axioms and develop new ways of seeing things, we will be able to use those new ways of seeing things as well. I'm not doing this to slam superhuman intelligence - whatever that term might entail - I'm doing it to set to rest a concern people have when they think about superhuman computers: God AIs are not going to be something that happens anytime soon. They will not become incomprehensible Cthulhus. Writers use that trick because it's a workaround for "I don't know how the hell FTL works; let's just say a God AI did it and be done with it" - they serve the same literary purpose in transhuman fiction that precursors like the Forerunners do in Halo. Since most people come into contact with strong AIs through science fiction, this is the impression they get - but it's not the right one.

And the Vingian singularity hinges on this. It hinges on the notion that superhuman intelligences will be able to think and grasp concepts beyond what we peons down here with the biological brains are capable of understanding. In order to understand Deep Blue, you have to be Deep Blue, or at least be at the level of Deep Blue. Well, no. You can understand Deep Blue just fine; it might take you a little longer since you're not a computer, but you can still understand it just fine, so long as you have the underlying knowledge of chess. Nothing that Deep Blue does is a mystery to any chess player. It follows the same rules; it's playing the same game that these people grew up playing. It's out-thinking them, but its method of out-thinking is not generating thoughts that are "beyond the human realm of comprehension". That isn't how this game works. Modern physics is already there (seriously, read about quantum theory sometime) and there are humans who understand it perfectly. The best that a "superintelligence" can do is spot a place where someone forgot to carry a one because we're only "human-level intelligent", leading to the physicist saying, "Oh, that's why that didn't work out the first time! I forgot [x]!" But we still know what [x] is.

So while it's true that we're accelerating to a point where people are gradually becoming more intelligent in how they handle information and deal with the information overload, we have no clue what a "superintelligence" would look like, or whether one can exist, or whether some don't already exist in the human population. The only real way we get there is through modification of consciousness, and even then, that's not going to let such minds think beyond what we can understand as "baseline" humans. It may help us better define what we already know or have discovered, or perhaps it might trailblaze an entirely new field of science - but we'll be following it, since we'll be able to understand what it's talking about, even if we can't understand the intelligence itself directly.

Mathematics is the language of the universe, regardless of what type of consciousness you have. There is only one way to understand the universe, and that's by studying math. Once you understand math, you can understand almost anything - subhuman, human, or superhuman.

  • Intelligence Explosion:
    • Core claim: Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.
    • Strong claim: This positive feedback cycle goes FOOM, like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates superintelligence (minds orders of magnitude more powerful than human) before it hits physical limits.
    • Advocates: I. J. Good, Eliezer Yudkowsky
This one is a combination of the previous two; you can see elements of the accelerating future and the Vingian singularity in it. The key element here is the positive feedback cycle that gets triggered when humanity hits a specific point on the intelligence scale, but this suffers from the same problem that the Vingian singularity suffers from: superhuman intelligences are the inevitable result. Absent is the claim that they're unpredictable, but the problematic feature here is the definition of "superhuman intelligence." It's "minds orders of magnitude more powerful than human". What does this mean, exactly? What does it mean to be more powerful than a human brain? To process information faster and store it more efficiently? To arrange all of your inputs in such a way that drawing connections through lateral thinking becomes easier? All of these things? This seems like a silly question, but it's not - what is intelligence? And from there, what would superintelligence look like?
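As an aside, the "goes critical" analogy in the strong claim (each improvement triggering, on average, more than one follow-on improvement) can be made concrete with a toy branching-process sketch. This is my own illustration with arbitrary numbers, not anything from Good or Yudkowsky:

```python
# Toy branching-process sketch of the "goes FOOM" feedback loop.
# Assumption (mine, for illustration): each improvement triggers, on average,
# k follow-on improvements. k > 1 is supercritical and can run away; k < 1 fizzles.
import random
import statistics

def cascade_size(k, cap=10_000):
    """Total improvements in one cascade that starts from a single improvement."""
    pending, total = 1, 0
    while pending and total < cap:
        pending -= 1
        total += 1
        # Follow-on improvements triggered by this one: Binomial(10, k/10), mean k.
        pending += sum(1 for _ in range(10) if random.random() < k / 10)
    return total

random.seed(0)
for k in (0.8, 1.0, 1.2):
    sizes = [cascade_size(k) for _ in range(200)]
    foom = sum(s >= 10_000 for s in sizes) / len(sizes)
    print(f"k = {k}: mean size = {statistics.mean(sizes):7.1f}, "
          f"fraction hitting the cap = {foom:.2f}")
```

Below the critical value the cascades fizzle out; above it a sizable fraction run away until they hit the cap. That's the whole FOOM claim in miniature - and it's also why everything turns on whether "improvement" and "intelligence" are defined well enough for that average to mean anything.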

So many singularities hinge on the creation of superhuman intelligences, but there's no definition offered up for intelligence. If you accept Gardner's MI theory (and, actually, not a lot of people do), would a superhuman intelligence be "superhuman" in all of the different categories, or "superhuman" in one with deficits in another? And what would someone with a "superhuman" naturalistic or existential intelligence look like?

Any of these are possible; the only one I've seen that offers a real time scale is the Kurzweil singularity, and it's not happening on the schedule he proposed. Vinge's singularity hinges on the new superintelligences being unfathomable to us, requiring us to ascend to their level to understand them, while the Intelligence Explosion is a combination of the two (one might say the more realistic of them) but suffers from the fact that baseline intelligence isn't well defined. I have my own thoughts on the singularity, however, so let me share those.

A technological singularity draws its terminology from physics. Basically, it refers to the center of a black hole/collapsar: the point at which big-world physics says, "you know what? screw this shit, I'm outta here" and quantum theory snorts milk out its nose when you suggest there's an easy solution. This is the point at which the mathematics breaks down, our knowledge stops and says, "Okay, I give, you tell me", and we stare at the conclusions wondering where exactly we went wrong, when we did everything right.
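(A quick aside for the math-inclined: this isn't just hand-waving. For a plain, non-rotating black hole of mass M, the standard textbook curvature invariant really does blow up at the center while staying perfectly finite at the event horizon, which is why only the center counts as a true singularity.)

```latex
% Schwarzschild black hole: curvature invariant vs. radius
K = R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}
  = \frac{48\, G^2 M^2}{c^4 r^6}
  \;\longrightarrow\; \infty \quad \text{as } r \to 0,
\qquad \text{yet finite at the horizon } r_s = \frac{2GM}{c^2}.
```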

This is a singularity: the point at which predictions based on existing models are no longer applicable, and we have to design entirely new models to explain what's going on, and then cuss and swear because these new models, which work so beautifully by themselves, do not play well with existing theories.

So, in this light, what is the best definition for a technological singularity? Remember that technology doesn't exist independent of society; its impact on society is what matters most, since without society there would be no technology. So it makes sense to look at this from a social aspect rather than a purely scientific one. Thus, a singularity is going to be an innovation that has a massive impact on society and technology - and it's going to impact them in such a way that it will be difficult, if not impossible, to predict how the two will change.

Working with this definition - that a singularity is something that impacts both society and technology in a way that makes it difficult to see how the changes will play out - we've already had several. On a sociotechnological level, we have the following as major singularities:
  • The domestication of fire
  • The invention of the wheel 
  • The invention of metalworking
  • The invention of algebra and advanced mathematics 
  • The invention of agriculture and husbandry 
  • The invention of domestication 
  • The invention of the moveable type printing press
  • The invention of gunpowder
  • The invention of the steam engine
  • The invention of the internal combustion engine
  • The invention of nuclear power
  • The invention of the first computer 
  • The invention of the first microprocessor
And this is just a short list. Basically, I'm proposing that "singularity" is properly defined as every major invention in human history - the logic being that these major inventions shaped the world in ways that we couldn't predict or foresee at the time. The invention of AI, of mind uploading, of mind downloading, and of other advanced technologies will also become singularities, but they will be lowercase singularities, just like the above inventions were. That makes the singularity seem far more mundane and less majestic - but it also makes the singularity far more believable. Especially since it's already happened.
----
* It's debatable whether or not the Tyranids are a civilization
** It's debatable whether or not the Bene Tleilax are aliens
