Two Singularities

Posted by Spandrell

I just finished Mass Effect 3, which I heartily recommend. Hell of a game. The ending was quite lame, sort of a cross between Contact and Terminator, but still. I imagine it must be hard to come up with an ending for a story of this scale.

For those unaware of it, Mass Effect is a series of games set in space in the 22nd century. One of the most interesting parts of the script is how it portrays artificial intelligence in this sci-fi universe. There are some robots around (they call them 'mechs'), but of course that's hardware, which is unimportant. What's important is software, and in ME there are two kinds of intelligent software. There are VIs, virtual intelligences, which are human-like interfaces programmed for some particular task. They speak and respond to language, but they are limited to whatever task they are programmed to do. It might be as complex as running a ship or controlling a factory by itself. But it's still a program.

Then there are AIs, artificial intelligences, which are self-aware, self-modifying, fully sentient intelligences. What we normally think of as an AI. Those are generally deemed to be dangerous, as they tend to go rogue, but you can still find them now and then. In this fictitious world both VIs and AIs are incredibly advanced, yet people aren't much different from real-life humans. 'Organics' are also said to have lots of enhancements, genetic engineering and whatnot; but it really doesn't show in the casting. Yours is no team of uber-geniuses.

But somebody has to code that software. Today we have made great progress in programming, but we aren't anywhere close to a functional AI, much less a self-programming, singularity-inducing super AI. We simply don't have the skills, and won't have them for decades to come, at least. I mean, Google can't even design a proper email layout. And they're pretty good, relatively speaking.

Which takes me to the most likely scenario before we get a Singularity. Before we reach the point at which we can code a sort of human brain into a computer, we'll get complete knowledge of the genetic basis of intelligence. And we will be able to act upon that knowledge. Genetic engineering. Steve Hsu says it's easy. It certainly sounds so.
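
Hsu's claim, as I understand it, rests on intelligence being massively polygenic but mostly additive: once the variants and their weights are known, the "genetic basis" is little more than a weighted sum over a genotype. A minimal sketch of that model; every variant name, effect size, and genotype here is invented purely for illustration:

```python
# Toy additive polygenic score, the kind of model behind claims like Hsu's.
# All variant names, effect sizes, and genotypes below are invented.

# Effect sizes: trait points added per copy of the trait-increasing allele.
effect_sizes = {
    "snp_001": 0.12,
    "snp_002": -0.08,
    "snp_003": 0.05,
}

# One individual's genotype: copies (0, 1, or 2) of each scored allele.
genotype = {
    "snp_001": 2,
    "snp_002": 0,
    "snp_003": 1,
}

def polygenic_score(genotype, effect_sizes):
    """Additive model: predicted trait deviation is just the sum of
    (allele copies * per-allele effect) over all measured variants."""
    return sum(effect_sizes[snp] * copies for snp, copies in genotype.items())

print(polygenic_score(genotype, effect_sizes))  # 0.29
```

The hard part is not this arithmetic but estimating the weights, which takes enormous samples of genotype-phenotype pairs; that is the part Hsu argues is tractable.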

After we get that right, then we can go on with coding AIs and making fancy robots. I think we would need a 130 average IQ or so before we can have FTL travel and all that. But listening to the hype you would think it's just around the corner. We just need to wait. Meanwhile 90% of software innovation today is spent on tracking people's browsing to sell them targeted ads. Or Groupon.
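
To put rough numbers on that claim: assuming IQ stays normally distributed with a standard deviation of 15 (an assumption, like every figure here), shifting the mean from 100 to 130 transforms the far right tail, which is where the people who would actually build the AIs live. A back-of-envelope check:

```python
# Back-of-envelope: how a 30-point shift in mean IQ changes the high tail.
# Assumes IQ remains normal with SD 15 after the shift; purely illustrative.
from math import erfc, sqrt

def frac_above(threshold, mean, sd=15.0):
    """P(IQ > threshold) under a normal distribution."""
    return 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))

for mean in (100, 130):
    for threshold in (130, 160):
        print(f"mean {mean}: fraction above {threshold} = "
              f"{frac_above(threshold, mean):.2e}")
```

The fraction above 160 goes from roughly 1 in 30,000 to roughly 1 in 40, a gain of nearly three orders of magnitude in the pool of potential AI-and-fusion builders.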

No, we aren't going anywhere with software. The Singularity will be biological.

UPDATE: Greg Cochran seems to agree. If Cochran and Hsu say so, I'm in.


9 comments

  • These aren't really the only options. Robin Hanson predicts that we'll kick off the singularity not by programming an AI, but by reverse-engineering human brains and then simulating them (without necessarily understanding all the details of how they actually work). He has written some interesting analysis of what might happen under this scenario, which is far more realistic and down-to-earth than the Kurzweilian stuff that unfortunately has much more public prominence. (Hanson's predictions are extremely grim -- he predicts a return to a Malthusian state because the simulations can be multiplied instantly at low marginal cost -- though his views are peculiar enough that he sees this development as optimistic.)

    • I've read Hanson's theory. I think we'll understand the genetic basis of intelligence way before we can reverse-engineer a brain. Does he even know how that would work?

      • "Does he even know how that would work?" No. I'm almost certain that the singularity idea is a pathogen in the Hanson host. Seriously, talk about unknown unknowns... A singularity would change everything, by definition. It will all be unexpected, or we could take a short cut and just do it now. There's more known unknowns than knowns, let alone... I suppose I should actually read those Hsu and Cochran links.

        • By all means do. Also read Vladimir's link below, it seems the emulator crowd do have a (vague) idea of what they are doing.

      • Basically, we need three things: deeper neuroscience insight into how neurons work (i.e. how exactly their functionality maps onto information processing), a technology to scan a brain with enough detail to reveal all the connections between neurons, and massive amounts of computing power to upload this information and simulate the brain. Of course, trying to guess the exact course of technological progress is a fool's game -- after all, if I had any extraordinary insight into it, I'd use it to get rich investing, not for idle speculation in blog comments. But it seems quite plausible that all three breakthroughs could happen before we make sufficient progress in genetic engineering to do the sort of stuff you write about. Consider that both neurons and genes work in extremely complicated ways, and it will take a long time before we can figure out their exact mechanism. (Yes, it's sometimes simple, like when a single bad gene is responsible for some defect, but for complex traits like intelligence, it really is convoluted.) However, with genes, we have to figure out the exact mechanism before we can intervene in it constructively, whereas with neurons, it may be possible just to emulate an existing configuration found in an actual brain without understanding how exactly it works. This paper by Sandberg and Bostrom has a lengthy discussion of the technical challenges of brain emulation: http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf I'm not really competent to judge their conclusions reliably, but they do seem plausible as far as I can tell.
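
        For a sense of the scale of that third requirement, here is a back-of-envelope estimate in the spirit of the Sandberg-Bostrom roadmap; every constant in it is a rough order-of-magnitude assumption, not a measured fact:

        ```python
        # Rough compute estimate for a spiking-level brain emulation.
        # Every constant is an order-of-magnitude guess, not a measurement.
        neurons = 1e11              # ~100 billion neurons in a human brain
        synapses_per_neuron = 1e4   # ~10,000 synapses per neuron, on average
        firing_rate_hz = 10.0       # assumed mean spike rate
        flops_per_event = 10.0      # assumed cost of one synaptic update

        synapses = neurons * synapses_per_neuron             # ~1e15 synapses
        flops = synapses * firing_rate_hz * flops_per_event  # ~1e17 FLOPS
        print(f"~{flops:.0e} FLOPS for a spiking-level emulation")
        # Finer-grained emulation levels (compartment or molecular detail)
        # would multiply this figure by many orders of magnitude.
        ```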

        • There's the ethical problem of how to treat an AI which is based on an actual human. Knowing liberals, we'll have to allow them to form a union and work limited hours. And if they ask for a physical body the state will have to provide it. And if they want a sex change the state will have to provide the new genital parts and pay for the exchange. We're better off with genetics.

          • Oh man, spell-checking the genome! Can of worms, or chest of Jormungandr? It would work too. Thirty points at least, I'd guess, what with height, muscle tone, immune function, vitamin efficiency... all of which would feed back into a more efficient brain. I wonder how fast new typos would accumulate? Though the main problem would be fixing the errors without causing more. A slow singularity, though: it's not like we don't already have 130s. So slow it sounds reasonable, in fact.
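
            In the crudest possible caricature, the "spell check" just means flagging an individual's rare variants against the population consensus. A toy sketch with invented data; calling real deleterious variants is vastly harder than a frequency cutoff:

            ```python
            # Caricature of genome spell-checking: flag variants that are rare
            # in the population (a crude proxy for deleterious mutation load)
            # and propose reverting them to the common allele. Data invented.
            population_frequency = {
                "var_A": 0.45,
                "var_B": 0.002,   # rare: a suspicious 'typo'
                "var_C": 0.30,
                "var_D": 0.0001,  # very rare: a suspicious 'typo'
            }

            individual_variants = ["var_A", "var_B", "var_D"]
            RARE_THRESHOLD = 0.01  # arbitrary cutoff for 'probably a typo'

            typos = [v for v in individual_variants
                     if population_frequency[v] < RARE_THRESHOLD]
            print("candidate corrections:", typos)  # ['var_B', 'var_D']
            ```

            The catch is exactly the one raised above: most rare variants are neutral, so naive correction, plus editing errors, could introduce more typos than it fixes.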

            • My point is that to produce the technical singularity (kickass AIs) we need more smart people. A population averaging 130 could get AIs and fusion working in a decade. So it's a two-step singularity, so to speak.

              • Whole-brain sims still need to know what they have to simulate. I found a potential unknown unknown, which gave me the perspective necessary to see how much ignorance there is. (It's mostly ignorance.) Physics is probably not causally closed. The primary assumption of probability is that events are independent. If you explicitly wire up a quantum decoherence event to feed back into itself, creating dependence, you can construct a device that theoretically has a zero percent chance of being in any particular state in the future. As the device can't tell how long it has been running without clairvoyance, it has a zero percent chance of being in a particular state right now. Pure physical causation is busted. Odds that they'll include that in their sim? Odds they'll find it after noticing the sims don't work?
