The Singularity Swindle
First, men wanted to find the Garden of Eden, where milk, honey, spice and everything nice were to be found. It didn't happen.
Then men wanted to reach the Kingdom of God, where sins would be forgotten and peace and love would reign forever. Didn't happen.
Lately men want to achieve the Singularity, where Skynet does all the work and allows us to be free and idle, whiling away our days at polyamorous poetry readings. This may or may not involve having our bodies hooked to the Matrix.
Probably not going to happen.
Don't take my word for it though. Edge.org, which is just about the best place out there for academics to actually debate each other and reach the public, devoted this year's annual question to intelligent machines. Which is just code for the AI Singularity.
Understandably, 80% of the contributed articles were total fluff, as most people don't know crap about anything outside their own discipline, and few of the people invited actually have any expertise in how the human brain works or in whether computers can ever do the same things.
A few academics though do know something about the human brain, and they had this to say:
**The Singularity—an Urban Legend?**

The Singularity—the fateful moment when AI surpasses its creators in intelligence and takes over the world—is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility ("Well, in principle I guess it's possible!") coupled with a deliciously shudder-inducing punch line ("We'd be ruled by robots!"). Did you know that if you sneeze, belch, and fart all at the same time, _you die_? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation.
**Machines Won't Be Thinking Anytime Soon**

What I think about machines thinking is that it won't happen anytime soon. I don't imagine that there is any in-principle limitation; carbon isn't magical, and I suspect silicon will do just fine. But lately the hype has gotten way ahead of reality. Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we have "solved" AI doesn't realize the limitations of the current technology.

To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there has been scarcely more than linear progress in five decades of working towards strong AI. For example, the different flavors of "intelligent personal assistants" available on your smartphone are only modestly better than ELIZA, an early example of primitive natural language processing from the mid-60s.

We still have no machine that can, say, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class, or an eighth-grade science exam.

Why so little progress, despite the spectacular increases in memory and CPU power? When Marvin Minsky and Gerald Sussman attempted the construction of a visual system in 1966, did they envision super-clusters or gigabytes that would sit in your pocket? Why haven't advances of this nature led us straight to machines with the flexibility of human minds?

Consider three possibilities:

(a) We will solve AI (and this will finally produce machines that can think) as soon as our machines get bigger and faster.

(b) We will solve AI when our learning algorithms get better. Or when we have even Bigger Data.

(c) We will solve AI when we finally understand what it is that evolution did in the construction of the human brain.

Ray Kurzweil and many others seem to put their weight on option (a), sufficient CPU power. But how many doublings in CPU power would be enough? Have all the doublings so far gotten us closer to true intelligence? Or just to narrow agents that can give us movie times?

Option (b), big data and better learning algorithms, has so far gotten us only to innovations such as machine translation, which provides fast but mediocre translations by piggybacking on the prior work of human translators, without any semblance of thinking. The machine translation engines available today cannot, for example, answer basic queries about what they just translated. Think of them more as idiot savants than fluent thinkers.

My bet is on option (c). Evolution seems to have endowed us with a very powerful set of priors (or what Noam Chomsky or Steven Pinker might call innate constraints) that allow us to make sense of the world based on very limited data. Big Efforts with Big Data aren't really getting us closer to understanding those priors, so while we are getting better and better at the sort of problem that can be narrowly engineered (like driving on extremely well-mapped roads), we are not getting appreciably closer to machines with commonsense understanding, or the ability to process natural language. Or, more to the point of this year's Edge Question, to machines that actually think.
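To put Marcus's ELIZA jab in concrete terms, here is a minimal sketch (mine, not his) of the keyword-and-template matching ELIZA did in the mid-60s. The rules and example input are made up for illustration; the point is that there is no understanding anywhere in it, just surface pattern matching.

```python
import re
import random

# A handful of ELIZA-style rules: match a keyword pattern, echo back a canned
# template. There is no model of what the words mean, only surface matching.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I), ["What makes you feel {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I), ["Is that really the reason?"]),
]
FALLBACKS = ["Tell me more.", "I see. Please go on."]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I am worried about the Singularity."))
    # e.g. "Why do you say you are worried about the Singularity?"
```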
All the while Yudkowsky, who has made a good living out of claiming that we need to give him money RIGHT NOW or Skynet is gonna be sexist and discriminate against your favorite porn genders, goes off on a tangent and doesn't talk about whether AI is actually feasible or not.
Speaking of which, I wanna give bonus points to this guy who doesn't have any credentials, but I like how he thinks.
**The Great AI Swindle**

Smart people often manage to avoid the cognitive errors that bedevil less well-endowed minds. But there are some kinds of foolishness that seem only to afflict the very intelligent. Worrying about the dangers of unfriendly AI is a prime example. A preoccupation with the risks of superintelligent machines is the smart person's Kool-Aid.

This is not to say that superintelligent machines pose no danger to humanity. It is simply that there are many other more pressing and more probable risks facing us this century. People who worry about unfriendly AI tend to argue that the other risks are already the subject of much discussion, and that even if the probability of being wiped out by superintelligent machines is very low, it is surely wise to allocate some brainpower to preventing such an event, given the existential nature of the threat.

Not coincidentally, the problem with this argument was first identified by some of its most vocal proponents. It involves a fallacy that has been termed "Pascal's mugging," by analogy with Pascal's famous wager. A mugger approaches Pascal and proposes a deal: in exchange for the philosopher's wallet, the mugger will give him back double the amount of money the following day. Pascal demurs. The mugger then offers progressively greater rewards, pointing out that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and a rational person must surely admit there is at least some small chance that such a deal is possible. Finally convinced, Pascal gives the mugger his wallet.

This thought experiment exposes a weakness in classical decision theory. If we simply calculate utilities in the classical manner, it seems there is no way round the problem; a rational Pascal must hand over his wallet. By analogy, even if there is only a small chance of unfriendly AI, or a small chance of preventing it, it can be rational to invest at least some resources in tackling this threat.

It is easy to make the sums come out right, especially if you invent billions of imaginary future people (perhaps existing only in software—a minor detail) who live for billions of years, and are capable of far greater levels of happiness than the pathetic flesh and blood humans alive today. When such vast amounts of utility are at stake, who could begrudge spending a few million dollars to safeguard it, even when the chances of success are tiny?

Why do some otherwise very smart people fall for this sleight of hand? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream. For in the past few years they have managed to convince some very wealthy benefactors not only that the risk of unfriendly AI is real, but also that they are the people best placed to mitigate it. The result is a clutch of new organizations that divert philanthropy away from more deserving causes. It is worth noting, for example, that GiveWell—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.

But whenever an argument becomes fashionable, it is always worth asking the vital question—Cui bono?
Who benefits, materially speaking, from the growing credence in this line of thinking? One need not be particularly skeptical to discern the economic interests at stake. In other words, beware not so much of machines that think, but of their self-appointed masters.
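The sleight of hand is easy to reproduce with toy arithmetic. A rough sketch with made-up numbers (nobody's actual estimates): hold the probability fixed, let the promised payoff grow, and naive expected utility always ends up telling Pascal to hand over the wallet.

```python
# Toy illustration of the "Pascal's mugging" arithmetic (made-up numbers).
# Classical expected utility of taking the deal: EU = p * promised - wallet.
wallet = 100                      # utility of the wallet you hand over
p = 1e-12                         # fixed, tiny probability the mugger pays up

for promised in (1e3, 1e9, 1e15, 1e21):
    eu_accept = p * promised - wallet
    print(f"promised={promised:.0e}  EU(accept)={eu_accept:,.2f}")

# As long as p is held fixed, some promised payoff always pushes EU(accept)
# above zero, no matter how absurd the promise. That is the whole trick.
```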
21 comments
"We will solve AI when we finally understand what it is that evolution did in the construction of the human brain." Hah! Now there's a quote from a person who has never studied biology. Despite tons of effort, we can't even simulate the brain of c.elegans with any degree of accuracy, and elegans only has 900 neurons. Nor do we truly grasp the nature of the ubiquitous cell. GPCRs on cell membranes are extremely complicated -- they're still not understood by the pharmaceutical industry despite massive, billion-dollar R&D efforts -- and ion channels are even worse, they make GPCRs look positively sensible. To say nothing of kinases, and of the recently discovered microRNA pathways... There's a rule in biology: "Nothing is simple." The closer you look, the more you'll find -- and sometimes these new finds are massively significant. Given the current level of scientific understanding, we're lightyears away from understanding the construction of the human brain. (A) and (B) are far better, surer bets than (C). All that said, I agree with you. Is the AI singularity imminent? Probably not. Is the "singularity", in a more general sense, therefore a swindle? I'd argue very strongly that it isn't. There are many ways to attain something much like a singularity. For instance, direct brain-computer interfacing could markedly augment human capabilities. There may even be recursive potential to such self-enhancement. ...And of course there are other paths forward on the road to posthumanity. Should technological progress continue unabated, humanity's days as the dominant species on Earth are surely numbered.
That can also be understood as "only when", i.e. never. So he's making a good point. If as you say we can't even simulate the brain of c. elegans, what makes you think that direct brain-computer interfacing is remotely possible? And what are the other paths?
For one example of interesting research into brain-computer interfacing, see: DARPA: Cognitive Technology Threat Warning System. Also, this contains some interesting research and accounts: http://www.tandfonline.com/loi/tbci20 ...It's a field in its infancy, but it's definitely an active field. The fact that we understand so little about the brain's fundamental biology isn't much of a hindrance. In medicinal chemistry, you don't need to know everything about adrenergic GPCRs to figure out how to alter their function, e.g. with propranolol or clenbuterol. In BCI research, you don't need to know everything about the brain, you just need to know where to attach the wires. Other non-AI potential paths to a posthuman world may include genetic engineering, nootropics that actually work, and molecular nanotechnology. I'm sure there are more that I'm missing.
Genetic engineering or nootropics don't make anything post-human. They make it super-human. Which is awesome. Enhancement is cool. I don't see, though, how nootropics that work can be developed if we, again, don't understand how the brain works. If we understood how the brain works we could make computers that worked like it, we could make computer-brain interfaces, we could make nootropics that actually work. But we don't understand how the brain works, and we aren't making much progress, so that's off. Nanotech is vaporware if I've ever seen it. Genetic engineering does look cool though. I loved Cochran's "spellcheck" idea. I wonder how feasible it is though.
The line between superhuman and posthuman is fairly indistinct, and it's certainly possible that genetic engineering can result in beings who are no longer "human" in any recognizable way. (But if, and only if, it is taken very far indeed.) Nootropic-induced cognitive enhancement might feasibly be recursive, but otherwise it's probably rather limited. Relatively speaking, anyway. With regard to your comment on nootropic design, all I can say is that drug design has been fairly straightforward for years. Take the GPCRs, for instance. To this day, we barely understand them at all, at least on a fundamental molecular level. There are tons of protein-protein interactions that we can't model very well -- e.g. receptor dimerization -- and with respect to protein-drug interactions there are not only straightforward agonists and antagonists, but now it turns out that there are also inverse agonists, allosteric agonists, allosteric antagonists, partial agonists, agonist potentiators, transcriptional repressors, and more. There are also strange GPCRs that are "always on" unless an inverse agonist turns them off. And this just scratches the surface; the complexity of GPCRs is off the charts. ...But this lack of fundamental biological knowledge didn't stop the development of propranolol, the first beta adrenergic receptor (GPCR) antagonist, in the 1960s. And of course ephedrine has been used as an adrenergic stimulant for untold centuries. Modern drug discovery is extremely good at finding drugs which hit GPCRs, inhibit or activate enzymes, block ion channels, and inhibit or activate nuclear receptors. Also, proteins, synthetic antibodies, and decoy receptors have become very easy to make. With this sort of "toolkit", and with enough trial and error, I'd say that truly effective nootropics are indeed possible. We don't need to know the fundamental, underlying biology. It would help, but I think that we can do enough with the blunt tools already at our disposal.
Erebus' argument from biology is right on. Strong AI is Asimov's robots: autonomous beings moving about the world without direction. But this is obviously not enough. The autonomous robots must reproduce themselves, and Asimov's robots are manufactured. If one takes survival and reproduction in the world as the criterion, then modern attempts at AI, like Watson, do not even reach the level of functionality of the flu virus.
Naturally it just makes me want to talk about decision theory. I begin to think post-rationalism is a term recognizing that rationalism plateaued hard, causing or caused by ossification of culture. Post-rationalism is only non-rational in that it must reject the rationalist culture. (Note also, yet another reason every educated man needs to study physics in its native tongue.)

Several resolutions of Pascal's mugger:

- The odds of the mugger having the money and being willing to give it fall as a function of how much he's offering, meaning the expected value rapidly falls to zero. The problem typically assumes the probability of payoff is constant as a function of size, which is absurd.
- Significant digits. After not very long, the probability of payoff rounds to zero and is lost in the noise.
- The utility the mugger can feasibly offer has a plateau. Similarly, once the mugger is offering more than all the money in the world, the probability isn't _like_ zero, it just is zero.
- Or: system 1 has this situation handled. Making a good system 2 analysis of it is not worth the effort.
- Or, or: people make mistakes/there's noise in the system, which means if it were possible for a Pascal's mugging to work out, it would have happened by now, and it hasn't.

I'm sure there's more. The problem isn't classical decision theory. The problem is the hubris of classical decision theorists and self-described rationalists. Their human frailty makes them unfit in the face of pure reason and the might of logic.
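A toy version of the first resolution above, with made-up numbers and an assumed decay law chosen purely for illustration: once the probability of payout falls off faster than the promise grows, the expected value of taking the deal heads toward minus the wallet instead of blowing up.

```python
# Toy version of the first resolution: the probability that the mugger can and
# will pay falls off faster than the promised amount grows.
# The decay law below is an assumption chosen for illustration.
wallet = 100

def p_payout(promised: float) -> float:
    # Plausibility decays roughly with the square of the offer; any decay
    # faster than 1/promised gives the same qualitative result.
    return min(1.0, 1.0 / promised ** 2)

for promised in (1e3, 1e9, 1e15, 1e21):
    eu_accept = p_payout(promised) * promised - wallet
    print(f"promised={promised:.0e}  EU(accept)={eu_accept:,.2f}")

# EU(accept) now tends to -wallet as the promise grows: bigger promises make
# the deal worse, not better, and the mugging never gets off the ground.
```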
System 1 doesn't value money on a linear scale: there's a minimum acceptable amount, and a "good enough" beyond which there is no change in utility; and it takes into account other things besides monetary payoff, such as not wanting to be a sucker, or the dishonor of having to haggle with muggers. People who think like Pascal and actually respond to the Nigerian scammer if the offered money is high enough are generally regarded as stupid, i.e. as having worse brains.
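The same point as a rough sketch, with made-up parameters: cap the subjective value of money at "good enough" and charge a fixed penalty for dealing with muggers at all, and no quoted number gets the deal above water.

```python
import math

# Rough sketch of the point above: money is valued on a saturating scale, and
# dealing with muggers carries its own disutility. Parameters are made up.
WALLET = 100.0
GOOD_ENOUGH = 1e6        # beyond this, extra money adds almost nothing
SUCKER_PENALTY = 50.0    # disutility of haggling with a mugger at all

def subjective_value(amount: float) -> float:
    # Saturates near GOOD_ENOUGH no matter how large the promise gets.
    return GOOD_ENOUGH * (1.0 - math.exp(-amount / GOOD_ENOUGH))

def eu_accept(promised: float, p: float = 1e-6) -> float:
    return p * subjective_value(promised) - WALLET - SUCKER_PENALTY

for promised in (1e3, 1e9, 1e21):
    print(f"promised={promised:.0e}  EU(accept)={eu_accept(promised):,.2f}")

# Because subjective value is capped near GOOD_ENOUGH, no quoted number can
# push EU(accept) above zero once the wallet and the sucker penalty are counted.
```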
You're assuming that understanding how the human brain works is necessary to build a thinking machine, but we didn't need to replicate how birds fly to build airplanes. My guess is that we'll reach ASI through genetic programming (self-correcting software, DeepMind is on this right now) and general improvements in hardware rather than biological brain emulation (which is certainly more complex than we think, because biology is always terribly complex, as the first commenter said.) In any event you should read Bostrom's latest book. It is really worth it.
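For what the "genetic programming" route means in the simplest possible terms, here is a toy evolutionary search: mutate, select the fittest, repeat. It is a hill-climbing cousin of real genetic programming, with a made-up target string, and nothing like what DeepMind or anyone else actually does; it only illustrates the mechanism.

```python
import random

# Toy evolutionary search: mutate a candidate string, keep the fittest, repeat.
# Fitness here is just the number of characters matching a fixed target.
TARGET = "machines that think"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    children = [mutate(parent) for _ in range(100)]
    parent = max(children + [parent], key=fitness)   # keep the best so far
    generation += 1

print(f"matched target in {generation} generations")
```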
I also agree that the Singularity is overhyped, but blame the mainstream media, not the scientists - reasonable ones say not to expect anything before at least two decades, and that's a lot of time.
AI has always been two decades away. Marcus says it's because it's the best way to get funding without being expected to deliver results.
True enough; Bostrom himself says the 2-decades timeframe is bullshit. In fact, nobody knows when it's going to happen: it could be tomorrow (quite unlikely), it could be never (quite unlikely too, unless you believe man's mind is something godly). More likely somewhere in the middle. I'm interested in your own personal estimate. The same could be said of any previously announced, delayed and finally implemented technological evolution though. I am not sure the argument "it didn't happen as predicted, so it will never happen" holds up. Scientists announce optimistic timeframes to keep getting funded: yes. They do that all the time.
Precisely because a man's mind is not godly, it'll be hard, if not impossible, for a man's mind to develop an artificial mind. We are running out of smart people too, remember. A lot of things have never happened and they might very well be impossible. See flying cars, space colonies, fusion, world peace, chaste women.
You're thinking in a linear fashion. I programmed 8080 IBM PCs at one time and I see how fast computers are now compared to back then. Speech recognition is here, self-driving cars are here, and it will only get faster and faster. It's an exponential function, and that is difficult for humans to comprehend. Around 2025 you will be able to buy the equivalent of a human brain's processing power for $1000. Five years after that the same chip will have the computing power of a small village. It's very scary.
How much faster is a CPU today compared to one from 3 years ago? The growth rate has been declining for a while now. You are assuming that exponential growth will go on forever.
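How much the "exponential" buys you depends entirely on the doubling time you assume. A back-of-the-envelope sketch with assumed figures, not measurements:

```python
# Back-of-the-envelope projection of compute growth under different assumed
# doubling times. The figures are illustrative assumptions, not measurements.
def growth_factor(years: float, doubling_time_years: float) -> float:
    return 2 ** (years / doubling_time_years)

for doubling_time in (1.5, 2.5, 4.0):   # classic Moore's law pace vs. slower regimes
    factor = growth_factor(10, doubling_time)
    print(f"doubling every {doubling_time} yrs -> {factor:,.0f}x in 10 years")

# Doubling every 1.5 years gives roughly 100x per decade; stretch the doubling
# time to 4 years and the same decade buys only about 6x. "Exponential" tells
# you very little until you pin down the exponent.
```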