I’m not very afraid of the government because I don’t think it’s a throw-warm-bodies-at-the-problem problem, and I don’t think it’s a throw-warm-computers-at-the-problem problem.
I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do. I don’t expect it to work this time, but if China or the NSA or Google or Goldman Sachs tries to do it with the computing power and AI researchers we’ll have 35 years from now, they very well might succeed, even without any deep philosophical insights. After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power. The problem is that this approach is very unlikely to yield something capable of Friendliness, and yet there are massive nearer-term incentives for China and the NSA and everyone else to race towards it.
Still, you might be able to outpace the Kludge AI approach via philosophical insight, as David Deutsch suggested. I think that’s roughly Eliezer’s hope. One reason for optimism about this approach is that top-notch philosophical skill looks to be extremely rare, and few computer scientists are encouraged to bother developing it, and even if they try to develop it, most of what’s labeled “philosophy” in the bookstore will actively make them worse at philosophy, especially compared to someone who avoids everything labeled “philosophy” and instead studies the Sequences, math, logic, computer science, AI, physics, and cognitive science. Since hedge funds and the NSA and Google don’t seem to be selecting for philosophical ability (in part because they don’t know what it looks like), maybe MIRI+FHI+friends can grab a surprisingly large share of the best mathematician-philosophers, and get to FAI before the rest of the world gets to Kludge AI.
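(To make the “no philosophical insight, just search plus compute” picture above concrete, here is a minimal toy sketch. It is purely illustrative: the target string, population size, and mutation rate are all made up.)

```python
# A toy evolutionary search; every number here is made up for illustration.
# Blind mutation plus selection "designs" the target bit string with no
# understanding of the problem, given enough candidates and generations.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [g ^ (random.random() < rate) for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("solved at generation", generation, "->", population[0])
```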
I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do. I don’t expect it to work this time, but if China or the NSA or Google or Goldman Sachs tries to do it with the computing power and AI researchers we’ll have 35 years from now, they very well might succeed, even without any deep philosophical insights. After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power. The problem is that this approach is very unlikely to yield something capable of Friendliness, and yet there are massive nearer-term incentives for China and the NSA and everyone else to race towards it.
Ah, yes, you expressed better than I could my other reason for thinking AI is most likely to be built by a big organization. I’d really been struggling with how to say that.
One thought I have, building on this comment of yours: while making kludge AI safe may look impossible, sometimes you have to shut up and do the impossible, and I wonder if making kludge AI safe might be the less-impossible option here.
EDIT: I’m also really curious to know how Eliezer would respond to the paragraph I quoted above.
I wonder if making kludge AI safe might be the less-impossible option here.
Yeah, that’s possible. But as I said here, I suspect that learning whether that’s true mostly comes from doing FAI research (and from watching closely as the rest of the world inevitably builds toward Kludge AI). Also: if making Kludge AI safe is the less-impossible option, then at least some FAI research probably works just as well for that scenario — especially the value-loading problem stuff. MIRI hasn’t focused on that lately but that’s a local anomaly: some of the next several open problems on Eliezer’s to-explain list fall under the value-loading problem.
I’m not sure how value-loading would apply to that situation, since you’re implicitly assuming a non-steadfast goal system as the default case of a kludge AI. Wouldn’t boxing be more applicable?
Well, there are many ways it could turn out to be that making Kludge AI safe is the less-impossible option. The way I had in mind was that maybe goal stability and value-loading turn out to be surprisingly feasible with Kludge AI, and you really can just “bolt on” Friendliness. I suppose another way making Kludge AI safe could be the less-impossible option is if it turns out to be possible to keep superintelligences boxed indefinitely but also use them to keep non-boxed superintelligences from being built, or something. In which case boxing research would be more relevant.
Wouldn’t it be a more effective strategy to point out to China, the NSA, Goldman Sachs, etc., that if they actually succeed in building a Kludge AI they’ll paper-clip themselves and die? I would figure that knowing it’s a deadly cliff would dampen their enthusiasm to be the first ones over it.
The issue is partially the question of AI Friendliness as such, but also the question of AI Controllability as such. They may well have the belief that they can build an agent which can safely be left alone to perform a specified task in a way that doesn’t actually affect any humans or pose danger to humans. That is, they want AI agents that can predict stock-market prices and help humans allocate investments without caring about taking over the world, or stepping outside of the task/job/role given to them by humans.
Hell, ideally they want AI agents they can leave alone to do any job up to and including automated food trucks, and which will never care about using the truck to do anything other than serving humans kebab in exchange for money, and giving the money to their owners.
Admittedly, this is the role currently played by computer programs, and it works fairly well. What needs to be pointed out to them is that extrapolating epistemically from Regular Computer Programs to Kludge AGIs is not sound reasoning.
(Or it could be sound reasoning, in the end, completely by accident. We can’t actually know what kind of process, with what kind of conceptual ontology, and what values over that ontology, will be obtained by Kludge AI efforts, since the Kludge efforts almost all use black-box algorithms.)
Wouldn’t it be a more effective strategy to point out to China, the NSA, Goldman Sachs, etc., that if they actually succeed in building a Kludge AI they’ll paper-clip themselves and die?
We’ve been trying, and we’ll keep trying, but the response to this work so far is not encouraging.
Yeah, you kind of have to deal with the handicap of being the successor-organization to the Singularity Institute, who were really noticeably bad at public relations. Note that I say “at public relations” rather than “at science”.
Hopefully you got those $3 I left on your desk in September to encourage PUBLISHING MOAR PAPERS ;-).
Actually, to be serious a moment, there are some open scientific questions here.
Why should general intelligence in terms of potential actions correspond to general world optimization in terms of motivations? If values and intelligence are orthogonal, why can’t we build a “mind design” for a general AI that would run a kebab truck as well as a human and do nothing else whatsoever?
Why is general intelligence so apparently intractable when we are a living example that provably manages to get up in the morning and act usefully each day without having to spend infinite or exponential time calculating possibilities?
Once we start getting into the realm of Friendliness research, how the hell do you specify an object-level ontology to a generally-intelligent agent, to deal with concepts like “humans are such-and-so agents and your purpose is to calculate their collective CEV”? You can’t even build Clippy without ontology, though strangely enough, you may be able to build a Value Learner without it.
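(A minimal sketch of that last contrast, purely as a toy illustration and not anyone’s actual proposal: a value learner that starts from a hypothesis space of candidate utility functions over raw observations, rather than from a hand-specified ontology, and concentrates its posterior using human preference feedback. The candidate set and the noisy-rationality numbers below are invented for the example.)

```python
# Toy value learner (an illustrative sketch, not MIRI's formalism).
# Instead of being handed an ontology ("paperclips are such-and-so"),
# the agent keeps a posterior over candidate utility functions defined
# on raw observation vectors, and updates from human preference feedback.
import random

# Hypothetical candidate utilities; in reality this space would be huge.
CANDIDATES = {
    "feature_0": lambda obs: obs[0],
    "feature_1": lambda obs: obs[1],
    "sum_all":   lambda obs: sum(obs),
}

def update(posterior, obs_a, obs_b, human_prefers_a):
    """Upweight candidate utilities that agree with the human's choice,
    under a noisy-rational human model (0.9 / 0.1 are made-up numbers)."""
    new = {}
    for name, u in CANDIDATES.items():
        agrees = (u(obs_a) > u(obs_b)) == human_prefers_a
        new[name] = posterior[name] * (0.9 if agrees else 0.1)
    total = sum(new.values())
    return {name: p / total for name, p in new.items()}

posterior = {name: 1.0 / len(CANDIDATES) for name in CANDIDATES}
# Simulated human who actually cares only about feature 0.
for _ in range(30):
    a = [random.randint(0, 9), random.randint(0, 9)]
    b = [random.randint(0, 9), random.randint(0, 9)]
    if a[0] != b[0]:
        posterior = update(posterior, a, b, human_prefers_a=a[0] > b[0])

print(posterior)  # mass concentrates on "feature_0"
```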
All of these certainly make a difference to whether the probable outcome of a Kludge AI is Clippy, FAI, or Kebab AI.

I did. :)
One reason for optimism about this approach is that top-notch philosophical skill looks to be extremely rare
Top-notch skill in any field is rare by definition, but philosophical skill seems more difficult to measure than skill in other fields. What makes you think that MIRI+FHI+friends are better positioned than, say, IBM, in this regard?

AFAICS, they have redefined “good philosopher” as “philosopher who does things our way”.
The David Deutsch article seems silly—as usual :-(

Deutsch argues of “the target ability” that “the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees”.
That makes no sense. Maybe a bigger brain alone would enable cumulative cultural evolution—and so all that would be needed is some more “add brain here” instructions. Yet: “make more of this” is hardly the secret of intelligence. So: Deutsch’s argument here is not coherent.

I think some parts of the article are wrong, but not that part, and I can’t parse your counterargument. Could you elaborate?
Looking into the difference between human genes and chimpanzee genes probably won’t help much with developing machine intelligence. Nor would it be much help in deciding how big the difference is.
The chimpanzee gene pool doesn’t support cumulative cultural evolution, while the human gene pool does. However, all that means is that chimpanzees are on one side of the cultural “tipping point”—while humans are on the other. Crossing such a threshold may not require additional complex machinery. It might just need an instruction of the form: “delay brain development”—since brains can now develop safely in baby slings.

Indeed, crossing the threshold might not have required gene changes at all—at the time. It probably just required increased population density—e.g. see: High Population Density Triggers Cultural Explosions.
I don’t think Deutsch is arguing that looking at the differences between human and chimpanzee genomes is a promising path for AGI insights; he’s just saying that there might not be all that much insight needed to get to AGI, since there don’t seem to be huge differences in cognitive algorithms between chimpanzees and humans. Even a culturally-isolated feral child (e.g. Dani) has qualitatively more intelligence than a chimpanzee, and can be taught crafts, sports, etc. — and language, to a more limited degree (as far as we know so far; there are very few cases).
It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.

Oh I see what you mean. Well, I certainly agree with that!
The point is that, almost paradoxically, computers so far have been good at doing tasks that are difficult for humans and impossible for chimps (difficult mathematical computations, chess, Jeopardy, etc.), yet they can’t do well at tasks which are trivial for chimps or even for dogs.
Which is actually not all that striking a revelation when you consider that when humans find something difficult, it is because it is a task or a procedure we were not built to do. It makes sense then that programs designed to do the hard things would not do them the way humans do. It’s the mind projection fallacy to assume that easy tasks are in fact easy, and hard tasks hard.
Which is actually not all that striking a revelation when you consider that when humans find something difficult, it is because it is a task or a procedure we were not built to do.
But this doesn’t explain why we can build and program computers to do it much better than we do.
We are actually quite good at inventing algorithmic procedures to solve problems that we find difficult, but we suck at executing them, while we excel at doing things we can’t easily describe as algorithmic procedures. In fact, inventing algorithmic procedures is perhaps the hardest task to describe algorithmically, due to various theoretical results from computability and complexity theory and some empirical evidence.
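(A self-contained toy illustration of how quickly “inventing algorithms” by brute force blows up: enumerating every expression in an invented two-operator language and searching for one that matches input/output examples. The candidate space roughly squares with each extra level of nesting depth.)

```python
# Toy brute-force "algorithm inventor" over an invented mini-language:
# enumerate all expressions built from {x, 1, +, *} up to a given depth
# and search for one consistent with input/output examples.
from itertools import product

LEAVES = ["x", "1"]
OPS = ["+", "*"]

def programs(depth):
    """All expression strings of at most the given nesting depth."""
    if depth == 0:
        return list(LEAVES)
    smaller = programs(depth - 1)
    exprs = list(smaller)
    for op, left, right in product(OPS, smaller, smaller):
        exprs.append("(%s %s %s)" % (left, op, right))
    return exprs

def synthesize(examples, max_depth=3):
    """Return the first enumerated expression matching all examples."""
    for depth in range(max_depth + 1):
        for expr in programs(depth):
            if all(eval(expr, {"x": x}) == y for x, y in examples):
                return expr
    return None

# Target behavior: f(x) = x*x + 1.
print(synthesize([(0, 1), (1, 2), (2, 5), (3, 10)]))
for d in range(4):
    print("depth", d, "->", len(programs(d)), "candidate programs")
# depth 0 -> 2, depth 1 -> 10, depth 2 -> 210, depth 3 -> 88410
```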
Some people consider this fact as evidence that the human mind is fundamentally non-computational. I don’t share this view, mainly for physical reasons, but I think it might have a grain of truth: While the human mind is probably computational in the sense that in principle (and maybe one day in practice) we could run low-level brain simulations on a computer, its architecture differs from the architecture of typical computer hardware and software in a non-trivial, and probably still poorly understood way. Even the most advanced modern machine learning algorithms are at best only crude approximations of what’s going on inside the human brain.
Maybe this means that we are missing some insight, some non-trivial property that sets intelligent processes apart from other typical computations, or maybe there is just no feasible way of obtaining human-level, or even chimp-level, intelligence without a neuromorphic architecture, a substantially low-level emulation of a brain.
Anyway, I think that this counterintuitive observation is probably the source of the over-optimistic predictions about AI: people, even experts, consistently underestimate how difficult apparently easy cognitive tasks really are.
But this doesn’t explain why we can build and program computers to do it much better than we do.
It absolutely does. These are things that humans are not designed to do. (Things humans are designed to do: recognizing familiar faces, isolating a single voice in a crowded restaurant, navigating from point A to point B by means of transport, walking, language, etc.) Imagine hammering in a nail with a screwdriver. You could do it, but not very well. When we design machines to solve problems we find difficult, we create solutions that don’t exist in the structure of our brain. It would be natural to expect that artificial minds would be better suited than we are to solve some problems.
Other than that, I’m not sure what you’re arguing, since that “counterintuitive observation” is exactly what I was saying.
After all, this is how evolution built general intelligence: no philosophical insight, just a bunch of specialized modules kludged together, some highly general learning algorithms, and lots of computing power.
The other thing evolution needed is a very complex and influenceable environment. I don’t know how complex an environment an AGI needs, but it’s conceivable that AIs have to develop outside a box. If that’s true, then the big organizations are going to get a lot more interested in Friendliness.
[I]t’s conceivable that AIs have to develop outside a box. If that’s true, then the big organizations are going to get a lot more interested in Friendliness.
Or they’ll just forgo boxes and friendliness to get the project online faster.
Maybe I’m just being pessimistic, but I wouldn’t count on adequate safety precautions with a project half this complex if it’s being run by a modern bureaucracy (government, corporate, or academic). There are exceptions, but it doesn’t seem like most of the organizations likely to take an interest care more about long-term risks than immediate gains.
I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do.
Oh, really. Both Google and MIRI are secretive organisations. Outsiders don’t really have much idea about what goes on inside them—because that’s classified. What does come out of them is PR material. When Peter Norvig says: “The goal should be superhuman partnership”, that is propaganda.

I’m not basing my claim on publicly available information about what Google is doing.
Classified information about supposedly leaked classified information doesn’t seem very credible. If you can’t spill the beans on your sources, why say anything? It just seems like baseless mud-slinging against a perceived competitor.
Note that this has, historically, been a bit of a problem with MIRI. Lots of teams race to create superintelligence. MIRI’s strategy seems to include liberal baseless insinuations that their competitors are going to destroy the world. Consider the “If Novamente should ever cross the finish line, we all die” case. Do you folk really want to get a reputation for mudslinging—and slagging off competitors? Do you think that looks “friendly”?
In I.T., focusing on your competitors’ flaws is known as F.U.D. I would counsel taking care when using F.U.D. tactics in public.
Classified information about supposedly leaked classified information doesn’t seem very credible.
It’s not that classified, if you know people from Google who engage with Google’s TGIFs.
MIRI’s strategy seems to include liberal baseless insinuations that their competitors are going to destroy the world. Consider the “If Novamente should ever cross the finish line, we all die” case.
There’s nothing special about Goertzel here, and I don’t think you can pretend you don’t know that. We’re just saying that AGI is an incredibly powerful weapon, and FAI is incredibly difficult. As for “baseless”, well… we’ve spent hundreds of pages arguing this view, and an even better 400-page summary of the arguments is forthcoming in Bostrom’s Superintelligence book.
It’s not mudslinging, it’s Leo Szilard pointing out that nuclear chain reactions have huge destructive potential even if they could also be useful for power plants.
We’re just saying that AGI is an incredibly powerful weapon, and FAI is incredibly difficult. As for “baseless”, well… we’ve spent hundreds of pages arguing this view, and an even better 400-page summary of the arguments is forthcoming in Bostrom’s Superintelligence book.
It’s not mudslinging, it’s Leo Szilard pointing out that nuclear chain reactions have huge destructive potential even if they could also be useful for power plants.
Machine intelligence is important. Who gets to build it using what methodology is also likely to have a significant effect. Similarly, operating systems were important. Their development produced large power concentrations—and a big mountain of F.U.D. from predatory organizations. The outcome set much of the IT industry back many years. I’m not suggesting that the stakes are small.
most of what’s labeled “philosophy” in the bookstore will actively make them worse at philosophy, especially compared to someone who avoids everything labeled “philosophy” and instead studies the Sequences, math, logic, computer science, AI, physics, and cognitive science.
There is, of course, no evidence for that claim.

I’m reminded of when someone once told me there was no evidence against Mormonism, and I remain dumbfounded that anyone would expect such a statement to increase support for their position. It’s incredibly rare to encounter a controversy so one-sided that you can’t find any evidence for the other side, but it’s sadly common to find opinionated people who are either ignorant of or unreasonably dismissive of all evidence contrary to those opinions.

The special irony in this case is that even the reductio ad absurdum “There is no progress in philosophy” is itself a philosophically defensible claim.
I don’t know where you are going with that. There is abundant evidence that studying X makes you better at X for most values of X, and someone who thinks there is an exception for some specific X needs to provide evidence for the somewhat extraordinary claim.

In fact, it seems like the typical crackpot claim that mainstream education in a domain is counterproductive.