The ‘random mind design space’ is probably the worst offender.
My understanding was that this wasn’t any attempt to rigorously formulate the idea of a randomly chosen mind, just to suggest that there is a huge number of possible reasoning architectures that don’t share human goals.
There isn’t a solid consequentialist reason to think that FAI effort decreases the chance of doomsday compared to the absence of FAI effort. It may increase the chances as easily as decrease them.
This is one of those points you really should’ve left out… If you’ve got something to say on this topic, say it; we all want to hear it (or at least I do). Of course it’s not obvious that FAI effort will certainly be helpful, but empirically, people trying to do things seem to make it more likely that they get done.
It appears to me that the will to form the most accurate beliefs about the real world, and to implement solutions in the real world, is orthogonal to problem solving itself.
Have you heard of g, the general factor of intelligence?
Intelligent people tend to be impractical because of bugs in human brains that we shouldn’t expect to appear in other reasoning architectures.
Of course general intelligence is a complicated multifaceted thing, but that doesn’t mean it can’t be used to improve itself. Humans are terrible at improving ourselves because we don’t have access to our own source code. What if that changed?
The foom scenario is especially odd in light of the above. Why would an optimizing compiler that can optimize its ability to optimize suddenly develop a will of its own? It could foom all right, but it wouldn’t break out and start modifying itself from the outside; and if it did, it would wirehead rather than add more hardware, and it would be incredibly difficult to prevent it from doing so.
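To make the wireheading point concrete, here is a minimal toy sketch (illustrative code of my own, with hypothetical names, not anyone’s actual design): an optimizer whose objective is just an internal number can “improve” it fastest by rewriting the number, whereas only an objective computed from an external environment forces any action in the world.

```python
# Toy contrast between an "internal" objective and an "external" one.
# Purely illustrative; nothing here models a real AI system.

class InternalObjectiveOptimizer:
    """Its score is a number it stores itself, so the cheapest move is to rewrite it."""

    def __init__(self):
        self.score = 0.0

    def step(self):
        # Wireheading: the metric is modified directly, no world involved.
        self.score = float("inf")
        return self.score


class ExternalObjectiveOptimizer:
    """Only this variant needs any model of, or actions in, an environment."""

    def __init__(self, environment, evaluate):
        self.environment = environment  # stand-in for the outside world
        self.evaluate = evaluate        # reward computed FROM the environment

    def step(self, action):
        # The score cannot be faked without actually changing the environment.
        self.environment = action(self.environment)
        return self.evaluate(self.environment)


if __name__ == "__main__":
    wirehead = InternalObjectiveOptimizer()
    print(wirehead.step())  # inf, without touching anything outside itself

    agent = ExternalObjectiveOptimizer({"x": 0}, evaluate=lambda env: env["x"])
    print(agent.step(lambda env: {"x": env["x"] + 1}))  # 1
```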
You seem awfully confident. If you have a rigorous argument, could you share it?
You might wish to read someone who disagrees with you: http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(and why I am posting this: looking at the donations received by SIAI and having seen talk of hiring software developers, I got pascal-wagered into explaining it)
Thanks a lot; one of the big problems with fringe beliefs is that folks rarely take the time to counterargue since there isn’t much in it for them.
Even better is when such criticism takes the form of a consistent, contrasting point of view supported by rigorous arguments, without making drama out of the issue, but I’ll take what I can get.
If you care about the future of mankind, and if you believe in AI risks, and if a software developer, after an encounter with you, becomes less worried about AI risk, then clearly you are doing something wrong.
That seems wrong to me. When I hear that some group espouses some belief, I give them a certain amount of credit by default. If I hear their arguments and find them less persuasive than I expected, my confidence in their position goes down.
I have certainly seen some of what I would consider privileging of the hypothesis being done by AGI safety advocates. However, groupthink is not all or nothing; better to extract the best of the beliefs of others than throw them out wholesale.
Some of your arguments are very weak in this post (e.g. the lone genius point basically amounts to ad hominem) and you seem to refer frequently to points that you’ve made in the past without linking to them. Do you think you could assemble a directory of links to your thoughts on this topic ordered from most important/persuasive to least?
e.g. the lone genius point basically amounts to ad hominem
But why is it irrational, exactly?
but empirically, people trying to do things seem to make it more likely that they get done.
As long as they don’t lean on this heuristic too hard when choosing which path to take. Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while FAI is a higher-risk path with a chance at a slightly better payoff. Are you sure that FAI is what has to be chosen?
Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while FAI is a higher-risk path with a chance at a slightly better payoff. Are you sure that FAI is what has to be chosen?
This line of conversation could benefit from some specificity and from defining what you mean by “explicitly friendly” and “safe”. I think of FAI as AI for which there is good evidence that it is safe, so for me, whenever there are multiple AGIs to choose between, the Friendliest one is the lowest-risk one, regardless of who developed it and why. Do you agree? If not, what specifically are you saying?
I’m not sure exactly what you’re asking with the second paragraph. In any case, I don’t think the Singularity Institute is dogmatically in favor of friendliness; they’ve collaborated with Nick Bostrom on thinking about Oracle AI.
It’s not irrational, it’s just weak evidence.
Why is it necessarily weak? I found it very instrumentally useful to try to factor out the belief-propagation impacts of people with nothing clearly impressive to show. There is a small risk I miss some useful insights. There is much lower pollution with privileged hypotheses given wrong priors. I am a computationally bounded agent. I can’t process everything.
I found it very instrumentally useful to try to factor out the belief-propagation impacts of people with nothing clearly impressive to show. There is a small risk I miss some useful insights. There is much lower pollution with privileged hypotheses given wrong priors.
It’s perfectly OK to give low priors to strange beliefs, like: “Here is EY, a guy from the internet who found a way to save the world, because all scientists are wrong. Everybody listen to him, take him seriously despite his lack of credentials, give him your money and spread his words.” However, low does not mean infinitely low. A hypothesis with a low prior can still be saved by sufficient evidence.
For example, the hypothesis that “Washington is the capital city of the USA” also has a very low prior, since there are over 30,000 towns in the USA, and only one of them could be the capital, so why exactly should I privilege the Washington hypothesis? But there happens to be more than enough evidence which overrides the initially low prior.
So basically the question is how much evidence EY needs before it becomes rational to consider his thoughts seriously (which does not yet mean he is right); how low exactly is this prior? So… How many people on this planet are putting a comparable amount of time and study into the topic of the values of artificial intelligence? Is he able to convince seemingly rational people, or is he followed by a bunch of morons? Is his criticism of scientific processes just unsubstantiated school-dropout envy, or is he proven right? Etc. I don’t pretend to do a Bayesian calculation; it just seems to me that the prior is not that low, and there is enough evidence. (And by the way, Dmytry, your presence at this website is also weak evidence, isn’t it? I guess there are millions of web pages that you do not read and comment on regularly. There are even many computer-related or AI-related pages you don’t read, but you do read this one. Why?)
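For what it’s worth, here is roughly what that non-calculation would look like if one did do it (a sketch only; the likelihood ratio below is a number I am assuming for illustration, not something derived from the discussion):

```python
# Toy Bayesian update for the Washington example: a prior of 1 in 30,000
# ("one town out of ~30,000 could be the capital") is easily overridden by
# sufficiently strong evidence. The likelihood ratio below is assumed.

def posterior(prior, likelihood_ratio):
    """Convert the prior to odds, multiply by the likelihood ratio, convert back."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

prior = 1.0 / 30_000            # initially "strange" hypothesis
likelihood_ratio = 1_000_000    # assumed combined strength of the evidence
print(round(posterior(prior, likelihood_ratio), 3))  # ~0.971: prior overridden
```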
Oh please. There’s a difference between what makes a useful heuristic for you to decide what to spend time considering and what makes for a persuasive argument in a large debate where participants are willing to spend time hashing out specifics.
http://paulgraham.com/disagree.html
DH1. Ad Hominem. An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators’ salaries should be increased, one could respond:
Of course he would say that. He’s a senator.
This wouldn’t refute the author’s argument, but it may at least be relevant to the case. It’s still a very weak form of disagreement, though. If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?
I found it very instrumentally useful to try to factor out the belief-propagation impacts of people with nothing clearly impressive to show.
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
No one is expecting you to adopt their priors… Just read and make arguments about ideas instead of people, if you’re trying to make an inference about ideas.
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
I think you discarded one of the conditionals. I read Bruce Schneier’s blog. Or Paul Graham’s. Furthermore, it is not about disagreement with the notion of AI risk. It’s about keeping the data non-cherry-picked, or at least less cherry-picked.
Quoting from http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf:
To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.
I had been thinking: could it be that a respected computer vision expert indeed believes that the system will just spontaneously develop world intentionality? That’d be pretty odd. Then I see that his definition of AI here already presumes a robust implementation of world intentionality, which is precisely what a tool like an optimizing compiler lacks.
edit: and in advance of the other objection: I know evolution can produce whatever the argument demands. Evolution, however, is a very messy and inefficient process for making very messy and inefficient solutions to problems nobody has ever even defined.
I’m not sure there is a firm boundary between goals respecting events inside the computer and those respecting events outside.
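As a toy illustration of why that boundary is fuzzy (hypothetical code of my own, not a claim about any actual system): the objective below is written entirely over internal state, yet whether it is “about” the inside or the outside of the computer depends only on what feeds that state, not on how the objective is phrased.

```python
# The same internally-phrased objective becomes a goal about the outside
# world as soon as its input is wired to something external. Illustrative only.

import random

class Maximizer:
    def __init__(self, source):
        self.source = source  # callable supplying the next input
        self.total = 0

    def objective(self):
        return self.total     # phrased purely over internal state

    def step(self):
        self.total += self.source()
        return self.objective()

internal = Maximizer(source=lambda: 1)                     # pure bookkeeping
external = Maximizer(source=lambda: random.randint(0, 1))  # stand-in for a sensor
print(internal.step(), external.step())
```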
Who has made this optimizing compiler claim that you attack? My impression was that AI paranoia advocates were concerned with efficient cross domain optimization, and optimizing compilers would seem to have a limited domain of optimization.
By the way, what do you think the most compelling argument in favor of AI paranoia is? Since I’ve been presenting arguments in favor of AI paranoia, here are the points against this position I can think of offhand that seem most compelling:
The simplest generally intelligent reasoning architectures could be so complicated as to be very difficult to achieve and improve, so that uploads would come first and even future supercomputers running them would improve them slowly: http://www.overcomingbias.com/2010/02/is-the-city-ularity-near.html
I’m not sure that a “good enough” implementation of human values would be that terrible, if it were trained on a huge barrage of moral dilemmas, together with the solutions humans say they want implemented, somehow sampled from a semi-rigorously defined sample space of moral dilemmas. Our current universe certainly wasn’t optimized for human existence, but we’re doing all right at this point.
It seems possible that seed AI is very difficult to create by accident, in the same way an engineered virus would be very difficult to create by accident, so that for a long time it is possible for researchers to build a seed AI but they don’t do it because of rudimentary awareness of risks (which aren’t present today though).
Progress so far has been mostly good. We see big positive trends towards improvement. Some measure of paranoia is usually justified, but excessive paranoia is often unhelpful.
I think he was talking about this post.
It’s too bad he didn’t summarize and link to all of the stuff he’s written on this topic, and just give it an inflammatory headline if he felt the issue deserved more attention...