Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures (of money, pride, the possibility of not being the first to publish, etc.) are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
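To make the "local costs, global penalties" structure concrete, here is a toy free-rider calculation. It is purely an illustrative sketch: the actor counts, costs, and risk numbers are all made up for the example, and the function is not anything proposed in the discussion. Each actor pays a private cost to invest in risk reduction, while its own investment only slightly lowers the shared expected penalty, so defecting dominates individually even though universal investment is collectively better.

```python
# Toy free-rider model of "local costs, global penalties".
# Every number here is invented purely for illustration.

def expected_payoff(i_invest, others_investing, private_cost=1.0,
                    disaster_loss=100.0, base_risk=0.10,
                    risk_cut_per_investor=0.005):
    """Expected payoff to one actor (higher is better)."""
    investors = others_investing + (1 if i_invest else 0)
    risk = max(base_risk - risk_cut_per_investor * investors, 0.0)
    return -(private_cost if i_invest else 0.0) - risk * disaster_loss

for others in (0, 5, 9):  # how many of the nine other actors invest
    print(f"{others} others invest -> "
          f"invest: {expected_payoff(True, others):6.2f}, "
          f"defect: {expected_payoff(False, others):6.2f}")

# Whatever the others do, defecting pays 0.5 more than investing, yet
# everyone-invests (-6.00 each) beats nobody-invests (-10.00 each).
```

With these (assumed) numbers an actor invests only if its own marginal risk reduction times the loss it personally bears exceeds the private cost (here 0.5 < 1.0), which is the misalignment being claimed for both cases.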
RSI capabilities could be charted, and are likely to be AI-complete.
This is to be taken arguendo, not as the author’s opinion, right? See IEM on the minimal conditions for takeoff. Albeit if “AI-complete” is taken in a sense of generality and difficulty rather than “human-equivalent” then I agree much more strongly, but this is correspondingly harder to check using some neat IQ test or other “visible” approach that will command immediate, intuitive agreement.
Which historical events are analogous to AI risk in some important ways?
Most obviously molecular nanotechnology à la Drexler; the other ones seem too ‘straightforward’ by comparison. I’ve always modeled my assumed social response for AI on the case of nanotech, i.e., funding except for well-connected insiders, the term being broadened to meaninglessness, lots of concerned blither by ‘ethicists’ unconnected to the practitioners, etc.
Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures (of money, pride, the possibility of not being the first to publish, etc.) are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
Climate change doesn’t have the aspect that “if this ends up being a problem at all, then chances are that I (or my family/...) will die of it”.
(Agree with the rest of the comment.)
Climate change doesn’t have the aspect that “if this ends up being a problem at all, then chances are that I (or my family/...) will die of it”.
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change do so. This has also been true in AI risk so far.
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous).
Hm! I cannot recall a single instance of this. (Hm, well: I can recall one instance of a TV interview with a politician from a non-first-world island nation taking seriously projections which would put his nation under water, so it would not be much of a stretch to think that he’s taking seriously the possibility that people close to him may die from this.) If you have, that is probably because I haven’t read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update?
Many others do not believe it about AI.
Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being as well-known as climate change analyses are today. I can see how it could lead to an outcome similar to today’s situation with climate change if that happened… Still, if the analysis says “you will die of this”, and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than climate change does (as it has looked to me so far).
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous).
Hm! I cannot recall a single instance of this.
Will keep an eye out for the next citation.
Still, if the analysis says “you will die of this”, and the brain of the person considering the analysis is willing to assign it some credence
This has not happened with AI risk so far among most AI folk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don’t automatically change their behavior as soon as they are told that something bad might happen to them personally; and when we were breaking it up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles.
People only avoid certain sorts of death risks under certain circumstances.
Thanks!
Point. Need to think.
We had a similar conversation at MIRI once, in which I was arguing that, no, people don’t automatically change their behavior as soon as they are told that something bad might happen to them personally
Being told something is dangerous =/= believing it is =/= alieving it is.
Right. I’ll clarify in the OP.
Albeit if “AI-complete” is taken in a sense of generality and difficulty rather than “human-equivalent” then I agree much more strongly, but this is correspondingly harder to check using some neat IQ test or other “visible” approach that will command immediate, intuitive agreement.
This seems implied by X-complete. X-complete generally means “given a solution to an X-complete problem, we have a solution for every problem in X”.
E.g., NP-complete: given a polynomial-time solution to any NP-complete problem, any problem in NP can be solved in polynomial time.
(Of course, the technical nuance of what “X-complete” actually asserts is such that I expect most people to imagine the wrong thing, like you say.)
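To illustrate what the “X-complete” property buys you, here is a minimal sketch, not anything from the discussion above: because SAT is NP-complete, a polynomial-size reduction from another NP problem (here, graph 3-coloring) lets any SAT solver answer it. The graph, the function names, and the brute-force “oracle” are all invented for the example.

```python
# Minimal sketch of "solve one NP-complete problem, solve them all":
# reduce graph 3-coloring to CNF-SAT, then hand the clauses to a SAT
# "oracle" (brute force here, purely for illustration).
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Stand-in SAT oracle. Clauses use DIMACS-style signed ints (-3 = 'not x3')."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

def coloring_to_sat(vertices, edges, k=3):
    """Polynomial-time reduction: variable (v, c) means 'vertex v gets color c'."""
    var = {(v, c): i + 1
           for i, (v, c) in enumerate((v, c) for v in vertices for c in range(k))}
    clauses = [[var[(v, c)] for c in range(k)] for v in vertices]  # every vertex colored
    clauses += [[-var[(u, c)], -var[(v, c)]]                       # neighbors differ
                for (u, v) in edges for c in range(k)]
    return len(var), clauses, var

def three_color(vertices, edges):
    num_vars, clauses, var = coloring_to_sat(vertices, edges)
    model = brute_force_sat(num_vars, clauses)
    if model is None:
        return None
    return {v: next(c for c in range(3) if model[var[(v, c)] - 1]) for v in vertices}

# A 4-cycle: 2-colorable, so certainly 3-colorable.
print(three_color(["a", "b", "c", "d"],
                  [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```

The reduction is what carries the “complete” claim: its output is polynomial in the size of the graph, so only the (deliberately naive) SAT oracle does exponential work, and any better SAT solver would immediately give a better 3-coloring solver.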