Second, when I ask myself “what argument/logic/experiment would convince me to take the AGI x-risk seriously enough to personally try to do something about it?”, I come up with nothing.
Is your position
AGI isn’t a large x-risk
It’s too hard to do anything about it
or something else?
Just to take an extra step toward MIRI: suppose it had a convincing argument that, without FAI research, the odds of human extinction due to UFAI are at least 10% (with high confidence), and that FAI research can reduce those odds to, say, 1% (again, with high confidence). Then I would possibly reevaluate my attitude.
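To make the hypothetical concrete, the figures above can be turned into a back-of-the-envelope expected-value sketch. Everything here is an assumption: the 10% and 1% odds come from the hypothetical in the comment, and the population figure is an arbitrary stand-in for the value at stake:

```python
# Back-of-the-envelope sketch using the hypothetical odds above.
# All figures are assumptions, not estimates from any actual source.
p_without = 0.10      # assumed odds of extinction due to UFAI without FAI research
p_with = 0.01         # assumed odds if FAI research succeeds
lives_at_stake = 8e9  # stand-in: current world population only, ignoring future generations

risk_reduction = p_without - p_with
expected_lives_saved = risk_reduction * lives_at_stake

print(f"absolute risk reduction: {risk_reduction:.0%}")
print(f"expected lives saved (current population only): {expected_lives_saved:.2e}")
```

The sketch only shows why the conclusion hinges entirely on the confidence one assigns to the 10% and 1% inputs, which is precisely the point under dispute in this thread.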
You and I might be on the same page here. How broadly are you defining “FAI research”?
I don’t see how any of the mentioned historical examples can do that. And definitely not any kind of counterfactual-history scenario; those have too low confidence to be taken seriously.
There are potentially promising interventions that are less targeted than the FAI research that MIRI is currently doing (e.g. lobbying for government regulations on AI research).
Can you clarify what sorts of counterfactual history scenarios you have in mind?
Is your position: AGI isn’t a large x-risk; it’s too hard to do anything about it; or something else?
I don’t have a well-defended position. All I have is an estimate of confidence that my action or inaction would affect the hypothetical AGI x-risk in a known way. And that confidence is too low to be worth acting upon.
How broadly are you defining “FAI research”?
Any research included in such an argument, in any area. Really, anything that provides some certainty.
There are potentially promising interventions that are less targeted than the FAI research that MIRI is currently doing (e.g. lobbying for government regulations on AI research).
I have extremely low confidence that these interventions can affect the hypothetical AGI x-risk in the desired direction.
Can you clarify what sorts of counterfactual history scenarios you have in mind?
I can’t imagine anything convincing. Similarly, I don’t find the argument “if one of the Hitler assassination attempts had succeeded, the subsequent catastrophe would have been avoided” compelling. That is not to say that one should not have tried to assassinate him at the time, given the information available. But a valid reason to carry out such an attempt would have had to be something near-term and high-confidence, like reducing the odds of further poor military decisions.
I don’t have a well-defended position. All I have is an estimate of confidence that my action or inaction would affect the hypothetical AGI x-risk in a known way. And that confidence is too low to be worth acting upon.
This is close to my current position, but I would update if I learned that there’s a non-negligible chance of AGI within the next 20 years.
I have extremely low confidence that these interventions can affect the hypothetical AGI x-risk in the desired direction.
This is the issue under investigation.
I can’t imagine anything convincing. Similarly, I don’t find the argument “if one of the Hitler assassination attempts had succeeded, the subsequent catastrophe would have been avoided” compelling. That is not to say that one should not have tried to assassinate him at the time, given the information available. But a valid reason to carry out such an attempt would have had to be something near-term and high-confidence, like reducing the odds of further poor military decisions.
What about policies to reduce chlorofluorocarbon (CFC) emissions that would otherwise deplete the ozone layer?
Well, there is no need for any fancy counterfactual history there; the link was confirmed experimentally with high confidence.
Yes, the Montreal Protocol, an extremely successful international treaty.
By the way, do I know you personally? Feel free to email me at jsinick@gmail.com if you’d like to correspond.
I doubt it. And I don’t think I have much to contribute to any genuine AGI/risk research.