Is your position that AGI isn’t a large x-risk, that it’s too hard to do anything about, or something else?
I don’t have a well-defended position. All I have is an estimate of my confidence that my action or inaction would affect the hypothetical AGI x-risk in a known way, and that confidence is too low to be worth acting on.
How broadly are you defining “FAI research”?
Any research that could be included in such an argument, in any area. Really, anything that would provide some certainty.
There are potentially promising interventions that are less targeted than the FAI research that MIRI is currently doing (e.g. lobbying for government regulations on AI research).
I have extremely low confidence that these interventions can affect the hypothetical AGI x-risk in the desired direction.
Can you clarify what sorts of counterfactual history scenarios you have in mind?
I can’t imagine anything convincing. Similarly, I don’t find the argument “if one of the Hitler assassination attempts had been successful, WWII would have been avoided” compelling. That’s not to say that one should not have tried to assassinate him at the time, given the information available. But a valid reason to carry out such an assassination attempt would have to be something near-term and high-confidence, like reducing the odds of further poor military decisions or something.
> I don’t have a well-defended position. All I have is an estimate of my confidence that my action or inaction would affect the hypothetical AGI x-risk in a known way, and that confidence is too low to be worth acting on.
This is close to my current position, but I would update if I learned that there’s a non-negligible chance of AGI within the next 20 years.
> I have extremely low confidence that these interventions can affect the hypothetical AGI x-risk in the desired direction.
This is the issue under investigation.
> I can’t imagine anything convincing. Similarly, I don’t find the argument “if one of the Hitler assassination attempts had been successful, WWII would have been avoided” compelling. […]
What about policies to reduce chlorofluorocarbon (CFC) emissions that would otherwise have depleted the ozone layer?
Well, there is no need for any fancy counterfactual history there; the link was confirmed experimentally with high confidence.
Yes, the Montreal Protocol, an extremely successful international treaty.
By the way, do I know you personally? Feel free to email me at jsinick@gmail.com if you’d like to correspond.
I doubt it. And I don’t think I have much to contribute to any genuine AGI/risk research.