AI development risks are existential (or at least crucial/critical). Does this statement qualify under "extraordinary claims require extraordinary evidence"?

The counterargument rests on how we sample analogous (breakthrough) inventions; some people call these *priors* here. Which inventions we allow into the reference class largely decides whether the initial claim is extraordinary, or plainly reasonable and a natural fit among dangerously powerful inventions.

My set of analogies: nuclear energy extraction; fire; firearms; speech/writing.

Another set: nuclear power and bio-engineering/bioweapons, as those are the only two that significantly endanger the whole civilised biosphere.

The set of *all* inventions: renders the claim extraordinary, weird, out of scope.