ok, combinations.
I'm assuming scenarios 1 to 5 are mutually exclusive, because I don't want to mess around with too many combinations.
For AI risk I’m assuming a paper clipper as a reasonable example of a doomsday AI scenario.
1-high: We’d expect nothing visible.
1-low: We’d expect nothing visible.
2-high: This comes down to “how impossible?”: impossible for squishy meatbags, or impossible even for an AI with a primary goal that implies spreading? Even in the latter case we’d still expect to see something weird as entire solar systems are engineered.
2-low: We’d expect nothing visible.
3-high: We’d expect nothing visible.
3-low: We’d expect nothing visible.
4-high: Implies something much more immediately deadly than AI risk, which we should be devoting our resources to avoiding instead.
4-low: We’d expect nothing visible.
5-high: We’d still expect to see the universe being converted into paperclips by someone who screwed up.
5-low: We’d expect nothing visible.
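Since this is just a cross-product of five scenarios against high/low AI risk, here's a minimal Python sketch of the same table (the scenario numbers and expectation strings are placeholders for the cases above, not anything rigorous):

```python
from itertools import product

# Minimal sketch of the cross-product above. The scenario numbers,
# risk levels, and expectation strings are placeholders for the
# cases discussed; this isn't a formal model.
SCENARIOS = [1, 2, 3, 4, 5]   # the five mutually exclusive scenarios
AI_RISK = ["high", "low"]     # paperclipper-style doomsday risk

# Combinations where we'd expect to see *something*; everything else
# defaults to "nothing visible".
EXPECTATIONS = {
    (2, "high"): "something weird: entire solar systems engineered",
    (4, "high"): "a nearer-term risk deadlier than AI",
    (5, "high"): "the universe converted into paperclips",
}

for scenario, risk in product(SCENARIOS, AI_RISK):
    print(f"{scenario}-{risk}: {EXPECTATIONS.get((scenario, risk), 'nothing visible')}")
```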
Ok so fair point made, there are a couple more options implied:
a: early filter,
b: low AI risk,
c: wizards already in charge who enforce low AI risk,
d: AI risk being far less important than some other really horrible soon-to-come risk.