#1 is an early filter, meaning before our current state; #4 would be around or after our current state.
Do you mean that an alien FAI may look very much like a UFAI to us? If so, I agree.
Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.
I’m trying to get you to explain why you think a belief that “AI is a significant risk” would change our credence in any of #1-5, compared to not believing that.
Ah, I see. OK, combinations. For each of #1-5 I'm assuming they're mutually exclusive, because I don't want to mess around with too many scenarios.
For AI risk I'm assuming a paperclipper as a reasonable example of a doomsday AI scenario (there's a toy Bayes sketch of the resulting grid after the list below).
1-high: We’d expect nothing visible.
1-low: We’d expect nothing visible.
2-high: This comes down to “how impossible?” Impossible for squishy meatbags, or impossible for an AI with a primary goal that implies spreading? We’d still expect to see something weird as entire solar systems are engineered.
2-low: We’d expect nothing visible.
3-high: We’d expect nothing visible.
3-low: We’d expect nothing visible.
4-high: Implies something much more immediately deadly than AI risk, which we should be devoting our resources to avoiding instead.
4-low: We’d expect nothing visible.
5-high: We’d still expect to see the universe being converted into paperclips by someone who screwed up.
5-low: We’d expect nothing visible.
OK, fair point; there are a couple more options implied:
a: early filter,
b: low AI risk,
c: wizards already in charge who enforce low AI risk,
d: AI risk being far less important than some other really horrible, soon-to-come risk.