Yes, #1 is equivalent to an early filter. #2 would be somewhat surprising, since there’s no physical law that disallows it.
#3 comes close to theology, and would imply low AI risk, since such entities would probably not allow a potentially dangerous AI to exist within any region they control.
#4 is essentially a rephrasing of #1.
#5 is possible, but it implies some strong reason why many civilizations would all reliably make the same choice.
For #4 and #5, what difference does it make whether biological beings make an ‘FAI’ that helps them or a ‘UFAI’ that kills them before going about its business?
Do you mean that an alien FAI may look very much like a UFAI to us? If so, I agree.
#1 is an early filter, meaning one that acts before our current stage; #4 would act around or after our current stage.
Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are just as inconsistent with our observations as murderous Berserkers.
I’m trying to get you to explain why you think the belief that “AI is a significant risk” should change our credence in any of #1-5, compared with not holding that belief.
Ah, I see.

OK, combinations. For each of #1-5 I’m assuming they are mutually exclusive, because I don’t want to juggle too many scenarios.
For AI risk, I’m assuming a paperclip maximizer as a reasonable example of a doomsday AI scenario.
1-high: We’d expect nothing visible.
1-low: We’d expect nothing visible.
2-high: This comes down to “impossible for whom?” Impossible for squishy meatbags, or impossible for an AI whose primary goal implies spreading? We’d still expect to see something weird as entire solar systems are engineered.
2-low: We’d expect nothing visible.
3-high: We’d expect nothing visible.
3-low: We’d expect nothing visible.
4-high: Implies something much more immediately deadly than AI risk, which we should be devoting our resources to avoiding instead.
4-low: We’d expect nothing visible.
5-high: We’d still expect to see the universe being converted into paperclips by someone who screwed up.
5-low: We’d expect nothing visible.
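The combinations above amount to a small lookup table. Here is a quick sketch in Python (the scenario labels and the wording of each entry are just my paraphrase of the list above, not anything beyond it):

```python
# Expected observations for each (Fermi explanation, AI-risk level) pair,
# paraphrasing the list above. Keys are (scenario, risk); values describe
# what we'd expect to see in the sky under that combination.
expectations = {
    ("1: early filter",         "high"): "nothing visible",
    ("1: early filter",         "low"):  "nothing visible",
    ("2: expansion impossible",  "high"): "something weird (engineered solar systems)",
    ("2: expansion impossible",  "low"):  "nothing visible",
    ("3: wizards in charge",    "high"): "nothing visible",
    ("3: wizards in charge",    "low"):  "nothing visible",
    ("4: late filter",          "high"): "nothing visible (some deadlier risk hits first)",
    ("4: late filter",          "low"):  "nothing visible",
    ("5: all choose to stay",   "high"): "paperclips from whoever screwed up",
    ("5: all choose to stay",   "low"):  "nothing visible",
}

# Only two of the ten combinations predict anything observable at all.
visible = [key for key, outcome in expectations.items() if "nothing" not in outcome]
print(visible)
```

The point the table makes visually: under almost every combination our empty sky is unsurprising, so the observation does little to discriminate between high and low AI risk.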
OK, fair point; there are a couple more options implied:

a: early filter,
b: low AI risk,
c: wizards already in charge who enforce low AI risk,
d: AI risk being far less important than some other really horrible, soon-to-come risk.