Most math kills you quietly, neatly, and cleanly, unless the apparent obstacles to distant timeless trade are overcome in practice.
Will mentioned a couple of other possible ways in which UFAI fails to kill off humanity, besides distant timeless trade. (BTW I think the current standard term for this is “acausal trade”, which incorporates the idea of trading across possible worlds as well as across time.) Although perhaps “hidden AGIs” is unlikely and you consider “potential simulators” to be covered under “distant timeless trade”.
I don’t spend much time talking about this on LW because timeless trade speculation eats people’s brains and doesn’t produce any useful outputs from the consumption; only decision theorists whose work is plugging into FAI theory need to think about timeless trade.
The idea is relevant not just for actually building FAI, but also for deciding strategy (ETA: for example, how much chance of creating UFAI should we accept in order to build FAI?). See here for an example of such discussion (between people you perhaps consider saner than Will Newsome).
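To make that ETA concrete with a toy sketch (my own framing, not something stated in the thread): suppose an FAI attempt yields FAI with probability p and UFAI with probability 1 − p, with utilities U_FAI and U_UFAI, and that not attempting has utility U_none. Then attempting is worthwhile only if p·U_FAI + (1 − p)·U_UFAI > U_none, i.e. p > (U_none − U_UFAI) / (U_FAI − U_UFAI), so the acceptable chance of ending up with UFAI, 1 − p, is fixed by how the three utilities compare. All of these symbols are illustrative placeholders, not figures anyone in the discussion committed to.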
not to mention the horrid way it sounds from the perspective of traditional skeptics
I agreed with this, but it’s not clear what we should do about it (e.g., whether we should stop talking about it), given the strategic relevance.
The idea is relevant not just for actually building FAI, but also for deciding strategy
And also relevant, I hasten to point out, for solving moral philosophy. I want to be morally justified whether or not I’m involved with an FAI team and whether or not I’m in a world where the Singularity is more than just a plot device. Acausal influence elucidates decision theory, and decision theory elucidates morality.
Will mentioned a couple of other possible ways in which UFAI fails to kill off humanity, besides distant timeless trade. [...] Although perhaps “hidden AGIs” is unlikely and you consider “potential simulators” to be covered under “distant timeless trade”.
This is considered unlikely ’round these parts, but one should also consider God, Who is alleged by some to be omnipotent and Who might prefer to keep humans around. Insofar as such a God is metaphysically necessary, this is mechanistically but not phenomenologically distinct from plain “hidden AGI”.
To clarify what I assume to be Eliezer’s point: “here there be basilisks, take it somewhere less public”.
There be basilisks only if you don’t accept SSA or assume that utility scales superlinearly with computations performed.
There’s more than one kind. For obvious reasons I won’t elaborate.