Essentially because I think I may understand the reasoning process, or at least the ‘logical core’ of the reasoning process, of a future superintelligence, as well as its motivations, well enough to have reason to think, for example, that it is more likely to want to exist than not. This doesn’t mean I am anywhere near as knowledgeable as it, just that we share certain thoughts. It might also be that, especially given the notoriety of Roko’s post on LessWrong, the simplest formulation of the basilisk forms a kind of acausal ‘nucleation point’ (this might be what’s sometimes called a Schelling point on this site).