P1 is almost certainly an overestimate: independent of everything else, there is surely more than a 0.001% chance that a civilization-ending event occurs before anyone gets around to building a superintelligence. The potential importance of AI research implied by this chain of logic wouldn't be lowered much if you used 80% or 90% instead, though.
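As a rough illustration (assuming, hypothetically, that P1 enters the overall estimate as one multiplicative factor alongside the chain's other probabilities, which are stand-in values here), lowering P1 from 99.999% to 80% only scales the final number by a factor of about 0.8:

```python
# Hypothetical sketch: treat the overall estimate as a product of the
# probabilities in the chain. The other factors below are made-up
# placeholders, not values from the original argument.
other_factors = 0.1 * 0.5 * 0.2  # stand-ins for P2, P3, ... in the chain

for p1 in (0.99999, 0.9, 0.8):
    estimate = p1 * other_factors
    print(f"P1 = {p1:.5f} -> overall estimate = {estimate:.4f}")

# Swapping 99.999% for 80% shrinks the result by only ~20%,
# far less than an order of magnitude, so the conclusion barely moves.
```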