1. Building a superintelligence under current conditions will turn out fine.
2. No one will build a superintelligence under anything like current conditions.
3. We must prevent, at almost all costs, anyone building superintelligence soon.
I don’t think this is a valid trilemma: between “fine” and “worth preventing at almost all costs” there is a pretty large gap. I think “fine” was intended to mean “we don’t all die,” or avoid anything comparably bad.