I completely agree with solving actual problems instead of only working on Scanless Whole Brain Emulation :). I also agree that just working on science and seeing what comes up is valuable.
Both simulated humans and other paths to superintelligence will be subject to AI race pressures. My claim is that, given the same level of race pressure, simulated humans are safer. Current AI labs are willing to wait months before releasing their models; the question is whether that is enough.
I hadn’t thought of that; it’s a very good point! They should avoid killing copies, and maybe save them to be revived in the future. I strongly suspect that compute is more of a bottleneck than storage space. (You can store the largest AI models on a typical hard drive, but you won’t have enough compute to run them.)
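To make the storage-vs-compute point concrete, here is a rough back-of-envelope sketch in Python. The model size, weight precision, and CPU throughput are all illustrative assumptions, not measurements of any real system:

```python
# Back-of-envelope: storage vs. compute for a large model.
# All numbers below are illustrative assumptions.

params = 1e12            # assume a 1-trillion-parameter model
bytes_per_param = 2      # fp16 weights

storage_tb = params * bytes_per_param / 1e12
print(f"Weights on disk: ~{storage_tb:.0f} TB")  # fits on one consumer drive

# A dense forward pass costs roughly 2 FLOPs per parameter per token.
flops_per_token = 2 * params
cpu_flops_per_sec = 1e11  # ~100 GFLOP/s, a typical desktop CPU

print(f"Tokens/sec on one CPU: ~{cpu_flops_per_sec / flops_per_token:.2f}")
# => ~0.05 tokens/sec: storing the weights is easy, running them is not.
```

Under these assumptions the weights take about 2 TB, which fits on an ordinary hard drive, while generating even one token per second would need compute far beyond a single machine.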