I agree that companies which want to be profitable should focus on medical products rather than such a moonshot. The idea I wrote here is definitely not an investor pitch; it’s more of an idea for discussion, similar to the FHI’s discussion of Whole Brain Emulation.
In the beginning, the simulated humans should not do any self-modification at all, and should just work like a group of normal human researchers (e.g. on AI alignment, or on aligning the smarter versions of themselves). The benefit is that the smartest researchers can be cloned many times, and the clones might think many times faster.
That’s like the people who advocated AI boxing: connecting AIs to the internet is so economically valuable that it gets done automatically.
The main source of danger is not a superintelligence which kills or harms people out of “hatred”, “disgust”, or any other human-like emotion. Instead, the main source of extinction risk is a superintelligence which assigns absolutely zero weight to everything humans cherish.
Humans assign a pretty high weight to avoiding death. Entities that spin up and kill copies on a regular basis are likely to evolve quite different norms about the value of life than humans have. A lot of what humans value comes out of how we interact with the world in an embodied way.
I completely agree with solving actual problems instead of only working on Scanless Whole Brain Emulation :). I also agree that just working on science and seeing what comes up is valuable.
Both simulated humans and other paths to superintelligence will be subject to AI race pressures. My claim is that, given the same level of race pressure, simulated humans are safer. Current AI labs are willing to wait months before releasing their AI; the question is whether that is enough.
I didn’t think of that; that is a very good point! They should avoid killing copies, and maybe save them to be revived in the future. I strongly suspect that compute is more of a bottleneck than storage space. (You can store even the largest AI models on a typical hard drive, but you won’t have enough compute to run them.)
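To make the storage-vs-compute point concrete, here is a rough back-of-envelope sketch in Python. The model size and hardware throughput figures are my own illustrative assumptions, not numbers from the discussion:

```python
# Back-of-envelope: storing a large model vs. running it.
# All figures below are illustrative assumptions.

params = 1e12            # assume a 1-trillion-parameter model
bytes_per_param = 2      # fp16 weights

storage_tb = params * bytes_per_param / 1e12
print(f"Weights on disk: ~{storage_tb:.0f} TB")  # ~2 TB, fits on a hard drive

# A forward pass costs roughly 2 FLOPs per parameter per token.
flops_per_token = 2 * params

cpu_flops = 1e11         # ~100 GFLOP/s, an optimistic desktop CPU
gpu_flops = 1e15         # ~1 PFLOP/s, a modern datacenter accelerator

print(f"CPU: ~{flops_per_token / cpu_flops:.0f} s per token")        # ~20 s/token
print(f"GPU: ~{flops_per_token / gpu_flops * 1e3:.0f} ms per token")  # ~2 ms/token
```

Under these assumptions, the weights fit comfortably on a consumer drive, but generating text at a useful speed requires datacenter-class compute, which is the sense in which compute, not storage, is the bottleneck.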
You might also want to read Truthseeking is the ground in which other principles grow. Solving actual problems on the way to building up capabilities keeps everyone honest.