You initially questioned whether uploads would be aligned, but now you seem to be raising several other points which do not engage with that topic or with any of my last comment. I do not think we can reach agreement if you switch topics like this—if you now agree that uploads would be aligned, please say so. That seems to be an important crux, so I am not sure why you want to move on from it to your other objections without acknowledgement.
I am not sure I was able to correctly parse this comment, but you seem to be making a few points.
In one place, you question whether the capabilities / alignment distinction exists—I do not really understand the relevance, since I nowhere suggested pure alignment work, only uploading / emulation etc. This also seems to be somewhat in tension with the rest of your comment, but perhaps it is only an aside and not load-bearing?
Your main point, as I understand it, is that alignment may actually be tractable to solve, and a focus on uploading is an excuse to delay alignment progress and then (as you seem to frame my suggestion) have an upload solve it all at once. And this does not allow incremental progress or partial solutions until uploading works.
...and then you veer into speculation about the motives / psychology of MIRI and the superalignment team, which is interesting but doesn’t seem central or even closely connected to the discussion at hand.
So I will focus on the main point here. I have a lot of disagreements with it.
I think you may misunderstand my plan here—you seem to characterize the idea as making uploads and then setting them loose, either to self-modify etc. or mainly to work on technical alignment. Actually, I don’t view it this way at all. Creating the uploads (or emulations, if you can get a provably safe imitation learning scheme to work faster) is a weak technical solution to the alignment problem—now you have something aligned to (some) human(’s) values which you can run 10x faster, so it is in that sense not only an aligned AGI but modestly superintelligent. You can do a lot of things with that. First of all, it automatically hardens the world significantly: it lowers the opportunity cost of not building superintelligence, because now we already have a bunch of functionally genius scientists; you can drastically improve cybersecurity; and perhaps the uploads make enough money to buy up a sufficient percentage of GPUs that whatever is left over is not enough to outcompete them, even if someone creates an unaligned AGI. Another thing you can do is try to find a more scalable and general solution to the AI safety problem—including technical methods like agent foundations, interpretability, and control, as well as governance. But I don’t think of this as the mainline path to victory in the short term.
Perhaps you are worried that uploads will recklessly self-modify or race to build AGI. I don’t think this is inevitable or even the default. There is currently no trillion-dollar race to build uploads! There may be only a small number of players, and they can take precautions and enforce regulations on what uploads are allowed to do (effectively, since uploads are not strong superintelligences). Technically, it even seems hard for uploads to recursively self-improve by default (human brains are messy, and they don’t even need to be given read/write access). Even if some uploads escaped, to recursively self-improve safely they would need to solve their own alignment problem, and it is not in their interests to recklessly forge ahead, particularly if they can be punished with shutdown and are otherwise potentially immortal. I suspect that most uploads who try to foom will go insane, and it is not clear that the power balance favors any rogue uploads who fare better.
I also don’t agree that there is no incremental progress on the way to full uploads—I think you can build useful rationality-enhancing artifacts well before that point—but that is perhaps worth its own post.
Finally, I do not agree with this characterization of trying to build uploads rather than just solving alignment. I have been thinking about and trying to solve alignment for years, I see serious flaws in every approach, and I have recently started to wonder if alignment is just uploading with more steps anyway. So, this is more like my most promising suggestion for alignment, rather than giving up on solving alignment.