I believe something similar to this, though it may be different enough to say in my own words.
I think that the most likely way to get the big, powerful, steering-the-future-lightcone AI system to steer according to our values is for it to directly access our values, as encoded in our brains.
This has to involve “figuring out” neuroscience one way or another, whether that’s the AI learning to read off neurons directly, or via brain emulation/uploading.
I agree with others here that this brain tech has a far longer timeline than AGI.
So my hope is that we figure out how to create controlled AGIs and/or slow down capabilities advances, and then use these tools (plus the normal advances of science) to figure out uploading.