I don’t have a very good answer yet, but combining Vivid’s and Randaly’s proposals with mine seems to yield a reasonably safe and fast scenario: sandbox the AI to get a formal problem solver, use that to quickly advance uploading tech, then upload some good humans and let them FOOM.
How quickly do you think we can develop uploading tech given such an AI? Would it be quick enough if others were writing seed AIs that can FOOM directly?
ETA: Also, while you’re using this AI to develop upload tech, it seems vulnerable to being stolen or taken by force.
I don’t know. That seems like a good reason to spend effort today on formalizing the problems that lead to uploading tech. One possible route is through protein folding and nanotech. Another is to infer the microscopic structure of my brain from knowledge of current physics and enough macroscopic observations (e.g. webcam videos of me, or MRI scans, or something). That would make a nice formalizable problem for an AI in a box.
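To gesture at what “formalizable” could mean here, a minimal toy sketch: pose structure inference as an inverse problem, i.e. find the microscopic state that best explains the macroscopic data under a known forward model. Everything in it is a made-up stand-in (the linear-plus-tanh “physics”, the Gaussian prior, the random-search solver); the real problem would be astronomically harder, but the *shape* of the problem statement is the point.

```python
# Toy sketch: "infer microscopic structure from macroscopic observations"
# posed as a formal inverse problem. All models here are hypothetical
# stand-ins, chosen only to make the problem statement concrete.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "microscopic" state: a tiny parameter vector standing in
# for the true (vastly higher-dimensional) brain structure.
true_state = rng.normal(size=3)

# Fixed, known toy "physics" mapping microscopic state to macroscopic data.
# In the real problem this would be a full simulation from current physics.
mixing = rng.normal(size=(6, 3))

def forward_model(state):
    """Predict macroscopic observations from a candidate microscopic state."""
    return np.tanh(mixing @ state)

# Noisy macroscopic observations (stand-ins for webcam frames or scans).
noise_sigma = 0.01
observations = forward_model(true_state) + noise_sigma * rng.normal(size=6)

def loss(state):
    """Negative log-posterior up to a constant: data fit plus a Gaussian prior."""
    residual = forward_model(state) - observations
    return np.sum(residual**2) / (2 * noise_sigma**2) + 0.5 * np.sum(state**2)

# The "formal problem" handed to a boxed solver is simply: minimize `loss`.
# Naive local random search keeps this sketch self-contained; a real solver
# would have to be unimaginably better.
best = np.zeros(3)
best_loss = loss(best)
for _ in range(20000):
    candidate = best + 0.05 * rng.normal(size=3)
    candidate_loss = loss(candidate)
    if candidate_loss < best_loss:
        best, best_loss = candidate, candidate_loss

print("true state:     ", np.round(true_state, 3))
print("recovered state:", np.round(best, 3))
```

The attraction of this framing is that the boxed AI never needs to be asked an open-ended question: it receives a fully specified objective and returns a candidate minimizer that we can check against held-out observations.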
Agree up to ‘let them FOOM’. FOOMing uploads seem potentially disastrous for all the usual reasons. Why not have the uploaded good humans research friendliness and CEV, then plug the result into the easy seed AI and let that FOOM?
Or is that what you meant? It just seems so obviously superior that you could have been using shorthand.
Could be either. I’d leave that choice up to the uploaded good humans.
Will the good humans still be good once they are uploaded?
What is the test to make sure that a regular (though very smart) human will remain “good”?
Don’t we have to select from a small pool of candidates who have not succumbed to the temptation to abuse power?
I have to say that I can think of very few candidates who would pass muster in my mind and not abuse power.
Even here: reading through many of the posts, we have widely conflicting “personal utility functions”, most of which their owners would consider “good”, yet those who don’t share a given personal utility function could well consider it “bad”.
To my way of thinking, it’s incredibly risky to try to upload “good” humans.