Interesting interview. Possibly a dumb question though, but… in the interview Eliezer said that as soon as they actually solved all the reflective decision theory stuff, they could code the thing.
But… don’t they also have to actually solve all the tricky “exactly how to properly implement the CEV idea in the first place” thing? ie, even given they’ve got a way to provably maintain the utility function under self modification, they still have to give it the right utility function. (Or did I just misunderstand that interview answer?)
(Or has SIAI actually made really good progress on that to the point that the reflective decision theory side of things is the big roadblock now?)
I second the question: is reflective decision theory really all that's needed to get superhuman intelligence?
Well, that plus a way to efficiently approximately compute it in real life scenarios, of course.
But my question was more that, even granting all of that, I thought there was still work to be done on precisely formulating what exactly it is that it should do.