This problem is related to the problem of producing FAI, according to the terms and assumptions that Eliezer has been using.
I'm willing to bet that making a human with a broken value system more intelligent (by some measure of intelligence based on increased computational ability of the brain) suffers from much the same kinds of problems as throwing more computing power at an improperly designed AI.
Doh! I think I missed the obvious.