Goertzel, Voss, and similar folks are not working on the FAI problem. They’re working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.
No? I’ve been thinking of both problems as essentially problems of rationality. Once you have a sufficiently rational system, you have a Friendliness-capable, proto-intelligent system.
As it happens, I have a copy of “Do the Right Thing: Studies in Limited Rationality”, but I’m not reading it, even though I feel like it would solve my entire problem perfectly. I wonder why that is.