We don’t know what we want from AI, beyond obvious goals like survival. Mostly I think in terms of a perfect tutor that would bring us to its own level of intelligence before turning itself off. But quite possibly we don’t want that at all. I recall a commenter here who seemed to want a long-term ruler AI.
I am generally in favour of a long-term ruler AI; though I don’t think I’m the one you heard it from before. As you say, though, this is an area where we should have unusually low confidence that we know what we want.
What do we want out of AI? Is it happiness? If so, then why not just research wireheading directly and avoid the risks of an unfriendly AI altogether?
The promise of AI is irresistibly seductive because an FAI would make everything easier, including wireheading and survival.