Yeah, it’s bad news in terms of timelines, but good news in terms of an AI being able to implicitly figure out what we want it to do.
Alignment:
1) Figure out what we want.
2) Do that.
People worried about 2 may still be worried. On 1, I'd agree with you; it does seem that way. (I initially thought of it as the model understanding things and language better; how distinctly human the nature of jokes is can be easy to take for granted.)