Open and Welcome Thread December 2018

If it’s worth saying, but not worth its own post, then it goes here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.

As well as trying out combining welcome threads and open threads, I thought I’d try highlighting some frontpage comments I found especially insightful in the last month, for further discussion:

  • Scott Garrabrant wrote a comment on how Embedded Agency and Agent Foundations research are like science, in relation to ML approaches to AI alignment, which are more like engineering. The comment helped me think about how I go about formalising and solving problems more generally.

  • Rohin Shah wrote a comment on basic definitions of the alignment problem, contrasting a motivation-competence split versus a definition-optimization split. (It is then followed by a conversation on definitions between Paul and Wei which gets pretty deep into the weeds; I’d love to read a summary here from anyone else who followed along.)