Open and Welcome Thread December 2018
If it’s worth saying, but not worth its own post, then it goes here.
Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.
As well as trying out combining the welcome thread and the open thread, I thought I'd try highlighting some frontpage comments I found especially insightful over the last month, for further discussion:
Scott Garrabrant wrote a comment on how Embedded Agency and Agent Foundations research are like science, in contrast with ML approaches to AI alignment, which are more like engineering. The comment helped me think about how I formalise and solve problems more generally.
Rohin Shah wrote a comment on basic definitions of the alignment problem, contrasting a motivation-competence split with a definition-optimization split. (It is followed by a conversation on definitions between Paul and Wei that gets pretty deep into the weeds; I'd love to read a summary here from anyone who followed along.)