I’m confused about what sort of content belongs on LW 2.0, even in the Archipelago model.
I’ve been a lurker on LW and many of the diaspora rational blogs for years, and I’ve only recently started commenting after being nudged to do so by certain life events, certain blog posts, and the hopeful breath of life slightly reanimating LessWrong.
Sometimes I write on a personal blog elsewhere, but my writing falls below the standard I’d want to see on LW. Then again, I’ve seen things on LW that also fall below what I expect from LW.
I’ve seen it said multiple times that people can now put whatever they want on their personal LW spaces/blogposts, and that’s stressed again here. But I still feel unsettled and like I don’t really understand what this means. Does it mean that anyone on the internet talking about random stuff is welcome to have a blog on LW? Does it mean well known members are encouraged to stick around and can be off the rationality topic in their personal blogposts? How about the unknown members? How tangential can the topic be from rationality before it’s not welcome?
Could a personal post about MealSquares and trading money for time flip my modest amount of Karma into the negative and make it harder for me to participate in conversations in the future? Is part of the intent behind the Archipelago model to bring in this kind of content in addition to the well known names? I can’t tell.
As 2018 began, I started thinking about what I should do if I personally take AI seriously. So your post is timely for me. I’ve spent the last couple weeks figuring out how to catch up on the current state of AI development.
What I should do next is still pretty muddy. Or scary.
I have a computer engineering degree and have been a working software developer for several years. I do consider myself a “technical person,” but I haven’t focused on AI before now. I think I could potentially contribute to AI safety research, if I spend some time studying first. I’m not caught up on the technical skills these research guides point to:
MIRI’s Research Guide
80,000 Hours—Career Review of Research into Risks from Artificial Intelligence—the section “What are some good first steps if you’re interested?” is very relevant.
Bibliography for the Berkeley Center for Human Compatible AI (I had this link saved before reading this post.)
But I’m also not intimidated by the topics or the prospect of a ton of self-directed study. Self-directed study is my fun. I’ve already started on some of the materials.
The scary stuff is:
I could lose myself for years studying everything in those guides.
I have no network of people to bounce any ideas or plans off of.
I live in the Bible Belt, and my day-to-day interactions are completely devoid of anyone who would take any of this seriously.
People in the online community (rationality or AI Safety) don’t know I exist, and I’m concerned that spending a lot of time getting noticed is a status game and time sink that doesn’t help me learn about AI as fast as possible.
There’s also the big step of actually reaching out to people in the field. I don’t know how to tell when I’m ready or qualified, or whether it’s worth contacting people sooner rather than later. I’m prone to anxious underconfidence, and I could at least let people know I exist, even if I doubt I’m impressive.
I do feel like one of these specialty CFAR workshops would be a wonderful kick-start, but none are listed yet for 2018.