I hope you’ve smiled today :)
I really want to experience and learn about as much of the world as I can, and I pride myself on working to become a sort of modern-day Renaissance man, a bridge-builder between very different people if you will. Some not-commonly-seen-in-the-same-person things: I’ve slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university.
Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but more than that, I hope EA can be a community, one that really works towards making the world a little better than it was.
Below are some random interests of mine. I’m happy to connect over any of them, or over anything EA; please feel free to book a time whenever is open on my Calendly.
Philosophy (anything Plato is up my alley, though I’m most interested in ethical and political texts)
Psychology (not a big fan of psychotropic medication; I’m also writing a paper on an interesting, niche brand of therapy called logotherapy, analyzing its overlap with religion and thinking about how religion, specifically Judaism, could itself be considered a therapeutic practice)
Music (Lastfm, Spotify, Rateyourmusic; I have deep interests in all genres, especially electronic and indie, and have been to Bonnaroo with plans to attend more festivals)
Politics (especially American)
Drug Policy (currently reading Drugs Without the Hot Air by David Nutt)
Gaming (mostly League these days, but shamefully still Fortnite and COD from time to time)
Cooking (I’ve been a head chef, have experience working with vegan food too, and like to cook a lot)
Photography (recently completed a project on community among older people (just the text), arguing that the way we treat the elderly in the US is fairly alarming)
Meditation (specifically mindfulness, which I have both practiced and studied in my RA work, where I tried to set forth a categorization scheme for the meditative literature)
Home (I’m writing a book on different conceptions of home and how relationships intertwine with it, with a fairly long side endeavor into which forms of relationship should be open to us)
Speaking Spanish (I’m going to Spain for a year to teach English classes, because I want to speak Spanish fluently)
Traveling (have hit a fair bit of Europe and the US, as well as some random other places like Morocco)
Reading (Goodreads; I think I currently have over 200 books on my to-read list, and I’ve been struggling to get through fantasy recently, finding myself continually pulled towards non-fiction, largely due to EA reasoning, I think)
How you can help me: I’ve done some RA work in AI Policy now, so I’d be eager to continue that in a more permanent position (or at least a longer funded period), and any help bettering myself (e.g. how can I do research better?) or finding a position like that would be much appreciated. Otherwise, I’m on the lookout for any good opportunities in the EA Community Building or General Longtermism Research space, so again, any help upskilling or breaking into those spaces would be wonderful.
Of much lower importance, I’m still not sure which cause area I’d like to go into, so if you have any information on the following, especially as to a career in it, I’d love to hear about it: general longtermism research, EA community building, nuclear, AI governance, and mental health.
How I can help others: I don’t have domain expertise by any means, but I have thought a good bit about AI policy and next best steps, which I’d be happy to share (e.g. how bad is the risk from AI misinformation, really?). Beyond EA-related things, I have deep knowledge in Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to these disciplines. I would say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization and help others on that journey (as well as connect with those trying to find work).
The LessWrong comments here are generally quite brutal, and I think I disagree, as I’ll try to outline very briefly below. But it may be more fruitful here to ask some questions I had, to break down the possible points of disagreement as to the goodness of this letter.
I expected some negative reaction because I know that Elon is generally looked down upon by the EAs that I know, with some solid backing for those claims when it comes to AI, given that he co-founded OpenAI. But with the immediate press attention it’s getting, in combination with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (co-founder, Apple), Andrew Yang, Jaan Tallinn (co-founder, Skype, CSER, FLI), Max Tegmark (president, FLI), and Tristan Harris (from The Social Dilemma), among many others), I can’t really see the overall impact of this letter being net negative. At worst it seems mistimed and to have technical issues, but at best it seems like one of the better calls to action (or global moratoriums, as Greg Colbourn put it) that could have happened, given AI’s current presence in the news and in much of the world’s psyche.
But I’m not super certain of anything, and generally came away with a lot of questions. Here are a few:
How well does this specific call for a pause on developing strong language models converge with how AI x-risk people would go about crafting a verifiable, tangible metric for AI labs to follow to reduce risk? Is this to be seen as a good first step? Or is it something that might actually be close enough to what we want that we could rally around this metric, given its endorsement by this influential group?
This helps clarify the “6 months isn’t enough to develop the safety techniques they detail” objection, which was fairly well addressed here, as well as the “Should OpenAI be at the front?” objection.
How should we view messages that are geared a bit more towards non-x-risk AI worries than the community seems to be? They ask a lot of good questions here, but they are also still asking “Should we let machines flood our information channels with propaganda and untruth?”, an important question, but one that to me seems to deviate from AI x-risk concerns.
This is at least tangential to the “This letter felt rushed” objection, because even if you accept that it was rushed, the next question is “Well, what’s our bar for how good something has to be before it is put out into the world?”
Are open letters with influential signatories impactful? This letter seems to me neutral at worst and quite impactful at best, but I have very little to back that up, and I honestly can’t recall any specific time where an open letter caused significant change at the global/national level.
Given the recent desire to distance ourselves from potentially fraught figures, would that mean shying away from a group-wide EA endorsement of such a letter because a wild card like Elon is a part of it? I personally don’t think he’s at that level, but I know other EAs who would be apt to characterize him that way.
Do I sign the letter? What is the impact of adding signatures with significantly less professional or social clout to such an open letter? Does it promote the message of AI risk as something that matters to everyone? Or would someone look at “Tristan Williams, Tea Brewer” and think “oh, what is he doing on this list?”