What is your background? I feel like if you are living in the Bay Area, you may have different assumptions than the general population. I am from Florida, and while many people I know wouldn’t be judgmental towards more casual attire in a debate, they would be distracted by Yudkowsky’s outfit, which is both ostentatious and absurd.
Oliver Kuperman
Sure, maybe on Dath Ilan people don’t really care about how people dress, but these norms certainly exist in our world, and it is probably not worth challenging them when far greater issues exist. You may not personally care about outfit choices, but there is a reason most public figures who rely on public approval (politicians, lawyers, etc.) dress in formal attire. Dressing like a steampunk villain will make general audiences less willing to listen to you.
Lol at the comparison to sexuality. Eliezer Yudkowsky has worn normal clothes in the past, it’s not that big a deal to just conform to societal norms in this case (and clothing preferences are not nearly as intrinsic/immutable as sexual ones).
I have no problem with Yudkowsky earning money or spending money for personal pleasure, but the issue is that he is doing so while making his views seem fringe by dressing like a cosplayer. It doesn’t matter that it is an “unimportant, unserious debate”; it’s a video released to the general public that others can reference when trying to make the case that Eliezer Yudkowsky is an unqualified grifter.
You make the point about not having to be a saint, but (1) I think it’s generally good to offer constructive criticism to people (even if they are close to being a “saint”). Just because an element of your behavior doesn’t immediately pose a risk to yourself or others doesn’t mean we shouldn’t criticize it, especially if the problem is relatively simple and easy to fix. (2) This is not a referendum on Eliezer Yudkowsky as a person; it is a referendum on Eliezer Yudkowsky as a spokesperson for AI safety. If it turned out that Eliezer Yudkowsky dressed like this at home, I would find it a bit weird but wouldn’t write a post about it.
Please Be Serious
I am arguing against an attitude on this forum that political engagement is too difficult to model and ultimately pointless to engage in. I have definitely encountered this in the past.
You act like Plan A and Plan B (let’s say Plan B is Dustin Moskovitz running for president; Yudkowsky is a far worse candidate due to his lack of a college or high school diploma, weaker public speaking skills, and relative lack of executive leadership) are mutually exclusive, when they are not. Furthermore, the whole point is that there is no credible Plan A without at least the media coverage that implementing a Plan B would generate. Enough politicians won’t care until it’s abundantly clear that voters will.
Only Politics Can Prevent Extinction*
Dustin Moskovitz is the largest funder of AI safety in history, and he signed the CAIS letter on AI safety back in 2023. While he is clearly less concerned about AI than many others on this forum, he is far more concerned about AI safety than most US politicians.
Edit: And he has actually made favorable comments on pausing. I don’t have the link, but you can search for “pause” on his Bluesky account.
I think at the very least Dustin Moskovitz would make for a decent AI czar. In terms of the other issues, while I don’t know him personally and he hasn’t made very many public comments on specific policy, Dustin Moskovitz seems to be a very competent person with genuine empathy and a commitment to giving. I don’t really know what more you could realistically ask for.
Dustin Moskovitz left Facebook in 2008, so I don’t think he carries too much baggage from Facebook. I agree Moskovitz winning the presidency would be a long shot, but I still think it is worth it, as he could perhaps gain a cabinet position or influence the eventual winner to take a harder line on AI safety.
Dustin Moskovitz is the largest individual funder of AI safety causes ever. I think that sends a stronger signal of his commitment to AI safety than his foundation’s stake in Anthropic, which does not personally affect his wealth: https://www.forbesindia.com/amp/article/global-game/cross-border/change-agents-cari-tuna-and-dustin-moskovitzs-ai-safety-bet/2991379/1
Bernie Sanders is probably too old to run but I would prefer Sanders over a lot of other candidates if he were like 10 years younger. For most other big name politicians, their stances on AI safety are muddled at best, and the best pro AI Safety politicians like Scott Wiener just don’t have the executive experience or gravitas that Dustin Moskovitz has.
Thanks. I fixed the typo in the URL, so it should work now.
Americans For Moskovitz
Thanks for clarifying. So posts that place 100% of their content within will be approved? What about 50%? 45%? Will this content be disadvantaged? I think a lot of the same concerns apply, even if this policy is somewhat less strict than I thought.
As I alluded to in this post, my thoughts on LLM writing are multifaceted. I think that LLMs lack the creativity of human writers and are not a substitute for good ideas and a sense of direction. However, I think someone who has a good idea of what they want to write (and a compelling subject to write about) can use LLMs to save considerable amounts of time and also improve their writing on the margins.
Because you asked, I think this recent post of mine is a good example of how LLMs can help with writing. Compared to the baseline of this post (which I think is not particularly well written), the example post is considerably smoother, while also being faithful to the vision I had for the post.
Making things sound better without being better does not bring upvotes here (usually). We are blessed that it’s not required or appreciated on LW. Mod policy is an attempt to keep LW a special and better place than the rest of the internet.
...
The other thing to think of is this: if we make everyone an excellent writer without improving their thinking, we’ll lose the signal we currently have that helps us find good ideas by noting good writing.
Okay. Which interpretation of the role of writing quality on LessWrong would you like to defend?
Jokes aside, I think it would be better if everyone wrote well, as better writing is typically more pleasant to read and conveys ideas more effectively than worse writing.
As to your point about writing quality being one of the best gauges we have for the human thought put into a piece, I can kind of see that. But if the moderators want a better gauge of high-effort writing, they should put more effort into finding new ways to measure effort, instead of making a policy that only tangentially tracks it and will either be largely unenforceable or produce a lot of false positives.
I think tracking the amount of time spent editing and the number of edits on a given LessWrong post would be a good way to judge the amount of effort placed into a post (this should not be too difficult to track or implement, and for people who write their posts in Google Docs or Word, I doubt it would be a huge inconvenience to move over to LessWrong). A sketch of what such a metric could look like is below.
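Here is a minimal sketch of what such an effort metric might look like, assuming the site logged a timestamp and diff size per revision. The event format, the session-gap cutoff, and the weights are my own invention, not anything LessWrong actually exposes:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EditEvent:
    timestamp: datetime   # when this revision was saved
    chars_changed: int    # size of the diff for this revision

def effort_score(events: list[EditEvent], session_gap_minutes: float = 30.0) -> float:
    """Rough effort proxy: active editing time plus a bonus per revision.

    Consecutive edits closer together than `session_gap_minutes` count as one
    editing session; longer gaps are assumed to be time away from the draft
    and are not counted as writing time.
    """
    if not events:
        return 0.0
    events = sorted(events, key=lambda e: e.timestamp)
    active_minutes = 0.0
    for prev, curr in zip(events, events[1:]):
        gap = (curr.timestamp - prev.timestamp).total_seconds() / 60.0
        if gap <= session_gap_minutes:
            active_minutes += gap
    # The 2x-per-revision bonus is arbitrary; any real weighting would need tuning.
    return active_minutes + 2.0 * len(events)
```

A long uninterrupted session and many short revision bursts both score highly here, which matches the intuition that either pattern reflects real work on a draft.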
We are going to be more strictly enforcing the “no LLM output” rule by normalizing our auto-moderation logic to treat posts by approved[7] users similarly to posts by new users—that is, they’ll be automatically rejected if they score above a certain threshold in our automated LLM content detection pipeline. Having spent a few months staring at what’s been coming down the pipe, we are also going to be lowering that threshold.
The above quote lays out pretty clearly that substantial LLM usage will be banned. This is further reinforced by the quote from Oliver Habryka I included:
We intentionally made the choice that light editing is fine, and heavy editing is not fine (where the line is somewhere between “is it doing line edits and suggesting changes to a relatively sparse number of individual sentences, or is it rewriting multiple sentences in a row and/or adding paragraphs”).
I don’t think catching bots plays any real role in the policy. It’s largely IMO about preventing pollution of the epistemic commons by LLM slop.
I think I responded to this line of thinking a bit in my post, but I think this “pollution” is greatly overblown. Compared to humans, LLMs have been found to be better at analyzing complex texts, less likely to believe myths, and third statement to make this sentence sound better (the last part of this sentence is a joke and demonstrates the importance of boilerplate in writing).
Editing: light editing is allowed. Heavy editing always changes the meaning. Whether it’s changed a lot is specific to the writing and very much a judgment call. But saying “make sure you looked closely” is entirely unenforceable. You’d assume lots of people just aren’t going to take the time.
The line between light and heavy editing is blurry, and if you assume people aren’t even going to take the time to review LLM outputs, why would you expect them not to make false claims of their own accord? This is a problem with humans, not LLMs.
So the implication is that there’s a different rule for Neel than for the rest of us. Which makes sense; Neel has proven his contributions to be high-quality, however he’s produced them.
Maybe just ban LLMs for new users or create a karma threshold after which LLM usage is allowed? It seems like the majority of the rationale for the ban is “unscrupulous users will use LLMs irresponsibly and produce writing which is just good enough to not be downvoted, gradually crowding out higher effort posts”, but if this is the case, then the policy should be more targeted towards such users. People with substantial post history have hopefully already shown themselves to be fairly scrupulous.
Without targeting, the justification for this policy becomes tantamount to “we should ban driving, because some people will drive drunk”. How about we focus on those most likely to drive drunk instead of just banning cars for everyone?
I think bot detection is getting more and more difficult, but I do not think we are at the point where less invasive mouse-capturing procedures are ineffective. This is reflected by the fact that a “bot-pocalypse” in which LessWrong is overwhelmed by non-human posters was not referenced in the original post justifying the new LLM policy. Why risk burning down the house when mousetraps can still work?
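To be concrete about what I mean: even a toy heuristic over sampled mouse positions catches naive automation, because scripts tend to move the cursor at eerily uniform speeds. This is purely illustrative; the sampling format and the 0.1 threshold are assumptions, and real detectors are far more sophisticated:

```python
import math
import statistics

def looks_like_bot(points: list[tuple[float, float, float]]) -> bool:
    """Toy heuristic over sampled (x, y, t) mouse positions.

    Humans move the mouse in jerky, variable-speed arcs; naive automation
    tends to produce near-constant speeds, so very low speed variation is
    treated as suspicious. Thresholds are illustrative, not tuned.
    """
    if len(points) < 10:
        return True  # too little movement captured to look human
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    if len(speeds) < 2:
        return True
    mean = statistics.mean(speeds)
    if mean == 0:
        return True  # cursor never moved
    # Coefficient of variation near zero means eerily uniform motion.
    return statistics.stdev(speeds) / mean < 0.1
```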
I think even Yudkowsky would agree he is not the most charismatic speaker. Yudkowsky’s advantages come from his intellect, but this advantage can be better deployed in a medium like writing, or by someone else who is familiar with his arguments and a tad more charismatic (Nate Soares, for instance). I am not saying Eliezer Yudkowsky should stop working on AI safety, just public appearances. Would you like to run a poll on this question and bet on the results?
Nate Soares