I think that, at the very least, Dustin Moskovitz would make for a decent AI czar. As for the other issues, while I don’t know him personally and he hasn’t made many public comments on specific policy, he seems to be a very competent person with genuine empathy and a commitment to giving. I don’t really know what more you could realistically ask for.
Dustin Moskovitz left Facebook in 2008, so I don’t think he carries too much baggage from it. I agree Moskovitz winning the presidency would be a long shot, but I still think a run is worth it, as he could perhaps gain a cabinet position or influence the eventual winner to take a harder line on AI safety.
Dustin Moskovitz is arguably the largest individual funder of AI safety causes ever. I think that sends a stronger signal of his commitment to AI safety than his foundation’s stake in Anthropic, which does not personally affect his wealth: https://www.forbesindia.com/amp/article/global-game/cross-border/change-agents-cari-tuna-and-dustin-moskovitzs-ai-safety-bet/2991379/1
Bernie Sanders is probably too old to run, but I would prefer him over a lot of other candidates if he were ten years younger. Most other big-name politicians have stances on AI safety that are muddled at best, and the most pro-AI-safety politicians, like Scott Wiener, just don’t have the executive experience or gravitas that Dustin Moskovitz has.
Thanks. I fixed the typo in the URL, so it should work now.
Americans For Moskovitz
Thanks for clarifying. So posts that place 100% of their content within will be approved? What about 50%? 45%? Will this content be disadvantaged? I think a lot of the same concerns apply, even if this policy is somewhat less strict than I thought.
As I alluded to in this post, my thoughts on LLM writing are multifaceted. I think that LLMs lack the creativity of human writers and are not a substitute for good ideas and a sense of direction. However, I think someone who has a good idea of what they want to write (and a compelling subject to write about) can use LLMs to save considerable amounts of time and also improve their writing on the margins.
Because you asked, I think this recent post of mine is a good example of how LLMs can help with writing. Compared to the baseline of this post (which I think is not particularly well written), the example post is considerably smoother, while also being faithful to the vision I had for the post.
Making things sound better without being better does not bring upvotes here (usually). We are blessed that it’s not required or appreciated on LW. Mod policy is an attempt to keep LW a special and better place than the rest of the internet.
...
The other thing to think of is this: if we make everyone an excellent writer without improving their thinking, we’ll lose the signal we currently have that helps us find good ideas by noting good writing.
Okay. Which interpretation of the role of writing quality on LessWrong would you like to defend?
Jokes aside, I think it would be good if everyone wrote better, as better writing is typically more pleasant to read and conveys ideas more effectively.
As to your point about writing quality being one of the best gauges we have of the human thought put into a piece, I can kind of see that. But if the moderators want a better gauge of high-effort writing, they should put more effort into finding new ways to measure effort, instead of adopting a policy that only tangentially tracks it and will either be unenforceable or produce a lot of false positives.
I think tracking the amount of time spent editing (or the number of edits) on a given LessWrong post would be a good way to judge the effort put into it. This should not be too difficult to implement, and for people who currently draft in Google Docs or Word, I doubt it would be a huge inconvenience to move over to LessWrong. A rough sketch of what such a signal could look like is below.
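To illustrate, here is a minimal sketch of the kind of effort signal I have in mind. It assumes a hypothetical revision log that stores a timestamp and the full post body at each save; none of these names come from the actual LessWrong codebase, and the 30-minute cap is an arbitrary choice.

```python
# A sketch of crude effort proxies computed from a hypothetical
# revision history. Illustrative only, not a production design.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Revision:
    saved_at: datetime  # when this draft was saved
    body: str           # full post text at that point


def effort_signals(revisions: list[Revision]) -> dict:
    """Estimate edit count and active editing time from saved drafts."""
    if len(revisions) < 2:
        return {"edit_count": 0, "editing_minutes": 0.0}

    # Count only revisions that actually changed the text,
    # so periodic autosaves don't inflate the edit count.
    edits = sum(
        1 for prev, curr in zip(revisions, revisions[1:])
        if prev.body != curr.body
    )

    # Sum gaps between consecutive saves, capping each gap so that
    # leaving a tab open overnight doesn't count as editing time.
    cap = timedelta(minutes=30)
    active = sum(
        (min(curr.saved_at - prev.saved_at, cap)
         for prev, curr in zip(revisions, revisions[1:])),
        timedelta(),
    )
    return {"edit_count": edits, "editing_minutes": active.total_seconds() / 60}
```

Under this scheme, a post pasted in wholesale in a single save would stand out from one that accumulated dozens of small revisions over hours, which seems like a much more direct effort gauge than guessing at LLM involvement from the prose style.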
We are going to be more strictly enforcing the “no LLM output” rule by normalizing our auto-moderation logic to treat posts by approved[7] users similarly to posts by new users—that is, they’ll be automatically rejected if they score above a certain threshold in our automated LLM content detection pipeline. Having spent a few months staring at what’s been coming down the pipe, we are also going to be lowering that threshold.
The above quote lays out pretty clearly that substantial LLM usage will be banned. This is further reinforced by the quote from Oliver Habryka I included:
We intentionally made the choice that light editing is fine, and heavy editing is not fine (where the line is somewhere between “is it doing line edits and suggesting changes to a relatively sparse number of individual sentences, or is it rewriting multiple sentences in a row and/or adding paragraphs”).
I don’t think catching bots plays any real role in the policy. It’s largely IMO about preventing pollution of the epistemic commons by LLM slop.
I think I responded to this line of thinking a bit in my post, but I think this “pollution” is greatly overblown. Compared to humans, LLMs have been found to be better at analyzing complex texts, less likely to believe myths, and third statement to make this sentence sound better (the last part of this sentence is a joke and demonstrates the importance of boilerplate in writing).
Editing: light editing is allowed. Heavy editing always changes the meaning. Whether it’s changed a lot is specific to the writing and very much a judgment call. But saying “make sure you looked closely” is entirely unenforceable. You’d assume lots of people just aren’t going to take the time.
The line between light and heavy editing is blurry, and if you assume people aren’t even going to take the time to review LLM outputs, why would you expect them not to make false claims of their own accord? This is a problem with humans, not LLMs.
So the implication is that there’s a different rule for Neel than for the rest of us. Which makes sense; Neel has proven his contributions to be high-quality, however he’s produced them.
Maybe just ban LLMs for new users or create a karma threshold after which LLM usage is allowed? It seems like the majority of the rationale for the ban is “unscrupulous users will use LLMs irresponsibly and produce writing which is just good enough to not be downvoted, gradually crowding out higher effort posts”, but if this is the case, then the policy should be more targeted towards such users. People with substantial post history have hopefully already shown themselves to be fairly scrupulous.
Without targeting, the justification for this policy becomes tantamount to “we should ban driving, because some people will drive drunk”. How about we focus on those most likely to drive drunk instead of just banning cars for everyone?
I think bot detection is getting more and more difficult, but I do not think we are at the point where less invasive measures, like mouse-movement capture, are ineffective (a rough sketch of that kind of check follows below). This is reflected by the fact that a “bot-pocalypse” in which LessWrong is overwhelmed by non-human posters was not even raised in the original post justifying the new LLM policy. Why risk burning down the house when mouse-traps can still work?
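For concreteness, here is a minimal sketch of the sort of low-friction check I mean, assuming the client reports sampled (x, y, time) pointer positions. The linearity test and the one-pixel threshold are illustrative assumptions, not any real anti-bot product’s logic.

```python
# A sketch of a mouse-trace heuristic: human pointer paths wobble and
# accelerate, while naive bots often move in perfectly straight lines.
import math


def looks_scripted(points: list[tuple[float, float, float]]) -> bool:
    """Flag pointer traces that are suspiciously straight.

    Each point is (x, y, timestamp). Returns True if the trace looks
    machine-generated under this toy test.
    """
    if len(points) < 3:
        return True  # essentially no movement data is itself suspicious

    # Measure how far intermediate points stray from the straight line
    # between the first and last sample (point-to-line distance).
    (x0, y0, _), (x1, y1, _) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0) or 1.0
    deviations = [
        abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
        for x, y, _ in points[1:-1]
    ]
    # A human trace almost always wanders at least a few pixels off the line.
    return max(deviations) < 1.0
```

The point is not that this particular test is robust (it isn’t), but that passive signals like this impose zero friction on real users, which is exactly the trade-off the blanket LLM policy gets wrong.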
The New LessWrong LLM Policy is Worse Than You Think
I think the second part of that statement is also somewhat problematic. At some point in the future, we may want to create artificial intelligences that deserve personhood, as digital beings are likely the best way to convert the resources of the universe into utility, given their potential to be more energy efficient than physical beings.
As for your examples, they are either used informally by locals (“the Gulf” as shorthand for the Gulf of Mexico is rarely used by national publications), used by foreigners (who often give illogical or simplified names to things because the original would be too complicated), or apply to very large regions (which, in my opinion, is less annoying because it is less self-important). The fact that the San Francisco area gets its own special, frequently used name, despite there being nothing especially impressive about that particular feature of the area, is slightly annoying imo. If people in the Bay Area want to colloquially call it that, I wouldn’t mind, but it grates on me a little that national news outlets refer to it that way.
This isn’t that important, but the term “Bay Area” has always annoyed me (as someone from Tampa), as it appropriates a generic geographical feature that many cities contain. Should we start calling Denver the “Mountain Area”? New York City the “Island Area”? Phoenix the “Desert Area”? Oklahoma City the “Plains Area”? The only reason “Bay Area” sounds normal is that people are used to it.
Thanks for the advice. I guess I can see why some would be opposed to this, although personally I would not mind if people reposted more often (perhaps a good middle ground would be to bar reposts from the front page until they reach a certain karma threshold). I think I’ll leave my reposts up, as I unpublished a couple of the originals. Nevertheless, LessWrong should make its guidelines on reposting clearer, because I did not see any rule against this and could see myself continuing to repost had I not asked this question.
How? I would only do this if I got positive karma on the first post.
I agree that spam can negatively impact a site, but if my posts are only being seen by a couple of people and are still getting upvoted, I do not see the issue with reposting, provided I edit the post a bit and wait at least a day between posts.
I also do not understand how the front-page/personal-blog distinction works, which makes me think, once again, that only a few people are actually able to see certain posts to begin with.
Dustin Moskovitz is the largest funder of AI safety in history, and he signed the CAIS statement on AI risk back in 2023. While he is clearly less concerned about AI than many others on this forum, he is far more concerned about AI safety than most US politicians.
Edit: He actually has made favorable comments on pausing. I don’t have the link, but you can search “pause” on his Bluesky account.