Team Lead for LessWrong
Ruby
Curated. This is a worthwhile message to get out there. Many people are feeling urgency these days with regard to AI, and even those who don't feel it directly face a lot of social pressure to be "doing something". I suspect this pressure detracts from the mental Slack to think, which I in turn suspect is important for noticing new angles on the problems. Just being able to think about things freely, without the requirement to "do your job" or "justify a grant", seems pretty valuable, so I hope more people consider this.
Curated. It's not every day that someone attempts to add concepts to the axioms of game theory/bargaining theory/utility theory, and I'm pretty excited about where this is headed, especially if the implications are real for EA and x-risk.
Where's the link to the video? Also, I get "app.operation not allowed" when clicking through to the EA Forum version (suggesting it's a draft over there).
Curated. I think this is a post that many people would benefit from hearing right now. In the last six months, there has been a shift in people's beliefs, both toward powerful AI happening soon and toward it very probably being bad. This seems to have been a wake-up call for many, and many of those feel they suddenly have to do something, now.
And it is good to do things, but we have to stay sane. I like this post for pushing in that direction. I think it goes well with a post I wrote on the topic a while back.
We are definitely thinking about it. We've just done a limited print run of 300 copies that we'll be giving out to some people; if that goes well and seems worth it, we might scale up to the Amazon-available version.
When I process new posts, I add tags so they can more easily be found later. I wasn't sure what to tag this one with beyond "AI". I think it'd increase the likelihood that your post gets found and read later if you look through the available AI tags (you can search or use the Concepts page), or create new tags if you think none of the existing ones fit.
Curated. I could imagine a world where different people pursue different agendas in a "live and let live" way, with no one wanting to be too critical of anyone else. I think that's a world where many people could waste a lot of time with nothing prompting them to reconsider. I think posts like this one give us a chance to avoid scenarios like that. Posts like this can also spur discussion of the higher-level approaches/intuitions that spawn more object-level research agendas. The top comments here by Paul Christiano, John Wentworth, and others are a great instance of this.
I also like how this further develops my gears-level understanding of why Nate predicts doom. There's color here beyond AGI Ruin: A List of Lethalities, which I had assumed captured most of Nate's pessimism; but now I wonder whether Nate disagrees with Eliezer and thinks things would be a bunch more hopeful if only people worked on the right stuff (in contrast with "the problem is too hard for our civilization").
Lastly, I'll note that I think it's good Nate wrote this post even before being confident he could pass other people's ITTs. I'm glad he felt it was okay to be critical (with caveats) even before his criticisms were maximally defensible (e.g. by virtue of being able to pass an ITT).
LessWrong is also still hiring.
That passage has stuck with me too.
It’s possible there’s a bug in the comment ordering here that we should look into, but it’s very unlikely to be because the agreement voting is being taken into account.
Curated. Lessons from, and the mindset of, computer security have long been invoked in the context of AI Alignment, and I love seeing this write-up from a veteran of the industry. What this gave me that I didn't already have was not just the nature of the technical challenges, but some sense of how people have responded to security challenges in the past and how the development of past solutions has proceeded. That feels quite relevant to predicting what will happen by default in AGI development.
The personal/frontpage distinction came much later than when those posts were written; we must have set the default on old posts to be personal, and I guess no one went back and frontpaged all the ones that should have been.
So I guess this is as good a place as any to express that view.
Meta point: it seems sad to me if arguments on a topic are spread across many posts in a way that makes it hard for a person to track down, e.g., all the arguments regarding generalization/not-generalization.
This makes me want something like the Arbital wiki vision where you can find not just settled facts, but also the list of arguments/considerations in either direction on disputed topics.
Plausibly the existing LW/AF wiki-tag system could do this as far as format/software goes; we just need people creating pages for all the concepts/disagreements, and then properly tagging and distilling things. This is in addition to better pages for relatively more settled ideas like "inner alignment".
All of this is a plausible thing for the LW team to try to make happen. [Our focus for the next month or two is (a) ensuring that, amidst all the great discussion of AI, LessWrong doesn't lose its identity as a site for Rationality/epistemics/pursuing truth/accurate models across all domains, and (b) fostering epistemics in the new wave of alignment researchers (and community builders), though I am quite uncertain about many aspects of this goal/plan.]
Kudos for being the person who wrote it up.
I actually had the same desire a while back and began a pull request that would place vote buttons at the bottom of long comments by detecting whether the upper buttons were still in view. I didn't finish it, though I think it's a very reasonable idea!
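For the curious, the rough idea was a visibility check along these lines. This is a minimal sketch rather than the actual PR, and the class names (".comment", ".comment-vote-buttons", ".comment-vote-buttons-bottom") are hypothetical placeholders, not real LessWrong selectors:

```typescript
// Sketch: show a second copy of the vote buttons at the bottom of a comment
// whenever the top buttons have scrolled out of view.
function watchVoteButtons(comment: HTMLElement): void {
  const topButtons = comment.querySelector<HTMLElement>(".comment-vote-buttons");
  const bottomButtons = comment.querySelector<HTMLElement>(".comment-vote-buttons-bottom");
  if (!topButtons || !bottomButtons) return;

  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      // If the top buttons are off-screen, reveal the bottom copy; otherwise hide it.
      bottomButtons.style.display = entry.isIntersecting ? "none" : "block";
    }
  });
  observer.observe(topButtons);
}

// Run once per rendered comment element.
document.querySelectorAll<HTMLElement>(".comment").forEach(watchVoteButtons);
```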
I think not having them at the top is probably the wrong choice, since many people use the karma to decide what to read. I suppose you could list the karma at the top but have the buttons only below...
GreaterWrong has a "kibbitz mode" that hides names and karma; perhaps we should also have that on LessWrong.
Now that you've promoted this to my attention again, I'll think about it some more. Good idea; it's just a matter of prioritization. (LessWrong/Lightcone continues to hire.)
I included these comments in the post so people actively know these resources are disendorsed.
Added!
Thanks! Added!
Added!
This is very cool. Thanks for link posting!