I’m not sure, but Nate’s recent post updated me significantly towards this opinion. I still think there’s significant risk, but I trust the cultural ensemble a lot more after reading it.
There are a lot of highly respected researchers who have similar opinions, though.
And it’s not like machine learning has consensus on much in the domain of speculative predictions; even predictions by highly skilled researchers with track records are doubted by significant portions of the field.
Science is hard, yo.
I will say: if you think the rationality sphere in general has bad epistemics, very fair. But if you think the rationality sphere on LessWrong has bad epistemics, come fight me on LessWrong! Let’s argue about it! People here might not change their minds as readily as they think they do, but the software is much better for intense discussions than most other places I’ve found.
I think the LessWrong community is wrong about x-risk and many of the problems around AI, and I’ve got a draft longform post with concrete claims that I’m working on...
But I’m sure it’ll be downvoted, because the bet has goalpost-moving baked in and there’s lots of goddamn swearing, which makes me hesitant to post it.
If you think it’s low quality, post it anyway and warn that you think it might be low quality, but maybe in less self-dismissive phrasing than “I’m sure it’ll be downvoted”. I sometimes post comments along the lines of “I understand if this gets downvoted; I’m not sure how high quality it is”. I don’t think those are weird or bad. Just try to be honest in both directions, and don’t diss yourself unnecessarily.
And anyway, this community is a lot more diverse than you think. It’s the rationalist AI doomers who are rationalist AI doomers, not the entire LessWrong alignment community. Those who are paying attention to the research and making headway on the problem, e.g. Wentworth, seem considerably more optimistic. The alarmists have done a good job being alarmists, but there’s only so much alarm-raising to do before you need to come back down to being uncertain and try to figure out what’s actually true. And I’m not impressed with MIRI lately at all.
Thanks. FYI, I tried making the post I alluded to:
https://www.lesswrong.com/posts/F7xySqiEDhJBnRyKL/i-think-we-re-approaching-the-bitter-lesson-s-asymptote
“the bet”—what bet?
A word of advice: don’t post any version of it that says “I’m sure this will be downvoted”. Saying that sort of thing is a reliable enough signal of low quality that, even if your post is actually good, it will get a worse reception than it deserves.
For sure. The actual post I make will not demonstrate my personal insecurities.
I will propose a broad test/bet that will shed light on my claims, or at least give some places to examine.