(This is the last of the re-released series of Eliezer posts on bioethics)
I like this collection of paradigms as a resource, thanks.
I didn’t know what Kaizen meant, and think it’d be handy to either give it a more commonplace name, or briefly explain the name choice.
We’ve tweaked a bunch of parameters on the abridging function, and added user settings as a safety valve.
This remains an overall pretty experimental feature, and I wouldn’t be surprised if there turns out to be a better way to accomplish the same goals (Taymon had listed a few good contenders). But for the immediate future it should be at least a bit less aggressive.
Comments now load about twice as much content by default.
High karma comments load about 40% more than they previously did.
The threshold for a high-karma comment is now 10 karma instead of 20.
Comments from the past two days also get the high-karma truncation amount.
Users have two additional settings, one to turn off truncation on post pages, one to turn off truncation on the home page (where truncation is serving a fairly different purpose)
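To make the rules above concrete, here’s a rough sketch of how the truncation limit could be computed. The 10-karma threshold, the ~40% bump, and the two-day window come from the list above; everything else (names, the baseline character count) is made up for illustration and isn’t the actual LessWrong code:

```typescript
// Illustrative sketch only — not the real LessWrong implementation.
const HIGH_KARMA_THRESHOLD = 10;   // lowered from 20
const BASE_CHARS = 600;            // hypothetical baseline load amount
const HIGH_KARMA_MULTIPLIER = 1.4; // high-karma comments load ~40% more

function truncationLimit(karma: number, ageInDays: number): number {
  // Recent comments (past two days) get the high-karma amount too.
  const getsHighKarmaAmount = karma >= HIGH_KARMA_THRESHOLD || ageInDays <= 2;
  return getsHighKarmaAmount
    ? Math.round(BASE_CHARS * HIGH_KARMA_MULTIPLIER)
    : BASE_CHARS;
}
```

So under these made-up numbers, a 5-karma comment from last week would get the base amount, while the same comment posted yesterday would get the larger one.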
It’s an example, but I don’t think it’s inclusive.
For example, in highly contrarian circles, social reality may even be anti-conformity (which is maybe a kind of conformity, but if someone just hears social reality described as “conformity” that won’t be an obvious interpretation)
Social reality also includes things like how status-is-assigned and what behaviors are rewarded and punished, and other properties.
Conformity doesn’t quite cover the nuances here. (I don’t think the answers so far cover the breadth of what I meant. I think if Benquo expanded his entry it’d end up explaining some of the nuts and bolts here.)
I think this link is currently the best explanation that exists online:
The earliest contender for “East Coast Megameetup” was the 2012 Solstice, although it was not billed as such. (There was a non-Solstice Megameetup billed as such a few months later in 2013, followed by the first actual Solstice Megameetup in 2013)
I think there is a lot of irrational paranoia going on with pushes towards secrecy
I think the paranoia is basically entirely rational. Several people have listed a variety of threats, ranging from (at one extreme) death threats to, much more commonly, mild social disapproval that just makes it harder to accomplish things.
This doesn’t mean there aren’t benefits to transparency. But I think the threats are generally well understood, and if you want (yourself, or others) to get the benefits of transparency you need to actually do a lot of social infrastructure work to alleviate those costs.
This is an important and worthwhile project, but even within the rationality community, “mild social disapproval that is demoralizing and makes it harder to accomplish things” is still a problem that needs to be actively addressed in order for transparency benefits to scale.
Curious if you could explain molecular clock analysis like I’m five? Your argument here sounds plausible but I’d still be interested to get a better handle on that.
Oh, yeah that makes sense.
I basically agree with this. I once went to a transhumanist conference and attendees I talked to seemed at least a bit like this.
Clicking on a comment expands all comments below it. We don’t currently expand comments above it because that changes your screen position, which can be disorienting, although I could imagine changing my mind about that.
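For what it’s worth, the “expand everything below” behavior amounts to a recursive walk over the clicked comment’s subtree. A hypothetical sketch (the names are illustrative, not the actual implementation):

```typescript
// Illustrative sketch of click-to-expand over a comment subtree.
interface CommentNode {
  id: string;
  expanded: boolean;
  children: CommentNode[];
}

// Mark the clicked comment and every descendant as expanded.
function expandSubtree(node: CommentNode): void {
  node.expanded = true;
  node.children.forEach(expandSubtree);
}
```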
I was talking with someone about why we did the abridging-comments thing, and realized it was probably better to write up those reasons publicly so others could engage or reference them:
First, quick update: if you type “ctrl-F” or “cmd-F”, it’ll autoexpand all comments (this is so that people trying to search for a given phrase automatically get the behavior they want). [Note: this doesn’t currently expand comments that are collapsed because of low karma, which I currently lean towards changing]
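The shortcut detection itself is simple. A hypothetical sketch of the check (the real handler would also trigger the expansion and then let the browser’s find dialog open normally):

```typescript
// Illustrative sketch of detecting ctrl-F / cmd-F so comments can be
// auto-expanded before the browser's find-in-page search opens.
interface FindKeyEvent {
  key: string;
  ctrlKey: boolean; // ctrl-F (Windows/Linux)
  metaKey: boolean; // cmd-F (Mac)
}

function isFindShortcut(e: FindKeyEvent): boolean {
  return e.key.toLowerCase() === "f" && (e.ctrlKey || e.metaKey);
}
```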
Second: I’m not at all confident the current setup is optimal, but here’s what I was thinking about that led to it:
There’s a few tradeoffs we could make with the comments. Obviously, leaving them expanded-by-default makes it easier to read an entire thread if you’re already committed to doing that.
But auto-expansion is implicitly making a choice on which direction to nudge people. (It’s a different choice depending on whether you’re sorting comments by top karma, or most recent, or oldest). Whichever way you’re sorting comments, default-expanded means that if you’re quickly perusing the thread and _not_ committed to reading through the whole thing, you basically just get to read the first couple conversations, and those conversations aren’t necessarily the ones most relevant to you.
This becomes especially bad on huge threads where it’s just impossible to read everything, but even on a mid-length thread it can get a bit tiresome to read through looking for gems.
This has an effect not just on people’s reading experience, but on what sort of followup-comments we’re incentivizing.
If someone writes a mediocre comment that ends up sorted last, but someone else makes an insightful reply to it, it ends up buried and less engaged with. Meanwhile people sometimes end up replying to a top-karma comment (that has nothing to do with their new comment) just to give it a chance of being seen.
Put another way:
Any choice you make about the comments section will dictate what content people experience in the first minute or so, which in turn shapes what discussions people are incentivized to have. My current guess is that it’s better to allow a breadth-first search (while still providing tools that make expanding all comments pretty easy for people who want that).
But in this case, basically I mean “the people around here who use the term social reality, what do you mean by it?” (Ideally, as comprehensively as possible, such that this is a reasonably good post for people to read to get a handle on it).
(I think one good use of questions is the general class of “people who use this jargon term, please write a nice explanation of it.” Obviously only people familiar with the jargon term can do so.)
Thanks for writing this up! I’ve added it to the LW Open Source sequence.
The motivation for the current set of repostings is simply that the posts had never been on LessWrong (and this post wasn’t even on Yudkowsky.net) and it seemed nice to get them in a more accessible place. (I hadn’t actually read this one)
There is one more essay from the mini-sequence, coming Monday.
While I think this is mostly old hat to long-term LW readers, I do think it’s still relevant outside of our bubble.
How confident are we that octopuses can recognize individual humans, as opposed to “a small number of outlier octopuses can do so”?
We’re definitely planning for things in this vein.