[Epistemic status: synthesis of observation, intuition, advice from other people]
I don’t think the “rather than” in that second paragraph is workable. Strong ties usually grow out of weak ties, so if you don’t have a broad buffer of weak ties (or if it goes away, or if you let it go away), your replenishment pool for strong ties also goes away. Even strong ties frequently don’t last forever, so if you have only strong ties, you’re in an unstable position in the long term. Sometimes strong ties can give you access to more weak ties, but sometimes they can’t, and even when they can, you still have to step up to take advantage of this.
I also vaguely think the investment metaphor might go wrong places for reasons similar to what Dagon mentions, but I don’t think I can unpack that now.
I’m looking for some clarification/feelings on the social norms here surrounding reporting typo/malapropism-like errors in posts. So far I’ve been sending a few by PM the way I’m used to doing on some other sites, as a way of limiting potential embarrassment and not cluttering the comments section with things that are easily fixed, but I notice some people giving that feedback in the comments instead. Is one or the other preferred?
I also have the impression that that sort of feedback is generally wanted here in the first place, since precise, correct writing is considered virtuous, but I’m not confident of this. Is this basically right, or should I be holding back more?
How many of you are there, and what is your dosh-distimming schedule like these days?
What sort of better are you hoping to become?
“Wariness, thoughtfully following, should think about this more.”
I intuitively believe that anonymous reactions will be more likely to lead to gaming, becoming a way to snipe or brigade from the sidelines in a more emotionally impactful way than downvotes and upvotes. Being able to weight the reactions by status is important.
There is also less room to push back against toxic anonymous uses of emoji-like reactions, because they often encode emotions less abstractly than votes do, and norms like “you should vote based on certain criteria that promote the purpose of the space” don’t translate well to “you should emote based on certain criteria” (even though the latter does happen in human societies).
A place where I see private information as potentially beneficial, in a way that isn’t reflected in any previous reaction systems I’ve seen, is actually “reacting user reveals reaction only to comment owner”. This would be to a PM response as a visible reaction would be to a comment response, and would serve a similar function when someone doesn’t feel comfortable either revealing a potentially low-status emotional reaction to the group or making it explicit enough to raise the interaction stakes, but where such information, especially in aggregate, could still be useful. If a lot of people have a good or bad feeling about something, but few of them feel comfortable showing it in public, that can be very useful dynamics information.
(My previous comment’s caveats about how I’m not sure how well any of this works in a comment-tree situation apply.)
My experience in other circles with Slack and Discord is that the niche of emoji reactions is primarily non-interrupting room-sensing (there are also sillier uses in casual social contexts, but they don’t seem relevant here). I don’t feel any pressure to specifically have read something, and I haven’t observed people reading anything into failure to provide a reaction. The rare exception to the latter is when there’s clearly an active conversation going on that someone’s already been participating in, which can be handled by explicitly signaling departure—a norm in those circumstances anyway.
Non-interrupting room-sensing in a fast-flowing channel environment has generally struck me as beneficial. Being able to quickly find the topic-flow of the current conversation is important, and reactions do not have to be scanned for topic introductions. Reactions encode leafness: you can’t reply to a reaction easily, which also means giving a reaction cannot induce social pressure to reply to it. They encode weaker ties to the individual: people with the same reaction are stacked together, and it takes an extra effort to look at the list of reacting users. Differentially, reactions can also signal level of involvement: someone “conversing” in only reactions may not be up for thinking about the conversation hard enough to produce text responses, but is able to listen and give base emotional feedback (which seems to be the most relevant to the proposed uses here). It serves a similar function to scanning people’s facial expressions in a physical meeting room.
I’m very unclear on how these patterns would play out in a longer-form, more delay-tolerant environment like a comment tree. Some of the room-sensing interpretation makes less sense the less the timescale of the reactions corresponds to unconscious-emotion synchronization; there’s a lot of lost flow context.
Since this seems to be an akrasia/executive-related problem, I suspect just having links to possible addons to use (and ideally, example configurations) easily accessible could be disproportionately ameliorative compared to its implementation cost, both as a reminder that compulsive browsing is a real problem with existing mitigations, and as social signaling that this is an approved way of browsing that won’t make you weird. Though I’m not sure about the possible noise it creates, depending on what easy options you have for placement/hiding.
I think it depends a lot on how you frame it, and analogies work much less well than people expect because of ways the Internet is very different from previous environments.
The intuitive social norms surrounding the store clerk involve the clerk having socially normal memory performance and a social conscience surrounding how they use that memory. What if the store clerk were writing down everything you did in the store, including every time you picked your nose, your exact walking path, every single item you looked at and put back, and what you were muttering to your shopping companion? What if that list were quickly sent off to an office across the country, where they would try to figure out any number of things like “which people look suspicious” and “where to display which items”? What if the clerk followed you around the entire store with their notepad when it’s a giant box store with many departments? For the cross-site case, imagine that the office also receives detailed notes about you from the clerks at just about every other place you go, because those ones wound up with more profitable store layouts and lower theft rates and the other shops gradually went out of business.
There are other analogy framings still; consider one with security cameras instead, and whether it feels different, and what different assumptions might be in play. But in all of those cases, relying on misplaced assumptions about humanlike capability, motivation, and agency is something to be wary of. (Fortunately, I think a lot of people here should be familiar with that one!)
Extending this: trust problems could impede the flow of information in the first place in such a way that the introspective access stops being an amplifier across a system boundary. An AI can expose some code, but an AI that trusts other AIs to be exposing their code in a trustworthy fashion rather than choosing what code to show based on what will make the conversation partner do something they want seems like it’d be exploitable, and an AI that always exposes its code in a trustworthy fashion may also be exploitable.
Human societies do “creating enclaves of higher trust within a surrounding environment of lower trust” a lot, and it does improve coordination when it works right. I don’t know which way this would swing for super-coordination among AIs.
But jointly constructing a successor with compromise values and then giving them the reins is something humans can sort of do via parenting—there’s just more fuzziness and randomness and drift involved, no? That is, assuming human children take a bunch of the structure of their mindsets from what their parents teach them, which certainly seems to be the case on the face of it.
Speculative followup: seeing a few other people say similar things here and contrasting it with what seems to have been implied in the retrospective itself makes me guess there’s a seriousness split between LW and email “subscribers”. Does the former have passersby dominating the reader set (especially since it’ll be presented to people who are on LW for some other reason), whereas anyone who cares more deeply and specifically will primarily consume the newsletter by email?
I browse this newsletter occasionally via LW; I am not subscribed by email. I am not so far seriously involved in AI research, and I don’t wind up understanding most of it in detail, but I have a longer-term interest in such issues, and I want to keep a fraction of a bird’s eye on the state of the field if possible, so that if I start in on deeper such activities a few years from now, I can re-skim the archives and try to catch up.
But how do the two things in the last paragraph mix if I have (1) a preference for others to judge me well, (2) a belief that others will judge me well if they believe I am doing what they believe is optimal for what they think my beliefs and preferences should be, and (3) a belief that the extrapolated cost of convincing them that I am doing such a thing without actually doing the thing is so incredibly high as to make plans involving that almost never show up in decision-making processes?
Put another way, it seems like the two definitions can collapse in a sufficiently low-privacy conformist environment—which can be unified with the emotion of “freedom”, but at least in most Western contexts, that seems infrequent. The impression I get is that most people obvious-patch around this by trying to extrapolate “what a version of me completely removed from peer pressures would prefer” and using that as the preference baseline, but I both think and feel that that’s incoherent. (Further meta, I also get the impression that many people don’t feel that it’s incoherent even if they would agree cognitively that it is, and that that leads to a lot of worldmodel divergence down the line.)
(I realize this might be a bit off-track from its parent comment, but I think it’s relevant to the broader discussion.)
“Default” and “Common” feel wrong, but perhaps “Core” has a place somewhere? “This is what we’re here for; the rest is in support of it.”
Is the “Chaos” part meant to be a link? It doesn’t seem to go anywhere.
The bat and ball problem I answer in what I’ll call one conscious time-step with the correct “five cents”, but it happens too fast for me to verify how (beyond the usual trouble with verifying internal reflection). I would speculate, in decreasing order of intuitive probability, that in order to get the answer, either (a) I’ve seen an exactly analogous “trick” problem before and am pattern-matching on that or (b) I’m doing the algebra quickly using my seemingly well-developed mathematical intuition. I can also imagine (c) I’m leaping to the “wrong” answer, then trying to verify it, noticing it’s wrong, and correcting it, all in the same subconscious flash, but that feels off. Imagining the “ten cents” answer doesn’t actually feel compelling; it just feels wrong. (It feels like a similar emotion to noticing I’ve gotten the wrong amount of change, in fact.)
The widgets problem I do a noticeable double-take on, but it’s rapidly corrected within one conscious time-step; the “100” is a momentary flicker before my brain settles on the correct answer. Imagining “100” afterwards feels wrong, but less immediately so than “ten cents” did. It feels like I have a bias there toward answering “how many widgets can you produce in a fixed time” questions, so I might have an echo of the misreading “how many widgets can 100 machines produce in [assumed to be the same amount of time as before, since no contrary time value is presented to override this]”.
The lily pads question takes me a conscious time-step longer to answer than either of the other two; the initial flash is “inconclusive”, and then I see myself rechecking the part where the quantity doubles every step before answering “47”. (I notice I didn’t remember that the steps were days, only remembering that there was a time unit; I don’t know if that’s relevant.) Imagining “24” afterwards feels some intermediate level of wrong between “ten cents” and “100”; my mental graph of the growth curve puts the expected value 24 at “way too low” intuitively before I can compute the actual exponent.
I wonder if Chris_Leong was trying to deliver a meta-joke-based answer by pointing out that any consensus definition of “social reality” is itself a part of social reality.
Thanks for clarifying. In that case, I don’t count that as a gesture for word count in the sense that I was hoping, because it’s far too heavy and requires flow-breaking motion tracking of an unpredictable expand/collapse.