You see either something special, or nothing special.
Rana Dexsin
If I’m not the first, was this posted before? I don’t see the same suggestion elsewhere in the comments, at least…
And the part I’m worried about above is that the poetic view will lead to conflationary thinking about the categories along the way, rendering the model a lot less useful. Sure, a dragon can cause multiple symptoms, but that’s not the central image that comes to mind (at least for me), and trying to get a grip on something like this as an intuition pump gets fragile if you lean into what sounds compelling.
This sounds similar in effect to what philosophy of mind calls “embodied cognition”, but it takes a more abstract tack. Is there a recognized background link between the two ideas already? Is that a useful idea, regardless of whether it already exists, or am I off track?
How do I report a top-level post to the moderators? I see a kebab menu for comments, but I don’t see anything like that for top-level posts, neither on the front page nor on the post page. The specific situation is that there currently seem to be multiple spam posts in the “all posts” queue, but I’d also like to know how to do this in general for future reference.
It would be appreciated (and pleasingly symmetrical). Thanks for the response.
I use it as a proxy, but I’d like word count better. T3t implied that there’s already a gesture for word count, but I don’t know what it is, so maybe it isn’t discoverable enough as it is either.
Thanks for clarifying. In that case, I don’t count that as a gesture for word count in the sense that I was hoping, because it’s far too heavy and requires flow-breaking motion tracking of an unpredictable expand/collapse.
I wonder if Chris_Leong was trying to deliver a meta-joke-based answer by pointing out that any consensus definition of “social reality” is itself a part of social reality.
The bat and ball problem I answer in what I’ll call one conscious time-step with the correct “five cents”, but it happens too fast for me to verify how (beyond the usual trouble with verifying internal reflection). I would speculate, in decreasing order of intuitive probability, that in order to get the answer, either (a) I’ve seen an exactly analogous “trick” problem before and am pattern-matching on that or (b) I’m doing the algebra quickly using my seemingly well-developed mathematical intuition. I can also imagine (c) I’m leaping to the “wrong” answer, then trying to verify it, noticing it’s wrong, and correcting it, all in the same subconscious flash, but that feels off. Imagining the “ten cents” answer doesn’t actually feel compelling; it just feels wrong. (It feels like a similar emotion to noticing I’ve gotten the wrong amount of change, in fact.)
The widgets problem I do a noticeable double-take on, but it’s rapidly corrected within one conscious time-step; the “100” is a momentary flicker before my brain settles on the correct answer. Imagining “100” afterwards feels wrong, but less immediately so than “ten cents” did. It feels like I have a bias there toward answering “how many widgets can you produce in a fixed time” questions, so I might have an echo of the misreading “how many widgets can 100 machines produce in [assumed to be the same amount of time as before, since no contrary time value is presented to override this]”.
The lily pads question takes me a conscious time-step longer to answer than either of the other two; the initial flash is “inconclusive”, and then I see myself rechecking the part where the quantity doubles every step before answering “47”. (I notice I didn’t remember that the steps were days, only remembering that there was a time unit; I don’t know if that’s relevant.) Imagining “24” afterwards feels some intermediate level of wrong between “ten cents” and “100”; my mental graph of the growth curve puts the expected value 24 at “way too low” intuitively before I can compute the actual exponent.
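For concreteness, the “correct” answers to all three questions above can be checked with a quick sketch (this assumes the standard figures from the Cognitive Reflection Test versions of the problems):

```python
# Bat and ball: bat + ball = 1.10, and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10, so ball = (1.10 - 1.00) / 2.
ball = (1.10 - 1.00) / 2
print(round(ball, 2))  # 0.05 — "five cents", not ten

# Widgets: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget per 5 minutes; 100 machines making 100 widgets
# still take the same 5 minutes, not 100.
minutes = 5 * (100 / 100)  # base time scaled by widgets per machine
print(minutes)  # 5.0

# Lily pads: the patch doubles daily and covers the lake on day 48,
# so it covered half the lake exactly one doubling earlier.
full_coverage_day = 48
half_coverage_day = full_coverage_day - 1
print(half_coverage_day)  # 47
```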
Is the “Chaos” part meant to be a link? It doesn’t seem to go anywhere.
“Default” and “Common” feel wrong, but perhaps “Core” has a place somewhere? “This is what we’re here for; the rest is in support of it.”
But how do the two things in the last paragraph mix if I have (1) a preference for others to judge me well, (2) a belief that others will judge me well if they believe I am doing what they believe is optimal for what they think my beliefs and preferences should be, and (3) a belief that the extrapolated cost of convincing them that I am doing such a thing without actually doing the thing is so incredibly high as to make plans involving that almost never show up in decision-making processes?
Put another way, it seems like the two definitions can collapse in a sufficiently low-privacy conformist environment—which can be unified with the emotion of “freedom”, but at least in most Western contexts, that seems infrequent. The impression I get is that most people obvious-patch around this by trying to extrapolate “what a version of me completely removed from peer pressures would prefer” and using that as the preference baseline, but I both think and feel that that’s incoherent. (Further meta, I also get the impression that many people don’t feel that it’s incoherent even if they would agree cognitively that it is, and that that leads to a lot of worldmodel divergence down the line.)
(I realize this might be a bit off-track from its parent comment, but I think it’s relevant to the broader discussion.)
I browse this newsletter occasionally via LW; I am not subscribed by email. I am not so far seriously involved in AI research, and I don’t wind up understanding most of it in detail, but I have a longer-term interest in such issues, and I want to keep a fraction of a bird’s eye on the state of the field if possible, so that if I start in on deeper such activities a few years from now, I can re-skim the archives and try to catch up.
Speculative followup: seeing a few other people say similar things here, and contrasting it with what seems to have been implied in the retrospective itself, makes me guess there’s a seriousness split between LW and email “subscribers”. Does the former have passersby dominating the reader set (especially since it’ll be presented to people who are on LW for some other reason), while anyone who cares more deeply and specifically consumes the newsletter primarily by email?
But jointly constructing a successor with compromise values and then giving them the reins is something humans can sort of do via parenting; there’s just more fuzziness and randomness and drift involved, no? That is, assuming human children take a bunch of the structure of their mindsets from what their parents teach them, which certainly seems to be the case on the face of it.
Extending this: trust problems could impede the flow of information in the first place in such a way that the introspective access stops being an amplifier across a system boundary. An AI can expose some code, but an AI that trusts other AIs to be exposing their code in a trustworthy fashion rather than choosing what code to show based on what will make the conversation partner do something they want seems like it’d be exploitable, and an AI that always exposes its code in a trustworthy fashion may also be exploitable.
Human societies do “creating enclaves of higher trust within a surrounding environment of lower trust” a lot, and it does improve coordination when it works right. I don’t know which way this would swing for super-coordination among AIs.
I think it depends a lot on how you frame it, and analogies work much less well than people expect because of ways the Internet is very different from previous environments.
The intuitive social norms surrounding the store clerk involve the clerk having socially normal memory performance and a social conscience surrounding how they use that memory. What if the store clerk were writing down everything you did in the store, including every time you picked your nose, your exact walking path, every single item you looked at and put back, and what you were muttering to your shopping companion? What if that list were quickly sent off to an office across the country, where they would try to figure out any number of things like “which people look suspicious” and “where to display which items”? What if the clerk followed you around the entire store with their notepad when it’s a giant box store with many departments? For the cross-site case, imagine that the office also receives detailed notes about you from the clerks at just about every other place you go, because those ones wound up with more profitable store layouts and lower theft rates and the other shops gradually went out of business.
There are other analogy framings still; consider one with security cameras instead, and whether it feels different, and what different assumptions might be in play. But in all of those cases, relying on misplaced assumptions about humanlike capability, motivation, and agency is something to be wary of. (Fortunately, I think a lot of people here should be familiar with that one!)
Since this seems to be an akrasia/executive-related problem, I suspect just having links to possible addons to use (and ideally, example configurations) easily accessible could be disproportionately ameliorative compared to its implementation cost, both via the reminder that compulsive browsing and mitigations for it both exist, and via the social signaling that this is an approved way of browsing that won’t make you weird. Though I’m not sure about the possible noise it creates, depending on what easy options you have for placement/hiding.
My experience in other circles with Slack and Discord is that the niche of emoji reactions is primarily non-interrupting room-sensing (there are also sillier uses in casual social contexts, but they don’t seem relevant here). I don’t feel any pressure to specifically have read something, and I haven’t observed people reading anything into failure to provide a reaction. The rare exception to the latter is when there’s clearly an active conversation going on that someone’s already clearly been active in, which can be handled by explicitly signaling departure, which was a norm in those circumstances anyway.
Non-interrupting room-sensing in a fast-flowing channel environment has generally struck me as beneficial. Being able to quickly find the topic-flow of the current conversation is important, and reactions do not have to be scanned for topic introductions. Reactions encode leafness: you can’t reply to a reaction easily, which also means giving a reaction cannot induce social pressure to reply to it. They encode weaker ties to the individual: people with the same reaction are stacked together, and it takes an extra effort to look at the list of reacting users. Differentially, reactions can also signal level of involvement: someone “conversing” in only reactions may not be up for thinking about the conversation hard enough to produce text responses, but is able to listen and give base emotional feedback (which seems to be the most relevant to the proposed uses here). It serves a similar function to scanning people’s facial expressions in a physical meeting room.
I’m very unclear on how these patterns would play out in a longer-form, more delay-tolerant environment like a comment tree. Some of the room-sensing interpretation makes less sense the less the timescale of the reactions corresponds to unconscious-emotion synchronization; there’s a lot of lost flow context.
“Agree.”
I like the basic idea of the classification. I suggest “Hydra” instead of “Dragon”, since you specifically mention multiple seemingly independent heads/symptoms. If I were to only read the comments, I would think a Dragon was just a particularly large or difficult Bug; I don’t know if that means people are letting the definition slip in that direction.
I think I need to chew on this more and think about how much usefully breaks down along these lines. As I read this, you’re describing a 2×2 matrix of bimodal levels (single vs. multiple causes, single vs. multiple effects) and a correlation between those cells and good strategies for dealing with problems with those traits. Is that accurate? But there’s also a very distinct feeling that each of these categories evokes (especially given the names), and I’m not as sure that the feeling is correlated with the purported criteria; I have an intuitive guess that it’s more correlated with perceptions of agency over problems, which may have only a skewed relation to the “number” of causes and effects (insofar as that’s meaningful in the first place).