Open Thread – Autumn 2023
If it’s worth saying, but not worth its own post, here’s a place to put it.
If you are new to LessWrong, here’s the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don’t want to write a full top-level post.
If you’re new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Self-Resolving Prediction Markets for Unverifiable Outcomes by Siddarth Srinivasan, Ezra Karger, Yiling Chen:
This is a fascinating result.
I wonder how important that last part about the common prior is. Here’s how it works:
They conclude:
I really hope someone makes an empirical study of this idea. It could be extremely useful if it works.
Bug report: What’s up with the bizarre rock image on the LW homepage in this screenshot? Here is its URL.
It’s supposed to be right-aligned with the post recommendation to the right (“Do you fear the rock or the hard place”) but a Firefox-specific CSS bug causes it to get mispositioned. We’re aware of the issue and working on it. A fix will be deployed soon.
What the heck. In a single comment you’ve made me dread the entirety of web development. As a developer, you have to compensate for a browser bug which was reported 8 months ago, and which presumably shouldn’t have to be your responsibility in the first place? That sounds infuriating. My sympathies.
“First time?”
If you think that’s bad, just think about compensating for browser bugs which were reported 20 years ago…
(That links to a comment on a post which was moved back to drafts at some point. You can read the comment through the GreaterWrong version.)
It’s not so much that I thought this one instance was bad, as that I tried to extrapolate under the assumption that this was a common occurrence, in which case the extrapolation did not bode well. Naturally I still didn’t expect the situation to be as bad as the stuff you linked, yikes.
Proposal: Remove strong downvotes (or limit their power to −3). Keep regular upvotes, regular downvotes, and strong upvotes.
Variant: strong downvoting a post blocks that user’s posts from appearing on your feed.
Say more about what you want from option 1?
I’m not sure if this is the right course of action. I’m just thinking about the impact of different voting systems on group behavior. I definitely don’t want to change anything important without considering negative impacts.
But I suspect that strong downvotes might quietly contribute to LW being more group thinky.
Consider a situation where a post strongly offends a small number of LW regulars, but is generally approved of by the median reader. A small number of regulars hard downvote the post, resulting in a suppression of the undesirable idea.
I think this is unhealthy. I think a small number of enthusiastic supporters should be able to push an idea (hence allowing strong upvotes) but that a small number of enthusiastic detractors should not be able to suppress a post.
For LW to do its job, posts must be downvoted because they are poorly reasoned and badly written.
I often write things which are badly written (which deserve to be downvoted) and also things which are merely offensive (which should not be downvoted). [I mean this in the sense of promoting heretical ideas. Name-calling absolutely deserves to be downvoted.] I suspect that strong downvotes are placed more on my offensive posts than my poorly-written posts, which is opposite the signal LW should be supporting.
There is a catch: abolishing strong downvotes might weaken community norms and potentially allow posts to become more political/newsy, which we don’t want. It may also weaken the filter against low quality comments.
Though, perhaps all of that is just self-interested confabulation. What’s really bothering me is that I feel like my more offensive/heretical posts get quickly strong downvoted by what I suspect is a small number of angry users. (My genuinely bad posts get soft downvoted by many users, and get very few upvotes.)
In the past, this has been followed by good argument. (Which is fine!) But recently, it hasn't. Which makes me feel like it's just been driven by anger and offense, i.e., a desire to suppress bad ideas rather than untangle why they're wrong.
This is all very subjective and I don’t have any hard data. I’ve just been getting a bad feeling for a while. This dynamic (if real) has discouraged me from posting my most interesting (heretical) ideas on LW. It’s especially discouraged me from questioning the LW orthodoxy in top-level posts.
Soft downvotes make me feel “this is bad writing”. Strong downvotes make me feel “you’re not welcome here”.
That said, I am not a moderator. (And, as always, I appreciate the hard work you do to keep things well gardened.) It's entirely possible that my proposal has more negative effects than positive effects. I'm just one datapoint.
Recently I watched “The Tangle.” It’s an indie movie written and directed by the main actor from Ink, if that means anything to you. (Ink is also an indie movie, but it’s in my top 5 of all time.) Anyway, The Tangle is set in a world right after the singularity (of sorts), but where humans haven’t fully given up control. I don’t want to spoil too much here, but I found a lot of ideas in it that were popular 5-10 years ago in rationalist circles. Quite unexpected for an indie movie. I really enjoyed it and I think you would too.
I want to better understand how prediction markets on numeric questions work and how effective they are. Can someone share a good explanation and/or analysis of them? I read the Metaculus FAQ entry but it didn’t answer all my questions. Do numeric prediction markets have to use probability density functions like Metaculus, or can they use higher/lower like Manifold used to do, or are there other options as well? Would the way Metaculus does it work for real-money markets?
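To make the question concrete, here’s a minimal sketch of one other option I’m aware of besides PDFs and higher/lower: discretize the numeric range into buckets and trade them with Hanson’s logarithmic market scoring rule. The class and parameter names (BucketedLMSR, bucket_edges, liquidity) are just mine for illustration; this is not how Metaculus or Manifold actually implement their numeric questions.

```python
# Sketch: a numeric question traded as discrete buckets under an LMSR market maker.
import math

class BucketedLMSR:
    def __init__(self, bucket_edges, liquidity=100.0):
        self.edges = bucket_edges                 # e.g. [0, 10, 20, ..., 100]
        self.b = liquidity                        # higher b = deeper market, flatter prices
        self.q = [0.0] * (len(bucket_edges) - 1)  # outstanding shares per bucket

    def _cost(self, q):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def prices(self):
        """Current implied probability for each bucket (sums to 1)."""
        z = sum(math.exp(x / self.b) for x in self.q)
        return [math.exp(x / self.b) / z for x in self.q]

    def buy(self, bucket, shares):
        """Buy `shares` of `bucket`; returns the cost the trader pays."""
        new_q = list(self.q)
        new_q[bucket] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = BucketedLMSR(bucket_edges=list(range(0, 110, 10)))
print(market.buy(bucket=3, shares=50))            # cost of pushing probability toward 30-40
print([round(p, 3) for p in market.prices()])
```

One reason this kind of design gets suggested for real-money markets is that the market maker’s worst-case subsidy is bounded (b times the log of the number of buckets), so the operator knows the maximum they can lose up front.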
FYI, current comment reactions bug (at least in desktop Firefox):
This is mostly because it’s actually pretty annoying to get exactly even numbers of icons in each row. I agree it looks pretty silly but it’s a somewhat annoying design challenge to get it looking better.
Why not just leave that spot empty, though? Or rather, the right-most spot in the second row.
The current implementation, where reaction icons aren’t deduplicated, might (debatably) look prettier in some sense, but it has other weird consequences. Like this and this:
EDIT: Several reactions appear to be missing in grid view: “Thanks”, “Changed my Mind”, and “Empathy”.
In the first place, I made my original bug report because I couldn’t find the Thanks reaction, looked through all the reactions one by one, and thus noticed the doubled Thumbs Up reactions. I eventually decided I’d hallucinated there being a Thanks reaction, or that it was only available on the EA Forum—but I just noticed that it’s still available, it’s just missing in grid view.
No, if you look you’ll notice that the top row of the palette view is the same as the top row of the list view, and the second row of the palette view is the same as the bottom row of the list view. The specific lines of code were re-used.
The actual historical process was: Jim constructed the List View first, then I spent a bunch of time experimenting with different combinations of list and palette views, then afterwards made a couple incremental changes for the List view that accidentally messed up the palette view. (I did spend, like, hours trying to get the palette view to work, visually, which included inventing new emojis. It was hard because each line was trying to have a consistent theme, as well as the whole thing fitting into a grid.)
But yeah it does look like the “thanks” emoji got dropped by accident from the palette view and it does just totally solve the display problem to have it replace the thumbs-up.
Apologies. After posting my original comment I myself noticed what you mention in your first paragraph, realized that my initial annoyance was obviously unwarranted, and thus edited my original comment before I even saw your reply.
Anyway, see my edited comment above: I found at least three reactions that are missing in the grid view.
(It’s deliberate that there is one thumbs up in the top row and 2 in the bottom row of the list-view, because it seemed actually important to give people immediate access to the thumbs-up. Thumbs down felt vaguely important to give people overall but not important to put front-and-center)
That justification makes sense. Though to make the search behavior less weird, it would be good if the search results a) were deduplicated, and maybe b) didn’t display the horizontal divider bars for empty sections.
I’ve noticed that I’m no longer confused about anthropics, and that a prediction-market-based approach works.
Postulate. Anticipating (expecting) something is only relevant to decision making (for instance, expected utility calculation).
Expecting something can be represented by betting on a prediction market (with large enough liquidity so that it doesn’t move and contains no trade history).
If merging copies is considered, the sound probability to anticipate depends on the merging algorithm. If it sums purchased shares across all copies, then the probability is influenced by splitting; if all copies except one are ignored, then it is not.
If copies are not merged, then what to anticipate depends on utility function.
“Quantum suicide”, i.e. rewriting arbitrary parts of the utility function with zeroes, is possible, but don’t you actually care about the person in the unwanted scenario? Also, if an AGI gets to know that, it can also run arbitrarily risky experiments...
Sleeping Beauty: if both trades go through in the case where she is woken up twice, she should bet at probability 1/3. If not (for example, if the opportunity will be presented to her only once), it’s coherent to bet at probability 1/2.
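Here’s a quick Monte Carlo sketch of the Sleeping Beauty claim above; the function name and the both_trades_count flag are just my own illustration, not part of any existing market.

```python
# Beauty buys one share of "the coin came up heads" at price p at every awakening.
import random

def expected_profit(p, both_trades_count, n=200_000):
    total = 0.0
    for _ in range(n):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2            # tails -> woken twice
        trades = awakenings if both_trades_count else 1
        payout_per_share = 1.0 if heads else 0.0
        total += trades * (payout_per_share - p)
    return total / n

# Both awakening-bets go through: break-even near p = 1/3.
print(expected_profit(1/3, both_trades_count=True))   # ~0
# Only one bet counts per experiment: break-even near p = 1/2.
print(expected_profit(1/2, both_trades_count=False))  # ~0
```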
I’ve heard a comment that betting odds are something different from probability:
Well, if you feel sure about an event with an incorrect probability, you may end up in a suboptimal state with respect to instrumental rationality (since your expected utility calculations will be flawed), so it’s perhaps more useful to have correct intuitions. (Eliezer may want to check this out and make fun of people with incorrect intuitions, by the way :-))
New problems are welcome!
A stupid question about anthropics and [logical] decision theories. Could we “disprove” some types of anthropic reasoning based on [logical] consistency? I struggle with math, so please keep the replies relatively simple.
Imagine 100 versions of me, I’m one of them. We’re all egoists, each one of us doesn’t care about the others.
We’re in isolated rooms, each room has a drink. 90 drinks are rewards, 10 drinks are punishments. Everyone is given the choice to drink or not to drink.
The setup is iterated (with memory erasure), everyone gets the same type of drink each time. If you got the reward, you get the reward each time. Only you can’t remember that.
If I reason myself into drinking (reasoning that I have a 90% chance of reward), from the outside it would look as if 10 egoists have agreed (very conveniently, to the benefit of others) to suffer again and again… is it a consistent possibility?
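To make the outside view concrete, here’s a tiny sketch of the setup under the “always drink” policy; the +1/-1 payoffs and the round count are assumptions I picked for illustration.

```python
# 100 copies, each permanently assigned a reward drink (90) or a punishment drink (10);
# the assignment is fixed across iterations and memory is erased between rounds.
REWARD, PUNISHMENT = +1.0, -1.0
N_COPIES, N_PUNISHED, N_ROUNDS = 100, 10, 1000

drinks = [PUNISHMENT] * N_PUNISHED + [REWARD] * (N_COPIES - N_PUNISHED)

# From the inside, each round looks like a fresh 90% bet with positive expected value...
per_round_expected_value = sum(drinks) / N_COPIES          # 0.8 per copy per round
# ...but from the outside, the same 10 copies absorb the punishment every single round.
per_copy_totals = [d * N_ROUNDS for d in drinks]
print(per_round_expected_value)                            # 0.8
print(per_copy_totals[:3], per_copy_totals[-3:])           # -1000.0 vs +1000.0
```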
Has anyone explored the potential of AGI agents forming friendships with, or a genuine interest in, humans (not as pets or some consumable they “farm”)?
I was just considering writing a post with a title like “e/acc as Death Cult”, when I saw this:
-- https://twitter.com/garrytan/status/1699828209918046336
It was a mistake to reject this post. This seems like a case where the rule that was applied is a mis-rule, and it was also applied inaccurately, which makes the rejection even harder to justify. It is also not easy to determine which “prior discussion” the rejection reasons are referring to.
It doesn’t seem like the post was political...at all? Let alone “overly political”, which I think is perhaps kind of mind-killy to apply frequently as a reason for rejection. It is also about a subject that is fairly interesting to me, at least: sentiment drift on Wikipedia.
It seems the author is a 17-year old girl, by the way.
This isn’t just about standards being too harsh, but about whether they are even being applied correctly to begin with.
I have read that post, and here are my thoughts:
The essence of the post is only in one section of seven: “Exploring Nuances: Case Studies of Evolving Portrayals”.
Related work descriptions could be fit into one sentence for each work, to make reading the report easier.
Sentences about the relevance of the work, its being a pivotal step in something, etc., don’t carry much meaning.
The report doesn’t state what to anticipate; what [social] observations can one predict better after reading it.
Overall, the post doesn’t look like it tries to communicate anything; it’s written in a vague, formal style.
I was reading Obvious advice and noticed that at times when I’m overrun by emotions, or in a hurry to make a decision, or for some other reason I’m not able to articulate, I fail to see the obvious. During such times, I might even worry that whatever I’m seeing is not the obvious thing: I might be missing something so obvious that the whole thing would’ve worked out differently had I thought of that one simple obvious thing.
Introspecting, I feel that perhaps I am not exactly sure what this “obvious” even means. I am able to say “that’s obvious” sometimes on the spot and sometimes in hindsight. But when I sit down and think about it, I come up with things like “what’s obvious is what feels obvious!” and I am not really satisfied.
Can someone link me to resources to explore this topic further? A discussion here is appreciated as well.