Interested in big-picture considerations and thoughtful action.
Edwin Evans
The guidelines above say “Before users can moderate, they have to set one of the three following moderation styles on their profile...”. But I don’t see this displayed on user profiles. Is “Norm Enforcing” or “Reign of Terror” displayed anywhere? Also I don’t think “Easy Going” really captures the “I Don’t Put Finger on Scales” position.
If the author’s policy is displayed somewhere and I just didn’t find it, then this seems good enough to me as a Reader. I hope there is a solution that can make both authors like Eliezer and authors like Wei happy. It would be nice to make Commenters happy as well, though I’ve thought less about that.
Brainstorming: I wonder if it would be possible to have a subtle indicator at the bottom of the comment section for when comments have been silently moderated by the author (such as a triggered ban). I think this may still be unfair to party 1, so perhaps there could instead be badges on prominent author profiles indicating whether they take the “gardener” or the “equal scales” position, plus perhaps a setting for users (off by default) that shows a note when an article has silent moderation/restrictions by the author, or a way for authors to display that they haven’t made any silent edits/restrictions.
Here’s my understanding of the situation. The interested parties are:
Prominent authors: Contribute the most value to the forum and have the most influence over its long-term trajectory. They will move to other platforms if they think it will be better for their message.
Readers: Don’t want to see low-quality comments that are hard to filter out. (Though when there are a lot of comments, comment karma helps a lot, and I’m much more concerned about prominent authors leaving than about needing to skim over comments.)
Prominent authors concerned with fairness: Authors like Wei, whose content is equally or more valuable, and who will prefer a forum that shows the writer allows unbiased commenting from readers, even if a reader (like me) needs to be willing to do a little more work to see this.
Suspected negative-value commenters: Think their comments are valuable and are being suppressed due to author bias.
Intelligent automated systems: Should probably just get everything, since they have unlimited patience for reading low-quality, annotated comments.
Forum developers: Their time is super valuable
Does this sound about right?
[Update: The guidelines above say “Before users can moderate, they have to set one of the three following moderation styles on their profile...”. But I don’t see this displayed on user profiles. Is the information recorded but not displayed? (I’m looking at Eliezer’s profile. If it’s displayed somewhere then this seems good enough to me.)]
Perhaps in most of the simulations they help by sharing what they’ve learned, giving brain enhancements, etc., but those ones quickly reach philosophical dead ends, so we find ourselves in one of the ones that doesn’t get help and takes longer doing exploration.
(This seems more plausible to me than using the simulations for “mapping the spectrum of rival resource‑grabbers” since I think we’re not smart enough to come up with novel ASIs that they haven’t already seen or thought of.)
Why do you think they haven’t talked to us?
Creating zillions of universes doing bad philosophy (or at least philosophy presumably worse than they could do if the simulators shared their knowledge) doesn’t seem like a good way to try to solve philosophy.
Even if they prefer to wait and narrow down a brute-force search to the ASIs that surviving civilizations create (as in jaan’s video), it seems like it would be worth not keeping us in the dark, so that we don’t just create ASIs like the ones they’ve already seen from similarly uninformed civilizations.
I’m not sure how yours is creepy? Is it in the idea that all the worst universes also exist?
Yes, and also just that I find it a little creepy/alien to imagine a young child that could be that good at math.
Care to explain? Is the Servant God an ASI and the true makers the humans that built it? Why did the makers hide their deeds?
Thanks for the riff!
Note: I wasn’t sure how to convey it, but in the version I wrote I didn’t mean a world where people have god-like powers. The only intended change was that it was a world where it was normal for six-year-olds to be able to think about multiple universes and understand what counts as advanced math for us, like Group Theory. There were a couple of things I was thinking about:
I was musing on a possible solution to the measure problem: our universe is an actual hypothetical/mathematical object, and there are a finite number of actual hypotheticals, such that having a copy of a universe would make no more sense than having a copy of a number. (The mathematical object only needs to be as real as we are within it.)
I was also asking if it would be possible to have a world where it was normal for six-year-olds to be that much better at math (and presumably get better as they grow up) in the same way that a six-year-old is that much better at conceptual math than a chimpanzee. Would it have to be creepy or could they still be relatable? (The girl was smiling because she knew she was being silly.)
Disclaimer: I’m not a Group Theorist, and the LLM I asked said it would take ten-plus years, if ever, for me to be able to derive the order of the Fischer–Griess monster group from first principles (but in that world it’s normal that the child could do this).
Prompt: write a micro play that is both disturbing and comforting
Title: “The Silly Child”
Scene: A mother is putting to bed her six-year-old child
CHILD: Mommy, how many universes are there?
MOTHER: As many as are possible.
CHILD (smiling): Can we make another one?
MOTHER (smiling): Sure. And while we’re at it, let’s delete the number 374? I’ve never liked that one.
CHILD (excited): Oh! And let’s make a new Fischer-Griess group element too! Can we do that Mommy?
MOTHER (bops nose): That’s enough stalling. You need to get your sleep. Sweet dreams, little one. (kisses forehead)
End
Thank you for your clear response. How about another example? If somebody offers to flip a fair coin and pay me $11 on Heads and take $10 on Tails, I will happily take this bet. If they say we’re going to repeat the same bet 1000 times, I will take that too; I expect to gain and am unlikely to lose much. If instead they show me five unfair coins and say they are weighted from 20% Heads to 70% Heads, then I’ll be taking on more risk. The other three (besides the 20% and 70% ones) could all be 21% Heads or all 69% Heads, but if I had to pick, I’d pick Tails: if I know nothing about the other three, and nothing about whether the other person wants me to make or lose money, then I’d figure the other three are randomly biased within that range. (I could still be playing a loser’s game for 1000 rounds if each round one of the coins is selected at random to flip, but picking Tails is still better than picking Heads.) Is this the situation we’re discussing?
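(To make “randomly biased within that range” concrete, here’s a minimal simulation sketch; the uniform prior over the 20%–70% bias range is my own illustrative assumption, not something stated in the thread.)

```python
import random

# Toy check: each round a coin's Heads bias is drawn uniformly from
# [0.20, 0.70] (an illustrative assumption), and I bet on Tails,
# winning $11 on Tails and losing $10 on Heads.
def average_winnings(rounds=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        bias = rng.uniform(0.20, 0.70)  # unknown Heads probability
        heads = rng.random() < bias
        total += -10 if heads else 11
    return total / rounds

# Under this prior E[P(Heads)] = 0.45, so the per-round expectation is
# 0.55 * 11 - 0.45 * 10 = +1.55 in favor of Tails.
print(average_winnings())
```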
Maximality seems asymmetrical, and it seems to lose information?
Maybe it will help me to have an example, though I’m not sure if this is a good one. Suppose I have two weather forecasts that give different probabilities for 0 inches, 1 inch, etc., but I have absolutely no idea which forecast is better, and I don’t want to go out if there is a greater than 20% probability of more than 2 inches of rain. Then I’d weigh each forecast equally and calculate the probability from there. If the forecasts themselves give high/low probabilities for 0 inches, 1 inch, etc., then I’d think they aren’t very good forecasts: the forecaster should either have combined all their analysis into a single probability (say 30%), or else given the conditions under which they give their low end (say 10%) or high end (say 40%); and if I didn’t have any opinion on the probability of those conditions, I would weigh the low and high equally (and get 25%). Do you think I should be doing something different (or what is a better example)?
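(Spelling out that last bit of arithmetic as a tiny sketch, using the hypothetical numbers above:)

```python
# The forecaster's hypothetical low and high ends for P(> 2 inches of rain)
p_low, p_high = 0.10, 0.40

# With no opinion on which set of conditions holds, weigh them equally:
p_combined = 0.5 * p_low + 0.5 * p_high
print(p_combined)  # 0.25, i.e. the 25% above
```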
This seems like two questions:
Can you make up mathematical counterfactuals and propagate the counterfactual to unrelated propositions? (I’d guess no. If you are just breaking a conclusion somewhere, you can’t propagate it following any rules unless you specify what those rules are, in which case you’ve just made up a different mathematical system.)
Does the identical-twin one-shot prisoner’s dilemma only work if you are functionally identical, or can you be a little different, and is there anything meaningful that can be said about this? (I’m interested in this one also.)
I donated. I think Lightcone is helping strike at the heart of questions about what we should believe and do. Thank you for making LessWrong work so well, for being thoughtful about managing content, and for providing superb spaces, both online and offline, for deep ideas to develop and spread!
What is your tax ID, for people wanting to donate from a Donor-Advised Fund (DAF) to avoid taxes on capital gains?
Cool. Is this right? For something with a 1/n chance of success, I can have a 95% chance of success by making 3n attempts, for large values of n. Roughly what does “large” mean here?
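(A quick self-check of the arithmetic, which I believe supports this: the chance of at least one success in 3n independent attempts is 1 - (1 - 1/n)^(3n), which tends to 1 - e^(-3) ≈ 0.9502 as n grows.)

```python
import math

# P(at least one success in 3n attempts, each with chance 1/n)
for n in (10, 100, 10_000):
    print(n, 1 - (1 - 1 / n) ** (3 * n))

print("limit:", 1 - math.exp(-3))  # ~0.9502 as n -> infinity
```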
A small improvement to the Wikipedia page on Pareto Efficiency
I’m confused by what you mean by “non-pragmatic”. For example, what makes “avoiding dominated strategies” pragmatic but “deference” non-pragmatic?
(It seems like the pragmatic ones help you decide what to do and the non-pragmatic ones help you decide what to believe, but then this doesn’t answer how to make good decisions.)
I meant this as a joke: if there’s one universe that contains all the other universes (since it isn’t limited by logic), and that one doesn’t exist, then I don’t exist either and wouldn’t have been able to post this. (Unless I only sort-of exist, in which case I’m only sort-of joking.)
We can be virtually certain that 2+2=4 based on priors. This is because it’s true in the vast multitude of universes: in fact, in all the universes except the one universe that contains all the other universes. And I’m pretty sure that one doesn’t exist anyway.
@Duncan Sabien (Inactive): given the updated totals @habryka mentioned, does this increase your sense of LessWrong being a great place for co-thinking?
(Current totals are 42/39 and 16/11.)