Hello! I work at Lightcone and like LessWrong :-)
Is there anything particularly quantum about this effect?
Using the simulator frame, one might think there’s space to tweak:
The basic physical laws
The fundamental constants
The “PRNG” (in an Everettian picture this looks kind of weird because it’s more like throwing out parts of the wavefunction to save on computation; reminds me a little of mangled worlds)
Perhaps the idea is that tweaking 1 & 2 results in worlds less interesting to the simulator?
I’m not seeing any active rate limits. Do you know when you observed it? It’s certainly the case that an automatic rate limit could have kicked in and then, as voting changed, been removed.
Daniel Dennett has died (1942-2024)
Good question! From the Wiki-Tag FAQ:
A good heuristic is that a tag ought to have three high-quality posts, preferably written by two or more authors.
I believe all tags have to be approved. If I were going through the morning moderation queue, I wouldn’t approve an empty tag.
I was trying to figure out why you believed something that seemed silly to me! I think it barely occurred to me that it’s a joke.
The main subcultures that I can think of where this applies are communities based around solving some problem:
Weight loss, especially if based around a particular diet
Dealing with a particular mental health problem
Trying to solve a particular problem in the world (e.g. explaining some mystery or finding the identity of some criminal)
Any favourite examples?
I think my big problem with complexity science (having bounced off it a couple of times, never having engaged with it productively) is that though some of the questions seem quite interesting, none of the answers or methods seem to have much to say.
This is exacerbated by a tendency to imply they have answers (or at least something that is clearly going to lead to an answer).
I would like to read it! Satire is sometimes helpful for me to get a perspective shift
To answer, for now, just one piece of this post:
We’re currently experimenting with a rule that flags users who’ve received several downvotes from “senior” users (I believe 5 downvotes from users with above 1,000 karma) on comments that are already net-negative (I believe among comments posted in the last year).
We’re currently in the manual review phase, so users are being flagged and then having the rate limit applied if it seems reasonable. For what it’s worth, I don’t think this rule has an amazing track record so far, but all the cases in the “rate limit wave” were reviewed by me and Habryka, and he decided to apply a limit in those cases. (We applied some rate limit in 60% of the cases of users who got flagged by the rule.)
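To make the shape of the rule concrete, here’s a minimal sketch in Python. The thresholds and field names are my assumptions reconstructed from the description above, not the actual LessWrong code:

```python
from dataclasses import dataclass

# Assumed thresholds, inferred from the description above (not the real codebase)
SENIOR_KARMA = 1000          # karma above which a voter counts as "senior"
SENIOR_DOWNVOTES_NEEDED = 5  # senior downvotes needed before flagging
WINDOW_DAYS = 365            # only consider comments from the last year

@dataclass
class Comment:
    score: int             # net karma of the comment
    senior_downvotes: int  # downvotes from users with > SENIOR_KARMA karma
    age_days: int          # how long ago the comment was posted

def should_flag_for_review(comments: list[Comment]) -> bool:
    """Flag a user for manual review if their recent net-negative
    comments have accumulated enough downvotes from senior users."""
    total = sum(
        c.senior_downvotes
        for c in comments
        if c.score < 0 and c.age_days <= WINDOW_DAYS
    )
    return total >= SENIOR_DOWNVOTES_NEEDED
```

Flagged users would then enter the manual queue rather than being rate-limited automatically.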
People who get manually rate-limited don’t have an explanation visible when trying to comment (unlike users who are limited by an automatic rule, I think).
We have explained this to users who reached out (in fact, this answer is adapted from one such conversation), but I do think we plausibly should have set up infrastructure to explain these new rate limits.
Hello and welcome to the site! I’m glad you’re saying hello despite having been too shy :-)
Do let us know in this thread or in the intercom in the bottom right if you run into any problems.
Curated.
Using Bayes-type epistemology is a core LessWrong topic, and I think this represents a bunch of progress on that front (whether the results are already real-world-ready or just real-world-inspired). I have only engaged with small parts of the thesis, but those parts seem pretty exciting; so far, I particularly like knowing about quasi-arithmetic pooling. It feels like I’ve become less confused about something that I didn’t know I was confused about — the connection between the character of the proper scoring rule and the right ways to aggregate those probabilities.
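For readers who haven’t seen it, my rough (and possibly imperfect) recollection of the definition: quasi-arithmetic pooling aggregates forecasts $p_1, \dots, p_n$ with weights $w_i$ as

$$p^* = g^{-1}\!\left(\sum_i w_i\, g(p_i)\right),$$

where the link function $g$ is determined by the proper scoring rule being used; for the log score this recovers logarithmic pooling, and for the quadratic (Brier) score, linear pooling.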
I also appreciate Eric’s work making blogposts explaining more of his thoughts in a friendly way. Hope to see a few more distillations come out of this thesis!
Much sweat and some tears were spent on trying to get something like that working, but the Shoggoths are fickle.
Thanks so much for the fix!
LessWrong’s (first) album: I Have Been A Good Bing
(I assume you mean the story with him and the SS soldier; I think a couple of people got confused and thought you were referring to the fact Kahneman had died)
Say more / references?
From a message I wrote to a friend once that seems a little relevant:
[H]ow should you act when you’re inside someone’s OODA loop? I was thinking about how Wikipedia/tab explosions are sort of inside my OODA loop. But sometimes I can be more of an active reader who is navigating the concepts being exposed to me as I choose, and the process becomes like a magic genie or butler who is doing interpretative labour and conjuring up new scenes following my fickle interest.
So it seems like one thing that the person with the smaller loop can do is interpretative labour, and spend the faster cycles on self-legibilising.
Yep, the question is definitely about how far it transfers.
This seems insufficiently argued; the existence of any alignment research that can be done without huge profits is not enough to establish that you don’t need huge profits to solve alignment (particularly when considering things like how long timelines are even absent your intervention).
To be clear, I agree that OpenAI are doing evil by creating AI hype.