Hello! I work at Lightcone and like LessWrong :-)
kave
Maybe “counterfactually robust” is an OK phrase?
I am sad to see you getting so downvoted. I am glad you are bringing this perspective up in the comments.
I like comments about other users’ experiences for similar reasons why I like OP. I think maybe the ideal such comment would identify itself more clearly as an experience report, but I’d rather have the report than not.
What you probably mean is “completely unexpected”, “surprising”, or something similar.
I think it means the more specific “a discovery that, had it not happened, wouldn’t have happened another way for a long time”. I think this is roughly the “counterfactual” in “counterfactual impact”, though I agree it’s not the more widespread sense.
It would be great to have a single word for this that was clearer.
Enovid is also adding NO to the body, whereas humming is pulling it from the sinuses, right? (based on a quick skim of the paper).
I found a consumer FeNO-measuring device for €550. I might be interested in contributing to a replication.
(No, “you need huge profits to solve alignment” isn’t a good excuse — we had nowhere near exhausted the alignment research that can be done without huge profits.)
This seems insufficiently argued; the existence of any alignment research that can be done without huge profits is not enough to establish that you don’t need huge profits to solve alignment (particularly when considering things like how long timelines are even absent your intervention).
To be clear, I agree that OpenAI are doing evil by creating AI hype.
Is there anything particularly quantum about this effect?
Using the simulator frame, one might think there’s space to tweak:
1. The basic physical laws
2. The fundamental constants
3. The “PRNG” (in an Everettian picture this looks kind of weird, because it’s more like throwing out parts of the wavefunction to save on computation; reminds me a little of mangled worlds)
Perhaps the idea is that tweaking 1 & 2 results in worlds less interesting to the simulator?
I’m not seeing any active rate limits. Do you know when you observed it? It’s certainly the case that an automatic rate limit could have kicked in and then, as voting changed, been removed.
Good question! From the Wiki-Tag FAQ:
A good heuristic is that a tag ought to have three high-quality posts, preferably written by two or more authors.
I believe all tags have to be approved. If I were going through the morning moderation queue, I wouldn’t approve an empty tag.
I was trying to figure out why you believed something that seemed silly to me! I think it barely occurred to me that it might be a joke.
The main subcultures that I can think of where this applies are communities based around solving some problem:
Weight loss, especially if based around a particular diet
Dealing with a particular mental health problem
Trying to solve a particular problem in the world (e.g. explaining some mystery or finding the identity of some criminal)
Any favourite examples?
I think my big problem with complexity science (having bounced off it a couple of times, never having engaged with it productively) is that though some of the questions seem quite interesting, none of the answers or methods seem to have much to say.
This is exacerbated by a tendency to imply they have answers (or at least something that is clearly going to lead to an answer).
I would like to read it! Satire is sometimes helpful for me to get a perspective shift.
To answer, for now, just one piece of this post:
We’re currently experimenting with a rule that flags users who’ve received several downvotes from “senior” users (I believe 5 downvotes from users with above 1,000 karma) on comments that are already net-negative (and that, I believe, were posted in the last year).
We’re currently in the manual review phase: users are being flagged, and then the rate limit is applied if it seems reasonable. For what it’s worth, I don’t think this rule has an amazing track record so far, but all the cases in the “rate limit wave” were reviewed by me and Habryka, and he decided to apply a limit in those cases. (We applied some rate limit in 60% of the cases of users who got flagged by the rule.)
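The flagging rule described above could be sketched roughly as follows. To be clear, this is my own illustrative reconstruction, not the actual site code: the class names, field names, and thresholds-as-constants are all invented, with the threshold values taken from the description above.

```python
from dataclasses import dataclass

KARMA_THRESHOLD = 1000   # "senior" voter cutoff (value as stated above)
DOWNVOTE_THRESHOLD = 5   # senior downvotes needed to flag a user
RECENCY_DAYS = 365       # only comments from roughly the last year count


@dataclass
class Vote:
    voter_karma: int
    power: int  # negative for a downvote


@dataclass
class Comment:
    age_days: int
    score: int  # the comment's net karma
    votes: list


def should_flag(comments: list) -> bool:
    """Flag a user for manual review if senior users have cast enough
    downvotes on the user's recent, already-net-negative comments."""
    senior_downvotes = 0
    for c in comments:
        # Skip comments that are too old or not net-negative.
        if c.age_days > RECENCY_DAYS or c.score >= 0:
            continue
        senior_downvotes += sum(
            1
            for v in c.votes
            if v.power < 0 and v.voter_karma > KARMA_THRESHOLD
        )
    return senior_downvotes >= DOWNVOTE_THRESHOLD
```

Note that flagging here only queues the user for manual review; applying a rate limit is a separate human decision, per the paragraph above.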
People who get manually rate-limited don’t have an explanation visible when trying to comment (unlike users who are limited by an automatic rule, I think).
We have explained this to users who reached out (in fact, this answer is adapted from one such conversation), but I do think we plausibly should have set up infrastructure to explain these new rate limits.
Hello and welcome to the site! I’m glad you’re saying hello despite having been too shy :-)
Do let us know in this thread or in the intercom in the bottom right if you run into any problems.
Curated.
Using Bayes-type epistemology is a core LessWrong topic, and I think this thesis represents a bunch of progress on that front (whether the results are already real-world-ready or just real-world-inspired). I have only engaged with small parts of it, but those parts seem pretty exciting; so far, I particularly like knowing about quasi-arithmetic pooling. It feels like I’ve become less confused about something I didn’t know I was confused about: the connection between the character of a proper scoring rule and the right ways to aggregate the resulting probabilities.
I also appreciate Eric’s work making blogposts explaining more of his thoughts in a friendly way. Hope to see a few more distillations come out of this thesis!
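For readers who haven’t met it, here is a rough sketch of quasi-arithmetic pooling as I understand it from the Neyman–Roughgarden formulation; treat the details as my gloss rather than the thesis’s exact statement:

```latex
% Pool binary forecasts p_1, ..., p_n with weights w_i (summing to 1)
% by averaging in the coordinates induced by the proper scoring rule s,
% where g(p) = s(p, 1) - s(p, 0):
\[
  p^{*} \;=\; g^{-1}\!\left( \sum_{i=1}^{n} w_i \, g(p_i) \right)
\]
% Brier score: g is affine, so this recovers linear (arithmetic) pooling.
% Log score: g(p) = \ln\frac{p}{1-p}, so this recovers logarithmic pooling.
```

So the scoring rule you grade forecasters with picks out the coordinate system in which their probabilities should be averaged.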
Much sweat and some tears were spent on trying to get something like that working, but the Shoggoths are fickle.
Thanks so much for the fix!
Curated! This kicked off a wonderful series of fun data science challenges. I’m impressed that it’s still going after over 3 years, and that other people have joined in with running them, especially @aphyer who has an entry running right now (go play it!).
Thank you, @abstractapplic, for making these. I don’t think I’ve ever submitted a solution, but I often like playing around with them a little (nowadays I just make inquiries with ChatGPT). I particularly like that they added nuance to my understanding of the supremacy of neural networks, and of when “just throw a neural net at it” might or might not work.
Here’s to another 3.4 years!