Hello! I work at Lightcone and like LessWrong :-)
kave
I was trying to figure out why you believed something that seemed silly to me! I think it barely occurred to me that it’s a joke.
The main subcultures that I can think of where this applies are communities based around solving some problem:
Weight loss, especially if based around a particular diet
Dealing with a particular mental health problem
Trying to solve a particular problem in the world (e.g. explaining some mystery or finding the identity of some criminal)
Any favourite examples?
I think my big problem with complexity science (having bounced off it a couple of times, never having engaged with it productively) is that though some of the questions seem quite interesting, none of the answers or methods seem to have much to say.
Which is exacerbated by a tendency to imply they have answers (or at least something that is clearly going to lead to an answer)
I would like to read it! Satire is sometimes helpful for me to get a perspective shift
To answer, for now, just one piece of this post:
We’re currently experimenting with a rule that flags users who’ve received several downvotes from “senior” users (I believe 5 downvotes from users with above 1,000 karma) on comments that are already net-negative (and which, I believe, were posted in the last year).
We’re currently in the manual review phase: users are flagged, and then a rate limit is applied if it seems reasonable. For what it’s worth, I don’t think this rule has an amazing track record so far, but all the cases in the “rate limit wave” were reviewed by Habryka and me, and he decided to apply a limit in those cases. (We applied some rate limit in 60% of the cases of users who got flagged by the rule.)
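For concreteness, here’s a rough sketch of the flagging rule as described above. All names, thresholds-as-constants, and the data shape are made up for illustration; the real implementation surely differs.

```python
from dataclasses import dataclass

SENIOR_KARMA_THRESHOLD = 1_000   # a "senior" downvoter has above 1,000 karma
SENIOR_DOWNVOTE_THRESHOLD = 5    # flag once 5 such downvotes accumulate
RECENCY_WINDOW_DAYS = 365        # only count comments from the last year

@dataclass
class Comment:
    score: int             # net karma of the comment
    days_old: int          # age of the comment in days
    senior_downvotes: int  # downvotes received from senior users

def should_flag_for_review(comments: list[Comment]) -> bool:
    """Flag a user for manual review if their recent net-negative
    comments have accumulated enough downvotes from senior users."""
    qualifying = sum(
        c.senior_downvotes
        for c in comments
        if c.score < 0 and c.days_old <= RECENCY_WINDOW_DAYS
    )
    return qualifying >= SENIOR_DOWNVOTE_THRESHOLD
```

Note this only covers the flagging step; per the comment above, the actual decision to rate-limit is then made by a human reviewer.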
People who get manually rate-limited don’t see an explanation when they try to comment (unlike users who are limited by an automatic rule, I think).
We have explained this to users who reached out (in fact this answer is adapted from one such conversation), but I do think we plausibly should have set up infrastructure to explain these new rate limits.
Hello and welcome to the site! I’m glad you’re saying hello despite having been too shy :-)
Do let us know in this thread or in the intercom in the bottom right if you run into any problems.
Curated.
Using Bayes-type epistemology is a core LessWrong topic, and I think this represents a bunch of progress on that front (whether the results are already real-world-ready or just real-world-inspired). I have only engaged with small parts of the thesis, but those parts seem pretty exciting; so far, I particularly like knowing about quasi-arithmetic pooling. It feels like I’ve become less confused about something that I didn’t know I was confused about — the connection between the character of the proper scoring rule and the right ways to aggregate those probabilities.
I also appreciate Eric’s work making blogposts explaining more of his thoughts in a friendly way. Hope to see a few more distillations come out of this thesis!
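For readers curious about quasi-arithmetic pooling, here is my rough understanding of the idea (a sketch; see the thesis for the precise statement). Given a proper scoring rule $s$, define its exposure function and pool forecasts $p_1, \dots, p_n$ with weights $w_i$ as:

```latex
g(p) \;=\; s(p;1) - s(p;0), \qquad
p^{*} \;=\; g^{-1}\!\Big(\textstyle\sum_{i=1}^{n} w_i \, g(p_i)\Big)
```

Under the Brier score $g(p) = 2p - 1$, so pooling is the ordinary weighted average; under the log score $g(p) = \log\frac{p}{1-p}$, so pooling averages log-odds. This is the connection mentioned above: the scoring rule determines the right aggregation.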
Much sweat and some tears were spent on trying to get something like that working, but the Shoggoths are fickle
Thanks so much for the fix!
LessWrong’s (first) album: I Have Been A Good Bing
(I assume you mean the story with him and the SS soldier; I think a couple of people got confused and thought you were referring to the fact Kahneman had died)
Say more / references?
From a message I wrote to a friend once that seems a little relevant
[H]ow should you act when you’re inside someone’s OODA loop? I was thinking about how Wikipedia/tab explosions are sort of inside my OODA loop. But sometimes I can be more of an active reader who navigates the concepts being exposed to me as I choose, and the process becomes like a magic genie or butler who does interpretative labour and conjures up new scenes following my fickle interest.
So it seems like one thing that the person with the smaller loop can do is interpretative labour, and spend the faster cycles on self-legibilising.
Yep, the question is definitely about how far it transfers.
increase the agent’s expected future value
I wonder if there’s a loopiness here which breaks the setup (the expectation, I’m guessing, is relative to the prediction market’s probabilities? Though it seems like the market is over sensory experiences while the values are over world states in general, so maybe I’m missing something). But it seems like if I take an action and move the market at the same time, I might be able to extract a bunch of extra money and acquire outsize control.
Bidding to control the agent’s actions for the next N timesteps
This seems wasteful relative to contributing to a pool that bids on action A (or short-term policy P). I guess coordination is hard if you’re just contributing to the pool, though, and it all connects to the merging process you describe.
I mean when I journal I come up with little exercises to improve areas of my life. I imagine that people in your cohort might do similarly, and given that they signed up to improve their IQ, that might include things adjacent to the tasks of the IQ test.
And I don’t think general meditation should count as training, but specific meditations could (e.g. if you are training mental visualisation and the task involves mental rotations).
I’m not trying to say that there are definitely cross-training effects, just that these seem like the kinds of thing which are somewhat more likely (than, say, supplements) to create fairly narrow improvements close to the test.
And I can make people think “out of the box” (e.g. via specific games, specific “supplements”, specific meditations)
And prod people to think about how they can improve in whatever areas they want (e.g. via journaling, talking, and meditating)
Ah, these two have made me more concerned about training effects: especially the games, but also the meditations and journaling.
It seems pretty plausible certain games could basically train the same skills as the IQ test.
I think this is a real problem (tho I think it’s more fundamental than your hypothesis would suggest; we could check commenting behaviour in the 2000s as a comparison).
We have some explorations underway addressing related issues (like maybe the frontpage should be more recommender-y and show you good old posts, while the All Posts page is used for people who care a lot about recency). I don’t think we’ve concretely considered stuff that would show you good old posts with new comments, but that might well be worth exploring.
Good question! From the Wiki-Tag FAQ:
I believe all tags have to be approved. If I were going through the morning moderation queue, I wouldn’t approve an empty tag.