I thought it was fairly well known that a few of the billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/) there is a link to Slate Star Codex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference IIRC. There are quite a few places in the adjacent circles where you could find them, which already hint at this possibility, like basedbeffjezos's followers being billionaires, etc. I was also predicting that some of them would read popular things on here, since they probably have overlapping peer groups.
A few feature suggestions (I am not sure if these are feasible):
1) Folders OR sort by tag for bookmarks.
2) When I close the hamburger menu on the frontpage, I don't see a reason for the posts not to be centred. It's unusual; it might make more sense if there were a way to stack them side by side, like Mastodon.
3) An RSS feature for subscribed feeds? I don't like using email, because too many subscriptions cause spam.
(Unrelated: can I get de-rate-limited, lol, or will I have to write quality posts for that to happen?)
I usually think of this in terms of Dennett’s concept of the intentional stance, according to which there is no fact of the matter of whether something is an agent or not. But there is a fact of the matter of whether we can usefully predict its behavior by modeling it as if it was an agent with some set of beliefs and goals.
That sounds an awful lot like asserting agency to be a mind-projection fallacy.
Sorry for the late reply; I was looking through my past notifications. I would recommend that you taboo the words and replace the symbols with the substance. I would also recommend treating language as instrumental, since words don't have inherent meaning; that's just how an algorithm feels from inside.
Is this a copy of the video that has been listed as removed? @Raemon
It is surely the case for me. I was raised a Hindu nationalist, and I ended up trusting various sides of the political spectrum, from far right to far left, along with a porn addiction; later I fell into trusting science and technology without thinking for myself. Then I fell into epistemic helplessness, and working 16-hour days as a denial of the situation led to me getting sleep paralysis. Later, my father died due to his faulty beliefs in naturopathy and alternative medicine; honestly, due to his contrarian bias, he didn't go to a modern-medicine doctor. I was 16 back then (last year). This eventually led me here. I was initially very skeptical of anything but my default common-sense intuition, until I realised the cognitive biases I had fallen for, and so on.
A most useful post. I was intuitively aware of these states; thanks for providing the underlying physiological underpinnings. I am aware enough to actually feel a sense of tension in my head in SNS-dominated states, and I have noticed that I am biased during these states; my predictions seem to align well with the literature.
Why does lesswrong.com have a bookmark feature without a way to sort bookmarks, as in using tags or maybe even subfolders? Unless I am missing something, I think it might be better to just use my browser's bookmark feature.
I think what they mean is the intuitive notion of typicality rather than the statistical concept of average.
98 seems approximately 100, but 100 doesn't seem approximately 98, due to how this heuristic works. That is, typicality is a System 1 heuristic over a similarity cluster, and it's asymmetric.
Here is the post on typicality from the A Human's Guide to Words sequence.
To interpret what you meant when you said "my hair has grown above average": you have an extension which you refer to with the words "average hair", and you find yourself on the outer edge of this extensional cluster in hair-space. Ideally, you would craft an intension for this extension; instead of "average as in the mathematical concept, sum of terms divided by number of terms", something like "the amount of hair growth I tend to experience usually". Now, this statement may or may not be accurate, depending on how much data you have provided to your inner simulator. Or, if by "average hair" you mean "the societal stereotype of average hair growth", then that would be subject to cultural factors, like what shows you watch, etc.
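If it helps, here is a toy sketch (entirely my own illustration, with made-up salience weights, not anything from the sequence) of how an asymmetric typicality judgment could work: "a seems like b" depends on how prototypical b is, so swapping the arguments changes the answer.

```python
# Assumed salience weights: round numbers act as prototypes of the cluster.
prototypicality = {100: 1.0, 98: 0.3}

def seems_like(a: int, b: int) -> float:
    """Toy System-1-style judgment of 'a seems approximately b'."""
    closeness = 1 / (1 + abs(a - b))        # symmetric part: raw distance
    return closeness * prototypicality[b]   # asymmetric part: b's prototype status

print(seems_like(98, 100))  # higher: comparing toward a prototype
print(seems_like(100, 98))  # lower: comparing toward a non-prototype
```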
(Also, if you reply, I won't be able to respond; I have been rate-limited to one post per two days for a year on LessWrong.)
The student employing version one of the learning strategy will gain proficiency at watching information appear on a board, copying that information into a notebook, and coming up with post-hoc confirmations or justifications for particular problem-solving strategies that have already provided an answer.
Ouch, I wasn't prepared for direct attacks, but thank you very much for explaining this :). I now know why some of my experienced self's later strategies, like "if I were at this step, how would I figure this out from scratch" and "what will the teacher teach today, based on previous knowledge", worked better, or at least felt more engaging from my POV (I love maths, and it was normal for me to try to find ways to engage more).
But this tells me I should apply Rationality: A-Z techniques to learning more often, given how this is just controlling anticipations, fake causality, replacing the symbol with the referent, and positive bias.
Leaning into the obvious is also the whole point of every midwit meme.
I would argue this is not a very good example; "do the obvious thing" just implies that you have a higher prior for a plan or belief and you are choosing to act on it without looking for further evidence.
It's epistemically arrogant to assume that your prior will always be correct.
Although, if you are experienced in a field, it probably took your mind a lot of epistemic work to isolate a hypothesis/idea/plan from the total space of them while doing inefficient Bayesian processing in the background.
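A toy numeric illustration (made-up numbers, just Bayes' rule) of why even a strong prior should still meet the evidence:

```python
# Purely illustrative numbers: a strong prior that "the obvious plan works".
prior = 0.9

# One piece of contrary evidence with a 4:1 likelihood ratio against the plan:
likelihood_ratio = 1 / 4  # P(evidence | works) / P(evidence | fails)

prior_odds = prior / (1 - prior)                 # 9:1 in favour
posterior_odds = prior_odds * likelihood_ratio   # 2.25:1 after the evidence
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 2))  # ~0.69: still favoured, but much less certain
```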
The root issue is that reality has a surprising amount of detail. All models are wrong. The map is not the territory.
We look at the territory via our beliefs; I think intuition is just a model by another name. A true map corresponds to the territory. I think the surprising amount of detail is due to our brain's inability to comprehend the raw truth: the levels of reality lie on the map, and these levels often leave out minor details because we cannot compute them. Our higher-level maps are just approximations of the fundamental reality.
The emissary’s narrow, analytical view of the world and desire to have everything fully under control, cut it into pieces and arrange it in ways it can fully grasp, is inadequate for dealing with the complexities of reality.
I think there are a lot of sequence posts on this topic: how our intuitions about categorisation, which evolved to deal with the complexities of the world, aren't adequate and often need the help of reductionism.
There is a reason why mathematicians talk about the 3Bs: bus, bath, bed. This is where we have our best ideas.
Eureka moments don’t happen when you try to force it.
That is diffuse vs focused thinking. You cannot really tell which eurekas are real and which are fake without doing the focused part, after the hypothesis-generator part of the brain does its thing.
This is once again a fact our left brain likes to ignore as the chemicals in our body are not something fully under its control and this potentially diminishes its importance.
Uhh, I mean, I just don't understand why this post first criticises the left brain for valuing truth and then comes back at it for not valuing truth...
Also (unless there has been further research making a comeback), the post's premise of a left/right-brain dichotomy influencing personality is inaccurate.
This theory seems to explain all observations, and I am not able to figure out what it would fail to explain in day-to-day life.
Also, for the last picture, the key lies in looking straight at the grid and not at the noise; then you can see the straight lines, although it takes a bit of practice to narrow your perception to that.
Obviously this isn’t true in the literal sense that if you ask them, “Are you indestructible?” they will reply “Yes, go ahead and try shooting me.”
Oh well, I guess meta-sarcasm about guns is a scarce finding in your culture, because I remember a non-zero number of times I have said this in recent months. (Also, I emotionally consider myself mortal, if that means I will die just like ~90% of the other humans who have ever lived, and like my father.)
Bayesian probability theory is the sole piece of math I know that is accessible at the high school level
They teach it here without the glaring implications, because those don't come up in exams. Also, I was extremely confused by the counterintuitive nature of probability until I stumbled upon this site and realised my intuitions were wrong.
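For a concrete example of the counterintuitiveness (the standard base-rate problem, with numbers I picked purely for illustration):

```python
# A test that is 90% accurate for a disease only 1% of people have.
p_disease = 0.01
p_pos_given_disease = 0.90   # sensitivity
p_pos_given_healthy = 0.10   # false-positive rate

# Bayes' theorem: P(disease | positive test)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # ~0.083, not the 0.9 intuition suggests
```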
instead semi-sensible policies would get considered somewhere in the bureaucracy of the states?
Whilst having radical groups is normally useful for shifting the Overton window or exploiting anchoring effects, in this case study of environmentalism I think it backfired, from what I can understand, given the polling data showing that the public in the sampled country already cared about the environment.
I think the hidden motives are basically rationalisation. I have found myself singlethinking those motives in the past; nowadays I just bring those reasons to centre stage and try to actually find out whether they align with my commitments, instead of motivated stopping. Sometimes I corner my motivated reasoning (the bottom line) so badly (since it's not that hard to do expected-consequences reasoning properly for day-to-day stuff) that instead of coming up with better reasoning, my brain just makes the idea of the impulsive action more salient, some urge along the lines of "think less and intuit/act more".
Also, I have personally used this concept of "intellectual masturbation" to divert discussions away from potential philosophical bombs to more relevant topics; it's much better to reduce the philosophical jargon in day-to-day conversations, lol.
Ever since they killed (or made it harder to host) Nitter, RSS, guest accounts, etc., Twitter has been out of my life, for the better. I find the Twitter UX sub-optimal in terms of performance, chronological posts, and subscriptions. If I do create an account, my "home" feed has too much ingroup-vs-outgroup content (even within tech-enthusiast circles, thanks to the AI safety vs e/acc debate, etc.); verified users are over-represented by design, which buries the good posts from non-verified users. And Elon is trying way too hard to block AI web scrapers, which ruins my workflow.
The gray fallacy strikes again; the point is to be less wrong!
I remember a point that Yampolskiy made on a podcast, arguing for the impossibility of AGI alignment: as a young field, AI safety had underwhelming low-hanging fruit. I wonder if all of the major low-hanging fruit has been plucked.