Well, that tweet can easily be interpreted as overconfidence in their own side; I don't know whether Vance would continue being more of a rationalist and analyse his own side evenly.
I think the post was a deliberate attempt to overcome that psychology. The issue is that you can get stuck in these loops of "trying to try" and convince yourself that you did enough; this is tricky because it's very easy to rationalise that part away for the sake of comfort.
Compare setting up to win vs. trying to set up to win. The latter is much easier to do than the former; the former still implies a chance of failure, but you actually do your best rather than try to try to do your best.
I think this sounds convoluted; maybe there is a much simpler cognitive algorithm for overcoming this tendency.
I thought we had a bunch of treaties which prevented that from happening?
I think it's hyperbole; one can still progress. But in one sense of the word it is true; see The Proper Use of Humility and The Sin of Underconfidence.
I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize—that the valuation of this one event is more complex than I know.
I wonder if he lived up to that standard, given that we now have generative AI like Suno and Udio.
I recommend including this question in the next LessWrong survey, along the lines of "How often do you use LLMs, and what is your use case?"
Is this selection bias? I have known people who are overconfident and get nowhere. I don't think it's independent of smartness; a smart and conscientious person is likely to do better.
https://www.lesswrong.com/tag/r-a-z-glossary
I found this by accident, and luckily I remembered glancing over your question.
It would make an interesting meta post if someone analysed each of those traction peaks and traced them to particular news stories or other articles.
Accessibility error: half the images on this page appear not to load.
Have you tried https://alternativeto.net? It may not be AI-specific, but it was pretty useful for me in finding lesser-known AI tools with a particular set of features.
Error: the "mainstream status" at the bottom of the post links back to the post itself instead of to the comments.
I prefer "System 1: fast thinking or quick judgement" vs. "System 2: slow thinking". I guess it depends on where you live, who you interact with, and what background they have, because "fast vs. slow" covers the inferential distance fastest for me: it avoids the spirituality/intuition woo-woo landmine, and it avoids the part where you end up highlighting a trivial item in their vocabulary called "reason", etc.
William James (see below) noted, for example, that while science declares allegiance to dispassionate evaluation of facts, the history of science shows that it has often been the passionate pursuit of hopes that has propelled it forward: scientists who believed in a hypothesis before there was sufficient evidence for it, and whose hopes that such evidence could be found motivated their researches.
Einstein's Arrogance seems like a better explanation of the phenomenon to me.
I remember a point Yampolskiy made on a podcast, arguing that AGI alignment is impossible: as a young field, AI safety had underwhelming low-hanging fruit. I wonder if all of the major low-hanging fruit has already been plucked.
I thought it was fairly well known that a few of the billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/) there is a link to SlateStarCodex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference, IIRC. There are quite a few places in the adjacent circles where you could find hints of this possibility, like BasedBeffJezos's followers being billionaires, etc. I was already predicting that some of them would read popular posts on here as well, since they probably have overlapping peer groups.
A few feature suggestions (I am not sure if these are feasible):
1) Folders, or sort-by-tag, for bookmarks.
2) When I close the hamburger menu on the frontpage, I don't see a reason for the posts not to be centred. It's unusual; it might make more sense if there were a way to stack them side by side, like Mastodon does.
3) An RSS feed for subscriptions? I don't like using email, because too many subscriptions cause spam.
(Unrelated: can I get de-rate-limited, lol, or will I have to write quality posts for that to happen?)
I usually think of this in terms of Dennett’s concept of the intentional stance, according to which there is no fact of the matter of whether something is an agent or not. But there is a fact of the matter of whether we can usefully predict its behavior by modeling it as if it was an agent with some set of beliefs and goals.
That sounds an awful lot like asserting that agency is a mind-projection fallacy.
Do too-hard-to-win bets make you wary of something unpredictably going right?