The referencing of the holy texts to say why there aren’t holy texts is quite funny, lol. I assume that was intentional.
Kabir Kumar
rationality and EA lack sacred ritualized acts (though there are some things that are near this, they fail to set apart some actions as sacred, so they are instead just rituals) (an exception might be the winter Secular Solstice service like we have in Berkeley each year, but I’d argue the lack of a sustained creation of shared secular rituals means rationalists don’t keep in touch with a sacred framing as one would in a religion)
rationality and EA aren’t high commitment in the right way (it might feel strange if you gave up eating meat to be EA or believing false things to be rationalist, but it’s missing commitment at the level of “show up at the same place at the same time every week to do the same thing with the same people”, because even if you regularly attend a meetup, no one much thinks you are less committed to EA or rationality if you skip a few meetings)
rationalists and EAs lack strong consensus on what is the best life advice for everyone
All of this is also true of non-centralized religions, imo, e.g. things outside the various Abrahamic religions, which are very unusually canonized and centralized compared to the broad range of religions.
I’m a Hindu and these would apply to me and my family as well. I’d also say that the number of exceptions you needed to make is pretty telling.
This is not a bad thing, in my opinion—I think religion should be a very personalized thing and it’s very useful when that’s true. E.g. the specific way that my family practices Hinduism has some vague similarities to the way my cousin’s family practices it, but is quite different, and their way is vaguely similar to that of a family a couple of streets over, but also quite different, etc.
Also, EA is much more stable than Communism, I’d say—it has survived lots of schisms and is still growing.
It even has ‘extremists’ (which should really be called ‘perverts’, imo, since they’re people who pervert the religion, not faithfully follow it to the extreme), detractors who badly misunderstand it, detractors who kind of understand it and detractors who understand it well enough to essentially be in it.
It has festivals, arguments about which text is actually the most holy and which specific versions/interpretations of the holy words are the best, sacrifices, its own particular culture, cultural language, even the very early beginnings of cultural food.
Rationality/EA basically is a religion already, no?
Do you dislike open source software? For most of it, the credit amounts to the license or the name. Quite similar to Ghibli, where a person just drops the name of the art style.
If the artist says they’re ok with a model being trained on their work, then it’s relatively fine with me. Most artists explicitly are not and were never asked—in fact, most licensed their work in a way that means they should be paid for its use.

In open source stuff, backend libraries are less likely to get paid compared to frontend products, and creating a product can make the situation worse for the OG person. It can be seen as predatory, but that’s the intent of open source collaboration fwiw.
In art, the art is usually the product itself, and if it’s used for something, it’s usually agreed upon between the artist and the user, unless the artist has explicitly said they’re ok with it being used—e.g. some youtubers have said it’s ok to use their music in any videos (although this isn’t the same as it being used for training a model)
The main point here being respecting the work and consent of the creator.
I know that many of my friends are strongly opposed to AI-generated art, primarily for its effect on human artists.
Also, in general, I don’t like the practice of using people’s work without giving them any credit. Especially when it’s used to make money. And even more so when it makes the people who made the original work much less likely to be able to make money.
This seems like mostly nonsense?
Were you not paid for the other work that you did, leading dev teams and getting frontier research done? Those things should be a baseline on the worth of your time.
This was running AI Plans, my startup, so it makes sense that I wasn’t getting paid, since the same hesitancy about asking for money leads to hesitancy to do that exaggeration thing many AI Safety/EA people seem to do when making funding applications. Also, I don’t like making funding applications, or long applications in general.
If that, have you ever tried to maximize the amount of money you can get other people to acknowledge your time as worth (i.e., get a high salary offer)?
I think every time I’ve asked for money, I’ve tried to ask for the lowest amount I can.
Separately, do you know the going rate for consultants with approximately your expertise? Or any other reference class you can make up. Consulting can cost an incredible amount of money, and that price can be “fair” in a pretty simple sense if it averts the need to do tens of hours of labor at high wages. It may be one of the highest leverage activities per unit time that exists as a conventional economic activity that a person can simply do.
I don’t know—I have a doc of stuff I’ve done that I paste into LLMs when I need to make a funding application and stuff—just pasted it into Gemini 2.5 Pro and asked what would be a reasonable hourly fee, and it said $200 to $400 an hour.
Aside from market rates or whatever, I suggest you just try asking for unreasonable things, or more money than you feel you’re worth (think of it as an experiment, and maybe observe what happens in your mind when you flinch from this).
I’ll give it a go—I’ve currently put the asking price on my call link at $50 an hour, though I feel nervous about actually asking for that. I need to make a funding application for AI Plans—I can ask for money on behalf of others on the team, but asking for money to be donated so I can get a high salary feels scary. Happy to ask for a high salary for others on the team though, since I want them to get paid what they need.
Do you have any emotional hangup about the prospect of trading money for labor generally, or money for anything?
Yeah, I do. Generally, I’m used to doing a lot of free work for family and getting admonished when I ask for money. And when I was promised money, it was either wayyy below market price, or wayyy late, or I didn’t get paid at all. My general experience with family was my work not being valued even when I put in extra effort. I’m aware that’s wrong and has taught me the wrong lessons, but I haven’t fully learnt the true ones yet.
if someone who’s v good at math wants to do some agent foundations stuff to directly tackle the hard part of alignment, what should they do?
There is also a minority who are genuinely pro human extinction
Could this be a Bluesky/AT Proto feed?
I think it’s not crazy that after N minutes or M items in your feed, you get a card which polls you on how you’re feeling about your feed usage.
I would like for this to be configurable in settings.
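A minimal sketch of the kind of thing I mean, in TypeScript, with entirely made-up names (this isn’t the LessWrong or Bluesky API, just an illustration of how the check-in card and a settings threshold could fit together):

```typescript
// A feed entry is either a normal post or the check-in card described above.
type FeedEntry =
  | { kind: "post"; id: string }
  | { kind: "checkInCard" };

interface FeedSettings {
  // How often to show the "how are you feeling about your feed usage?" card.
  // null means the user has turned the feature off in settings.
  checkInEveryNItems: number | null;
}

// Interleave a check-in card into the feed after every N items.
function withCheckIns(posts: { id: string }[], settings: FeedSettings): FeedEntry[] {
  const out: FeedEntry[] = [];
  posts.forEach((post, i) => {
    out.push({ kind: "post", id: post.id });
    if (settings.checkInEveryNItems !== null && (i + 1) % settings.checkInEveryNItems === 0) {
      out.push({ kind: "checkInCard" });
    }
  });
  return out;
}

// e.g. a card after every 20 items:
// withCheckIns(posts, { checkInEveryNItems: 20 });
```

A time-based version (after N minutes) would work the same way, just keyed off a timer instead of an item count.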
> It’s really my least favorite argument, but perhaps it’s still valid to say that given LessWrong is competing for people’s attention with Twitter, etc., we should have a feed too.
I don’t think this should be a reason—to me, a good reason for a feed would be having a really good recommendation algorithm that can find useful things for people that they wouldn’t normally come across.
This is a great set of replies to an AI post, on a quality level I didn’t think I’d see on Bluesky: https://bsky.app/profile/steveklabnik.com/post/3lqaqe6uc3c2u
Some of the confused people around him think that surely anything he can find, the Gods would have found ages ago—and even if he finds something new, surely they’ll learn it from observing him and just do it much, much faster. He could just ask them to uplift him and they’d do it, so this seems like a bit of a waste of time (even though everyone lives as long as they want).
gods being the AGIs
and trying to learn to do it faster than the Gods are
to be clear, instead of cultivating Qi, it’s RSI
AIgainst the Gods
Cultivation story, but instead of cultivation, it’s a post-AGI story in a world that’s mostly a utopia. But there are AGI overlords, which are basically benevolent.
There’s a very stubborn young man, born in the classical sense (though without any problems like ageing, disease, serious injuries, sickness, etc. that people used to have—and without his mother having any of the screaming pain that childbirth used to involve, or the risk to her life), who hates the state of power imbalance.
He doesn’t want the Gods to just give him power (intelligence) - he wants to find the intelligence algorithms himself, with his peers, find the True Algorithm of Intelligence and Surpass the Gods. Even while the Gods are constant observers. He wants to do what the confused people around him think to be impossible.
His neighbours don’t understand why. His cousin, who lives in the techno-hive, doesn’t understand why—though he thinks that he does, from a lot of data and background on similar figures before and a large understanding of brains and intelligence. The boy’s cousin’s understanding is close, but despite coming close to a minimum, he arrives at the wrong one, one that just seems to explain what he’s understood from his observations.
What is the point of these benchmarks without knowing the training compute and data?
Branding/hype to get more investment money/early customers
That’s like saying a future version of a tree is doing an impression of a continuation of the previous tree.
I don’t understand how the difference isn’t clear here.
I do the same and so do many Hindus.