unironically earnestly confused by the downvotes here
lc
I love Mandarin’s grammar. People get hung up on the characters, but it’s really not so bad nowadays with computers. Feels like a language designed by actual people (mostly)
Seems correct, and very important. However I expect that the first businesses that they will eat up will be software businesses, as has largely already been occurring.
Is Celestia a lab employee?
So what is supposed to be the point of working for OpenAI if you don’t think they’re going to “win”?
With Anthropic I can at least see the propagandistic motive; you’re working for the “safe” artificial superintelligence corporation, inside a culture that feels like it has a distinct vision of justice & the good. Even xAI’s schizophrenic objective of “fighting wokeness” is a worldview that is grander and more specific than personal power.
But OpenAI-in-practice, at this point, is just like, Sam’s mad science project to take over the world. He deliberately destroyed the governance mechanisms of the organization that would have constrained him to any other virtues, and since its founding OpenAI has applied very little cultural filtering to incoming applicants. There is no clear moral vision, no cohesive story for how OpenAI’s competition is a net improvement to either X-risk or lightcone shape, or how it somehow changes the fact-about-the-world that Tool AIs want to be Agent AIs. Why would you devote yourself to such a project if you didn’t think he was going to succeed? Money? Inertia?
As long as the perpetrators are still alive, there is still the evidence in the form of memories inside those people’s heads, which seems like it would clear quite a few remaining crimes, independent of all of the side-channel mechanisms Omega could use to gather information about the likely commission of such crimes. Even for people who are no longer alive, we will often have their DNA, and can use that to narrow down a lot of the variance in how we expect them to act in various likely circumstances.
Something that is probably obvious to most people, but I have not seen discussed on LessWrong yet: I expect that sometime during the AGI revolution, most crimes committed between 2000 and now are going to be solved, including many that have not even been detected as of 2026. I often wonder what society is going to find out WRT what % of the population has committed serious crimes, and what we’re going to do in response.
Right, I am mocking what the conversation would have been had the prediction market not existed (edited).
In 2015:
B: “Do you think there’s gonna be a Hantavirus pandemic?”
A: “uh well it seems unlikely”
B: “I disagree”
A: “totally valid”
“I’m surprised by your take here lc. Fostering a diversity of perspectives is how we keep people honest on LessWrong. Do you really think things would be better if people were less willing to explore alternative hypotheses...”
Getting emotionally tired of “perspectiveslop” on the internet. Just feels like for every possible respectably wrong perspective there’s a guy willing to take it to signal his ability to contribute to conversations.
Hard not to think about the death of software engineering as a legitimate craft and discipline.
why do you expect me to answer questions whichvresfing the post would show are absolutely irrelevant to the thesis in question?
Because your initial reply was “you could read the actual post and find your answers”, and he looked in the post and decided he didn’t find the answers.
As you are aware, your experience is uniquely bad because you are intentionally rude to commenters. For example, in this interaction, a normal person would cite the content of the post that you think is relevant. Inserting artificial typos in your responses, to signal that they’re not worth your time, annoys people because it lowers the quality of discourse on the forum, and it reduces their willingness to engage with your ideas in good faith. I write posts challenging rationalists from time to time and almost never struggle with people commenting without having read them.
I don’t understand what gives you the authority to comment on a post you didn’t read
I didn’t comment about the post, I commented about your interaction with @DaemonicSigil, which I had sufficient context for.
I feel the quality on this site really took a nosedive if thus sort of inchoate shrieking is tolerated
You have the personal power to ban users on your posts.
I am confident there is nothing in the post that would provide meaningfully important context, or else you would have cited it.
Based on the other comments users have left, the post is clearly very poorly written, in a way that makes it difficult to understand. I’m not a twitter addict and it seems low value to me
DaemonicSigil said:
The complaint is about futures that don’t contain any people at all (or maybe only a handful), and whose AI intelligence-optimizers care so little for goodness that they will happily genocide any alien civilization that is unable to defend itself
An inference from a future that “doesn’t contain any people at all”, one dedicated entirely to von Neumann probes and solving mathematical theorems, is that the majority of humans that presently exist are getting wasted, or at least somehow disappearing. You then said:
We have different values. Th isn’t relevant to the essay
Which a natural reading takes to mean “I don’t care if I get wasted”. If you don’t mean to take these odd positions, you should stop writing comments in a way deliberately designed to be misinterpreted.
Separate discussion, but if this is mostly true, I think it’s kinda dumb to berate EAs for also believing in a bioanchors view.