I didn’t vote, but it could be because it’s formatted as a wall of text, seems to have a lot of useless filler and no clear message. I got to the end of it, but don’t really have any idea of what you wanted to say.
npostavs
Ostensibly-neutral (but in reality heavily blue-tribe-coded) “experts” who stay in their lane are not exactly popular or influential at the moment,
Aren’t they at a low point due to not staying in their lane, i.e., smuggling political views into their ostensibly-neutral pronouncements? E.g., doctors saying it’s okay to break Covid restrictions on gathering if it’s for social justice.
See https://x.com/twocents/status/2020596821228388704
It sounded kind of… rehearsed? Not sure if I should take this as a real position.
Hard RSI: AI modifies itself in a way that is different from just changing numerical values of its weights. It creates a new version of itself [...]
In hard RSI there is no danger of misalignment since AI doesn’t create a successor, but rather modifies itself. In easy RSI there is danger of misalignment, [...]
I don’t think I understand how “creates a new version of itself” is different from “create a successor”?
Oh, LLMs also suggested SCP-3125, but I thought they were wrong because “U” didn’t seem like a plausible typo for “SCP”. I wasn’t aware of the alternate U-3125 naming.
But rebelling against a globalized techno-political-memeplex is like rebelling against U-3125.
Is “U-3125” referencing something?
I think I’m mostly following now, but when you write stuff like:
In the higher education system, I expect it would take the form of increasing the swathe of universities which taught a complete curriculum, as well as evening out the distribution of staff.
I wonder, is the undergraduate curriculum really significantly different between top-tier universities and others? Instead of wasting space on the rocket analogy, it would be useful to establish that sort of thing about the actual subject. And generally, the post is really missing a lot of detail about universities, and has way too much detail about rockets.
(I haven’t cast any votes on your post)
My position with respect to downvoting (or upvoting, for that matter) would be to downvote a post well below 0 only if I were confident that I could explain why it was harmful and/or illogical.
I’m not sure whether it’s a good idea or not to take into account the current score before voting. But regardless, there’s no way to enforce that 100% of people will follow any particular voting policy, so you’re going to end up with posts below 0 sometimes, even if they aren’t harmful.
You are talking so much about rockets that I can’t even tell what point you’re trying to make about universities. The post would probably be a lot clearer without this analogy.
This is an accidental double post of https://www.lesswrong.com/posts/FJxc4Lk6mijiFiPp2/the-big-nonprofits-post-2025 (also double posted on the wordpress site: https://thezvi.wordpress.com/2025/11/26/the-big-nonprofits-post-2025/ and https://thezvi.wordpress.com/2025/11/27/the-big-nonprofits-post-2025-2/)
Seems understandable to me (although I guess I’m somewhat primed by reading the previous versions).
I think most of “you” can be omitted in English as well:
Imagine: you study an immature AI in depth. Decode its mind entirely. Develop a great theory of how it works. Validate this theory on a bunch of examples. Use that theory to predict how the AI’s mind will change as it ascends to superintelligence and gains (for the first time) the very real option of grabbing the world for itself. Even then, you are, fundamentally, using a new and untested scientific theory to predict the results of an experiment that has not yet run, about what the AI will do when it really, actually, for real has the opportunity to grab power from the humans.
This seems to be an accidental repost of https://www.lesswrong.com/posts/9TPEjLH7giv7PuHdc/crime-and-punishment-1 from April. (It’s also reposted on https://thezvi.wordpress.com/2025/11/03/crime-and-punishment-1-2/, but not thezvi.substack.com/).
“Von Neumann was pronounced, by a peer, to be smarter than Albert Einstein to his face and got no objection” interpretation feels off to me
I see that it’s a bit ambiguous, but I read “to his face” as most likely referring to Einstein’s face, which is consistent with your interpretation of Wigner.
The thing that makes hypnosis so bizarre and seemingly powerful is its ability to keep attention, [...] [...] In full blown hypnosis [...] they are putting their attention where I specify without doubt or hesitation.
This sounds like it corresponds to “the idea of a state of focused attention”, so I don’t understand why you rejected it. Just because he talks about it as a spectrum (vs a state)? Or something else?
I tried several things without success (each in Claude Opus 4.1, Gemini Pro 2.5, and GPT-5):
Yeah, for now you probably need something more specialized. https://electricalexis.github.io/notagen-demo/ can compose music of semi-decent quality, so with the right training a model ought to be able to manage recognition too (although more unconventional music would be harder).
that I wrote out twice as fast as it actually goes,
Music notation rhythms are relative, so I don’t think this has a real meaning? Like, it might be nicer to use half notes as the main beat, and write the tune mostly in quarters, as you did in the Musescore typeset version. But the hand-written version using eighth notes to a quarter note beat conveys basically the same thing (ignoring the triplet issue).
Your last two Musescore files are missing some separation between 1st and 2nd endings. Compare the images at https://musescore.org/en/handbook/4/voltas
Underdogs lose. If you win, you weren’t the underdog.
Is it not more like, p(underdog_loses) > 0.5? Sometimes the thing with lesser probability happens even if the prediction was well-calibrated.
I don’t think this interpretation can hold up: the body of titotal’s post doesn’t deal with the good vs bad timeline. It’s just about the uncertainty of modelling AI progress, which applies to both the good and bad timelines.
I think it’s an intentional pun, like, “whether forecasters” are people who predict whether something will happen or not.
[...]
[...]
2^32 = ~4.3 billion, not trillion.
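A quick sanity check of the arithmetic (just evaluating the power directly):

```python
# 2^32 is roughly 4.3 billion, three orders of magnitude short of a trillion.
n = 2 ** 32
print(n)        # 4294967296
print(n / 1e9)  # ~4.29 (in billions)
```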