Student in fundamental and applied mathematics, interested in theoretical computer science and AI alignment
Matrice Jacobine
idk if I’m allowed to take the money if I’m not the OP, but it really doesn’t seem hard to find other examples of people who read and internalized the Sequences and went on to do at least one of the things you mentioned: the Zizians, Cole Killian, etc. I think I know the person OP meant when talking about “releasing blatant pump-and-dump coins and promoting them on their personal Twitter”; I won’t mention her name publicly. I’m sure you can find people who read the Sequences and endorse alignment optimism or China hawkism as well (certainly you can find highly upvoted arguments for alignment optimism here or on the Alignment Forum).
The Last Commit Before The End Of The World
This seems trivial. Ctrl+F “the Sequences” here
A single human brain has the energy demands of a lightbulb, instead of the energy demands of all the humans in Wyoming.
This is a non sequitur. The reason AI models don’t have the energy demands of a lightbulb isn’t that they’re too big and current algorithms are too inefficient. Quite the contrary: an actual whole-brain emulation would require the world’s largest supercomputer. Current computer hardware is just nowhere near as energy-efficient as the human brain.
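For scale, a rough back-of-the-envelope comparison (both wattages are round numbers I’m assuming for illustration, not figures from the thread):

$$P_{\text{brain}} \approx 20\,\mathrm{W}, \qquad P_{\text{supercomputer}} \approx 2 \times 10^{7}\,\mathrm{W}, \qquad \frac{P_{\text{supercomputer}}}{P_{\text{brain}}} \approx 10^{6}.$$

On those numbers the gap is roughly six orders of magnitude, and it is a hardware-efficiency gap, not a model-size or algorithm gap.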
China proposes new global AI cooperation organisation
I am on the other side of the planet and my participation in the rationality community is almost exclusively online. Those of you who live in the center of the “fraternity”, how much would you agree with describing Ziz as a typical member of your “fraternity” back then?
I don’t live in the Bay rn, but I’d agree at least enough that Ziz went on “long walks” [plural] with Anna Salamon, according to the article, and had several high-profile friends (at least Raemon and Kaj).
(EY was one of my hypotheses for which researcher he was talking to two hours before the interview, though I think it’s overall most likely Hinton.)
Bernie Sanders (I-VT) mentions AI loss of control risk in Gizmodo interview
Lead, Own, Share: Sovereign Wealth Funds for Transformative AI
Energy-Based Transformers are Scalable Learners and Thinkers
NYT article about the Zizians including quotes from Eliezer, Anna, Ozy, Jessica, Zvi
FTR: when writing or editing a post, you can choose your own commenting guidelines in the “Moderation Guidelines” section.
I think your tentative position is correct and public-facing chatbots like Claude should lean toward harmlessness in the harmlessness-helpfulness trade-off, but (post-adaptation buffer) open-source models with no harmlessness training should be available as well.
This seems related to the 5-and-10 problem? Especially @Scott Garrabrant’s version, considering logical induction is based on prediction markets.
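For readers who haven’t seen it, a minimal sketch of the standard 5-and-10 setup (my paraphrase; the comment doesn’t spell it out): an agent A choosing between taking 5 dollars and 10 dollars searches for proofs of statements of the form

$$[A() = a] \to [U = u],$$

and takes the action with the highest proven payoff. Since $[A() = 10] \to [U = 0]$ is vacuously true in any world where $A() = 5$, a Löbian proof of it can go through; the agent then takes the 5 dollars, and its own spurious counterfactual comes out true. In the logical-induction version, those conditional expectations come from the prices set by a market of traders rather than from proofs, which is presumably the connection to prediction markets being drawn here.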
You seem to smuggle in an unjustified assumption: that white-collar workers avoid thinking about taking over the world because they’re unable to take over the world. Maybe they avoid thinking about it because that’s just not the role they’re playing in society.
White-collar workers avoid thinking about taking over the world because they’re unable to take over the world, and they’re unable to take over the world because their role in society doesn’t involve that kind of thing. If a white-collar worker were somehow drafted to serve as president of the United States, you would expect their propensity to think about world hegemony to increase. (Also, white-collar workers engage in scheming, sandbagging, and deception all the time? The average person lies 1–2 times per day.)
Human white-collar workers are unarguably agents in the relevant sense here (intelligent beings who have desires and take actions to fulfil those desires). The fact that they have no ability to take over the world has no bearing on this.
… do you deny human white-collar workers are agents?
LLMs are agent simulators. Why would they contemplate takeover more frequently than the kind of agent they are induced to simulate? You don’t expect a human white-collar worker, even one who makes mistakes all the time, to contemplate world-domination plans, let alone attempt one. You could, however, expect the head of state of a world power to do so.
Technically I guess there is no consensus against alignment optimism (which is fine by itself).