Upvoted for thoughtful dissent and outside perspective.
I … have some complicated mixed feelings here. LW has a very substantial contingent of “gifted kids”, who spent a decent chunk of their (...I suppose I should say “our”) lives being frustrated that the world would not take them seriously due to age. Groups like that are never going to tolerate norms saying that young age is a reason to talk down to someone. And guidelines for protecting younger people from older people, to the extent that they involve disapproval or prevention of apparently-consensual choices by younger people, are going to be tricky that way. Any protection framed as something “young minds are not allowed to waive” will be (rightly) seen as condescending, especially if you extend “young” to age 30. This does not really become less true even if the underlying concern is accurate.
This is extra-true here, because the “rationalist community” is not a single organization with a hierarchy, or indeed (I claim) even really a single community. So you can’t make enforceable global rules of conduct, and it’s very hard to kick someone out entirely (although I would say it’s effectively been done a couple of times).
You might be relieved to learn that, at least from where I’m standing, a substantial fraction of the community is not in fact working towards (or necessarily even believing strongly in) the higher goal of preventing the AI apocalypse. (I am not personally working towards it; I would not say that I have a firm resolution either way on how much I believe in it, but I tend towards being skeptical of most specific forms that I have seen described.)
And, not to “tu quoque” exactly, I hope, but… my sense is that academia is not great along this axis? I have never been a grad student, but I would say at least half my grad student friends have had significant mental health problems directly related to their work. And a smaller but still substantial number have had larger problems stemming directly from abusive or (more often) incompetent advisors. In most cases, the latter seemed to have very little recourse against their advisors, especially the truly abusive ones, which seems like exactly the sort of thing that you’re calling out here. There were always theoretical paths they could take to deal with the problem, but in practice the advisor holds so much more power in the relationship that using them would usually involve major bridge-burning, and in some cases it’s not clear it would have helped even then.
This latter problem—of theoretical escalation paths around your manager existing, but being unusable in practice—seems pretty similar, to me, between academia and industry. But my impression is that academia has much worse “managers”, on average, because advisors are selected primarily for research skill, and often have poor management skills.
This is all to say—coming back around to the point—that I think academia has plenty of people who behave in ways similar to how Michael Vassar is described here. (I have not met him personally, and cannot speak to that description myself.) Granted, academia has rules of conduct that would prevent some of the things described here; I expect it would be very rare for an advisor to get their advisees into psychedelic drugs. But on the flip side, people in Vassar’s “orbit” who grow disillusioned with him are free to leave. Grad students generally cannot do that without a significant risk of losing years of work, and their hopes of an academic career.
If anything, I think the ability to say “this person is a terrible influence, and also we can acknowledge the good they have done” may be protective from a failure mode that I have anecdotally heard of in academia multiple times: the PI who is abusive in some way, and the “grapevine” is somewhat aware of this, but whose work is too valuable (e.g. in terms of grant money) to do anything about.
Do you have any thoughts on the risks/hazards involved here? To me that’s a much more significant consideration than the price. Some thoughts / priors:
Snorting chemicals I got from the Internet / mixed up myself without really knowing what I was doing: Superficially, seems potentially pretty risky.
Snorting peptides (assuming that the stuff ordered online was what it claimed to be, was pure and not contaminated with anything hazardous, and that I didn’t accidentally create anything hazardous in the process): Definitely not as risky as snorting arbitrary unknown substances. Seems unlikely to be directly poisonous (although that’s without reading about the other contents of the vaccine).
Snorting COVID-19 peptides, in particular: Should I be worried about things like antibody-dependent enhancement? Are there other possible hazards specific to experimental vaccine administration that I should worry about? I’m sure the paper talks about this stuff, but I’m not a biologist so I can’t promise I’d understand it if I read it.
Is there a possibility that this vaccine is both ineffective, and interferes in some way with the effectiveness of subsequent administration of a different vaccine?
From a risk perspective, the fact that this is intranasal rather than injected makes it feel safer to self-administer, but is that feeling really justified? For this vaccine to work, it has to create substantial immune effects. At that point I have to ask: what are the risks of deliberately provoking substantial immune effects in my body using a thing I found on the Internet, which has received comparatively very little testing, and whose claims I don’t have enough knowledge to really verify myself?