my boilerplate severance agreement at a job included an NDA that couldn’t be acknowledged (I negotiated to change this).
weight training?
I think it’s weird that saying a sentence with a falsehood that doesn’t change its informational content is sometimes considered worse than saying nothing, even if it leaves the person better informed than they were before.
This feels especially weird when the “lie” is creating a blank space in a map that you are capable of filling in (e.g. changing irrelevant details in an anecdote to anonymize a story with a useful lesson), rather than creating a misrepresentation on the map.
ooooooh actual Hamming spent 10s of minutes asking people about the most important questions in their field and helping them clarify their own judgment, before asking why they weren’t working on this thing they clearly valued and spent time thinking about. That is pretty different from demanding strangers at parties justify why they’re not working on your pet cause.
My guess based on the information available is the woman in your example made the right call mathematically, but you’re plausibly pointing to something real in how the way cis women treated you changed after gender transition. I’m really curious to hear more about that, without necessarily buying into your risk analysis about this situation in particular.
I’d love to see a top level post laying this out, it seems like it’s been a crux in a few recent discussions.
I strong-upvoted this to get it out of the negative, but also marked it as unnecessarily combative. I think a lot of the vitriol is deserved by the situation as a whole but not OP in particular.
I think doing things for their own sake is fine, it’s only masturbation with negative valence if people are confused about the goal.
People talk about sharpening the axe vs. cutting down the tree, but chopping wood and sharpening axes are things we know how to do and know how to measure. When working with more abstract problems there’s often a lot of uncertainty in:
what do you want to accomplish, exactly?
what tool will help you achieve that?
what’s the ideal form of that tool?
how do you move the tool to that ideal form?
when do you hit diminishing returns on improving the tool?
how do you measure the tool’s [sharpness]?
Actual axe-sharpening rarely turns into intellectual masturbation because sharpness and sharpening are well understood. There are tools for thinking that are equally well understood, like learning arithmetic and reading, but we all have a sense that more is out there and we want it. It’s really easy to end up masturbating (or epiphany addiction-ing) in the search for the upper level tools, because we are almost blind.
This suggests massive gains from something that’s the equivalent of a sharpness meter.
Much has been written about how groups tend to get more extreme over time. This is often based on evaporative cooling, but I think there’s another factor: it’s the only way to avoid the geeks->mops->sociopaths death spiral.
An EA group of 10 people would really benefit from one of those people being deeply committed to helping people but hostile to the EA approach, and another person who loves spreadsheets but is indifferent to what they’re applied to. But you can only maintain the ratio that finely when you’re very small. Eventually you need to decide if you’re going to ban scope-insensitive people or allow infinitely many, and lose what makes your group different.
“Decide” may mean consciously choose an explicit policy, but it might also mean gradually cohere around some norms. The latter is more fine-tuned in some ways but less in others.
Are impact certificates/retroactive grants the solution to grantmaking corrupting epistemics? They’re not viable for everyone, but for people like me who:
do a lot of small projects (which barely make sense to apply for grants for individually)
benefit from doing what draws their curiosity at the moment (so the delay between grant application and decision is costly)
take commitments extremely seriously (so listing a plan on a grant application is very constraining)
have enough runway that payment delays and uncertainty for any one project aren’t a big deal
They seem pretty ideal.
So why haven’t I put more effort into getting retroactive funding? The retroactive sources tend to be crowdsourced. Crowdfunding is miserable in general, and leaves you open to getting very small amounts of money, which feels worse than none at all. Right now I can always preserve the illusion I would get more money, which seems stupid. In particular even if I could get more money for a past project by selling it better and doing some follow up, that time is almost certainly better spent elsewhere.
How do you feel about people listing projects that are finished but were never funded? I think impact certificates/retroactive grants are better for epistemics, at least for the kind of work I do, and it would be great to have a place for those.
I agree with your general principles here.
I think my statement of “nearly guaranteed to be false” was an exaggeration, or at least misleading about what you can expect after applying some basic filters and a reasonable definition of epistemics. I love QURI and Manifold, and those do fit best in the epistemics bucket, although they aren’t central examples for me, for reasons that are probably unfair to the epistemics category.
Guesstimate might be a good example project. I use Guesstimate and love it. If I put myself in the shoes of its creator writing a grant application 6 or 7 years ago, I find it really easy to write a model-based application for funding and difficult to write a vision-based statement. It’s relatively easy to spell out a model of what makes BOTECs hard and some ideas for making them easier. It’s hard to say what better BOTECs will bring to the world. I think that the ~2016 grant maker should have accepted “look, lots of people you care about do BOTECs and I can clearly make BOTECs better”, without a more detailed vision of impact.
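(For concreteness, here’s a minimal sketch of the kind of sampling-based BOTEC Guesstimate makes easy, written as plain Python rather than anything from the actual tool; the scenario and every number in it are made up for illustration.)

```python
import math
import random

# Toy sampling-based BOTEC: multiply samples drawn from ranges instead of
# point estimates, so the uncertainty propagates automatically.
# All quantities and numbers below are hypothetical.

def lognormal_from_90ci(low, high):
    """Return a sampler whose 90% interval is approximately (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return lambda: random.lognormvariate(mu, sigma)

readers = lognormal_from_90ci(200, 2000)          # people who read a post (made up)
action_rate = lognormal_from_90ci(0.01, 0.10)     # fraction who act on it (made up)
value_per_action = lognormal_from_90ci(50, 500)   # value per action, arbitrary units (made up)

samples = sorted(readers() * action_rate() * value_per_action() for _ in range(10_000))
print("median:", samples[len(samples) // 2])
print("90% interval:", samples[int(0.05 * len(samples))], "to", samples[int(0.95 * len(samples))])
```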
I think it’s plausible grantmakers would accept that pitch (or that it was the pitch and they did accept it, maybe @ozziegooen can tell us?). Not every individual evaluator, but some, and as you say it’s good to have multiple people valuing different things. My complaint is that I think the existing applications don’t make it obvious that that’s an okay pitch to make. My goal is some combination of “get the forms changed to make it more obvious that this kind of pitch is okay” and “spread the knowledge that this can work even if the form seems like it wants something else”.
In terms of me personally… I think the nudges for vision have been good for me and the push/demands for vision have been bad. Without the nudges I probably am too much of a dilettante, and thinking about scope at all is good and puts me more in contact with reality. But the big rewards (in terms of money and social status) pushed me to fake vision and I think that slowed me down. I think it’s plausible that “give Elizabeth money to exude rigor and talk to people” would have been a good[1] use of a marginal x-risk dollar in 2018.[2]
During the post-scarcity days of 2022 there was something of a pattern of people offering me open-ended money, but then asking for a few examples of projects I might do, then asking for those to be more legible and the value to be immediately obvious, and then asking me to fill out forms with the vibe that I’m definitely going to do these specific things and, if I don’t, have committed moral fraud… So it ended up in the worst of all possible worlds, where I was being asked for a strong commitment without time to think through what I wanted to commit to. I inevitably ended up turning these down, and was starting to do so earlier and earlier in the process by the time the money tap was shut off. I think if I hadn’t had the presence of mind to turn these down it would have been really bad, because not only would I have been committed to a multi-month plan I’d spent a few hours on, I would have been committed to falsely viewing the time as free-form and following my epistemics.
Honestly I think the best thing for funding me and people like me[3] might be to embrace impact certificates/retroactive grant making. It avoids the problems that stem from premature project legibilization without leaving grantmakers funding a bunch of random bullshit. That’s probably a bigger deal than wording on a form.
1. ^ where by good I mean “more impactful in expectation than the marginal project funded”.
2. ^ I have gotten marginal exclusive retreat invites on the theory that “look, she’s not aiming very high[4] but having her here will make everyone a little more honest and a little more grounded in reality”, and I think they were happy with that decision. TBC this was a pitch someone else made on my behalf that I didn’t hear about until later.
3. ^ relevant features of this category: doing lots of small projects that don’t make sense to lump together, being scrupulous about commitments to the point it’s easy to create poor outcomes, and having enough runway that it doesn’t matter when I get paid and I can afford to gamble on projects.
4. ^ although the part where I count as “not ambitious” is a huge selection effect.
Of course, that’s just the flip side of a great thing. A space like this, with tons of driven and talented people, allows for advanced intellectual conversations and remarkable collaborations.
For a while after moving to the bay I really struggled with feelings of laziness and stupidity. This stopped after I went to an outgroup friend’s wedding, where I was obviously the most ambitious person there by a mile, and at least tied for smartest. It clicked for me that I wasn’t dumb or lazy; I had just selected for the smartest, most ambitious people who would tolerate me, and I’d done a good job. Ever since then I’ve been much calmer about status and social capital, and when I do stress out I see it as a social problem rather than a reflection of me as a person.
I didn’t initially tell my friend about this, because it seemed arrogant and judgemental. A few years later it came up naturally, so I told him. His response: “oh yeah, ever since I met you,” [which was before I got into rationality or EA, and when I look back I feel like I was wandering pointlessly] “you were obviously the person I knew who was most likely to be remembered after you died [by the world at large]”.
I’m glad it was so helpful, thanks for prompting me to formalize it and for providing elaborations. Both of your points feel important to me.
I’m glad GPT worked for you, but I think it’s a risky maneuver and I’m scared of the world where it is the common solution to this problem. The push for grand vision doesn’t just make models worse; it hurts your ability to evaluate models as well. GPT is designed to create the appearance of models where none exist, and I want it as far from the grantmaking process as possible. I think solutions like “ask for a 0.1-percentile vision” solve this more neatly.
I’m no longer quite sure what you were aiming for with the first paragraph in your first comment. I think projects with the goal of “improve epistemics” are very nearly guaranteed to be fake. Not quite 100%: I sent in a grant with that goal myself recently, and I have high hopes for CE’s new Research Training Program. But a stunning number of things had to go right for my project to feel viable to me. For one, I’d already done most of the work and it was easy to lay out the remaining steps (although they still ballooned and I missed my deadline).
It also feels relevant that I didn’t backchain that plan. I’d had a vague goal of improving epistemics for years without figuring out anything more useful than “be the change I wanted to see in the world”. The useful action only came when I got mad about a specific object-level thing I was investigating for unrelated reasons.
PS. I realize that using my projects as examples places you in an awkward position. I officially give you my blessing to be brutal in discussing projects I bring up as examples.
[good models + grand vision grounded in that model] > [good models + modest goals] > [mediocre model + grand vision]
There are lots of reasons for this, but the main one is: Good models imply skill at model building, and thus have a measure of self-improvement. Grand vision implies skill at building grand vision unconnected to reality, which induces more error.
[I assume we’re all on board that a good, self-improving model combined with a grand vision is great, but in short supply]
[disclaimer: mostly responding to the title]
Anxiety implies Sympathetic Nervous System (SNS) activation. The SNS is [simplistic model incoming] great at sensory awareness and physical movement, but bad at nuanced thinking and incorporating new information. I got better at this by calming my SNS using Somatic Experiencing Therapy, especially emotional titration.
The vegan nutrition project led to a lot of high-up EAs getting nutrition tested and some supplement changes, primarily due to seeking the tests out themselves after the iron post. If I was doing the project again, I’d prioritize that post and similar over offering testing. But I didn’t know when I started that iron deficiencies would be the standout issue, and even if I had I would have felt uncomfortable listing “impact by motivating others” as a plan. What if I wrote something and nobody cared? I did hope to create a virtuous cycle via word of mouth on the benefits of test-and-supplement, which has mostly not happened yet.

You can argue it was a flaw in me that rendered me incapable of imagining that outcome and putting it on a grant. More recently I wrote a grant that had “motivate medical change via informative blog posts” at its core, so clearly I don’t think doing so is inherently immoral. But the flaw that kept me from predicting that path before I’d actually done it is connected to some of my virtues, and specifically the virtues that make me good at the quantified lit review work.
Or my community organizer friend. There are advantages to organizers who care deeply about x-risk and see organizing as a path to reducing it. But there are serious disadvantages as well.
I think my model might be [good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision], where good model means both reasonably accurate and continually improving based on feedback loops inherent in the project, with the latter probably being more important. And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction.
Of the items you listed
technical AI safety research, wastewater monitoring for potential pandemics, institutions working on improved epistemics, and work to enhance human intelligence and decision-making
I would only count wastewater monitoring as a project. Much technical alignment research counts as a project, but “do technical alignment research” is a stupid plan; you need to provide specifics. The other items on the list are goals. They’re good goals, but they’re not plans and they’re not projects, and I absolutely would value a solid plan with a modest goal over a vague plan to solve any of these.
You mentioned Hack Club provides fiscal sponsorship. Once we have this, will it work with arbitrary donors or just Lightspeed?
This failure mode is definitely real. OTOH, demands for immediate, legible results can kill off many valuable options. There are major improvements in my life that are measurable (e.g. ability to take moral stands when people are yelling at me, ability to think on my feet while anxious) but can’t be attributed to any one action[1]. If you took away everything that couldn’t objectively justify itself in a few months, I’d be much worse off, even though probably a good chunk of what was cut was valueless.
or can be attributed to specific actions, but only partially. The sum of improvements with traceable causes is far less than the total improvement.