I like the edit. Be the person who you want to see in the world. Also visibly model behaviors you want to encourage.
Yeah, I really like this idea, at least in principle. The idea of looking for value agreement, and for where our maps (which are likely verbally extremely different) match, is something that I think we don't do nearly enough.
To get at what worries me about some of the 'EA needs to consider other viewpoints' discourse (and not at all about what you just wrote), let me describe two positions:
EA needs to get better at communicating with non-EA people, and at seeing the ways that they have important information and often know things we do not, even if they speak in ways that we find hard to match up with concepts like 'Bayesian updates' or 'expected value' or even 'cost effectiveness'.
EA needs to become less elitist, nerdy, jargon laden and weird so that it can have a bigger impact on the broader world.
I fully embrace 1, subject to constraints about how sometimes it is too expensive to translate an idea into a discourse we are good at understanding, and how sometimes we have weird infohazard-type edge cases and the like.
2 though strikes me as extremely dangerous.
To make a metaphor: coffee is not the only type of good drink; it is bitter and filled with psychoactive substances that give some people heart palpitations. That does not mean it would be a good idea to dilute coffee with apple juice so that it can appeal to people who don't like the taste of coffee and are caffeine sensitive.

The EA community is the EA community, and it currently works (to some extent), and it is currently doing important and influential work. Part of what makes it work as a community is the unifying effect of having our own weird cultural touchstones and documents. The barrier of exclusivity created by the jargon and the elitism, and the fact that it is one of the few spaces where the majority of people are explicit utilitarians, is part of what makes it able to succeed (to the extent it does).
My intuition is that an EA without all of these features wouldn't be a more accessible and open community that is able to do more good in the world. My intuition is that an EA without those features would be a dead community where everyone has moved on to other interests, and that therefore does no good at all.

Obviously there is a middle ground: shifts in the culture of the community that improve our Pareto frontier of openness and accessibility while maintaining community cohesion and appeal.
However, I don’t think this worry is what you actually were talking about. I think you really were focusing on us having cognitive blindspots, which is obviously true, and important.
The way I've tended to think about these sorts of questions is to see a difference between the global portfolio of approaches and our personal portfolio of approaches.

A lot of the criticisms of EA as being too narrow, and as neglecting certain types of evidence or ways of thinking, make far more sense if we see EA as hoping to become the single dominant approach to charitable giving (and perhaps everything else), rather than as a particular community consisting of particular (fairly similar) individuals who are pushing particular approaches to doing good that they see as being ignored by other people.
Fun as a travelogue, and the descriptions of eating great food are making me long for eating at places that I really love again too. Mainly, having non-stop Mexican next time I go home to California: Mexican food really, really, really just isn't the same in Budapest. Though if you are ever in Hungary, you need to try langos.
My intuition is that this is an unlikely worry. The people who actually understand the math on vaccines might be slightly more cautious, but won’t actually care, and will keep saying that vaccinating despite the blood clots was the right choice. While the people who are currently scared of vaccines won’t really care, and will just point to this as an additional reason to believe what they already believed.
Sure it is. This is what I did when deciding whether to go to a concert I'd been waiting for since January, which was then cancelled a couple of days later in the middle of March 2020. Guesstimate the odds of catching it in a giant, crowded outdoor venue, given the background number of cases I was hearing about in Budapest. Guesstimate the odds of dying if I got it, with another adjustment for the amount of time I might lose from being very sick.

I then noted that the expected loss in minutes of life after doing this calculation was considerably less than the time I'd be spending at the concert, so if I cared enough about the concert to go in the first place, I should go anyway. Looking back, I think I didn't properly quantify the risks to my wife, her other partner, and his other partner, or to people outside the group who we might have given it to. But I'm not at all sure that would have changed the decision mathematically; it simply points to additional factors that need to be included in the calculation. Even taking the well-being of people in your bubble as exactly as valuable as your own does not automatically imply that you should sit at home and never do anything.
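The back-of-envelope calculation described above can be sketched in a few lines. To be clear, every number here is a hypothetical placeholder, not the figure actually used; the point is only the structure of the estimate (probability of infection × probability of death × remaining life, plus an adjustment for time lost to serious illness, compared against the time spent at the concert):

```python
# A minimal sketch of the expected-value calculation, with made-up numbers.
p_infection = 0.001              # guess: chance of catching it at a crowded outdoor venue
p_death_given_infection = 0.002  # guess: infection fatality rate for a younger adult
remaining_life_years = 50        # guess: years of life otherwise remaining

minutes_per_year = 365 * 24 * 60
expected_minutes_lost_to_death = (
    p_infection * p_death_given_infection * remaining_life_years * minutes_per_year
)

# Adjustment for time lost to being very sick but surviving.
p_very_sick = 0.05               # guess
sick_days_lost = 10              # guess
expected_minutes_lost_sick = p_infection * p_very_sick * sick_days_lost * 24 * 60

total_expected_minutes_lost = expected_minutes_lost_to_death + expected_minutes_lost_sick
concert_minutes = 180            # a three-hour concert

# If the expected minutes of life lost is well below the minutes you'd happily
# spend at the concert, this (self-regarding) calculation says go.
print(round(total_expected_minutes_lost, 1), "expected minutes lost vs", concert_minutes)
```

With these placeholder inputs the expected loss comes out to under an hour, less than the concert itself, which is the shape of the comparison made above; risks to others would enter as additional terms of the same form.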
You might find the way mercenary armies functioned during the Thirty Years' War interesting.
Huh, my instinctive (and thus likely to be wrong) hypothesis is that coronavirus hasn’t economically hurt rich people very much, so the competitive house price dynamics for big units are still going on, while it has hit poorer people much harder.
I'm a writer, not a technical person. What I'm interested in trying to do is signal-boost ideas from within the community to the sort of general tech audience that reads hard sci-fi novels, in the hopes of boosting serious interest and awareness around the subject, rather than painting a particular approach as the right approach.
I think that was a great comment :)
As for how this idea can be used: as a sort of artistic thing, as described it feels a little deus ex machina, which isn't necessarily a bad thing. It's just that I'm currently trying to come up with stories where, by the time the AI is actually on the verge of being developed, enough right choices were made earlier that it is inevitable things go well, with the idea that what is valuable now is encouraging people to build institutions and safety procedures into their systems so that disaster never comes close. On the other hand, that doesn't optimize for strong conflict and climax, and I think your plan could do that really well.
We’re both still just sort of guessing at what will actually help—but signal boosting existing organizations like MIRI and CHAI and the idea of explicitly taking safety really seriously sounds promising to me.
One thing I do do in my Pride and Prejudice Variations is always write an afterword talking about how I wrote the book, ending with telling people that they should donate to Doctors Without Borders. Something like that, an explicit, simple call to action at the end of the novel, probably is a good idea.
Who specifically do you think should act differently, and in what concrete way because they are more aware of the Beyond the Reach of God narrative?
I feel like there is a lot of dystopian literature out there, but relatively little telling a story where there is a plausible path to escaping things going horribly wrong, and that path then works. So I'm currently intentionally trying to come up with stories that sell a utopian path while signal-boosting ideas that are being put forward in FHI papers and other parts of the community as ways to get there. For example, the project I'm currently most excited about has the working title The Windfall Clause. The sci-fi project I've already written in this vein explores ideas about the repugnant conclusion in a far-future hard sci-fi setting, organized like Scott Alexander's Archipelago, where we managed both to get AI that did what we wanted and then collectively didn't use it to murder ourselves. (Link if anyone is interested)
I do welcome ideas about stories that people think it would be a good idea if someone wrote. Though if it is about something going horribly wrong, I’d probably try to find a way to write a story where that nearly happens, but we find a smart way to avoid it happening.
Also, honestly, I think that all of the countries would reinvest as much as they need to maintain a strategic balance, and that is the actual problem requiring coordination.
Oh that’s cool. I had known about Herzl being a central figure in Zionism, but not that he’d written a novel to push it forward.
Uh, you can’t escape the implied inflation wealth tax by going to a different country, while you could escape a wealth tax by doing so. [Edit: Oops, already said]
Having said that, I agree with you that at 0.5% it wouldn't make much of a difference, though Graham might be right that even that little is enough to start people thinking about changing their behavior because of the tax. Also, Elizabeth Warren's implied 6% on billionaires (accounting for the extra amount charged to cover her healthcare plan) would definitely have driven away people who expected to ever get huge startup wealth.
Thanks for those examples. I have also been looking for cases from movies. And it is good that you included an example of something that a lot of people would view as a negative case (speeding up the invention of the hydrogen bomb).
What surprised me and conflicted with my intuitions is the way that works of art pushing already highly familiar ideas, ideas that already had lots of artistic works about them, can still have a huge effect if they catch the public imagination either in a way previous works hadn't, or in a way this particular generation of moviegoers hadn't yet been affected by.
Obviously The Day After and The Holocaust were not the first movies about those subjects, nor even the first hugely popular and successful movies about them (nor, in the case of The Day After, even the first movie credited with substantially improving popular awareness of the subject). But despite the fact that it would seem like something that had already been done, there seems to be a clear argument that each had an important effect on the margin.
I’m pretty sure it is actually the same case with the classic slavery example of Uncle Tom’s Cabin. I mean, I don’t know much of anything about the history, but on reflection it would be very surprising to me if it was the first popular novel focused on the theme of slavery being terrible. And there had at that point been a century of abolitionist activity as a central theme of political life. But it still plausibly had an important marginal influence.
This makes me update away from my view that writing books pushing specifically an AI safety angle wouldn't be useful because it has already been done and people are aware of the ideas. Though I still think that ideas about how to ensure a decent distribution of resources, so that a post-human-labor society is actually a good thing for almost everyone, are far more neglected.
Thanks, that’s brilliant, and gave me several new ideas on keywords to look for.