Almost certainly what he means is: restrictive zoning leads to small amounts of new housing, which leads to high rents, which, according to the essay we just read, leads to high homelessness.
Yeah, I really like this idea—at least in principle. The idea of looking for agreement on values, and for the places where our maps (which are likely verbally extremely different) match, is something that I think we don’t do nearly enough.
To get at what worries me about some of the ‘EA needs to consider other viewpoints’ discourse (and not at all about what you just wrote), let me describe two positions:
1. EA needs to get better at communicating with non-EA people, and at seeing the ways that they have important information and often know things we do not, even if they speak in ways that we find hard to match up with concepts like ‘Bayesian updates’ or ‘expected value’ or even ‘cost effectiveness’.
2. EA needs to become less elitist, nerdy, jargon-laden and weird, so that it can have a bigger impact on the broader world.
I fully embrace position 1, subject to constraints about how sometimes it is too expensive to translate an idea into a discourse we are good at understanding, how sometimes we have weird infohazard-type edge cases, and the like.
Position 2, though, strikes me as extremely dangerous.
To make a metaphor: Coffee is not the only type of good drink; it is bitter and filled with psychoactive substances that give some people heart palpitations. That does not mean it would be a good idea to dilute coffee with apple juice so that it can appeal to people who don’t like the taste of coffee and are caffeine sensitive.
The EA community is the EA community, and it currently works (to some extent), and it currently is doing important and influential work. Part of what makes it work as a community is the unifying effect of having our own weird cultural touchstones and documents. The barrier of exclusivity created by the jargon and the elitism, and the fact that it is one of the few spaces where the majority of people are explicit utilitarians, are part of what makes it able to succeed (to the extent it does). My intuition is that an EA without all of these features wouldn’t be a more accessible and open community that is able to do more good in the world. My intuition is that an EA without those features would be a dead community, where everyone has gone on to other interests, and that therefore does no good at all.
Obviously there is a middle ground—shifts in the culture of the community that improve our Pareto frontier of openness and accessibility while maintaining community cohesion and appeal. However, I don’t think this worry is what you actually were talking about. I think you really were focusing on us having cognitive blind spots, which is obviously true, and important.
You might find the way mercenary armies functioned during the Thirty Years’ War interesting.
It wouldn’t. First, the time it takes for population changes to happen is very slow compared to the business cycles that drive adaptations to economic changes. Second, eliminating malaria is considerably more likely to reduce population growth than to increase it.
I’d note that ACOUP’s model of fire’s primacy making defence untenable between high-tech nations, while not completely disproven by the Ukraine war, is a hypothesis that seems much less likely to be true (or less true) than it did in early 2022. The Ukraine war has in most cases shown a strong advantage to a prepared defender, and the difficulty of taking urban environments.
The current Israel-Hamas war shows a similar tendency, where Israel is moving very slowly into the core urban concentrations (i.e. it has surrounded Gaza City so far, but not really entered it), even though its superiority in resources relative to its opponent is vastly greater than Russia’s advantage over Ukraine was.
I don’t think that is relevant to this project.
I’m not trying to have a fictional world provide evidence that EA is true. I’m trying to write a basic intro-to-EA essay that people who wouldn’t read an ‘EA 101 post’ will read, because it is embedded in the text of a novel that they are reading because I got them to care about what happens to the characters and how the story problems get resolved. Also, I do think works of fiction can definitely be places to create extended thought experiments that are philosophically useful. I mean, something like The Ones Who Walk Away from Omelas is a perfectly good expression and explanation of a view about the problems with utilitarianism. I don’t like it, because I bite the bullet involved, and because I think vaguely pointing in a direction and saying ‘there has to be a better solution’ isn’t actually pointing at a solution. But the problem with it as a piece of philosophical evidence is not that it is fiction, any more than the problem with every single trolley problem ever is that it is a work of fiction.
A further comment about the religious history of people involved with Less Wrong: it was also heavily seeded by the internet atheist movement of the 2000s, which was itself largely a reaction to evangelical Christianity attempting to gain political power in the US, and by the reaction of young Christians of rationalist disposition to realizing that we were confidently being ordered to believe stupid things, while at the same time being told that noticing they were stupid was failing in our religious duty.
I’d also emphasize what another comment said, that there has been a lot of interest in the community in creating secular rituals that replace the community rituals of religion without committing anyone to believing false facts (or really any facts at all).
It definitely is the case that there has been discussion of ways that parts of religion can be good for people, despite the underlying truth claims being false.
An excellent argument—though obviously not conclusive.
The first alternative idea that comes to my mind is that you could just be teaching the oldest kids in those classes to see themselves as high status and that you could get the same effect through any other intervention that encourages particular kids to see themselves as better than other kids.
Certainly this seems to be an example of school doing something.
Sure it is. This is what I did when deciding whether I would go to a concert I’d been waiting for since January, one that was then cancelled a couple of days later in the middle of March 2020. Guesstimate the odds of catching it in a giant, crowded outdoor venue, given the background number of cases I was hearing about in Budapest. Guesstimate the odds of dying if I got it, with another adjustment for the amount of time that I might lose from being very sick.
I then noted that the expected loss in minutes of life after doing this calculation was considerably less than the time I’d be spending at the concert, so if I cared enough about the concert to go in the first place, I should go anyway. Looking back, I think I didn’t properly quantify the risks to my wife, her other partner, and his other partner, or to people outside of the group who we might have given it to. But I’m not at all sure that including them would have mathematically changed the decision; it simply points to additional factors that need to be included in the calculation. Even taking the well-being of people in your bubble as exactly as valuable as your own well-being does not automatically imply that you should sit at home and never do anything.
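For concreteness, a minimal sketch of that back-of-the-envelope calculation. Every number in it is a made-up placeholder (the actual odds guessed at the time aren’t recorded here); the point is the structure of the comparison, not the values.

```python
# Back-of-the-envelope expected-value check for attending the concert.
# All inputs are hypothetical placeholders, not the odds actually used in 2020.

p_infection = 0.001              # guess: chance of catching it at the venue
p_death_if_infected = 0.0005     # guess: infection fatality rate for my age group
p_very_sick = 0.1                # guess: chance of losing ~2 weeks to illness

remaining_life_minutes = 50 * 365 * 24 * 60   # ~50 expected years of life left
sick_time_minutes = 14 * 24 * 60              # ~2 weeks lost if very sick

expected_minutes_lost = p_infection * (
    p_death_if_infected * remaining_life_minutes
    + p_very_sick * sick_time_minutes
)

concert_minutes = 3 * 60   # time actually spent at the concert

print(f"Expected minutes of life lost: {expected_minutes_lost:.0f}")
print(f"Minutes spent at the concert:  {concert_minutes}")
# With these placeholder inputs the expected loss (~15 minutes) is well below
# the time spent at the concert, matching the shape of the reasoning above.
```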
Maybe.
I feel like there is a lot of dystopian literature out there, but relatively little that tells a story where there is a plausible path to escaping things going horribly wrong, and that path then works. So right now I’m intentionally trying to come up with stories that sell a utopian path while signal-boosting ideas that are being put forward in FHI papers and other parts of the community as ways to get there. For example, the project I’m currently most excited about has the working title of The Windfall Clause. Also, the sci-fi project that I have already written in this context explores ideas about the repugnant conclusion in a far-future hard sci-fi setting which is organized like Scott Alexander’s archipelago, and where we managed both to get AI that did what we wanted, and then to collectively not use it to murder ourselves. (Link if anyone is interested)
I do welcome ideas about stories that people think it would be a good idea if someone wrote. Though if it is about something going horribly wrong, I’d probably try to find a way to write a story where that nearly happens, but we find a smart way to avoid it happening.
Also, honestly, I think that all of the countries would reinvest as much as they need to maintain a strategic balance, and that is the actual problem requiring coordination.
Or von Neumann and his contemporaries and predecessors stole all the insights that someone with merely von Neumann’s intellect could develop independently, leaving future geniuses to have to be part of collaborative teams?
Prime-age labor force participation rate is the standard measure that the econobloggers I’ve followed (most notably Krugman and Brad DeLong, who are part of the community also pushing for this interpretation of monetary policy) tend to use to gauge economic health, and there are reasons to see it as pointing most closely to what we actually care about (that, and hourly productivity, which isn’t in these charts).
One part of this issue: The answer to the question is literally unknowable with our current scientific tools (though as we develop better models for simulating biology and culture this might change). We can’t run experiments that are not contaminated by culture/biology.
What is left is observational evidence.
Proving causality with observational evidence usually doesn’t work. This is especially the case with an issue like this one, with only a moderate effect size (a one-SD effect on test scores is tiny compared to the impact of smoking on lung cancer, or of stomach sleeping on SIDS), and where both factors are always present and connected.
What is left is reasoning from priors.
Personally I think HBD is unlikely because the observed outcome differences are exactly the sort of thing the known cultural forces would create even if there was no genetic difference, so the existence of these outcome differences does not serve as additional evidence of genetic differences. This means that while it is totally possible there could be major intelligence differences between groups, I don’t have any particular reason to think they actually exist.
But this argument is simply not a robust or rigorous proof. I give it around a 1/100 chance of being wrong, while things that I actually know, like the name of the president, have a far, far smaller chance of being wrong.
My objection is that we only have around 10^21 or so of observed improbability of intelligent civilizations arising per planet to burn off due to the Fermi paradox, while a strong anthropic shadow implies the odds against us reaching this position to be vastly worse than that. If you think that abiogenesis is incredibly unlikely, that reduces the pressure to think that there were lots of potential catastrophes that could have wiped out life on earth.
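To make that improbability budget explicit (a rough sketch, taking the ~10^21 figure above as a ballpark for the number of planets rather than an established count):

$$
N \approx 10^{21}, \qquad P_{\text{civ}} \gtrsim \frac{1}{N} \approx 10^{-21}
$$

Observing one civilization among roughly $N$ planets is unsurprising so long as the per-planet probability $P_{\text{civ}}$ of an intelligent civilization arising and surviving is at least about $1/N$, so the Fermi observation supplies only ~21 orders of magnitude of improbability to spend. A strong anthropic shadow that needs the survival factor alone to be far below $10^{-21}$ would overdraw that budget.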
Perhaps the key question is: what does research on burnout in general say, and are there things about the EA case that don’t match it?
Also, to what extent is burnout specifically the problem, versus people from different places bouncing and moving on to different social groups (either within a year or two, or after a long relationship)?
My response is ‘the argument from the existence of new self-made billionaires’.
There are giant holes in our collective understanding of the world, and giant opportunities. There are things that everyone misses until someone doesn’t.
A thing that is much smarter than human beings is simply going to be able to see things that we don’t notice. That is what it means for it to be smarter than us.
Given how high-dimensional the universe is, it would be really weird, in my view, if none of the things that something way smarter than us can notice point to highly certain pathways for gaining enough power to wipe out humanity.
I mean, sure, this is a handwavy, thought-experiment-level counterargument. And I can’t really think of any concrete physical evidence that might convince me otherwise. But despite the weakness of this thought-experiment evidence, I’d be shocked if I ever viewed it as highly unlikely (i.e. less than one percent, or even less than ten percent) that a much smarter than human AI would be able to kill us.
And remember: To worry, we don’t need to prove that it can, just that it might.
I do have a model where that happens, and it is fairly high in my actual scenario estimates right now: basically everyone in Iran and the poor parts of the Middle East and Africa gets exposed, which is enough to cover more than 10% of the global population, and then basically nobody anywhere else gets it, because all of the interconnected countries with strong states end up using quarantines and travel restrictions to shut down the spread of the virus, which we know works from China (probably less than 5% of the population of Wuhan itself is going to come down with the virus).
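As a rough sanity check on the ‘more than 10%’ figure (using approximate 2020 population numbers, which are ballpark values, not exact):

```python
# Rough check that the exposed regions in this scenario exceed 10% of the world.
# Populations are approximate 2020 figures, in people.
world_population = 7.8e9
africa = 1.34e9          # approx. total population of Africa
iran = 84e6              # approx. population of Iran

exposed = africa + iran  # ignores the rest of the poor Middle East, so an undercount
share = exposed / world_population
print(f"Exposed share of world population: {share:.1%}")
# ~18% even before adding the rest of the region, comfortably above the
# 10% threshold used in the scenario above.
```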
I’d expect per capita war deaths to have nothing to do with the offence/defence balance as such (unless the defence gets so strong that wars simply don’t happen, in which case it goes to zero).
Per capita war deaths in this context are about the ability of states to mobilize populations, and about how much damage the warfare does to the civilian population that the battle occurs over. I don’t think there is any uncomplicated connection between that and something like ‘how much bigger does your army need to be for you to be able to successfully win against a defender who has had time to get ready’.
I think the issue is that creating an incentive system where people are rewarded for being good at an artificial game that has very little connection to their real-world circumstances isn’t going to tell us anything very interesting about how rational people are in the real world, under their real constraints.
I have a friend who for a while was very enthused about calibration training, and at one point he even got a group of us from the local meetup, plus Phil Hazeldon, to do a group exercise using a program he wrote to score our calibration on numeric questions drawn from Wikipedia. The thing is that while I learned from this to be way less confident about my guesses (which improves rationality), creating 90% confidence intervals is, for the reasons specified, actually useless for making important real-world decisions.
Should I try training for a new career? The true 90% confidence interval on any difficult-to-pursue idea that I am seriously considering almost certainly includes ‘you won’t succeed, and the time you spend will be a complete waste’ and ‘you’ll do really well, and it will seem like an awesome decision in retrospect’.
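For concreteness, a minimal sketch of the kind of calibration-scoring program described above (a reconstruction under assumptions, not the friend’s actual code; the questions and intervals are placeholders):

```python
# Minimal calibration scorer for 90% confidence intervals on numeric questions.
# A well-calibrated respondent's intervals should contain the truth ~90% of the time.

# (question, true answer) pairs -- placeholders, not the Wikipedia-drawn
# questions actually used in the exercise described above.
questions = [
    ("Height of Mount Everest in meters", 8849),
    ("Year the Berlin Wall fell", 1989),
    ("Length of the Danube in kilometers", 2850),
]

def calibration_hit_rate(intervals, answers):
    """Fraction of (low, high) intervals that contain the true answer."""
    hits = sum(low <= truth <= high
               for (low, high), truth in zip(intervals, answers))
    return hits / len(answers)

# One respondent's 90% confidence intervals, in the same order as the questions.
my_intervals = [(8000, 9500), (1985, 1995), (2000, 3500)]
truths = [answer for _, answer in questions]

print(f"Hit rate: {calibration_hit_rate(my_intervals, truths):.0%} "
      "(well-calibrated would be about 90%)")
```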
Is there any particular reason to think that Putin is likely to try invading the Baltics? Let alone attempting to forcefully recreate the Warsaw Pact by invading Poland, Czechia, or Hungary?
I mean certainly, if Putin decides that WW3 is worth grabbing EU members back, it could happen. And there is always a tiny chance that he will think the West won’t fight over the Baltics, while the West actually will—but this seems to me to be a really low probability thing, and more importantly, what is happening in Ukraine tells us very little about whether that will happen.
To be clear: The Western governments told Putin, in every possible non-explicit and explicit way, that they would do nothing to physically try to stop him from taking Ukraine, but that they would attempt to harm his country through economic mechanisms. Putin did not take a Hitler-like gamble of risking a world war; Putin knew with certainty that the only military force he would be fighting was Ukrainian.
This is not evidence that he is willing to risk nuclear war, or actually try invading NATO members in hopes that NATO doesn’t defend them. He might be—but your estimate on that should be roughly the same today as it was yesterday.