On plans for a functional society

Vaniver

I’m going to expand on something brought up in this comment. I wrote:

A lot of my thinking over the last few months has shifted from “how do we get some sort of AI pause in place?” to “how do we win the peace?”. That is, you could have a picture of AGI as the most important problem that precedes all other problems; anti-aging research is important, but it might actually be faster to build an aligned artificial scientist who solves it for you than to solve it yourself (on this general argument, see Artificial Intelligence as a Positive and Negative Factor in Global Risk). But if alignment requires a thirty-year pause on the creation of artificial scientists to work, that belief flips—now actually it makes sense to go ahead with humans researching the biology of aging, and to do projects like Loyal.

This isn’t true of just aging; there are probably more like twelve major areas of concern. Some of them are simply predictable catastrophes we would like to avert; others are possibly necessary to be able to safely exit the pause at all (or to keep the pause going when it would be unsafe to exit).

I think ‘solutionism’ is basically the right path, here. What I’m interested in: what’s the foundation for solutionism, or what support does it need? Why is solutionism not already the dominant view? I think one of the things I found most exciting about SENS was the sense that “someone had done the work”, had actually identified the list of seven problems, and had a plan of how to address all of the problems. Even if those specific plans didn’t pan out, the superstructure was there and the ability to pivot was there. It looked like a serious approach by serious people. What is the superstructure for solutionism such that one can be reasonably confident that marginal efforts are actually contributing to success, instead of bailing water on the Titanic?

Restating this, I think one of the marketing problems with anti-aging is that it’s an ancient wish and it’s not obvious that, even with the level of scientific mastery that we have today, it’s at all a reasonable target to attack. (The war on cancer looks like it’s still being won by cancer, for example.) The thing about SENS that I found most compelling is that they had a frame on aging where success was a reasonable thing to expect. Metabolic damage accumulates; you can possibly remove the damage; if so you can have lifespans measured in centuries instead of decades (because after all there’s still accident risk and maybe forms of metabolic damage that take longer to show up). They identified seven different sorts of damage, which felt like enough that they probably hadn’t forgotten one and few enough that it was actually reasonable to have successful treatments for all of them.

When someone thinks that aging is just about telomere shortening (or w/​e), it’s pretty easy to suspect that they’re missing something, and that even if they succeed at their goal the total effect on lifespans will be pretty small. The superstructure makes the narrow specialist efforts add up into something significant.

I strongly suspect that solutionist futurism needs a similar superstructure. The world is in ‘polycrisis’; there used to be an ‘aligned AGI soon’ meme which allowed the polycrisis to be ignored (after all, the friendly AI can solve aging and climate change and political polarization and all that for you), but I think the difficulties with technical alignment work have made that meme fall apart. It needs to be replaced by “here is the plan for sufficiently many serious people to address all of the crises simultaneously”, such that sufficiently many serious people can actually show up and do the work.

kave

I don’t know how to evaluate whether or not the SENS strategy actually covers enough causes of ageing, such that if you addressed them all you would go from decades-long lifespans to centuries-long lifespans. I think I’m also a little more optimistic than you that a bunch of “bailing out the sinking ship” adds up to “your ship is floating on its own”.

I think that a nice thing about incremental and patch solutions is that each one gives you some interesting data about exactly how it worked, and details about what happened as a result. For example, it’s interesting if, when you give someone a drug to lower their blood pressure, you end up with some other system reliably failing (more often than in the untreated population). And so I have a bit of hope that if you just keep trying the immediate things, you end up at a much better vantage point for solving a bunch of the issues.

I guess this picture still factors through “we realised what the main problems were and we fixed them”; it’s just a bit more sympathetic to “we did some work that wasn’t on the main problems along the way”.

I dunno how cruxy this is for your “superstructure” picture, or what the main alternative would be. I guess there are questions like “many rationalist-types like to think about housing reform. Is that one of the crises we have to address directly, or not?”. Is that the type of thing you’re hoping to answer?

Vaniver

I think I’m also a little more optimistic than you that a bunch of “bailing out the sinking ship” adds up to “your ship is floating on its own”.

I think I’m interested in hearing about the sources of your optimism here, but I think more than that I want to investigate the relative prevalence of our beliefs.

I have a sense that lots of people are not optimistic about the future or about their efforts improving the future, and so don’t give it a serious try. There’s not a meme that being an effective civil servant is good for you or good for the world. [Like, imagine Teach For America except instead it’s Be A Functionary For America or w/​e.]

There is kind of a meme that doing research /​ tech development for climate change is helpful, but I think even then it is somewhat overpowered by the whiny activism meme. (Is the way to stop oil to throw soup on paintings or study how to make solar panels more efficient?)

It seems to me like you’re saying that “just doing what seems locally good” (i.e. bailing out your area of the ship) both 1) adds up to the ship floating and 2) is widely expected to add up to the ship floating, and I guess that’s not what I see when I look around.

kave

I now wonder if I understood what you meant by ‘superstructure’ correctly. For example, I was imagining a coordinating picture that tells you whether or not to be an effective civil servant, and even what kind of civil servant to be; SENS, for example, guides your efforts within ageing research. Like something that enumerates the crises within the polycrisis.

But it seems like you’re imagining something that is like “do stuff that adds up rather than stuff that doesn’t”. For example, do you imagine the superstructure is encouraging of ‘be a civil servant making sanitation work well in your city’ or not? I was imagining that it might rule it out, and similarly might rule out ‘try and address renewable energy via Strategy A’, and I was saying “I feel pretty hopeful about people trying Strategy A and making sanitation work and so on, even if it’s not part of an N-Step Plan for saving civilisation”.

Vaniver

I guess there are questions like “many rationalist-types like to think about housing reform. Is that one of the crises we have to address directly, or not?”. Is that the type of thing you’re hoping to answer?

I think there’s a thing that Eliezer complains about a lot where the world is clearly insane-according-to-him and he’s sort of hopeless about it. Like, what do you expect? The world being insane is a Nash equilibrium, he wrote a whole book about the generators of that.

And part of me wants to shake him and say: if you think the FDA is making a mistake you could write them a letter! You could sue them! The world has levers that you are not pulling, and part of the way the world becomes more sane is by individual people pulling those levers. (I have not shaken Eliezer, in part because he is pulling levers and has done more than other people and so on.) There’s a ‘broken windows’ thing going on where fixing the visible problems makes a place seem less like the sort of place that has problems, and so people both 1) generate fewer problems and 2) invest more in fixing the problems that remain.

Like something that enumerates the crises within the polycrisis.

I think this is exactly what I’m looking for.

Like, imagine you went to the OpenPhil cause area website and learned that they will succeed at all their goals in the next 5 years (“succeed” here meant in the least ambitious way: an AI pause instead of developing superalignment, for example). Does that give you a sense of “great, we have fixed the polycrisis /​ exited the acute risk period /​ I am optimistic about the future”? I think currently “not yet”, and to be fair to them I don’t think they’re trying to Solve Every Problem.

To maybe restate my hypothesis more clearly:

I think if there were A Plan to make the world visibly less broken, made out of many components which are themselves made out of components that people could join and take responsibility for, this would increase the amount of world-fixing work being done and would meaningfully decrease the brokenness of the world. Further, I think there’s a lot of Common Cause of Many Causes stuff going on here, where people active in this project are likely to passively or actively support other parts of this project /​ there could be an active consulting /​ experience transfer /​ etc. scene built around it.

I think this requires genuine improvements in governance and design. I don’t think someone declaring themselves god-Emperor and giving orders either works or is reasonable to expect. I think this has to happen in deep connection to the mainstream (like, I am imagining people rewriting health care systems and working in government agencies and at insurance companies and so on, and many of the people involved not having any sort of broader commitment to The Plan).

kave

There are two reasons I can immediately feel for having a Plan.

The first is: you need a Plan to make sure that when you’ve finished it you’re “done” (where “done” might mean “have advanced significantly”), and to make sure you’re prioritising well.

The second is: it’s an organising principle to allow people to feel like they are pulling together, to see some ways in which their work is accumulating and to have a flag for a group that can have positive regard for each of its members doing useful stuff.

I feel pretty sold on the second! I’m not so sure about the first. Happy to go more into that, but also pretty happy to take it as a given for a while and allow the conversation to move past that question.

Vaniver

Hmm, I’m not sure that distinction is landing for me yet. Like, I think I mostly want the second, but in order for the second to be real the plan must also be real. (If I thought the SENS plan contained significant oversights or simplifications, for example, I would not expect it to be very useful for motivating useful effort.)

kave

If I were to discuss something else, I would be pretty interested in questions like “what does the plan need to cover?”, “what are some of the levers that are going tragically unpulled?”, or “what does the superstructure need to be like to be sociologically/​psychologically viable (or whatever other considerations)?”

Vaniver

Yeah, happy to move to specifics. I think I don’t have a complete Plan yet, and so some of the specifics are fuzzy. I think I’m also somewhat pessimistic about the Plan being constructed by a single person.

kave

I guess the difference is I expect more things to help some than you do? If I believed SENS were missing lots of things, I could still imagine being excited to work on it, as long as I believed the problems it identified were real, even if the list wasn’t complete. Admittedly, I would be a bit more predisposed to try piecing together a bunch of hacks and seeing where that took us.

kave

Totally makes sense to be pessimistic about the Plan being constructed by a single person. But it seems that the Plan will be constructed by people like you and me doing some kind of mental motion, and I was wondering if maybe we should just do some of that now. Sort of analogous to how the hope is that people will do the pieces of object-level work that add up to a solved polycrisis, it seems good if people do the pieces of meta-level work that add up to a Plan.

Vaniver

ok, so not attempting to be comprehensive:

  • Energy abundance. One of the answers for “wtf went wrong in 1970?” is energy prices stopped going down and went up instead. Having cheaper energy is generically good. Progress here looks like 1) improvements in geothermal energy /​ transitioning oil drilling to geothermal drilling, 2) continued rollouts of solar, 3) increased nuclear production. People are currently pretty excited about permitting reform for geothermal production for a number of reasons and I would probably go into something like that if I were going to work in this field.

  • Land use. The dream here is land value taxes, but there are lots of other things that make sense too. You brought to my attention the recent piece by Judge Glock about how local governments used to be pro-growth because it determined their revenues and then various changes to the legal and regulatory environment stopped that from being true, giving a lot of support to anti-growth forces. Recent successes in California have looked more like the state government (which is more pro-growth than local governments) seizing control in a lot of areas, but you could imagine various other ways to go about this that are better designed. Another thing here that is potentially underrated is just being a property developer in Berkeley/​SF; my understanding is that a lot of people working in the industry did not take advantage of Builder’s Remedy because they’re here for the long haul and don’t want to burn bridges, but I have only done a casual investigation.

  • Labor retooling. When I worked at Indeed (the job search website company) ~7 years ago there were three main types of jobs people wanted to place ads for, one of which was truckers. And so Indeed was looking ahead to when self-driving trucks would eat a bunch of those jobs, both to try to figure out how to replace the revenue for the company and to try to figure out how to help that predictable flood of additional users find jobs that are good for them. I don’t have a great sense of what works well here (people are excited about UBIs, and I think they address one half of the problem but leave the other half unaddressed). Now that I think we have economically relevant chatbots, I think this is happening (or on the horizon) for lots of jobs simultaneously.

  • Health care. The American system is a kludge that was patched together over a long time; satisfaction is low enough that I think there is potential to just redesign it from scratch. (See Amy Finkelstein’s plan, for example.)

  • Aging. It would be nice to not die, and people having longer time horizons possibly makes them more attuned to the long-term consequences of their actions.

  • Political polarization. If you take a look at partisan support for opposite-party presidents in the US, it’s declining in a pretty linear fashion, and projecting the lines forward it will not be that long until there is 0% Republican support for Democratic presidents (and vice versa). This seems catastrophically bad if you were relying on the American government as part of your global-catastrophe-avoidance-plan. More broadly, I have a sense that the American system of representative-selection is poorly fit to the modern world and satisfaction is low enough that there’s potential for reform.

  • Catastrophe avoidance. It seems like there should be some sort of global surveillance agency (either one agency or collaboration across Great Power lines or w/​e) that is ‘on top of things’ for biorisk and AI risk and so on. I’m imagining a ~30 year pause in AI development, here, which likely requires active management.
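The linear projection mentioned in the political-polarization item above can be sketched as a simple least-squares fit. The year/approval pairs below are rough illustrative figures, not actual survey data; the point is only the shape of the extrapolation, not the exact crossing year.

```python
# Fit a line to (illustrative) opposite-party presidential approval
# over time, then project forward to where the line crosses 0%.
import numpy as np

years = np.array([1955, 1970, 1985, 2000, 2010, 2020])
approval = np.array([49, 40, 31, 25, 13, 7])  # percent; made-up figures

# Degree-1 polyfit returns (slope, intercept) for a least-squares line.
slope, intercept = np.polyfit(years, approval, 1)
zero_year = -intercept / slope  # year where the fitted line hits 0%

print(f"slope: {slope:.2f} points per year")
print(f"fitted line reaches 0% around {zero_year:.0f}")
```

With these hypothetical numbers the fitted line crosses zero within a couple of decades, which is the “not that long until there is 0% support” shape of the argument.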

There are some things that maybe belong on this list and maybe don’t? Like I think education is a thing that people perennially love to complain about but it’s not actually obvious to me that it’s in crisis to the degree that healthcare is in crisis, or that it won’t be fixed on its own by independent offerings. (Like, Wikipedia and Khan Academy and all that are already out there; it would be nice for public schools to not be soul-destroying but I think I am more worried about ‘the world’ being soul-destroying.) I think this list would be stronger if I had more clearly negative examples of “yeah sorry we don’t care about Catalan independence” or w/​e, but this seems like the sort of thing that is solved by a market mechanism (if no one buys into the prize for fixing Catalan independence, or no one decides to work on it).

Vaniver

So one of the things that feels central to me is the question of ‘design’ in the Christopher Alexander sense; having explicitly identified the constraints and finding a form that suits all of them.

I think the naive pro-growth view is “vetocracy is terrible” – when you have to get approval from more and more stakeholders to do projects, projects are that much harder to do, and eventually nothing gets done. But I think we need to take the view that “just build it” is the thesis, “get approval” is the antithesis, and the synthesis is something like “stakeholder capitalism” where getting stakeholder approval is actually just part of the process but is streamlined instead of obstructive.

Like, as population density increases, more people are negatively affected by projects, and so the taxes on projects should actually increase. But also more people should be positively affected by projects (more people can live in an 8-story apartment building than a 4-story one) and so on net this probably still balances out in favor of building. We just need to make the markets clear more easily, which I think involves looking carefully at what the market actually is and redesigning things accordingly.
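A toy model of the balance described above, with entirely made-up numbers: if both the compensation owed to affected neighbours and the number of people housed scale with the size of the project, but the housing benefit dominates, then the net value of building stays positive and grows with density.

```python
# Toy model: bigger buildings owe more to stakeholders but also house
# more people. All parameter values are hypothetical illustrations.

def net_value(storeys, value_per_home=100.0, homes_per_storey=4,
              neighbours_per_storey=10, payment_per_neighbour=5.0):
    """Net social value of a building: housing benefit minus
    compensation paid to affected neighbours."""
    benefit = storeys * homes_per_storey * value_per_home
    stakeholder_cost = storeys * neighbours_per_storey * payment_per_neighbour
    return benefit - stakeholder_cost

for storeys in (4, 8):
    print(f"{storeys}-storey building: net value {net_value(storeys):.0f}")
```

Under these assumptions the 8-storey building nets more than the 4-storey one even though it pays out twice as much to neighbours, which is the “still balances out in favor of building” claim in miniature.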

kave

As well as negative examples, I wonder if it would be good to contend with the possibility of other ‘royal solutions’ in the absence of AI. For example, human intelligence enhancement. My guess is that that isn’t a solution, but it does possibly change the landscape so much that many other problems (for example ageing and energy abundance) become trivial.

Vaniver

I think human intelligence enhancement definitely goes on the list. I think a large part of my “genuine improvements in governance and design” is something like “intelligence enhancement outside of skulls”—like if prediction markets aggregate opinions better than pundits writing opinion columns, a civilization with prediction markets is smarter than a civilization with newspapers. A civilization with caffeine is also probably smarter than a civilization with alcohol, but that’s in a within-skull sort of way. Doing both of those seems great.

kave

Designing the markets to clear more easily is quite appealing. But it also has some worrisome ‘silver bullet’ feeling to it; a sense of impracticality or of my not having engaged enough with the details of current problems for this to be the right next step.

Vaniver

Yeah, so one of my feelings here also comes from a Matt Yglesias piece on Slow Boring called “Big ideas aren’t enough”. Roughly speaking, his sense (as a detail-oriented Democratic policy wonk) is that the Republican policy wonks just really weren’t delivering on the details, and so a lot of their efforts failed. It’s one thing to say “we need to have energy abundance” and another thing to say “ok, here’s this specific permitting exemption that oil and gas projects have, if we extend that to geothermal projects it’ll have these positive effects which outweigh those negative effects”. It’s one thing to have spent 5 minutes thinking about healthcare and guessing at a solution, and another thing to have carefully mapped out the real constraints and why you believe they’re real and find something that might actually be a Pareto improvement for all involved (or, if it’s a Kaldor-Hicks improvement instead, figure out who needs to be bribed and how much it would take to bribe them).

I think it’s more plausible that whatever Consortium of Concerned Citizens can identify the problem areas than that they can solve them—one of the things that I think broadly needs to change is a switch from “people voting for solutions” to “people voting for prices for solutions” that are then provided by a market. If you think increasing CO2 levels in the atmosphere is the problem, it really shouldn’t concern you how CO2 levels are adjusted so long as they actually decrease, and you should let prices figure out whether that’s replacement with solar or nuclear or continuing to burn fossil fuels while sequestering the carbon or whatever. [Of course this is assuming that you’re pricing basically every externality well enough; you don’t want the system to be perpetually sweeping the pollution under the next rug.]
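The “vote on prices for solutions, let the market supply them” idea could be sketched as a simple clearinghouse: voters fix a CO2-reduction target, suppliers offer abatement at a cost per tonne, and the clearinghouse fills the target from the cheapest offers first, indifferent to the method. The supplier names, costs, and capacities below are hypothetical.

```python
# Minimal clearinghouse sketch: fill an abatement target from the
# cheapest offers first, regardless of which technology supplies it.

def clear_market(target_tonnes, offers):
    """offers: list of (name, cost_per_tonne, capacity_tonnes).
    Returns (purchases, unmet_tonnes); each purchase is
    (name, tonnes_bought, total_cost)."""
    purchases = []
    remaining = target_tonnes
    for name, cost, capacity in sorted(offers, key=lambda o: o[1]):
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        purchases.append((name, take, take * cost))
        remaining -= take
    return purchases, remaining

offers = [
    ("solar buildout", 40, 60),    # $/tonne, tonnes available (made up)
    ("nuclear plant", 55, 100),
    ("carbon capture", 90, 500),
]
purchases, unmet = clear_market(150, offers)
print(purchases, "unmet:", unmet)
```

Note that the voters here only chose the target (150 tonnes); whether it gets met by solar, nuclear, or sequestration falls out of the prices, which is the point of the mechanism.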

kave

There’s also a consideration that favours solutions over prices for solutions: it’s often easier to check that inputs conform than that outputs do.

A friend was trying to get fire insurance for their venue, and the fire insurers needed them to upgrade their fire alarm system. They asked the insurer “how much more would it be not to upgrade the fire alarm system?” and the answer was “No. We do not offer insurance if you don’t upgrade the system”, presumably because the bespoke evaluation was too expensive.

[I don’t quite know the import of this to what you wrote above, but it’s a heuristic and anecdote that pings for me when this sort of stuff comes up.]

kave

So, we wrapped there because of time constraints. Thanks for chatting. I enjoyed this. I would be interested in picking up again in the future.