The Dial of Progress


“There is a single light of science. To brighten it anywhere is to brighten it everywhere.” – Isaac Asimov

You cannot stand what I’ve become
You much prefer the gentleman I was before
I was so easy to defeat, I was so easy to control
I didn’t even know there was a war

– Leonard Cohen, There is a War

“Pick a side, we’re at war.”

– Stephen Colbert, The Colbert Report

Recently, both Tyler Cowen, in response to the letter establishing consensus on the presence of AI extinction risk, and Marc Andreessen, on the topic of the wide range of AI dangers and upsides, have come out with posts whose arguments seem bizarrely poor.

These are both excellent, highly intelligent thinkers. Both clearly want good things to happen to humanity and the world. I am confident they both mean well. And yet.

So what is happening?

A Theory

My theory is that they, and similar others, believe discourse in 2023 cannot handle nuance.

Instead, there is effectively a single Dial of Progress, based on the extent to which our civilization places restrictions, requires permissions, and imposes strangleholds on human activity, from AI to energy to housing and beyond.

If we turn this dial down, and slow or end such progress across the board, our civilization will perish. If we turn it up, we will prosper.

In this view, any talk of either extinction risks or other AI downsides is functionally an argument for turning the dial down, saying Boo Progress, when we instead desperately need to turn the dial up and say Yay Progress.

It would, again in this view, be first best to say Yay Progress in most places, while making a careful narrow exception that lets us guard against extinction risks. Progress is no use if you are dead.

Alas, this is too nuanced, and thus impossible. Trying will not result in the narrow thing that would protect us. Instead, trying turns the dial down, which does harm everywhere, and also does harm within AI, because the new rules will favor insiders and target mundane utility without guarding against danger, and the harms you do elsewhere inhibit sane behavior.

Thus, the correct thing to do is shout Yay Progress from the rooftops, by whatever means are effective. One must think in terms of the effects of the rhetoric on the dial and the vibe, not on whether the individual points track underlying physical reality. Caring about individual points and how they relate to physical reality, in this model, is completely missing the point.

This doesn’t imply there is nothing to be done to reduce extinction risks. Tyler Cowen in particular has supported at least technical private efforts to do this. Perhaps people in National Security or high in government, or various others who could help, could have their minds changed in good directions that would let us do nuanced useful things. But such efforts must be done quietly, among the cognoscenti and behind the scenes, a la ‘secret congress.’

While I find this model incomplete, and wish for higher epistemic standards throughout, I agree that in practice this single Dial of Progress somewhat exists.

Also Yay Progress.

Robin Hanson explicitly endorses the maximalist Yay Progress position. He expects this will result in lots of change, including the replacement of biological humans with machine-based entities that are very different from us and mostly do not share many of our values, in a way that I would consider human extinction. He considers such machines to be our descendants, and considers the alternative worse.

This post fleshes out the model, its implications, and my view of both.

Consider a Dial

What if, like the metaphorical single light of science, there was also a single knob of (technological, scientific, economic) progress?

Collectively, through the sum of our little decisions, the dial is moved.

If we turn the dial up, towards Yay Progress, we get more progress.

If we turn the dial down, towards Boo Progress, we get less progress.

As the dial is turned up, people are increasingly empowered to do a wide variety of useful and productive things, without needing to seek out permission from those with power or other veto points. It is, as Marc Andreessen puts it, time to build. Buildings rise up. Electrical power and spice flow. Revolutionary science is done. Technologies get developed. Business is done. The pie grows.

There are also downsides. Accidents happen. People get hurt. People lose their jobs, whether or not the total quantity and quality of jobs increases. Inequality might rise, distribution of gains might not be fair. Change occurs. Our lives feel less safe and harder to understand. Adaptations are invalidated. Entrenched interests suffer. There might be big downsides where things go horribly wrong.

As the dial is turned down, people are increasingly restricted from doing a wide variety of useful and productive things. To do things, you need permission from those with power or other veto points. Things are not built. Buildings do not rise. Electrical power and spice do not flow. Revolutionary science is not done. Technology plateaus. Business levels off. The pie shrinks.

There are also upsides. Accidents are prevented. People don’t get hurt in particular prevented ways. People’s current jobs are more often protected, whether or not the total quantity and quality of jobs increases. Inequality might fall if decisions are made to prioritize that, although it also might rise as an elite increasingly takes control. Redistribution might make things more fair, although it might also make things less fair. Change is slowed. Our lives feel safer and easier to understand. Adaptations are sustained. Entrenched interests prosper. You may never know what we missed out on.

It would be great if there were not one dial but many, so we could build more houses where people want to live and deploy increasing numbers of solar panels and ship things between our ports, while perhaps choosing to apply restraint to gain-of-function research, chemical weapons and the Torment Nexus.

Alas, we mostly don’t have a ‘good things dial’ and a ‘bad things dial’, let alone more nuance than that. In practice, there’s one dial.

While I do not think it is that simple, there is still a lot of truth to this model.

One Dial Covid

Consider the parallel to Covid.

The first best solution would have been to look individually at proposed actions, consider their physical consequences, and choose the best possible actions that strike a balance between economic costs and health risks and other considerations, and adapt nimbly as we got more information and circumstances changed.

That’s mostly not what we got. What did we mostly get? One dial.

We had those who were ‘Boo Covid’ and always advocated Doing More. We had those who said ‘Yay Covid’ (or as they would say ‘Yay Freedom’ or ‘Yay Life’) and advocated returning to normal. The two sides then fought over the dial.

Tyler Cowen was quite explicit about this on March 31, 2020, in response to Robin Hanson’s proposal to deliberately infect the young to minimize total harm:

Robin Hanson: @tylercowen gives name “Hansonian Netherlands” to article on that nation’s weak lockdown “allowing large numbers to contract the illness at a controlled pace”. But I’ve argued only for COMBINATION of local controlled infection + isolation, NOT for just letting it run wild.

Tyler Cowen: have edited, but I think de facto it is what your alternative would boil down to.

Robin Hanson: Care to make an argument for such a strong and non-obvious claim?

Tyler Cowen: It all gets filtered through public choice, it is not a technocracy where you are in charge. Netherlands and Sweden are the closest Western instantiations of your approach.

Robin Hanson: I’m arguing mainly to ALLOW small groups to choose to variolate; I haven’t proposed a government program on it. Are you suggesting that merely allowing this freedom is itself likely to result in governments letting the pandemic run wild?

Tyler Cowen: Only a very blunt set of messages can be sent, and those have to be fairly universal at that.

Robin Hanson: So that’s a “yes”? The message to allow this freedom to variolate would get mixed up with “run wild” advocacy messages, and so that’s what would happen?

Tyler Cowen: Yes and keep in mind de facto law enforcement is minimal now, and I don’t think many are doing this, recklessness and indifference aside.

We didn’t entirely get only one dial. Those who cared about the physical consequences of various actions did, at least some of the time, manage to pull this particular rope sideways. We got increasingly (relatively) sane over time on masks, on surfaces, on outdoors versus indoors, and on especially dangerous activities like singing.

That was only possible because some people cared about that. With less of that kind of push, we would have had less affordance for such nuance. With more of that kind of push, we would perhaps have had somewhat more. The people who were willing to say ‘I support the sensible version of X, but oppose the dumb version’ are the reason there’s any incentive to choose the sensible versions of things.

There was also very much a ‘this is what happens when you turn the dial up on Boo Covid, and it’s not what you’d prefer, and you mostly have to choose a direction on the dial’ aspect to everything. A lot of people have come around to the position ‘there was a plausible version of Boo Covid that would have been worthwhile, but given what we know now, we should have gone Yay Freedom instead and accepted the consequences.’

Suppose, counterfactually, that mutations of Covid-19 threatened to turn it into an extinction risk if it wasn’t suppressed, and you figured this out. We needed to take extraordinary measures, or else. You have strong evidence suggesting this is 50% to happen if Covid isn’t suppressed worldwide. You shout from the rooftops, yet others mostly aren’t buying it or don’t seem able to grapple with the implications. ‘Slightly harsher suppression measures’ would have a minimal impact on our chances – to actually prevent this, we’d need some combination of a highly bold research project and actual suppression, and fast. This is well outside the Overton Window. Simply saying ‘Boo Covid’ seems likely to only make things worse and not get you what you want. What should you have done?

Good question.

Yay Progress

Suppose there was indeed a Dial of Progress, and they gave me access to it.

What would I do?

On any practical margin, I would crank that sucker as high as it can go. There’s a setting that would be too high even for me, but I don’t expect the dial to offer it.

What about AI? Wouldn’t that get us all killed?

Well, maybe. That is a very real risk.

I’d still consider the upsides too big to ignore. Being able to have an overall sane, prosperous society, where people would have the slack to experiment and think, and not be at each other’s throats, with an expanding pie and envisioning a positive future, would put us in a much better place. That includes making much better decisions on AI. People would feel less like they have no choice, either personally or as part of a civilization, less like they couldn’t speak up if something wasn’t right.

People need something to protect, to hope for and fight for, if we want them to sacrifice in the name of the future. Right now, too many don’t have that.

This includes Cowen and Andreessen. Suppose instead of one dial there were two dials, one for AI capabilities and one for everything else. If we could turn the everything else dial up to 11, there would be less pressure to keep the AI one at 10, and much more willingness to suggest using caution.

Importantly, moving the dial up would differentially assist places where a Just Do It, It’s Time to Build attitude is insanely great, boosting our prospects quite a lot. And I do think those places are very important, including indirectly for AI extinction risk.

There are definitely worlds where this still gets us killed, or killed a lot faster than otherwise. But there are enough worlds where that’s not the case, or the opposite is true, that I’d roll those dice without my voice or hand trembling.

Alas, those who believe in the dial and in turning the dial up to Yay Progress are fighting an overall losing battle, and as a result they are lately focusing differentially on what they see as their last best hope no matter the risks, which is AI.

Arguments as Soldiers

If you think nuance and detail and technical accuracy don’t matter, and the stakes are high, it is easy to see how you can decide to use arguments as soldiers.

It is easy to sympathize. There is [important good true cause], locked in conflict with [dastardly opposition]. Being in the tank for the good cause is plausibly the right thing to do, the epistemic consequences be damned; it’s not like the nuance gets noticed.

Thus the resort to Bulverism and name calling, to amplifying every anti-cost or anti-risk argument and advocate, to limitless isolated demands for rigor. The saying of things that don’t make sense, or that have known or obvious knock-down counterarguments, often repeatedly.

Or, in the language of Tim Urban’s book, What’s Our Problem?, the step down from acting like sports fans to acting like lawyers or zealots.

And yet here I am, once again, asking everyone to please stop doing that.

I don’t care what the other side is doing. I know what the stakes are.

I don’t care. Life isn’t fair. Be better. Here, more than ever, exactly because it is only by finding and implementing carefully crafted solutions that care about such details that we can hope to get out of this mess alive.

A lot of people are doing quite well at this. Even they must do better.

Huge If True

Here, you say. Let me help you off your high horse.

You might not like that the world rejects most nuance and mostly people are fighting to move a single dial. That does not make it untrue. What are you going to do about it?

We can all have sympathy for both positions – those that believe in one dial (Dialism? Onedialism?) who prioritize fighting their good fight and their war, and those who fight the other good fight for nuance and truth and physically modeling the world and maybe actually finding solutions that let us not die.

We can create common knowledge of what is happening. The alternative is a bunch of people acting mind-killed, and thinking other people have lost their minds. A cycle where people say words shaped like arguments not intended to hold water, and others point out the water those words are failing to hold. A waste of time, at best. Once we stop pretending, we can discuss and strategize.

From my perspective, those saying that which is not, using obvious nonsense arguments, in order to dismiss attempts to make us all not die, are defecting.

From their perspective, I am the one defecting, as I keep trying to move the dial in the wrong way when I clearly know better.

I would like to engage in dialog and trade. We both want to move the dial up, not down. We both actually want to not die.

What would mutually recognized cooperation look like?

I will offer some speculations.

A Trade Offer Has Arrived

One trade we can make is to engage in real discussions aimed at figuring things out. To what extent is the one dial theory true? What interventions will have what results? What is actually necessary to increase our chances of survival and improve our future? How does any of this work? What would be convincing information either way? We can’t do that with the masks on. With the masks off, why not? If the one dial theory is largely true, then discussing it will be nuance most people will ignore. If the one dial theory is mostly false, then building good models is the important goal.

If this attempt to understand is wrong, I want to know what is really going on. Whether or not it is right, it would be great to see a similar effort in reverse.

A potential additional trade would be a shift in emphasis towards private efforts for targeted interventions, where we agree nuance is possible. Efforts to alert, convince and recruit the cognoscenti in plain sight would continue, but be focused outside the mainstream.

In exchange, perhaps private support could be offered in those venues. This could involve efforts with key private actors like labs, and also key government officials and decision makers.

Another potential trade could be a shift of focus away from asking to slow down and towards calls to invest equally heavily in finding solutions while moving forward, as Jason Crawford suggests in his Plea for Solutionism. Geoff Hinton suggests a common sense approach: for every dollar or unit of effort put into foundational capabilities work, we put a similar amount of money and effort into ensuring this result does not kill us.

That sounds like a lot, and far exceeds the ratios observed in places such as Anthropic, yet it does not seem so absurd or impossible to me. Humans pay a much higher ‘alignment tax’ than this to civilize and align ourselves with each other, a task that consumes most of our resources. Why should we expect this new, less forgiving task to be easier?

A third potential trade is emphasis across domains. Those worried about AI extinction risks put additional emphasis on the need for progress and sensible action across a wide variety of potential human activity – we drag the dial up by making and drawing attention to true arguments on housing and transportation, energy and climate, work and immigration, healthcare and science, and even on the mundane utility aspects of AI. We work to crank up the dial. I’m trying to be part of the solution here as much as I can, as I sincerely think helping in those other domains remains critical.

In exchange, advocates of the dial can also shift their focus to those other domains. And we can place an emphasis on details that check out and achieve their objectives. As Tyler put it in his Hayek lecture, he tires of talking about extinction risks and hesitates to mention them. So don’t mention them, at least in public. Much better to respectfully decline to engage, and make it clear why; everyone involved gets to save time and avoid foolishness.

Perhaps there is even a trade of the form: rather than us mostly calling for interventions instead of focusing more on finding good implementations, and others insisting that good implementations and details are impossible in order to make them so and convince us to abandon hope and not try, we could together focus on finding and getting better implementations and details.

Conclusion

Seeing highly intelligent thinkers who are otherwise natural partners and allies making a variety of obvious nonsense arguments, in ways that seem immune to correction, in ways that seem designed to prevent humanity from taking action to prevent its own extinction, is extremely frustrating. Even more frustrating is not knowing why it is happening, and responding in unproductive ways as a result.

At the same time, it must be similarly frustrating for those who see people they view as natural partners and allies talking and acting in ways that seem like doomed strategic moves that will only doom civilization further, seeming to live in some sort of dreamland where nuance and details and arguments can win out and a narrow targeted intervention might work, whereas in other domains we seem to know better. Why aren’t we wising up and getting with the program?

Hopefully this new picture can lead to more productive engagement and responses, or even profitable trade. Everyone involved wants good outcomes for everyone. Let’s figure it out together.