I believe that we will win.
An echo of an old ad for the 2014 US men’s World Cup team. It did not win.
I was in Berkeley for the 2025 Secular Solstice. We gather to sing and to reflect.
The night’s theme was the opposite: ‘I don’t think we’re going to make it.’
As in: Sufficiently advanced AI is coming. We don’t know exactly when, or what form it will take, but it is probably coming. When it does, we, humanity, probably won’t make it. It’s a live question. Could easily go either way. We are not resigned to it. There’s so much to be done that can tilt the odds. But we’re not the favorite.
Raymond Arnold, who ran the event, believes that. I believe that.
Yet in the middle of the event, the echo was there. Defiant.
I believe that we will win.
There is a recording of the event. I highly encourage you to set aside three hours at some point in December, to listen, and to participate and sing along. Be earnest.
If you don’t believe it, I encourage this all the more. If you don’t understand the mindset, or the culture behind it, or consider it an opponent or dislike it, and especially if yours is a different fight? I encourage this even more than that. You can also attend New York’s Solstice on the 20th.
You will sing songs you know, and songs you don’t. You will hear tales of struggles, of facing impossible odds or unbearable loss and fighting anyway, of how to face it all and hopefully stay sane. To have the end, if it happens, find us doing well.
I live a wonderful life.
I am crying as I write this. But when I am done, I will open a different Chrome window. I will spend the day with friends I love dearly and watching football games. This evening my wife and I will attend a not wedding of two of them, that is totally a wedding. We will fly home to our wonderful kids, and enjoy endless wonders greater than any king in the beating heart of the world. I want for nothing other than time.
Almost every day, I will mostly reject those wonders. I will instead return to my computer. I will confront waves of events and information. The avalanche will accelerate. Release after release, argument after argument, policies, papers, events, one battle after another. People will be determined to handle events with less dignity than one could imagine, despite having read this sentence. I fight to not be driven into rages. I will triage. I will process. I will change my mind. I will try to explain, just one more time. I will move pieces around multiple chessboards.
We continue. Don’t tell me to stop. Someone has to, and no one else will.
I know if I ignored it, anything else would soon turn to ash in my mouth.
I will look at events, and say to myself as I see the moves unfolding, the consequences of choices I made or influenced, for good and ill: This is the world we made.
It ain’t over till it’s over. Never leave a ballgame early. Leave it all on the field, for when the dust covers the sun and all you hope for is undone. You play to win the game.
The odds are against us and the situation is grim. By default, we lose. I act accordingly, and employ some of the unteachable methods of sanity and the mirror version of others, all of which are indeed unteachable but do totally work.
Yet the echo is there. In my head. It doesn’t care.
I believe that we will win.
See: @AnnaSalamon’s Believing In.
I’ve recently been meditating on Eliezer’s:
I think Anna Salamon is right that there are two separate things people call beliefs, one of which is about probabilities, and one is about what things you want to invest in.
I think it’s a dangling thread of rationality discourse how to fully integrate Believing In. Fortunately, it’s The Review Season, so it’s a good time to go back to the Believing In post and review it.
I wonder if some of the conflation between belief-as-prediction and belief-as-investment is actually a functional social technology for solving coordination problems. To avoid multi-polar traps, people need to trust each other to act against individual incentives: to rationally pre-commit to acting irrationally in the future. Just telling people “I’m planning to act against my incentives, even though I know that doing so will be irrational at the time” might not be very convincing, but instead claiming to have irrationally certain beliefs that would change your incentives were that certainty warranted can be more convincing. Even if people strongly suspect that you’re exaggerating, they know that the social pressure to avoid a loss of status by admitting that you were wrong will make you less likely to defect.
For example, say you’re planning to start a band with some friends. You all think the effort and investment will be worth it so long as there’s a 50% chance of the band succeeding, and you all privately think there’s about a 70% chance of the band succeeding if everyone stays committed, and a near 0% chance if anybody drops out. Say there’s enough random epistemic noise that you think it’s pretty likely someone in the band will eventually drop their odds below that 50% threshold, even when you personally still give success conditional on commitment much better odds. So, unless you can trust everyone to stay committed even if they come to believe it’s not worth the effort, you might as well give up on the band before starting it. Classic multi-polar trap. If, however, everyone at the start is willing to say “I’m certain we’ll succeed”, putting more of their reputation on the line, that might build enough trust to overcome the coordination problem.
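The arithmetic behind the band example can be sketched as a quick expected-value check. The probabilities (50% threshold, 70% conditional on commitment, near 0% otherwise) come from the example above; the payoff and cost numbers are hypothetical, chosen so the break-even point lands at 50%:

```python
# Expected-value sketch of the band example (illustrative numbers).
# Success pays `payoff`; the effort costs `cost` regardless of outcome.
# Starting the band is worth it when p_success * payoff - cost > 0,
# which with these numbers is exactly the 50% threshold.

payoff = 100.0  # hypothetical value of the band succeeding
cost = 50.0     # hypothetical cost of everyone's effort

def worth_starting(p_success: float) -> bool:
    """Is the expected value of committing positive?"""
    return p_success * payoff - cost > 0

# Everyone privately estimates 70% conditional on full commitment:
print(worth_starting(0.70))  # True: 0.70 * 100 - 50 = 20 > 0

# But if anyone drops out, success probability is near 0%:
print(worth_starting(0.0))   # False: -50 < 0

# At exactly the 50% threshold the expected value is zero, so any
# downward drift in someone's estimate flips their incentive to quit:
print(worth_starting(0.50))  # False: 0 is not > 0
```

This is why the trap binds: each member's commitment is only individually rational above the threshold, while the group's success requires commitment to survive estimates that drift below it.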
Of course, this can create all sorts of epistemic problems. Maybe everyone in the band comes to believe that it’s not worth the effort, but incorrectly think that saying so will be a defection. Maybe their exaggerated certainty misleads other people in ways that cause them to make bad investments or to dangerously misunderstand the music industry.
Maybe there’s a sense in which this solution to individual coordination problems is part of a larger coordination problem: everyone is incentivized to reap the value of greater trust, but causes a greater loss of value to people more broadly by damaging the epistemic commons.
There might be some motivated reasoning on that last point, however, since I definitely find it emotionally uncomfortable when people say inaccurate things for social reasons.
So you form the band and try to figure out how to keep everyone working together so that no one’s confidence drops below 50%. If you’re not sure you can do that, consider the value of trying anyway and seeing if you can do it. If the expected values still don’t work out, don’t start the band.
As with the conflation around “belief”, it’s better to have a particular meaning in mind when calling something “rational”, such as methods that help more with finding truth, or with making effective plans.
(If there’s something you should precommit to, it’s not centrally “irrational” to do that thing. Or if it is indeed centrally “irrational” to do it, maybe you shouldn’t precommit to it. In this case, it’s only “irrational” according to a myopic problem statement that is itself not the right thing to follow. And in the above narrow sense of “rationality” as preference towards better methods rather than merely correctness of individual beliefs and decisions according to given methods, none of these things are either “rational” or “irrational”.)
I agree, though if we’re defining rationality as a preference for better methods, I think we ought to further disambiguate between “a decision theory that will dissolve apparent conflicts between what we currently want our future selves to do and what those future selves actually want to do” and “practical strategies for aligning our future incentives with our current ones”.
Suppose someone tells you that they’ll offer you $100 tomorrow and $10,000 today if you make a good-faith effort to prevent yourself from accepting the $100 tomorrow. The best outcome would be to make a genuine attempt to disincentivize yourself from accepting the money tomorrow, but fail and accept the money anyway; however, you can’t actually try to make that happen without violating the terms of the deal.
If your effort to constrain your future self on day one does fail, I don’t think there’s a reasonable decision theory that would argue you should reject the money anyway. On day one, you’re being paid to temporarily adopt preferences misaligned with your preferences on day two. You can try to make that change in preferences permanent, or to build an incentive structure to enforce that preference, or maybe even strike an acausal bargain with your day two self, but if all of that fails, you ought to go ahead and accept the $100.
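The payoff structure of the two-day deal can be enumerated directly (all dollar amounts are from the example above; the function name is just for illustration):

```python
# Payoff enumeration for the two-day deal.
# Day one: $10,000 for a good-faith effort to block day-two acceptance.
# Day two: $100 if you accept the money anyway.

def total_payout(made_effort: bool, accepted_day_two: bool) -> int:
    day_one = 10_000 if made_effort else 0
    day_two = 100 if accepted_day_two else 0
    return day_one + day_two

# Effort made, but the self-binding fails and you accept anyway:
print(total_payout(True, True))    # 10100 -- the best outcome
# Effort made and the binding holds:
print(total_payout(True, False))   # 10000
# No genuine effort, so no day-one payment:
print(total_payout(False, True))   # 100
```

The table makes the tension explicit: the highest-paying row requires a good-faith effort that genuinely fails, which is exactly the outcome you cannot aim at without forfeiting the day-one payment.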
I think coordination problems are a lot like that. They reward you for adopting preferences genuinely at odds with those you may have later on. And what’s rational according to one set of preferences will be irrational according to another.
That’s one of the things motivating UDT. On day two, you still ask what global policy you should follow (that in particular encompasses your actions in the past, and in the counterfactuals relative to what you actually observe in the current situation). Then you see where/when you actually are, what you actually observe, and enact what the best policy says you do in the current situation. You don’t constrain yourself on day one, but still enact the global policy on day two.
Adopting preferences is a lot like enacting a policy, but when enacting a policy you don’t need to adopt preferences; a policy is something external, an algorithmic action (instead of choosing Cooperate, you choose to follow some algorithm that decides what to do, even if that algorithm gets no further input). Contracts in the usual sense act like that, and assurance contracts are an example where you are explicitly establishing coordination. You can judge an algorithmic action like you judge an explicit action, but there are more algorithmic actions than there are explicit actions, and algorithmic actions taken by you and your opponents can themselves reason about each other, which enables coordination.
– Viktor Frankl, Man’s Search for Meaning
Pieces of these two passages have been used for decades, but rarely do I see the whole paragraphs with context. I believe the full context applies here. We can give up, or we can triumph, the difference “depends on decisions but not on circumstances”. Some humans have it within them to persevere in any situation. Remember them, and always strive to be more like them.
I am long the human race (in the economic sense).
I only realized halfway through that this was a quote. Suggestion: format it as one. (On desktop, by selecting all the quote text and then choosing the quotation mark symbol.)
I edited it.
Thank you!
Don’t have access on desktop. Is there a way to format via mobile?
MondSemmel is correct but if you don’t want to use the menu, type “> ” at the start of a new line and it will begin a quote block (you can also use >! for spoiler tags).
Thanks!
I just tried it on mobile in a browser, and it works the same there: edit your comment via the ⋮ menu in the top right, and select the text of the paragraphs you want to turn into a quote. Then a popup with formatting options opens where you can find the quotation formatting behind another ⋮ option.
Minor reference that I agree wasn’t worth spelling out in the post but seemed nice to include: A Little Echo is a song I wrote in 2012 as “a cryonics funeral song”, about the various ways that echoes of people can survive.
It hasn’t turned out to be a mainstay Solstice song. I was actually a bit sad that this Solstice turned out, last-minute and accidentally, to be the most cryonics-heavy Solstice I’ve led (as a recurring B Plot), but it didn’t really make sense to do the song because other songs were filling its niche as a singalong.