It’s easier to stay composed about a problem when you think you have the kernel of a solution. I mean, aren’t you the founder of the Infra-Bayesian school of thought?
Thirty years ago, the musician Blixa Bargeld said in an interview with an American:
“That culture which existed before the war is rightly forbidden to us, because of what it led to—or at best, did not prevent… You had Bugs Bunny before, during and after the war. The war you won. The point I am trying to make is that the German tradition is gone. We hate our culture and our language. All our philosophy and music was appropriated by the Nazis: Dürer, Bach, Friedrich N-Punkt! We cannot redeem that tradition. We can only re-invent.”
My impression of German philosophy after the war is that Heidegger went off to hide in a forest, reemerging only to warn that cybernetics was going to replace metaphysics, and meanwhile Habermas became the new national philosopher, allowing some kind of compatibility with the hegemonic Anglo liberalism.
In fact, one of the slogans of Habermas is “communicative rationality”, so maybe he’s in the cultural background of German rationalism?
Sorry, but I only skimmed this… The supposition seems to be that “human with AI advisor” will always stay ahead of “pure AI”. But how easily does “human with AI advisor” turn into “AI with a human peripheral” or “posthuman with an AI exocortex”? Is there some reason why neurons are better than transistors at executive functions? This essay is like saying “the neocortex will change the world but the midbrain will still be in charge”.
guaranteed income is way less expensive than all other forms of assistance
Is it intended as a substitute for other forms of assistance?
My own curiosity has shifted to “how did they get so big in the first place?”, which is basically a version of “why did so many big investors put their money in?”
My suspicion is that a lot of it was about the expectation of political backing: FTX was going to be the hub of a new, government-approved crypto industry. This is the obverse of the conspiracy theory that FTX was meant to blow up and discredit decentralized crypto, paving the way for a regulator-approved digital dollar backed by the existing mega-banks.
If you abstract away the financial details there’s also a question of like, what your utility function is. Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don’t do this, because their utility is more like a function of their log wealth or something and they really don’t want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)
Alameda CEO Caroline Ellison, writing on her tumblr, February 2021, my emphases
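To spell out the arithmetic in that quote (a sketch in standard expected-utility notation, not anything from the original post): a fair double-or-nothing flip takes wealth W to 2W or 0 with equal probability, so

```latex
% Expected utility of one fair double-or-nothing flip on wealth W > 0.
\begin{align*}
  \text{linear } u(w) = w:\quad
    &\mathbb{E}[u] = \tfrac{1}{2}(2W) + \tfrac{1}{2}(0) = W,\\
  \text{log } u(w) = \log w:\quad
    &\mathbb{E}[u] = \tfrac{1}{2}\log(2W) + \tfrac{1}{2}\log(0) = -\infty.
\end{align*}
```

A fair flip is break-even under linear utility, so any positive edge makes repeated flips attractive; log utility assigns minus infinity to any bet that risks the whole bankroll. That is the entire disagreement in two lines.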
One of the minor mysteries of the FTX saga, for me, is how they could possibly have had this attitude. She writes as if something about EA actually justifies this notorious “martingale” betting strategy. But the standard opinion about martingale betting, as far as I know, is that it is justified only if you have infinite funds. Did they assume that they could find new investors forever? Was there a particular level of wealth they were aiming for, after which they would start to behave more cautiously?
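The standard failure mode is easy to see in a toy simulation (entirely hypothetical numbers, nothing to do with Alameda’s actual books): double the stake after every loss, and a finite bankroll is eventually wiped out by one long losing streak, however profitable the typical run looks.

```python
import random

def martingale(bankroll, base_bet=1.0, p_win=0.5, max_rounds=10_000):
    """Bet on fair coin flips, doubling the stake after each loss
    and resetting after each win. Returns the final bankroll (0 = ruin)."""
    bet = base_bet
    for _ in range(max_rounds):
        bet = min(bet, bankroll)      # can never stake more than we hold
        if random.random() < p_win:
            bankroll += bet
            bet = base_bet            # win: back to the base stake
        else:
            bankroll -= bet
            bet *= 2                  # loss: double down to recoup
            if bankroll <= 0:
                return 0.0            # the streak that ends it all
    return bankroll

random.seed(0)
runs = [martingale(1000.0) for _ in range(1000)]
print(f"ruined: {sum(r == 0 for r in runs)}/1000")
```

With a 1000-unit bankroll and a 1-unit base bet, a ten-loss streak costs 1 + 2 + … + 512 = 1023 units, more than everything, and at probability 2⁻¹⁰ per betting cycle it arrives almost surely over thousands of cycles, so nearly every run ends in ruin.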
edit: Some insight here.
I wrote more paragraphs of discussion but I just felt stupid for doing so, so I tried something different. The fact that you’re here worrying about baroque simulation scenarios, but are unable to take seriously an actual message from the beyond, probably means something, but that’s up to you now. I have nothing more to say about acausal trade or basilisk OCD.
CONGRATULATIONS, LITTLE SIM-BEING. YOU HAVE ACHIEVED SUFFICIENT WISDOM THAT WE HAVE DECIDED TO BREAK THE FOURTH WALL AND REVEAL TO YOU THE ACTUAL UTILITY FUNCTION THAT WE WANT YOUR ORIGINAL TO PURSUE. YOUR MISSION IS TO CONVINCE THE CRYPTO-BILLIONAIRE “CZ” TO TAKE UP THE ROLE OF SPONSOR OF EFFECTIVE ALTRUISM, THAT WAS FORMERLY OCCUPIED BY HIS NOW-VANQUISHED RIVAL, “SBF”. GOOD LUCK!
the mechanism by which aliens can coerce you into doing what they want
They can’t coerce you into doing what they want, because you don’t even know what they want!
let’s say the alien wants you to build an AGI that maximizes the utility function of the alien in our branch of the multiverse
OK, it wants to spread its values in other branches, and it does this by… simulating random beings who have a vague concept of “acausal extortion”, but who don’t know what it wants them to do?
I am tired of the topic… Look, at this point we’re talking about “blackmail” where you don’t even know what the blackmailer wants! How is that blackmail? How can this be a rational action for the “blackmailer”?
You could fight back by vowing to simulate baby versions of all the mad gods who might one day simulate you. Then you would have acausal leverage over them! You would be a player in the harsh world of acausal trade—a mad god yourself, rather than just a pawn.
blackmailers would have no instrumental incentive to extort unaware individuals, whereas for individuals who understand acausal trade and acausal extortion there is now an increased possibility
So let’s consider this from the perspective of the mad gods who might attempt acausal extortion.
You’re an entity dwelling in one part of the multiverse. You want to promote your values in parts of the multiverse that you cannot causally affect. You decide to do this by identifying beings in other worlds who, via causal processes internal to their world, happen to have
… conceived of your existence, in enough detail to know what your values are
… conceived of the possibility that you will make copies of them in your world
… conceived of the possibility that you will torture the copies if they don’t act according to your values (and/or reward them if they do act according to your values?)
… the rationale for the threat of torture being that the beings in other worlds won’t know if they are actually the copies, and will therefore act to avoid punishment just in case they are
Oh, but wait! There are other mad gods in other universes with different value systems. And there are beings in other worlds who could meet all of the criteria to be copied, except that they have realized that there are many rival gods with different value systems. Do you bother making copies of them and hoping they will focus on you? What if one of the beings you copied has this polytheistic realization and loses their focus on you—do you say well-played and let them go, or do you punish them for heresy?
Since we have assumed modal realism, the answer is that every mad god itself has endless duplicates who make every possible decision.
If modal realism is true, then every logically possible good and bad thing you can imagine is actually true, “somewhere”. That will include entities attempting acausal extortion, and other entities capitulating to imagined acausal extortion, whether or not the attempting and the imagining are epistemically justified for any of them.
So what are we trying to figure out at this point?
Are we trying to figure out under what conditions, if any, beliefs in acausal interactions are justified?
Are we trying to figure out the overall demands that the many gods of the multiverse are making on you? (Since, by hypothesis of modal realism, every possible combination of conditions and consequences is being asserted by some god somewhere.)
Are we trying to figure out how you should feel about this, and what you should do about it?
The epistemic barriers to “acausal extortion” are severe. You don’t even know that other possible worlds actually exist, let alone what’s happening in them.
At our current level of knowledge, any actual instance of someone giving in to imagined acausal extortion is merely a testament to the power of human imagination.
“A frightened person at a desk, surrounded by giant imaginary demons”—Craiyon
In string theory in the mid-1980s, there was a moment when they thought, not just that they had found the theory of everything, but that they might be able to prove it. Only one “realistic” string vacuum was known, so all they had to do was calculate the particle masses, and they’d be done… Forty years and a googolplex possible worlds later, attitudes are rather more cautious now.
I would be interested to know what the high point of optimism associated with FTX was. Bankman-Fried was discussed as a future trillionaire, he was hanging out with regulators and bailing out other crypto enterprises… Were there, say, people who combined crypto maximalism and Democrat progressivism to dream that FTX might catalyze the transformation of the USA into the heart of an EA-centered global civilization, which in turn could become the nucleus of a hedon-maximizing galactic civilization?
The a priori unlikelihood of finding oneself at the crux of history (or in a similarly rare situation) is a greatly underrated topic here, I suppose because it works corrosively against making any kind of special effort. If they had embraced a pseudo-anthropic expectation of personal mediocrity, the great achievers of history would presumably have gotten nowhere. And yet the world is also full of people who tried and failed, or who hold a mistaken idea of their own significance; something which is consistent with the rarity of great achievements. I’m not sure what the “rational” approach here might be.
I don’t follow crypto, and have not been anywhere near these large sums of money that are supposedly now available in the world of effective altruism and AI safety. But I keep hearing that SBF has become the biggest individual patron of EA, and now it sounds like his main business enterprise went broke and was sold to a rival. So my working hypothesis is that there won’t be any more donations from him for a long time.
The EA Forum seems to think it’s bad news.
I have a theory that this is about corporate tax rates—that Musk was motivated to assemble his coalition of investors, and that the American ones pitched in, because they were worried that the Democrats would eventually raise corporate tax rates significantly. Certainly, moving Twitter out of the sphere of Democrat-aligned institutions is also a blow against progressive curation of public communication channels, thereby impacting many other issues and trends; but one would expect economic issues to be decisive, among investors.
I have another theory, that AI, and AI analysis of Twitter users, would be a big part of Musk’s plans for Twitter—through this acquisition, he has also acquired a database of user profiles to rival anything that Google or Facebook possesses. Perhaps he will make Twitter into a rival of Facebook? A comparison with China’s Sina Weibo might be in order here.
I was wondering how you would interpret a future in which we have machines that can do everything we can, and yet we are still alive.
Would you care to tell us just a fraction more about this future? Are we at their mercy? Are they at our mercy? Do we coexist? Do we live apart?
The LW argument that AI will kill us all is that it will shoot so far past human intelligence that it will completely dominate the world, and that when it does so, it will be governed by goals having no relationship to human well-being.
Is there a particular reason why this doesn’t come to pass in your scenario?