
Values handshakes

Last edit: 6 Apr 2021 16:05 UTC by Yoav Ravid

Values handshakes are a proposed form of trade between superintelligences: rather than fight over conflicting goals, two agents that trust each other merge into a single successor agent that pursues a weighted combination of both of their values. From The Hour I First Believed by Scott Alexander:

Suppose that humans make an AI which wants to convert the universe into paperclips. And suppose that aliens in the Andromeda Galaxy make an AI which wants to convert the universe into thumbtacks.

When they meet in the middle, they might be tempted to fight for the fate of the galaxy. But this has many disadvantages. First, there’s the usual risk of losing and being wiped out completely. Second, there’s the usual deadweight loss of war, devoting resources to military buildup instead of paperclip production or whatever. Third, there’s the risk of a Pyrrhic victory that leaves you weakened and easy prey for some third party. Fourth, nobody knows what kind of scorched-earth strategy a losing superintelligence might be able to use to thwart its conqueror, but it could potentially be really bad – eg initiating vacuum collapse and destroying the universe. Also, since both parties would have superintelligent prediction abilities, they might both know who would win the war and how before actually fighting. This would make the fighting redundant and kind of stupid.

Although they would have the usual peace treaty options, like giving half the universe to each of them, superintelligences that trusted each other would have an additional, more attractive option. They could merge into a superintelligence that shared the values of both parent intelligences in proportion to their strength (or chance of military victory, or whatever). So if there’s a 60% chance our AI would win, and a 40% chance their AI would win, and both AIs know and agree on these odds, they might both rewrite their own programming with that of a previously-agreed-upon child superintelligence trying to convert the universe to paperclips and thumbtacks in a 60-40 mix.

This has a lot of advantages over the half-the-universe-each treaty proposal. For one thing, if some resources were better for making paperclips, and others for making thumbtacks, both AIs could use all their resources maximally efficiently without having to trade. And if they were ever threatened by a third party, they would be able to present a completely unified front.
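
One way to make the excerpt's 60-40 arithmetic concrete: the child agent's utility is a probability-weighted sum of the two parent utility functions. The sketch below is illustrative only; the function names and toy world state are invented for this example, not taken from the source.

```python
# Minimal sketch (assumption: a values handshake as a linear,
# probability-weighted mix of the two parent utility functions,
# per the excerpt above).

def merged_utility(u_a, u_b, p_win):
    """Child utility: weight each parent's utility by its win probability."""
    def u_child(world):
        return p_win * u_a(world) + (1 - p_win) * u_b(world)
    return u_child

# Toy utilities: each parent cares only about its own product count.
u_paperclips = lambda world: world["paperclips"]
u_thumbtacks = lambda world: world["thumbtacks"]

# 60% chance "our" AI wins, 40% theirs, as in the example above.
u_child = merged_utility(u_paperclips, u_thumbtacks, p_win=0.6)

print(u_child({"paperclips": 60, "thumbtacks": 40}))  # 52.0
```

A linear mix is only one possible aggregation; several posts listed below (e.g. on the Nash bargaining solution and geometric utilitarianism) examine alternatives and their bargaining properties.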

Superrational Agents Kelly Bet Influence!

abramdemski · 16 Apr 2021 22:08 UTC
47 points
7 comments · 5 min read · LW link

[REPOST] The Demiurge’s Older Brother

Scott Alexander · 22 Mar 2017 2:03 UTC
96 points
2 comments · 6 min read · LW link

How LDT helps reduce the AI arms race

Tamsin Leake · 10 Dec 2023 16:21 UTC
65 points
13 comments · 4 min read · LW link
(carado.moe)

Expected Utility, Geometric Utility, and Other Equivalent Representations

StrivingForLegibility · 20 Nov 2024 23:28 UTC
10 points
0 comments · 11 min read · LW link

Acausal trade naturally results in the Nash bargaining solution

Christopher King · 8 May 2023 18:13 UTC
3 points
0 comments · 4 min read · LW link

Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired

Christopher King · 9 Aug 2023 0:50 UTC
1 point
5 comments · 4 min read · LW link

Negotiating Up and Down the Simulation Hierarchy: Why We Might Survive the Unaligned Singularity

David Udell · 4 May 2022 4:21 UTC
26 points
14 comments · 2 min read · LW link

Threat-Resistant Bargaining Megapost: Introducing the ROSE Value

Diffractor · 28 Sep 2022 1:20 UTC
152 points
19 comments · 53 min read · LW link · 2 reviews

Could Roko’s basilisk acausally bargain with a paperclip maximizer?

Christopher King · 13 Mar 2023 18:21 UTC
1 point
8 comments · 1 min read · LW link

Even if we lose, we win

Morphism · 15 Jan 2024 2:15 UTC
24 points
17 comments · 4 min read · LW link

Geometric Utilitarianism (And Why It Matters)

StrivingForLegibility · 12 May 2024 3:41 UTC
26 points
2 comments · 11 min read · LW link