If you’re stating that generic intelligence was not likely simulated, but generic intelligence in our situation was likely simulated...
Doesn’t that fall afoul of the mediocrity principle applied to generic intelligence overall?
(As an aside, this does somewhat conflate ‘intelligence’ and ‘computation’; I am assuming that intelligence requires at least some non-zero amount of computation. It’s good to make this assumption explicit I suppose.)
I should probably reread the paper.
That being said:
No, it doesn’t, any more than “Gödel’s theorem” or “Turing’s proof” proves simulations are impossible, or “problems are NP-hard and so AGI is impossible”.
I don’t follow your logic here, which probably means I’m missing something. I agree that your latter cases are invalid logic. I don’t see why that’s relevant.
simulators can simply approximate
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
simulate smaller sections
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
tamper with observers inside the simulation
This does not evade this argument. Tampering with observers does not by itself change the amount of computation performed; if nested simulations successively tamper with observers, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
slow down the simulation
This does not evade this argument. If nested simulations successively slow down, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
cache results like HashLife
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
(How do we simulate anything already...?)
By accepting a multiplicative slowdown per level of simulation in the infinite limit, and not infinitely nesting.
See note 2 in the parent: “Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.”
You absolutely can, in certain cases, get no slowdown or even a speedup by doing a finite number of levels of simulation. However, this does not work in the limit.
Said argument applies if we cannot recursively self-simulate, regardless of reason (Margolus–Levitin theorem, parent turning the simulation off or resetting it before we could, etc).
In order for ‘almost all’ computation to be simulated, most simulations have to be recursively self-simulating. So either we can recursively self-simulate (which would be interesting), we’re rare (which would also be interesting), or we have a non-zero chance we’re in the ‘real’ universe.
I am also skeptical of the simulation argument, but for different reasons.
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem, as it requires that you can do an arbitrary amount of computation via recursively simulating.
This either means that the Margolus–Levitin theorem is false in our universe (which would be interesting), we’re a ‘leaf’ simulation where the Margolus–Levitin theorem holds, but there’s many universes where it does not (which would also be interesting), or we have a non-zero chance of not being in a simulation.
This is essentially a justification for ‘almost exactly all such civilizations don’t go on to build many simulations’.
A fundamental limit on computation: ≤6∗10^33 operations/second/Joule
Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.
Call the scaling factor (the ratio of computation simulated to computation spent) C. So e.g. C=0.5 means that to simulate 1 unit of computation you need 2 units of computation. If C≥1, then you can violate the Margolus–Levitin theorem simply by recursively sub-simulating far enough. If C<1, then a universe that can do X computation can host no more than CX + C²X + … = CX/(1−C) total simulated computation regardless of how deep the tree is, in which case there’s at least a 1−C chance that we’re in the ‘real’ universe.
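One way to sketch the bookkeeping for the C<1 case: sum CX + C²X + … over the nesting tree and compute the fraction of all computation that is ‘real’. (Plain Python; the 60-level cutoff is just a stand-in for the infinite tree.)

```python
# Total computation hosted by a deep nesting tree when each level can
# spend its whole budget on simulating, at efficiency C < 1.
def total_simulated(x, c, levels=60):
    """Sum C*X + C^2*X + ... down `levels` levels (converges for c < 1)."""
    return sum(x * c ** k for k in range(1, levels + 1))

x, c = 1.0, 0.5
sim = total_simulated(x, c)       # approaches c/(1-c) * x = 1.0
real_fraction = x / (x + sim)     # approaches 1 - c = 0.5
print(sim, real_fraction)
```

With C=0.5, the entire infinite tree hosts only as much simulated computation as the root does real computation, so the ‘real’ fraction converges to 1−C.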
Playing less wouldn’t decrease my score
Interesting. Is this typically the case with chess? Humans tend to do better with tasks when they are repeated more frequently, albeit with strongly diminishing returns.
being distracted is one of the effects of stress.
Absolutely, which makes it very difficult to tease apart ‘being distracted as a result of stress caused by X causing a drop’ and ‘being distracted due to X causing a drop’.
solar+batteries are dropping exponentially in price
Pulling the data from the chart in your source:
...and fitting an exponential trend with offset, I get:
(Pardon the very rough chart.)
This appears to be a fairly good fit, and results in the following trend/formula:
This is an exponentially-decreasing trend… but towards a decidedly positive horizontal asymptote.
This essentially indicates that we will get minimal future scaling, if any. $37.71/MWh is already within the given range.
For reference, here’s what the best fit looks like if you try to force a zero asymptote:
This is fairly obviously a significantly worse fit.
Why do you believe that solar has an asymptote towards zero cost?
fossils usually don’t need storage
Absolutely, which is one of the reasons why in the absence of wanting clean energy people tend to lean towards fossil fuels.
Nonlinear least squares.
I’m treating the high and low as two different data points for each year, which isn’t quite right, but meh.
Admittedly, just from eyeballing it.
Yes, this could be simplified. That being said, I get numerical stability issues if I don’t include the year offset; it’s easier to just include said offset.
Admittedly, this is a 2-parameter fit not a 3-parameter fit; I don’t know offhand of a good alternative third parameter to add to the fit to make it more of an apples-to-apples comparison.
As an aside, people fitting exponential trends without including an offset term and then naively extrapolating, when exponential trends with offset terms fit significantly better and don’t result in absurd conclusions, is a bit of a pet peeve of mine.
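For anyone wanting to reproduce this kind of fit: the chart data isn’t included in this comment, so the sketch below uses synthetic numbers generated from a known offset-exponential (asymptote 30), purely to illustrate the method. It grid-searches the asymptote and fits the remainder in log space, a cruder stand-in for the nonlinear least squares mentioned above:

```python
import math

# Synthetic stand-in data (NOT the actual LCOE numbers from the chart):
# cost = 30 + 300 * exp(-0.3 * (year - 2009)), i.e. a known asymptote of 30.
years = list(range(2009, 2021))
costs = [30 + 300 * math.exp(-0.3 * (y - 2009)) for y in years]

def fit_exp_with_offset(xs, ys, steps=2000):
    """Fit y ~= a + exp(b + m*x): grid-search the asymptote a, then fit
    log(y - a) linearly by least squares; keep the a with the lowest SSE."""
    best = None
    hi = min(ys) - 1e-9  # the asymptote must sit below every data point
    for i in range(steps):
        a = hi * i / (steps - 1)
        logs = [math.log(y - a) for y in ys]
        n = len(xs)
        mx, ml = sum(xs) / n, sum(logs) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (l - ml) for x, l in zip(xs, logs))
        m = sxy / sxx
        b = ml - m * mx
        sse = sum((a + math.exp(b + m * x) - y) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, a, m, b)
    return best

sse, asymptote, slope, intercept = fit_exp_with_offset(
    [y - 2009 for y in years], costs)
print(round(asymptote, 2))  # should recover something close to 30
```

Note how the recovered asymptote is decidedly nonzero here; forcing a zero asymptote on data like this is exactly the naive-extrapolation failure mode described above.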
Too bad. My suspects for confounders for that sort of thing would be ‘you played less at the start/end of term’ or ‘you were more distracted at the start/end of term’.
First, nuclear power is expensive compared to the cheapest forms of renewable energy and is even outcompeted by other “conventional” generation sources [...] The consequence of the current price tag of nuclear power is that in competitive electricity markets it often just can’t compete with cheaper forms of generation.
[snip chart] Source: Lazard
This estimate does not seem to include capacity factors or cost of required energy storage, assuming I read it correctly. Do you have an estimate that does?
Thirdly, nuclear power gives you energy independence. This became very clear during Russia’s invasion of Ukraine. France, for example, had much fewer problems cutting ties with Russia than e.g. Germany. While countries might still have to import Uranium, the global supplies are distributed more evenly than with fossil fuels, thereby decreasing geopolitical relevance. Uranium can be found nearly everywhere.
Also, you can extract uranium from seawater. This has its own problems, and is still more expensive than mines currently. However, this puts a cap on the cost of uranium for any (non-landlocked) country, which is a very good thing for contingency purposes.
(Also, there are silly amounts of uranium in seawater. 35 million tons of land-based reserves, and 4.6 billion in seawater. At very low concentrations, but still.)
How much did you play during the start / end of term compared to normal?
Here’s an example game tree:
(Kindly ignore the zeros below each game node; I’m using the dev version of GTE, which has a few quirks.)
Bob either has something up his sleeve (an exploit of some sort), or does not.
Offers a flat agreement (well, really surrender) to Alice.
Offers a (binding once both sides agree to it) arbitrated (mediated in the above; I am not going to bother redoing the screenshot above) agreement by a third party (Simon) to Alice.
Goes directly to war with Alice.
Assuming Bob didn’t go directly to war, Alice either:
Accepts (and abides by) said agreement.
Goes to war with Bob.
Does an internal audit/preparation and goes to war with Bob.
Wars cost the winner 2 and the loser 10, and also transfer 20 from the loser to the winner. (So war is normally +18 / −30 for the winner/loser.)
Alice doing an audit/preparing costs 3, on top of the usual, regardless of if there’s actually an exploit.
Alice wins all the time unless Bob has something up his sleeve and Alice doesn’t prepare. (+18/-30, or +15/-30 if Alice prepared.) Even in that case, Alice wins 50% of the time. (-6 / −6). Bob has something up his sleeve 50% of the time.
Flat offer here means ‘do the transfer as though there was a war, but don’t destroy anything’. A flat offer is then always +20 for Alice and −20 for Bob.
Arbitrated means ‘do a transfer based on the third party’s evaluation of the probability of Bob winning, but don’t actually destroy anything’. So if Bob has something up his sleeve, Simon comes back with a coin flip and the result is 0, otherwise it’s +20/-20.
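For concreteness, the payoff arithmetic above can be sanity-checked in a couple of lines (variable names are mine):

```python
# Sanity-check of the war payoffs stated above.
transfer, winner_cost, loser_cost = 20, 2, 10

war_win = transfer - winner_cost         # +18 for the winner
war_lose = -(transfer + loser_cost)      # -30 for the loser
coin_flip_ev = (war_win + war_lose) / 2  # the 50/50 "exploit" war: -6 each

flat_offer = (transfer, -transfer)       # +20 / -20, nothing destroyed
print(war_win, war_lose, coin_flip_ev, flat_offer)
```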
There are some 17 different Nash equilibria here, with an expected payoff from +6 to +8 for Alice and −18 to −13 for Bob. As this is a lot, I’m not going to list them all. I’ll summarize:
There are 4 different equilibria with Bob always going to war immediately. Payoff in all of these cases is +6 / −18.
There are 10 different equilibria with Bob always trying an arbitrated agreement if Bob has nothing up his sleeve, and going to war 2/3rds of the time if Bob has something up his sleeve (otherwise trying a mediated agreement in the other 1/3rd), with various mixed strategies in response. All of these cases are +8 / −14.
There are 3 different equilibria with Bob always going to war if Bob has something up his sleeve, and always trying a flat agreement otherwise, with Alice always accepting a flat agreement and doing various strategies otherwise. All of these cases are +7 / −13.
Notably, Alice and Bob always offering/accepting an arbitrated agreement is not an equilibrium of this game. (Also notably: all of these equilibria have the two sides going to war at least occasionally.)
There are likely other cases with different payoffs that have an equilibrium of arbitration/accepting arbitration; this example suffices to show that not all such games lead to said result as an equilibrium.
I use ‘audit’ in most of this; I used ‘prep’ for the game tree because otherwise two options started with A.
read: go ‘uhoh’ and spend a bunch of effort finding/fixing Bob’s presumed exploit.
This is because, roughly, a Nash equilibrium requires that both sides choose a strategy that is best for them given the other party’s response, but if Bob chooses MediatedS / MediatedN, then Alice is better off with PrepM over AcceptM. Average payout of 15 instead of 10. Hence, this is not an equilibrium.
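A quick check of that payout comparison, using the numbers from the game description (50% exploit chance; arbitration pays Alice 0 with the exploit and +20 without; prepared Alice always wins the war for +18, minus the audit cost of 3):

```python
# Why AcceptM is not a best response when Bob always offers arbitration.
p_exploit = 0.5

# AcceptM: the arbitrated transfer is 0 when Bob has the exploit, +20 otherwise.
accept_m = p_exploit * 0 + (1 - p_exploit) * 20

# PrepM: Alice audits (cost 3) and goes to war; having prepared, she
# always wins: +18 - 3 = +15, regardless of whether the exploit exists.
prep_m = (20 - 2) - 3

print(accept_m, prep_m)  # Alice averages 15 with PrepM vs 10 with AcceptM
```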
As TLW’s comment notes, the disclosure process itself might be really computationally expensive.
I was actually thinking of the cost of physical demonstrations, and/or the cost of convincing others that simulations are accurate, not so much direct simulation costs.
That being said, this is still a valid point, just not one that I should be credited for.
Imagine trying to convince someone of atomic weapons purely with simulations, without anyone ever having detonated one, for instance. It may be doable; it’d be nowhere near cheap.
Now imagine trying to do so without allowing the other side to figure out how to make atomic bombs in the process...
To be clear: as in alt-history-style ‘Trinity / etc never happened’. Not just as in someone today convincing another that their particular atomic weapon works.
Ignore the suit of the cards. So you can draw a 1 (Ace) through 13 (King). Pulling two cards is a range of 2 to 26. Divide by 2 and add 1 means you get the same roll distribution as rolling two dice.
That’s not the same roll distribution as rolling two dice. For instance, rolling a 14 (pulling 2 kings) has a probability of (4∗3)/(52∗51)≈0.0045249, not 1/(7∗7)≈0.020408.
(The actual distribution is weird. It’s not even symmetrical, due to the division (and associated floor). Rounding to even/odd would help this, but would cause other issues.)
This also supposes you shuffle every draw. If you don’t, things get worse (e.g. you can’t ‘roll’ a 14 at all if at least 3 kings have already been drawn).
Fundamentally: you’re pulling out 2 cards from the deck. There are 52 possible choices for the first card, and 51 for the second card. This means that you have 52*51 possibilities. Without rejection sampling this means that you’re necessarily limited to probabilities that are a multiple of 1/(52∗51). Meanwhile, rolling N S-sided dice and getting a total of exactly N occurs with a probability of 1/S^N. As N and S are both integers, and 52=2*2*13, and 51=3*17, the only combinations of dice you can handle without rejection sampling are:
1d2, 1d3, 1d4, 1d13, 1d17, 1d26, …, 1d(52*51)
...and even then many of these don’t actually involve both cards. For instance, to get 2d2 with 2 pulled cards ignore the second card and just look at the suit of the first card.
Alice and Bob won’t always cheat because they will get good rolls sometimes that will look like cheats but won’t be.
Wait, do you mean:
Decide to cheat or not cheat, then if not cheating do a random roll, or
Do a random roll, and then decide to cheat or not?
I was assuming 1, but your argument is more suited for 2...
Aside from rolling a strange combination of a strangely-labelled d52 and a strangely-labelled d51, or somesuch.
import collections
import fractions
import itertools

# A deck: ranks 1 (Ace) through 13 (King), four of each.
cards = list(range(1, 14))*4
# Distribution of the sum of two d7 rolls (totals 2 through 14).
dice_results = collections.Counter(a+b for a in range(1, 8) for b in range(1, 8))
dice_denom = sum(dice_results.values())
# Distribution of (a+b)//2+1 over all ordered two-card draws without replacement.
card_results = collections.Counter((a+b)//2+1 for a, b in itertools.permutations(cards, r=2))
card_denom = sum(card_results.values())
for val in range(2, 15):
    print(val, fractions.Fraction(card_results[val], card_denom), fractions.Fraction(dice_results[val], dice_denom), sep='\t')
2 11/663 1/49
3 9/221 2/49
4 43/663 3/49
5 59/663 4/49
6 25/221 5/49
7 7/51 6/49
8 33/221 1/7
9 83/663 6/49
10 67/663 5/49
11 1/13 4/49
12 35/663 3/49
13 19/663 2/49
14 1/221 1/49
This is somewhat trivial, but I figured it was worth mentioning.
The other player gets to determine your next dice roll (again, either manually or randomly).
Could you elaborate here?
Alice cheats and says she got a 6. Bob calls her on it. Is it now Bob’s turn, and hence effectively a result of 0? Or is it still Alice’s turn? If the latter, what happens if Alice cheats again?
I’m not sure how you avoid the stalemate of both players ‘always’ cheating and both players ‘always’ calling out the other player.
Instead of dice, a shuffled deck of playing cards would work better. To determine your dice roll, just pull two cards from a shuffled deck of cards without revealing them to anyone but yourself, then for posterity you put those two cards face down on top of that deck.
How do you go from a d52 and a d51 to a single potentially-loaded d2? I don’t see what to do with said cards.
How does said binding treaty come about? I don’t see any reason for Alice to accept such a treaty in the first place.
Alice would instead propose (or counter-propose) a treaty that always takes the terms that would result from the simulation according to Alice’s estimate.
Alice is always at least indifferent to this, and the only case where Bob is not at least indifferent to this is if Bob is stronger than Alice’s estimate, in which case accepting said treaty would not be in Alice’s best interest. (Alice should instead stall and hunt for exploits, give or take.)
Let’s look at a relatively simple game along these lines:
Person A either cheats an outcome or rolls a d4. Then person B either accuses, or doesn’t. If person B accuses, the game ends immediately, with person B winning (losing) if their accusation was correct (incorrect). Otherwise, repeat a second time. At the end, assuming person B accused neither time, person A wins if the total sum is at least 6. (Note that person A wins at least 3/8ths of the time simply by never cheating.)
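That 3/8ths lower bound is easy to verify by enumeration:

```python
from itertools import product

# Person A never cheats: two honest d4 rolls, and A wins iff the sum is >= 6.
wins = sum(1 for a, b in product(range(1, 5), repeat=2) if a + b >= 6)
print(wins, 16)  # 6 of the 16 equally likely outcomes, i.e. 3/8
```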
Let’s look at the second round first.
First subcase: the first roll (legitimate or uncaught) was 1. Trivial win for person B.
Second subcase: the first roll was 2. Subgame tree is as follows:
Person A always rolls.
Person B allows 1-3, and either always calls 4, or calls 4 1⁄4 of the time.
Either way, expected value is −1/2 for Person A, which makes sense given person A plays randomly (1/4 die rolls win for person A).
Third subcase: the first roll was 3. Simplified subgame tree is as follows:
There are 5 (five) equilibria for this one:
Person A always plays randomly.
Person B always allows 1 and 2, and either always calls both 3 and 4, or always calls one of 3 or 4 and allows the other 50% of the time, or 50⁄50 allows 3/calls 4 or allows 4/calls 3, or 50⁄50 allows both / calls both.
Overall expected value is 0, which makes sense given person A plays randomly (2/4 die rolls win for person A).
Fourth subcase: the first roll was 4. I’m not going to enumerate the equilibria here, as there are 40 of them (!). Suffice to say, the result is, yet again, person A always playing randomly, with person B allowing 1 and calling 2-4 always or probabilistically in various combinations, with an expected value of +1/2.
And then the first round:
Overall equilibria are:
Person A plays randomly 3⁄4 of the time, cheats 3 3/16th of the time, and cheats 4 1/16th of the time.
Person B always allows 1 and 2, and does one of two mixes of calling/allowing 3 and 4. (0 | 5⁄32 | 7⁄16 | 13⁄32, or 5⁄32 | 0 | 9⁄32 | 9⁄16 of call/call | call/allow | allow/call | allow/allow).
Either way, expected value for person A is −5/32.
This (over)simplified game agrees with your intuition. There are mixed strategies on both sides, and cases where you ‘may as well’ always call, and cases where you want to cheat to a value below the max value.
(Most of this was done with http://app.test.logos.bg/ - it’s quite a handy tool for small games, although note that it doesn’t compute equilibria for single giant games. You need to break them down into smaller pieces, or fiddle with the browser debugger to remove the hard-coded 22 node limit.)
What’s the drawback to always accusing here?
Though this does suggest a (unrealistically) high-coordination solution to at least this version of the problem: have both sides declare all their capabilities to a trusted third party who then figures out the likely costs and chances of winning for each side.
Is that enough?
Say Alice thinks her army is overwhelmingly stronger than Bob’s. (In fact Bob has a one-time exploit that allows Bob to have a decent chance to win.) The third party says that Bob has a 50% chance of winning. Alice can then update P(exploit), and go ‘uhoh’ and go back and scrub for exploits.
(So… the third-party scheme might still work, but only once I think.)
Conversely, if FDR wants a chicken in every pot, and then finds out that chickens don’t exist, he would change his values to want a beef roast in every pot, or some such.
I do not believe his value function is “a chicken in every pot”. It’s likely closer to ‘I don’t want anyone to be unable to feed themselves’, although even this is likely an over-approximation of the true utility function. ‘A chicken in every pot’ is one way of doing well on said utility function. If he found out that chickens didn’t exist, the ‘next best thing’ might be a roast beef in every pot, or somesuch. This is not changing the value function itself, merely the optimum solution.
If FDR’s true value function was literally “a chicken in every pot”, with no tiebreaker, then he has no incentive to change his values, and a weak incentive to not change his values (after all, it’s possible that everyone was mistaken, or that he could invent chicken).
If FDR’s true value function was e.g. “a chicken in every pot, or barring that some other similar food”, then again he has no incentive to change his values. He may lean toward ‘ok, it’s very unlikely that chickens exist so it’s better in expected value to work towards roast beef in every pot’, but that again hasn’t changed the underlying utility function.
This isn’t likely to be the optimum, but at least is a ‘good’ point.
Demonstrating military strength is itself often a significant cost.
Say your opponent has a military of strength 1.1x, and is demonstrating it.
If you have the choice of keeping and demonstrating a military of strength x, or keeping a military of strength 1.2x and not demonstrating at all...