Karma: 462

# Recently I bought a new laptop

10 Apr 2021 20:29 UTC
29 points
• One big advantage of getting a hemispherectomy for life extension is that, if you don’t tell the Metaculus community before you do it, you can predict much higher than the community median of 16% - I would have 71 Metaculus points to gain from this, for example, much greater than the 21 in expectation I would get if the community median was otherwise accurate.

• This looks like the hyperreal numbers, with your equal to their .

• The real number 0.20 isn’t a probability, it’s just the same odds but written in a different way to make it possible to multiply (specifically you want some odds product * such that A:B * C:D = AC:BD). You are right about how you would convert the odds into a probability at the end.
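As a sketch of that odds product and the final conversion to a probability (the function names are mine, not from the post):

```python
from fractions import Fraction

def odds_product(a, b, c, d):
    """Combine two odds A:B and C:D into AC:BD."""
    return (a * c, b * d)

def odds_to_probability(a, b):
    """Convert odds A:B into the probability A/(A+B)."""
    return Fraction(a, a + b)

# e.g. odds 1:4 times odds 2:3 gives 2:12, i.e. 1:6
num, den = odds_product(1, 4, 2, 3)
print(num, den)                        # 2 12
print(odds_to_probability(num, den))   # 1/7
```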

• Just before she is able to open the envelope, a freak magical-electrical accident sends a shower of sparks down, setting it alight. Or some other thing necessitated by Time to ensure that the loop is consistent. Similar kinds of problems to what would happen if Harry was more committed to not copying “DO NOT MESS WITH TIME”.

• I have used this post quite a few times as a citation when I want to motivate the use of expected utility theory as an ideal for making decisions, because it explains how it’s not just an elegant decision-making procedure from nowhere but a mathematical inevitability of the requirements to not leave money on the table or to accept guaranteed losses. I find the concept of coherence theorems a better foundation than the normal way this is explained, by pointing at the von Neumann–Morgenstern axioms and saying “they look true”.

• The number of observers in a universe is solely a function of the physics of that universe, so the claim that a theory that implies 2Y observers is a third as likely as a theory that implies Y observers (even before the anthropic update) is just a claim that the two theories don’t have an equal prior probability of being true.

• This post uses the example of GPT-2 to highlight something that’s very important generally: if you’re not concentrating, you can’t distinguish GPT-2-generated text that is known to be gibberish from non-gibberish.

And hence it gives the important lesson, one that might be hard to learn on one’s own while not concentrating, that you can’t really get away with not concentrating.

• This is self-sampling assumption-like reasoning: you are reasoning as if experience is chosen from a random point in your life, and since most of an immortal’s life is spent being old, but most of a mortal’s life is spent being young, you should hence update away from being immortal.

You could apply self-indication assumption-like reasoning to this: as if your experience is chosen from a random point in any life. Then, since you are also conditioning on being young, and both immortals and mortals have one youthhood each, just being young doesn’t give you any evidence for or against being immortal that you don’t already have. (This is somewhat in line with your intuitions about civilisations: immortal people live longer, so they have more Measure/​prior probability, and this cancels out with the unlikelihood of being young given you’re immortal)

• “Yes” requiring the possibility of “no” has been something I’ve intuitively been aware of in social situations (anywhere where one could claim “you would have said that anyway”).

This post does a good job of giving more examples and consequences of this (the examples cover a wide range of decisions), and of tying it to the mathematical law of conservation of evidence.

• In The Age of Em, I was somewhat confused by the talk of reversible computing, since I assumed that the Landauer limit was some distant sci-fi thing, probably derived by doing all your computation on the event horizon of a black hole. That we’re only three orders of magnitude away from it was surprising and definitely gives me something to consider more carefully. The future is reversible!

I did a back-of-the-envelope calculation about what a Landauer limit computer would look like to rejiggle my intuitions with respect to this, because “amazing sci-fi future” to “15 years at current rates of progress” is quite an update.

Then, the lower limit is k_B T ln 2 ≈ 2.9×10⁻²¹ J with T = 300 K, or [...] A current estimate for the number of transistor switches per FLOP is ~10⁶.

The peak of human computational ingenuity is of course the games console. When doing something very intensive, the PS5 consumes 200 watts and does 10 teraFLOPs (10¹³ FLOPs). At the Landauer limit, that power would do ~7×10²² bit erasures per second. The difference is ~10 orders of magnitude: 6 orders of magnitude from the FLOPs-to-bit-erasures conversion, 1 order of magnitude from inefficiency, and 3 orders of magnitude from physical limits, perhaps.
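The back-of-the-envelope numbers can be reproduced in a few lines (the constants are standard; the 200 W and 10 teraFLOPs figures are the PS5 figures quoted above):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer limit: minimum energy to erase one bit, ~2.9e-21 J
landauer_j = K_B * T * math.log(2)

watts = 200.0        # PS5 power draw under load
flops = 1e13         # 10 teraFLOPs

erasures_per_s = watts / landauer_j   # ~7e22 bit erasures/second
gap = erasures_per_s / flops          # ~7e9, i.e. ~10 orders of magnitude

print(f"{landauer_j:.2e} J/bit, {erasures_per_s:.1e} erasures/s, "
      f"gap ~10^{math.log10(gap):.0f}")
```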

• :0, information on the original AI box games!

In that round, the ASI convinced me that I would not have created it if I wanted to keep it in a virtual jail.

What’s interesting about this is that, despite the framing of Player B being the creator of the AGI, they are not. They’re still only playing the AI box game, in which Player B loses by saying that they lose, and otherwise they win.

For a time I suspected that the only way that Player A could win a serious game is by going meta, but apparently this was done just by keeping Player B swept up in their role enough to act how they would think the creator of the AGI would act. (Well, saying “take on the role of [someone who would lose]” is meta, in a sense.)

• Smarkets is currently selling shares in Trump conceding if he loses at 57.14%. The Good Judgement Project’s superforecasters predict that any major presidential candidate will concede with probability 88%. I assign <30% probability to Biden conceding* (scenarios where Biden concedes are probably overwhelmingly ones where court cases/​recounts mean states were called wrong, which Betfair assigns ~10% probability to, and FTX kind of** assigns 15% probability to, and even these seem high), so I think it’s a good bet to take.

* I think that the Trump concedes if he loses market is now unconditional, because by Smarkets’ standards (projected electoral votes from major news networks) Biden has won.

** Kind of, because some TRUMP shares expired at 1 TRUMPFEB share - $0.10, rather than $0 as expected, and some TRUMP shares haven’t expired yet, because TRUMP holders asked. So it’s possible that the value of a TRUMPFEB share might also include the value of a hypothetical TRUMPMAR share, or that TRUMPFEB trades will be nullified at some point, or some other retrospective rule change on FTX’s part.

UPDATE 2020-11-16: Trump… kind of conceded? Emphasis mine:

He won because the Election was Rigged. NO VOTE WATCHERS OR OBSERVERS allowed, vote tabulated by a Radical Left privately owned company, Dominion, with a bad reputation & bum equipment that couldn’t even qualify for Texas (which I won by a lot!), the Fake & Silent Media, & more!

While he has retracted this, it met Smarkets’ standards, so I’m £22.34 richer.

• I bet £10 on Biden winning on Smarkets upon reading the GJP prediction, because I trust superforecasters more than prediction markets. I bet another £10 after reading Demski’s post on Kelly betting—my bankroll is much larger than £33 (!! Kelly bets are enormous!) but as far as my System 1 is concerned I’m still a broke student who would have to sheepishly ask their parents to cover any losses.

Very pleased about the tenner I won, might spend it on a celebratory beer.
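For intuition on why full Kelly bets feel enormous, here’s a minimal sketch of the Kelly fraction for a binary bet. The 90% win probability and 0.66 market price are hypothetical stand-ins, not the actual market numbers:

```python
def kelly_fraction(p_win, net_odds):
    """Kelly-optimal fraction of bankroll: f* = p - (1 - p) / b,
    where b is the net payout per unit staked."""
    return p_win - (1.0 - p_win) / net_odds

# Hypothetical numbers: you believe the bet wins with probability 0.90,
# and the market price is 0.66 per share paying out 1.
price = 0.66
b = (1 - price) / price          # net odds: ~0.515 profit per unit staked
f = kelly_fraction(0.90, b)
print(f"bet {f:.0%} of bankroll")   # ~71% of the entire bankroll
```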

• The problem I have and wish to solve is, of course, the accurséd Akrasia that stops me from working on AI safety.

Let’s begin with the easy ones:

1 Stop doing this babble challenge early and go try to solve AI safety.

2 Stop doing this babble challenge early; at 11 pm, specifically, and immediately sleep, in order to be better able to solve AI safety tomorrow.

In fact generally sleep seems to be a problem, I spend 10 hours doing it every day (could be spent solving AI safety) and if I fall short I am tired. No good! So working on this instrumental goal.

3 Get blackout curtains to improve sleep quality

4 Get sleep mask to improve sleep quality

5 Get better mattress to improve sleep quality

6 Find a beverage with more caffeine to reduce the need for sleep

7 Order modafinil online to reduce the need for sleep

And heck while we’re on the topic of stimulants

8 Order adderall online or from a friend to increase ability to focus

9 Look up good nootropics stacks to improve cognitive ability and hence ability to do AI safety

Now another constraint when doing AI safety is that I don’t have a good shovel-ready list of things to try, and it’s easy for me to get distracted if I can’t just pick something from the task list

10 Check if complice solves this problem

11 Check if some ordinary getting-things-done (that I can stick into roam) solves this problem

12 Make a giant checklist and go down this list

13 Make a personal kanban board of things that would be nice for solving AI safety

And instrumentally useful for creating these task lists?

14 Ask friends who know about AI safety for things to do

15 Apophatically ask for suggestions for things to do via an entry on a list of 50 items for a lesswrong babble challenge

Anyway, I digress. I’m here to solve akrasia, not make a checklist. Unless I need more items on this list, in which case I will go back to checklist construction. Is this pruning? Never mind. Back to the point:

16 Set up some desktop shortcut macro thing in order to automatically start pomodoros when I open my laptop

17 Track time spent doing things useful to AI safety on a spreadsheet

18 Hey, I said “laptop”! Get a better mouse to make using the laptop more fun so I’m more likely to do hard things when using it

19 Get a better desk for more space for notes and to require less expensive shifting into/​out of AI safety mode

20 On notes, use the index cards I have to make a proper zettelkasten as a cognitive aid

(Does this solve akrasia? Well, if I have better cognitive aids, then doing cognitively expensive things is easier, so I’m less likely to fail even with my current levels of willpower)

21 Start doing accountability things like promising to review a paper every X time period

22 I said levels of willpower—Google for interventions that increase conscientiousness (there’s gotta be some dodgy big-5 based things) and do those?

Back to the top of the tree

23 Quit my job because it’s using up energy that I could be using to do AI safety

24 Instead of doing my job, pretend to do my job while actually doing AI safety

25 Set up an AI safety screen on work laptop so it’s easy to switch over to doing AI safety during breaks or lunches

Hey, I said lunch

26 Use nutritionally complete meal replacements to save time/​willpower that would be spent on food preparation

27 Use nutritionally complete meal replacements to ensure that nutrient intake keeps me in top physical form

28 Exercise (this improves everything, apparently) by running on a treadmill

29 By lifting weights

30 By jogging in a large circle

31 Become a monk and live an austere lifestyle without the distractions of rich food, wine, and lust

32 Become an anti-monk and live a rich lifestyle to ensure that no willpower is wasted on distractions

33 Specifically in vice use nicotine as a performance enhancing stimulant by smoking. Back to stimulants again I guess

34 … or by using nicotine patches or gum or something

35 By using nicotine only if I do AI safety things, in order to develop an addiction to AI safety

Hey, develop an addiction to doing AI safety! People go to serious lengths for addictions, so why not gate it on math?

36 Do so with something very addictive, like opioids

37 Use electric shocks to do classical conditioning

etc. There was a short sci-fi story about this kind of thing; let me see if I can find it. Hey, actually, since I said sci-fi, and this is a babble challenge:

38 Promise very hard to time travel back to this exact point in time, meet future self, receive advice

(They’re not here :( Oh well) Back on that akrasia-solving:

39 Make up a far-future person who I am specifically working to save (they’re called Dub See Wun). Get invested in their internal life (they want to make their own star!). Feel an emotional connection to them. I’m doing it for them!

40 Specifically put up a “do it for them” poster modelled off the one in the Simpsons

41 DuckDuckGo “how to beat akrasia” and do the top suggestion

42 Adopt strategic probably false beliefs (the world will end in 1 year!! :0) in order to encourage a more aggressive search for strategies

“Aggressive search for strategies” is the virtue that the Sequences call “actually trying”, so in the Sequences-sphere

43 Go to a CFAR workshop, which I heard might be kind of useful towards this sort of thing

44 Or just read the CFAR booklet and apply the wisdom found in there

45 Or some sequence on Lesswrong with exercises that applies some CFARy wisdom

Of course all this willpower boosting and efficiency and stuff wouldn’t help if I was just doing the wrong thing faster (like that one Shen comic, you know the one). So:

46 Consider how much of what I think is working on AI safety is actually just self-actualisy math/​CS stuff, throw that out, and actually try to solve the problem

47 Deliberately create and encourage a subagent in my mind that wants to do AI safety (call em Dub See Wun)

48 Adopt strategic infohazards in order to encourage a more focused and aggressive search for strategies

49 Post a lot about AI safety in public forums like Lesswrong so that I feel compelled to do AI safety in my private life in order to maintain the illusion that I’m some kind of AI-safety-doing-person

50 Stop doing this babble challenge at the correct time, and continue to do AI safety or sleep as in 1) or 2). Hey, this one seems good. Think I might try it now!

• This means you can build an action that says something like “if I am observable, then I am not observable. If I am not observable, I am observable” because the swapping doesn’t work properly.

Constructing this more explicitly: Suppose that and . Then must be empty. This is because for any action in the set , if was in then it would have to equal which is not in , and if was not in it would have to equal which is in .

Since is empty, is not observable.

• How does your CooperateBot work (if you want to share)? Mine is OscillatingTwoThreeBot, which IIRC cooperates in the dumbest possible way by outputting the fixed string “2323232323...”.
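The fixed-string behaviour is simple enough to sketch. The class name matches the bot described above, but the method interface is an assumption, not the tournament’s actual API:

```python
class OscillatingTwoThreeBot:
    """Ignores its opponent entirely and emits the fixed
    string 2, 3, 2, 3, ... (only the output pattern comes
    from the comment; the interface is hypothetical)."""

    def __init__(self):
        self.turn = 0

    def move(self, opponent_last_move=None):
        self.turn += 1
        return 2 if self.turn % 2 == 1 else 3
```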

• I have two questions on Metaculus that compare how good elements of a pair of cryonics techniques are: preservation by Alcor vs preservation by CI, and preservation using fixatives vs preservation without fixatives. They are forecasts of the value (% of people preserved with technique A who are revived by 2200)/​(% of people preserved with technique B who are revived by 2200), which barring weird things happening with identity is the likelihood ratio of someone waking up if you learn that they’ve been preserved with one technique vs the other.

Interpreting these predictions in a way that’s directly useful requires some extra work—you need some model for turning the ratio P(revival|technique A)/​P(revival|technique B) into plain P(revival|technique X), which is the thing you care about when deciding how much to pay for a cryopreservation.

One toy model is to assume that one technique works (P(revival) = x), but the other technique may be flawed (P(revival) < x). If r < 1, it’s the technique in the numerator that’s flawed, and if r > 1, it’s the technique in the denominator that’s flawed. This is what I guess is behind the trimodality in the Metaculus community median: there are peaks at the high end, the low end, and at exactly 1, perhaps corresponding to one working, the other working, and both working.

For the current community medians (as of 2021-04-18), using that model, using the Ergo library, normalizing the working technique to 100%, I find:

Alcor vs CI:

• EV(Preserved with Alcor) = 69%

• EV(Preserved with Cryonics Institute) = 76%

Fixatives vs non-Fixatives

• EV(Preserved using Fixatives) = 83%

• EV(Preserved without using Fixatives) = 34%
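The toy model can be sketched directly. This is a stand-in for the actual Ergo-based calculation: the trimodal samples below are hypothetical and only illustrate the shape of the community median, not its real values:

```python
def toy_ev(ratio_samples):
    """Toy model: normalize the better technique to P(revival) = 1;
    the possibly-flawed technique gets min(r, 1) (numerator) or
    min(1/r, 1) (denominator). Returns the EV for each technique
    averaged over samples of the ratio r."""
    n = len(ratio_samples)
    ev_num = sum(min(r, 1.0) for r in ratio_samples) / n
    ev_den = sum(min(1.0 / r, 1.0) for r in ratio_samples) / n
    return ev_num, ev_den

# Hypothetical trimodal samples: peaks at r = 0.1, r = 1, and r = 10.
samples = [0.1] * 30 + [1.0] * 40 + [10.0] * 30
print(tuple(round(x, 2) for x in toy_ev(samples)))  # (0.73, 0.73)
```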