There are about 8 billion people, so your 24,000 QALYs should be 24,000,000.
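A back-of-the-envelope check of the implied correction (my guess, not stated above: the original 24,000-QALY figure was computed against a population of roughly 8 million rather than 8 billion, which is what a clean ×1000 correction suggests):

```python
world_population = 8_000_000_000
assumed_original_population = 8_000_000   # hypothetical: what would make the correction exactly x1000
correction = world_population / assumed_original_population   # 1000.0
print(24_000 * correction)   # 24,000,000 QALYs
```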
I don’t mean to say that it’s additional reason to respect him as an authority or accept his communication norms above what you would have done for other reasons (and I don’t think people particularly are here), just that it’s the meaning of that jokey aside.
Maybe you got into trouble for talking about that because you are rude and presumptuous?
I think this is just a nod to how he’s literally Roko, for whom googling “Roko simulation” gives a Wikipedia article on what happened last time.
What, I wonder, shall such an AGI end up “thinking” about us?
IMO: “Oh look, undefended atoms!” (Well, not in that format. But maybe you get the picture.)
You kind of mix together two notions of irrationality:
(1-2, 4-6) Humans are bad at getting what they want (they’re instrumentally and epistemically irrational)
(3, 7) Humans want complicated things that are hard to locate mathematically (the complexity of value thesis)
I think only the first one is really deserving of the name “irrationality”. I want what I want, and if what I want is a very complicated thing that takes into account my emotions, well, so be it. Humans might be bad at getting what they want, they might be mistaken a lot of the time about what they want and constantly step on their own toes, but there’s no objective reason why they shouldn’t want that.
Still, when up against a superintelligence, I think that both value being fragile and humans being bad at getting what they want count against humans getting anything they want out of the interaction:
Superintelligences are good at getting what they want (this is really what it means to be a superintelligence)
Superintelligences will have whatever goal they have, and I don’t think that there’s any reason why this goal would be anything to do with what humans want (the orthogonality thesis; the goals that a superintelligence has are orthogonal to how good it is at achieving them)
Together, this adds up to: a superintelligence sees humans using resources that it could be using for something else (and it would want them used for something else, not just what the humans are trying to do but more, because it has its own goals), and because it's good at getting what it wants, it gets those resources, which is very unfortunate for the humans.
Boycotting LLMs reduces the financial benefit of doing research that is upstream of AGI in the tech tree.
Arbital gives a distinction between “logical decision theory” and “functional decision theory” as:
Logical decision theories are a class of decision theories that have a logical counterfactual (vs. the causal counterfactual that CDT has and the evidential counterfactual EDT has).
Functional decision theory is the type of logical decision theory where the logical counterfactual is fully specified, and correctly gives the logical consequences of “decision function X outputs action A”.
More recently, I’ve seen in Decision theory does not imply that we get to have nice things:
Logical decision theory is the decision theory where the logical counterfactual is fully specified.
Functional decision theory is the incomplete variant of logical decision theory where the logical consequences of “decision function X outputs action A” have to be provided by the setup of the thought experiment.
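As a toy illustration (my own sketch, not from Arbital or that post): in Newcomb's problem the predictor runs a copy of the agent's decision function, so supposing "decision function X outputs action A" also fixes the prediction. The sketch below hard-codes that logical dependence by hand, which, under the second usage, is exactly the part "provided by the setup of the thought experiment":

```python
# Toy Newcomb's problem: the predictor runs (a copy of) the agent's decision
# function, so the logical counterfactual "the decision function outputs
# one-box" also fixes the predictor's prediction.

def payoff(action, prediction):
    big_box = 1_000_000 if prediction == "one-box" else 0
    small_box = 1_000 if action == "two-box" else 0
    return big_box + small_box

def fdt_choice(possible_outputs):
    # For each candidate output, propagate it to everything that depends
    # logically on the decision function -- here, the predictor's prediction.
    return max(possible_outputs, key=lambda a: payoff(action=a, prediction=a))

print(fdt_choice(["one-box", "two-box"]))  # -> one-box (1,000,000 vs 1,000)
```

Holding the prediction fixed while varying the action (the causal counterfactual) would instead recommend two-boxing.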
Any preferences? How have you been using it?
Further to it being legally considered murder, tricky plans to get around this look to the state like possible tricky plans to get around murder, and result in an autopsy which, at best (and only if the cryonics organisation cooperates), leaves one sitting around warm for over a day with no chance of cryoprotectant perfusion later.
Rereading a bit of Hieronym’s PMMM fanfic “To The Stars” and noticing how much my picture of dath ilan’s attempt at competent government was influenced / inspired by Governance there, including the word itself.
https://archiveofourown.org/works/777002/chapters/1461984
For some inspiration, put both memes side by side and listen to Landsailor. (The mechanism by which one listens to it, in turn, is also complex. I love civilisation.)
Relevant Manifold: Will Russia conduct a nuclear test during 2022?, currently at 26%.
Beemium (the subscription tier that allows pledgeless goals) is currently $40/mo, having been increased from the original $25/mo to $32/mo in 2014, and to $40/mo in January 2021.
The essay What Motivated Rescuers During the Holocaust is on Lesswrong under the title Research: Rescuers during the Holocaust—it was renamed because all of the essay titles in Curiosity are questions, which I just noticed now and is cute. I found it via the URL lesswrong.com/2018/rescue, which is listed in the back of the book.
The bystander effect is an explanation of the whole story:
Because of the bystander effect, most people weren’t rescuers during the Holocaust, even though that was obviously the morally correct thing to do; they were in a large group of people who could have intervened but didn’t.
The standard way to break the bystander effect is by pointing out a single individual in the crowd to intervene, which is effectively what happened to the people who became rescuers by circumstances that forced them into action.
Why would you wait until ? It seems like at any time the expected payoff will be , which is strictly decreasing with .
[Question] Is there a “coherent decisions imply consistent utilities”-style argument for non-lexicographic preferences?
Recently I bought a new laptop
One big advantage of getting a hemispherectomy for life extension is that, if you don't tell the Metaculus community before you do it, you can predict much higher than the community median of 16%: I would have 71 Metaculus points to gain from this, for example, much more than the 21 in expectation I would get if the community median were otherwise accurate.
This looks like the hyperreal numbers, with your equal to their .
The real number 0.20 isn't a probability, it's just the same odds but written in a different way to make it possible to multiply (specifically, you want some odds product * such that A:B * C:D = AC:BD). You are right about how you would convert the odds into a probability at the end.
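A minimal sketch of that arithmetic (the specific numbers are mine for illustration; I'm reading the 0.20 as odds of 1:5 written as a single real number):

```python
from fractions import Fraction

def odds_product(ab, cd):
    # (A:B) * (C:D) = AC:BD
    (a, b), (c, d) = ab, cd
    return (a * c, b * d)

def odds_to_probability(ab):
    # Only at the very end do we convert the odds back into a probability.
    a, b = ab
    return Fraction(a, a + b)

prior_odds = (1, 5)        # the "0.20", read as odds 1:5 (assumption for illustration)
likelihood_ratio = (3, 1)  # hypothetical evidence with a 3:1 likelihood ratio
posterior_odds = odds_product(prior_odds, likelihood_ratio)          # (3, 5)
print(posterior_odds, float(odds_to_probability(posterior_odds)))    # (3, 5) 0.375
```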
Just before she is able to open the envelope, a freak magical-electrical accident sends a shower of sparks down, setting it alight. Or some other thing necessitated by Time to ensure that the loop is consistent. These are similar kinds of problems to what would happen if Harry were more committed to not copying "DO NOT MESS WITH TIME".
80,000 Hours’ job board lets you filter by city. As of the time of writing, roles in their AI Safety & Policy tag are 61⁄112 San Francisco, 16⁄112 London, 35⁄112 other (including remote).