Researcher at Forethought.
Previously Longview Philanthropy and Future of Humanity Institute.
I interview people on Hear This Idea and ForeCast. My writing lives at finmoorhouse.com/writing. I myself live in London.
On (2), I think I say a bit about this in the piece, but my guess is that it’s not much easier to launch debris from the Moon into Earth orbit than to launch it from Earth into Earth orbit. Although the Moon has the “high ground” in some sense, you need to decelerate the debris to settle into low Earth orbit, which requires some kind of active thrust near periapsis. My sense is that launching debris from Earth into orbit around the Moon is even harder, both because you’re launching out of a much deeper gravity well and because the Moon is gravitationally lumpy (which makes low lunar orbits unstable).
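To give a rough sense of scale (a back-of-the-envelope sketch of mine with illustrative altitudes, not figures from the piece), the vis-viva equation gives the size of the braking burn an object launched from the Moon would need near perigee to settle into low Earth orbit:

```python
import math

GM_EARTH = 398_600   # km^3/s^2, Earth's gravitational parameter
R_PERIGEE = 6_778    # km, roughly a 400 km-altitude LEO (illustrative)
R_APOGEE = 384_400   # km, roughly the Moon's distance (illustrative)

def vis_viva(r, a):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return math.sqrt(GM_EARTH * (2 / r - 1 / a))

a_transfer = (R_PERIGEE + R_APOGEE) / 2
v_transfer_perigee = vis_viva(R_PERIGEE, a_transfer)   # ~10.8 km/s at perigee of the Moon-to-LEO transfer
v_circular_leo = vis_viva(R_PERIGEE, R_PERIGEE)        # ~7.7 km/s for a circular low Earth orbit

print(f"Braking burn needed near perigee: ~{v_transfer_perigee - v_circular_leo:.1f} km/s")
```

On these rough numbers the circularisation burn alone is around 3 km/s, which has to come from active thrust near periapsis, on top of whatever it takes to get off the lunar surface in the first place.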
(3) is an interesting point. But, as I say, I don’t think a debris cascade is very likely to actually trap civilisation on Earth for any meaningful amount of time. Somewhat more likely is a scenario where a first-mover uses it to make catching up more expensive, after they themselves escape Earth. And more likely still is that it’s (as you say) conflict-promoting for more prosaic natsec reasons, or the result of a conflict and ∴ evidence civilisation is in a bad state. In any case, it seems good to prevent on trajectory-improving grounds too, in my view.
Thanks for the comment. Note (for others) the sentence you are quoting is from Brian Tomasik.
I don’t think it’s necessarily misunderstanding how language works. I think there are plenty of cases where it doesn’t make sense to ask questions with a similar linguistic or grammatical structure. For example, “Does a simulation of a phage (or a virus, or a self-replicating robot) really instantiate life?”, or “You may enjoy liquorice ice cream, but is it really tasty?”
It’s appropriate to complain that it doesn’t make sense to speculate whether X really instantiates Y when Y is vague, ambiguous, subject-dependent, etc., and where it’s pretty clear extant usage of “Y” is compatible with different more precise operationalisations which give different answers. To these questions the answer is, “Well, it obviously depends what you mean by ‘Y’, and otherwise the question doesn’t have a single discoverable or consistent answer.” The life sciences did not discover the ‘true’ referent of “life”, and not for lack of data or good explanations.
Cases where it is less appropriate to complain that it doesn’t make sense to speculate whether X really instantiates Y are cases where either we do agree on a crisp definition of Y, or we have some reason to believe we will discover one. I’m arguing that we have less reason than we might intuitively think to believe that we’ll ever “discover what the ontological nature of conscious states is”. I agree that if you’re confident there is a crisp, discoverable answer to exactly which things should count as “conscious”, then it totally does make sense to speculate about which things are conscious.
On your point about qualia computations, the standard questions pop up: if qualia are functionally inert computations, how would the ‘subject’ of consciousness know it is experiencing them, and so on. And isn’t the idea of computationalism about consciousness that the ‘computations’ can be pinned down by the computational relationship between inputs and outputs; in which case wouldn’t the qualia-generating computations be abstracted away?
Thanks for the comment.
In denying certain properties of consciousness, illusionists are typically also denying that basic moral/axiological intuitions need to be grounded in them.
Obviously illusionists deny the inferences you are drawing, i.e. that it’s fine to kill people, that people don’t exist, or that they have no grounds to avoid being punched in the face. For those points to be more forceful, you need to show that (for example) it can pretty much only be bad to kill people because of the kinds of deep, extra phenomenal consciousness stuff which illusionists deny. That is, you are saying “if illusionism then [patently crazy conclusion], not [patently crazy conclusion], ∴ not illusionism”. But the illusionist just denies “if illusionism then [patently crazy conclusion]”.
Of course, one reaction is “it’s totally obvious that illusionism implies crazy conclusions, if you can’t see that, then we’re living on different planets”.
One reason I think a non-realist or illusionist research agenda is not inherently timid and mediocre is that it is trying to answer very hard but pretty well-scoped empirical questions (e.g. the meta-problem) which don’t currently have good answers, and where hypotheses are falsifiable. And I think that’s a hallmark of successful scientific agendas: at the very least, it’ll either generate interesting and testable new explanations, or it will fail. Compare realist approaches, where it’s less clear to me how to decisively rule out bad explanations (because it’s often unclear what testable predictions they are supposed to make). So I worry realist framings lack the engine for proper, cumulative progress.
There are some social reasons for writing and reading blogs.
One reason is that “a blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox”. I expect to continue to value finding new people who share my interests after AI starts writing better blog posts than me, which could be very soon. I’m less sure about whether this continues to be a good reason to write them, since I imagine blog posts will become a less credible signal of what I’m like.
Another property that makes me want to read a blog or blogger is the audience: I value that it’s likely my peers will also have read what I’m reading, so I can discuss it. This gives the human bloggers some kind of first-mover advantage, because it might only be worth switching your attention to the AI bloggers if the rest of the audience coordinates to switch with you. Famous bloggers might then switch into more of a curation role.
To some extent I also intrinsically care about reading true autobiography (the same reason I might intrinsically care about watching stunts performed by real humans, rather than CGI or robots).
I think these are relatively minor factors, though, compared to the straightforward quality of reasoning and writing.
Yes.
As Buck points out, Toby’s estimate of P(AI doom) is closer to the ‘mainstream’ than MIRI’s, and close enough that “so low” doesn’t seem like a good description.
I can’t really speak on behalf of others at FHI, of course, but I don’t think there is some ‘FHI consensus’ that is markedly higher or lower than Toby’s estimate.
Also, I just want to point out that Toby’s 1⁄10 figure is not for human extinction, it is for existential catastrophe caused by AI, which includes scenarios which don’t involve extinction (forms of ‘lock-in’). Therefore his estimate for extinction caused by AI is lower than 1⁄10.
Yes, I’m almost certain it’s too ‘galaxy brained’! But does the case rely on entities outside our light cone? Aren’t there many ‘worlds’ within our light cone? (I literally have no idea, you may be right, and someone who knows should intervene)
I’m more confident that this needn’t relate to the literature on infinite ethics, since I don’t think any of this relies on infinities.
Thanks, this is useful.
There are some interesting and tangentially related comments in the discussion of this post (incidentally, the first time I’ve been ‘ratioed’ on LW).
Thanks, really appreciate it!
Was wondering the same thing — would it be possible to set others’ answers as hidden by default on a post until the reader makes a prediction?
I interviewed Kent Berridge a while ago about this experiment and others. If folks are interested, I wrote something about it here, mostly trying to explain his work on addiction. You can listen to the audio on the same page.
Got it, thanks very much for explaining.
Thanks, that’s a nice framing.
Thanks for the response. I’m bumping up against my lack of technical knowledge here, but a few thoughts about the idea of a ‘measure of existence’.

I like how UDASSA tries to explain how the Born probabilities drop out of a kind of sampling rule, and why, intuitively, I should give more ‘weight’ to minds instantiated by brains rather than a mug of coffee. But this idea of ‘weight’ is ambiguous to me. Why should sampling weight (you’re more likely to find yourself as a real vs Boltzmann brain, or ‘thick’ vs ‘arbitrary’ computation) imply ethical weight (the experiences of Boltzmann brains matter far less than those of real brains)?

Here’s Lev Vaidman, suggesting it shouldn’t: “there is a sense in which some worlds are larger than others”, but “note that I do not directly experience the measure of my existence. I feel the same weight, see the same brightness, etc. irrespectively of how tiny my measure of existence might be.”

So in order to think that minds matter in proportion to the measure of the world they’re in, while recognising that they ‘feel’ precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally. There’s no contradiction, but that seems strange to me: I would have thought that all there is to how much a conscious experience matters is just what it feels like, because that’s all I mean by ‘conscious experience’.

After all, if I’m understanding this right, you’re in a ‘branch’ right now that is many orders of magnitude less real than the larger, ‘parent’ branch you were in yesterday. Does that mean that your present welfare matters orders of magnitude less than it did yesterday?

Another approach might be to deny that arbitrary computations are conscious on independent grounds, and explain the observed Born probabilities without ‘diluting’ the weight of future experiences over time.
Also, presumably there’s some technical way of actually cashing out the idea of something being ‘less real’? Literally speaking, I’m guessing it’s best not to treat reality as a predicate at all (let alone one that comes in degrees). But that seems like a surmountable issue.
I’m afraid I’m confused by what you mean about including the Hilbert measure as part of the definition of MWI. My understanding was that MWI is something like what you get when you don’t add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.
Still don’t know what to think about all this!
Something very like the view I’m suggesting can be found in Albert & Loewer (1988) and their so-called ‘many minds’ interpretation. This is interesting to read about, but the whole idea strikes me as extremely hand-wavey and silly. Here’s David Wallace with a dunk: “If it is just a fundamental law that consciousness is associated with some given basis, clearly there is no hope of a functional explanation of how consciousness emerges from basic physics.”
I should also mention that I tried explaining this idea to another philosopher of physics, who took it as a reductio of MWI! I suppose you might also take it as a reductio of any kind of total consequentialism. One man’s modus ponens...
David Lewis briefly discusses the ethical implications of his modal realism (warning: massive pdf), concluding that there aren’t any. This may be of interest, but not sufficiently similar to the case at hand to be directly relevant, I think.
Another potential ethical implication: Hal Finney makes the point that MWI should steer you towards maximising good outcomes in expectation if you weren’t already doing so (e.g. if you were previously risk-averse, risk-seeking, or just somehow insensitive to very small probabilities of extreme outcomes). The whole thread is a nice slice of LW history and worth reading.
Thanks, that’s far more relevant!
From Wikipedia: An Experiment on a Bird in the Air Pump is a 1768 oil-on-canvas painting by Joseph Wright of Derby, one of a number of candlelit scenes that Wright painted during the 1760s. The painting departed from convention of the time by depicting a scientific subject in the reverential manner formerly reserved for scenes of historical or religious significance. Wright was intimately involved in depicting the Industrial Revolution and the scientific advances of the Enlightenment. While his paintings were recognized as exceptional by his contemporaries, his provincial status and choice of subjects meant the style was never widely imitated. The picture has been owned by the National Gallery in London since 1863 and is regarded as a masterpiece of British art.

Thanks for the qs!
I think the considerations for shielding vs redundancy are very different. Redundancy quickly becomes non-viable because the number of redundant launches you need blows up with the amount of debris.[1] But yeah, I think shielding probably helps a lot, and it’s a big reason I’m not very worried that a decently well-resourced actor would be locked out of space after 100 Starships of debris get released into orbit.
Yep that’s right. One thought is that debris eventually has to pass through those altitudes as it de-orbits, so there would be less of it (because it’s more transient) but maybe still enough that lower orbits aren’t much more viable. Also my impression is that sats are already sitting about as low as they can go (below ~500 km it quickly becomes a lot more expensive to operate them, because you need to overcome drag). As you say it might make more sense to sit at higher altitudes. In either case I do think the most likely outcome (especially from ‘accidental’ Kessler syndrome) is that operating sats just becomes a lot more expensive.
The two big considerations here are the amount of time you have to spend passing through different altitudes, and the amount of debris at different altitudes. Launches spend about 10 mins passing through LEO. Satellites in LEO spend… as long as you want to operate them. To make launching through orbit non-viable, vs making it non-viable to operate a satellite in LEO for a year, I think you need about 500x more debris. You could place sats at higher orbits and I think that would help a lot, though it would suck a bit because of e.g. higher latencies. If the adversary is still on the loose, they could place debris at whatever altitude you picked, and at higher altitudes it would not de-orbit naturally for centuries or more. I don’t know how much more debris that would take vs sats in LEO, but it’s upper-bounded at 500x. 5–100x?
I guess an overall vibe I should convey a bit more is: before investigating deliberate space debris, it seemed like it could be a very big deal for macrostrategy. Having investigated it, I think it’s much less of a big deal (but still probably underrated by the world). I also think there could be much better launch-blocking strategies which don’t involve debris.
If one rocket collides with 7 bits of debris on average (and collisions are always catastrophic and Poisson distributed), you need to send about 1,000 launches for a >50% chance that at least one makes it. But after roughly doubling the amount of debris, so that one rocket collides with 15 bits of debris on average, you need more than 2 million launches.
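For anyone who wants to check that arithmetic, here’s a minimal sketch under the same assumptions (collisions per launch are Poisson-distributed with the stated means, and every collision is catastrophic):

```python
import math

def launches_needed(mean_collisions, target=0.5):
    """Launches required for at least a `target` chance that one launch survives,
    if collisions per launch are Poisson(mean_collisions) and any collision is
    catastrophic."""
    p_survive = math.exp(-mean_collisions)  # P(a single launch gets through)
    # Solve 1 - (1 - p_survive)**n >= target for n
    return math.ceil(math.log(1 - target) / math.log(1 - p_survive))

print(launches_needed(7))    # ~760 launches, i.e. on the order of 1,000
print(launches_needed(15))   # ~2.3 million launches
```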