Yeah, you are right: the only hope of getting somewhere is to address their true objections. That’s not easy, though, because they might not even be aware of what those objections are, and may refuse to acknowledge them when pointed out (again, see the examples in the thread I linked), often because acknowledging them would clash with their self-image. Successfully addressing someone’s real arguments, not the chaff on top, is a difficult skill. If you could do it, it would feel like magic. Or a superpower.
“You just pretended to care about price gouging, so why did you do that” seems like a good way to confront such statements, at least some of the time.
I… don’t think this works, not even in LW circles, as some of the reactions to my old post show: https://www.lesswrong.com/posts/a4HzwhvoH7zZEw4vZ/wirehead-your-chickens
If you manage to get through that, maybe you can summarize it? Even Logan’s accessible explanation makes my eyes glaze over.
I just wanted to mention that you assume consequentialist thinking, specifically of the type “what should we do to change X for the better?” That is not at all how most people think. “Price gouging is unfair” is enough to pass legislation, without heeding the consequences. “Abortion is against a sacred rule from God” is enough to fight to prohibit it. “But I can change my mind” is argument enough to two-box. And I’m not even touching issues where people don’t reason at all, or, like politicians, optimize for something other than the stated goal.
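(For the record, here is the consequentialist arithmetic that “but I can change my mind” waves away: a minimal Python sketch of the standard Newcomb expected-value comparison. The $1,000/$1,000,000 payoffs are the usual ones, and the predictor accuracies are illustrative assumptions, not anything from the comment above.)

```python
# A toy expected-value comparison for Newcomb's problem. Standard payoffs
# assumed: $1,000 in the transparent box, $1,000,000 in the opaque box if
# the predictor foresaw one-boxing. Accuracy values are illustrative.

def expected_values(p: float) -> tuple[float, float]:
    """Return (one_box_ev, two_box_ev) given predictor accuracy p."""
    one_box = p * 1_000_000                    # predictor correctly foresaw one-boxing
    two_box = p * 1_000 + (1 - p) * 1_001_000  # foresaw two-boxing vs. was wrong
    return one_box, two_box

for p in (0.5, 0.9, 0.99):
    one, two = expected_values(p)
    print(f"accuracy={p:.2f}: one-box EV=${one:,.0f}, two-box EV=${two:,.0f}")
```

Once the predictor is even slightly better than chance (p above ~0.5005 with these payoffs), one-boxing wins in expectation, which is exactly the calculation the “I can change my mind” reasoning skips.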
You might find a recent book review useful: https://www.lesswrong.com/posts/jr3sxQb6HDS87ve3m.
I commented on the topic there.
Also, see my favorite quote by Ambrose Bierce on the subject, from another post:
“There’s no free will,” says the philosopher; “To hang is most unjust.”
“There is no free will,” assents the officer; “We hang because we must.”
Basically, all an embedded agent does is discover what the world is really like. There is no ability to steer the world in a desired direction, but also no ability not to steer it. We are all NPCs, but most of us think we are PCs.
Sean Carroll talked about this just recently, in the context of Bayesianism: https://www.preposterousuniverse.com/podcast/2021/09/16/ama-september-2021/
It’s around 2:11:08, or Ctrl-F in the transcript.
It is tricky to talk in self-consistent ways about the lack of free will. Obviously any kind of prescriptivism is right out: since there is no free will, you can’t consciously steer the future in any specific direction, you can only have an illusion of doing so. It is possible to talk about the lack of free will in descriptive terms, however. For example, a statement like “one should hold people accountable even though free will does not exist, for the benefit of society as a whole” can be expressed as “societies where people are held accountable for their actions tend to be more successful, by some relevant metric, than those where they are not”.
It is also easy to misinterpret a self-consistent view as self-contradictory if one is not careful (this is not an urge to be careful, which would be inconsistent in itself, just an observation I had no control over making). For example, when he says “needn’t”, a reader can interpret it as “shouldn’t”, even though that’s not what he meant. I haven’t read Harris’s book, but my guess would be that he takes appropriate care not to sound like he is doing more than describing the world he sees.
Without technological artifacts, it would be very hard to identify such an intelligence.
Indeed. And I think it’s a crucial question to consider in terms of identifying anything intelligent (or even alive) that isn’t “like us”.
Somewhat off topic...
A Bayesian superintelligence? There is no natural example.
How would you tell whether some “natural” phenomenon is or is not a Bayesian superintelligence, if it does not look like us or produce human-recognizable artifacts like cars, buildings, ships, etc.?
Right, good point, I think it’s very close. I guess when you are a professional philosopher stating the obvious it often comes across as profound.
Though I’m trying to do more than just state it: I’m trying to construct a model of the meta-problem, namely that it is a side effect of a specific optimization computation. I wish I could tease out some testable predictions from this model that differ from the alternatives.
How much free will does a monkey have? A cat? A fish? An amoeba? A virus? A vapor bubble in a boiling pot? A raspberry shoot jockeying for a sunny spot? An octopus arm? A solar flare? A chess bot?
Hint: the same amount as a human.
Answer: We just happen to have a feeling of free will that is an artifact of some optimization subroutine that runs in our brains and is not fully available to introspection. Do octopuses have that feeling? Chess bots? That question might get answered one day, once we understand how the feeling of free will is formed in humans.
How would you define thoughts? Are they something you can notice happening, as opposed to a feeling or an urge that just bubbles up?
I think that treating rationality as a competing approach to the scientific method is a particularly bad take, one that leads a lot of aspiring rationalists astray, into the cultish land of “I know more and better than experts in the field because I am a rationalist”. Data analysis uses plenty of Bayesian reasoning. Scientists are humans and so are prone to the biases and bad decisions that instrumental rationality is supposed to help with. CFAR-taught skills are likely to be useful for scientists and non-scientists alike.
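(A minor illustration of the “data analysis uses plenty of Bayesian reasoning” point: a minimal Bayes’-rule update in Python. The hypothesis, prior, and likelihood numbers are made up for the example.)

```python
# A minimal Bayes'-rule update for a binary hypothesis H.
# All numbers below are illustrative.

def posterior(prior: float, p_data_given_h: float, p_data_given_not_h: float) -> float:
    """P(H | data) via Bayes' rule."""
    evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / evidence

# A hypothesis at a 30% prior, with the observed data 4x likelier under H:
print(posterior(prior=0.3, p_data_given_h=0.8, p_data_given_not_h=0.2))  # ~0.63
```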
I agree that the point was not to teach you physics. It was a tool to teach you rationality. Personally, I think it failed at that, and instead created a local lore guided by the teacher’s password, “MWI is obviously right”. And yes, I think he said nearly as much on multiple occasions. This post https://www.lesswrong.com/posts/8njamAu4vgJYxbJzN/bloggingheads-yudkowsky-and-aaronson-talk-about-ai-and-many links a video of him saying as much: https://bloggingheads.tv/videos/2220?in=29:28
Note that Aaronson’s position is much weaker, more like “if you were to extrapolate micro to macro assuming nothing new happens...”; see, for example, https://www.scottaaronson.com/blog/?p=1103
we do more-or-less know what could be discovered that would make it reasonable to privilege “our” world over the other MWI branches. Namely, any kind of “dynamical collapse” process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick.
Admittedly, like most quantum folks, I used to dismiss the notion of “dynamical collapse” as so contrived and ugly as not to be worth bothering with. But while I remain unimpressed by the specific models on the table (like the GRW theory), I’m now agnostic about the possibility itself. Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with. But as Roger Penrose never tires of pointing out, there’s at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-deSitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?).
I guess you don’t mean simulating the relevant parts of the rat brain in silico like OpenWorm, but “a rat-equivalent Bayesian reasoning machine out of silicon”, which is probably different.
Do you think it’s reasonable to push for rat-level AI before we can create a C. elegans-level AI?
The book is great for improving one’s thinking. My long-standing advice is to ignore anything in it with the word “quantum”; it detracts from the book’s message. If you want to learn physics, read a physics book. For a good review of that link in Nature, see Scott Aaronson’s post https://www.scottaaronson.com/blog/?p=3975; he also has a review of interpretations at https://www.scottaaronson.com/blog/?p=3628
I think the Umbridge version is uncontroversial: someone who uses existing rules or creates new ones (like the lifeguard in Scott’s description, or the agencies making it intentionally hard to get reimbursed) to disguise their real intentions, which have nothing to do with following the rules and everything to do with achieving nefarious goals, be it torturing HP, getting rid of a kid they don’t like, or maybe getting a bonus for minimizing expenses.
I don’t know whether that last paragraph is the author’s view, or whether there is any evidence or consensus for it. I go by what I see, and what I see is a person driven to overcome obstacles over and over again. Musk is an extreme example, but in general all the classic tech moguls are “natural heroes” in that sense. The burning need inside to do “world optimization” cannot be quenched.