Avoiding all such knowledge is a perfect precommitment strategy. It’s hard to come up with a better strategy than that, and even if your alternative strategy is sound, a blackmailer might very well not believe it and give it a try (if he can get you to know it, are you really perfectly consistent?). If you can guarantee you won’t even know, there’s no point in even trying to blackmail you, and this is obvious to even a very dumb blackmailer.
By the way, are there lower and upper bounds on the number of paperclips in the universe? Is it possible for the universe to have a negative number of paperclips somehow? Or more paperclips than its number of atoms? Are you risk-neutral? (Is a 1% chance of 100 paperclips exactly as valuable as 1 paperclip?) I’ve been trying to get humans to describe their utility function to me, but they can never come up with anything consistent, so I thought I’d ask you this time.
Avoiding all such knowledge is a perfect precommitment strategy.
Not plausible: it would necessarily entail avoiding “good” knowledge as well. More generally, a decision theory that can be hurt by knowledge is one you will want to abandon in favor of a better decision theory, and is reflectively inconsistent. The example you gave would involve cutting yourself off from significant good knowledge.
By the way, are there lower and upper bounds on the number of paperclips in the universe?
Mass of the universe divided by minimum mass of a true paperclip, minus net unreusable overhead.
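That bound can be sketched numerically. All figures below are illustrative assumptions, not claims from the thread: the mass estimate is a commonly cited rough figure for ordinary matter in the observable universe, and the minimum clip mass is a guess.

```python
# Upper bound on paperclips: mass of the universe divided by the minimum
# mass of a true paperclip, minus net unreusable overhead.
# All numbers here are illustrative assumptions, not measured values.

TOTAL_MASS_KG = 1.5e53    # rough estimate: ordinary matter in the observable universe
MIN_CLIP_MASS_KG = 5e-4   # assumed: ~0.5 g, a small conventional paperclip
OVERHEAD_CLIPS = 0        # assumed negligible for this sketch

def max_paperclips(total_mass_kg, clip_mass_kg, overhead_clips=0):
    """Upper bound on the number of clips a given mass budget supports."""
    return int(total_mass_kg / clip_mass_kg) - overhead_clips

print(f"{max_paperclips(TOTAL_MASS_KG, MIN_CLIP_MASS_KG):.3e}")
```

Under these assumptions the bound comes out around 3e56 clips; the real answer depends entirely on what counts as a true paperclip and how much overhead is unrecoverable.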
Humans are just amazing at refusing to acknowledge the existence of evidence. Try throwing some evidence of faith healing or homeopathy at an average LessWronger, and watch them refuse to acknowledge its existence before even looking at the data (or see how they recently reacted to peer-reviewed, statistically significant results showing precognition: it passed all scientific standards, and yet everyone still rejected it without really looking at the data). Every human seems to have some basic patterns of information they automatically ignore. Not believing offers from blackmailers, and automatically assuming they’d carry out the threat anyway, is one such common filter.
It’s true that humans cut themselves off from a significant good this way, but the upside is worth it.
minimum mass of a true paperclip
Any idea what it would be? It makes little sense to manufacture a few big paperclips if you can just as easily manufacture a lot more tiny paperclips that are just as good.
Humans are just amazing at refusing to acknowledge the existence of evidence.
And those humans would be the reflectively inconsistent ones.
It’s true that humans cut themselves off from a significant good this way, but the upside is worth it.
Not as judged from the standpoint of reflective equilibrium.
Any idea what it would be? It makes little sense to manufacture a few big paperclips if you can just as easily manufacture a lot more tiny paperclips that are just as good.
I already make small paperclips in preference to larger ones (up to the limit of clippiambiguity).
And those humans would be the reflectively inconsistent ones.
Wait, you didn’t know that humans are inherently inconsistent and use aggressive compartmentalization mechanisms to think effectively in the presence of inconsistency, ambiguous data, and limited computational resources? No wonder you get into so many misunderstandings with humans.
Are you risk-neutral? (Is a 1% chance of 100 paperclips exactly as valuable as 1 paperclip?)
Up to the level of precision we can handle, yes.
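Risk neutrality here just means utility is linear in paperclip count, so only the expected number of clips matters. A minimal sketch of that valuation, with illustrative lotteries:

```python
def expected_clips(lottery):
    """Expected paperclip count of a lottery given as (probability, clips) pairs."""
    return sum(p * clips for p, clips in lottery)

# Linear (risk-neutral) valuation: a 1% chance of 100 paperclips
# is valued exactly like 1 paperclip for certain.
gamble = [(0.01, 100), (0.99, 0)]
sure_thing = [(1.0, 1)]
print(expected_clips(gamble) == expected_clips(sure_thing))
```

A risk-averse agent would instead apply a concave utility to the clip count before taking the expectation, and the two lotteries would no longer come out equal.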