The fact that people have different understandings of the same texts and have to “translate” them across an inferential distance is a necessary evil. Just because something is a necessary evil doesn’t mean it’s good, and it certainly doesn’t mean that we should be fine with deliberately creating more of it.
Under some circumstances, it seems that option 4 would result in the predictor trying to solve the Halting Problem since figuring out your best option may in effect involve simulating the predictor.
(Of course, you wouldn’t be simulating the entire predictor, but you may be simulating enough of the predictor’s chain of reasoning that the predictor essentially has to predict itself in order to predict you.)
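A toy sketch of why (the `predict` and `agent` functions here are purely hypothetical, not anything from the post): if the agent decides by simulating the predictor, and the predictor predicts by simulating the agent, neither call can ever bottom out.

```python
# Hypothetical sketch: an agent that decides by simulating the predictor,
# while the predictor predicts by simulating the agent. Neither call can
# return, because each one waits on the other -- the same self-reference
# that makes the Halting Problem undecidable in general.

def predict(agent):
    # The predictor "runs" the agent to see what it will choose.
    return agent()

def agent():
    # The agent "runs" the predictor to see what it will predict.
    prediction = predict(agent)
    return "one-box" if prediction == "one-box" else "two-box"

# agent()  # uncommenting this recurses forever (RecursionError in practice)
```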
Generate several “random” numbers in your head, trying to generate them randomly but falling prey to the usual problems of trying to generate them in your head. Then add them together and take them mod X to produce a result that is more like a real random number.
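A rough sketch of why this works (the bias weights below are made up): even if each individual pick is badly skewed, the sum of several picks taken mod X is spread much more evenly.

```python
import random
from collections import Counter

X = 10

def biased_pick():
    # Stand-in for a human "random" number: heavily favors 3 and 7,
    # the way people over-pick certain digits.
    return random.choices(range(X), weights=[1, 1, 1, 8, 1, 1, 1, 8, 1, 1])[0]

def combined_pick(k=5):
    # Add several biased picks together and take the result mod X.
    return sum(biased_pick() for _ in range(k)) % X

print(Counter(biased_pick() for _ in range(100_000)))    # visibly lopsided
print(Counter(combined_pick() for _ in range(100_000)))  # roughly uniform
```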
Remember the original post about epistemic learned helplessness: making people literate in some things may be bad. Their lack of understanding prevents them from doing good in those areas, but it also prevents them from falling prey to scams and fallacies in the same areas.
You might want the average person to fail to get excited about a 6% increase in battery energy density, because if too many people get excited about such things, the politicians, media machines, and advertisers will do their best to exploit this little bit of knowledge to extract money from the general public while producing as few actual improvements to energy density as possible. I’m sure you could name plenty of issues where the public understands that they are important without having the breadth of knowledge to avoid falling for “we have to do something, it’s important!”
Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. And at some point, even the hassle from just trying to figure out that the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold because you gain from not having overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.
The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.
Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can’t exploit such fixed costs to money pump someone.
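To put toy numbers on it (the 5% edge and the $0.50 overhead are made up, and treating overhead as a flat per-bet cost is deliberately crude):

```python
# Toy model: each bet has a fixed overhead cost (time spent evaluating it,
# possible ATM trip, etc.) that does not scale with the stake.
OVERHEAD = 0.50  # arbitrary illustrative figure, in "utility dollars"

def net_value(stake, edge=0.05):
    # A bet with a 5% edge is worth edge * stake in expectation,
    # minus the fixed overhead of dealing with it at all.
    return edge * stake - OVERHEAD

for stake in (0.10, 1, 10, 100, 1000):
    print(f"${stake:>7}: expected net {net_value(stake):+.2f}")

# The smallest bets come out as net losses despite the favorable odds;
# only once the stake is large does the edge dominate the overhead.
```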
“I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”
Statements can have inconsistent truth values. The logical analysis done by the jester is wrong because the jester is assuming that the statements are either true or false. This assumption is unwarranted, and given the actual box contents, the statements aren’t true or false.
In other words, it’s not that the jester correctly analyzed the logic of the inscriptions but messed up because the result has no connection to the real world. The jester incorrectly analyzed the logic of the inscriptions. If he had done so correctly, he would have figured out that the contents of the boxes could be anything.
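To make that concrete, here’s a brute-force check using the inscriptions as I remember them from the parable (box one: “Either both inscriptions are true, or both inscriptions are false”; box two: “This box contains the key”; and the dagger actually in box two). No assignment of true/false to the two inscriptions is consistent with the actual contents:

```python
from itertools import product

key_in_box_two = False  # the king actually put the dagger there

def consistent(t1, t2):
    # Inscription 1: "Either both inscriptions are true, or both are false."
    inscription1_holds = (t1 == t2)
    # Inscription 2: "This box contains the key."
    inscription2_holds = key_in_box_two
    # An assignment is consistent if each inscription's assumed truth value
    # matches what it actually asserts.
    return t1 == inscription1_holds and t2 == inscription2_holds

print([(t1, t2) for t1, t2 in product([True, False], repeat=2)
       if consistent(t1, t2)])   # prints [] -- no consistent assignment
```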
This is similar to the simulation hypothesis, and in fact is sometimes used as a response to the simulation hypothesis.
Consider this recent column by the excellent Matt Levine. It vividly describes the conflict between engineering, which requires that people communicate information and keep accurate records, and the legal system and public relations, which tell you that keeping accurate records is insane.
It certainly sounds like a contradiction, but the spin that article puts on it is unconvincing:
In other words, if you are trying to build a good engineering culture, you might want to encourage your employees to send hyperbolic, overstated, highly quotable emails to a broad internal distribution list when they object to a decision. On the other hand your lawyers, and your public relations people, will obviously and correctly tell you that that is insane: If anything goes wrong, those emails will come out, and the headlines will say “Designed by Clowns,”
This argument is essentially “truth is bad”.
We try to pretend that making problems sound worse than they really are, in order to compel action, is not lying. But it really is. This complaint sounds like “we want to get the benefits of lying, but not the harm”. If you’re overstating a problem in order to get group A to act in ways that they normally wouldn’t, don’t be surprised if group B also reacts in ways that they normally wouldn’t, even if A’s reaction helps you and B’s reaction hurts you. The core of the problem is not that B gets to hear it, the core of the problem is that you’re being deceitful, even if you’re exaggerating something that does contain some truth.
(Also, this will result in a ratchet where every decision that engineers object to is always the worst, most disastrous, decision ever, because if your goal is to get someone to listen, you should always describe the current problem as the worst problem ever.)
The epistemic immune system serves a purpose—some things are very difficult to reason out in full and some pitfalls are easy to fall in unknowingly. If you were a perfect reasoner, of course, this wouldn’t matter, but the epistemic immune system is necessary because you’re not a perfect reasoner. You’re running on corrupted hardware, and you’ve just proposed dumping the error-checking that protects you from flaws in the corrupted hardware.
And saying “we should disable them if they get in the way of accurate beliefs” is, to mix metaphors, like saying “we should dispense with the idea of needing a warrant for the police to search your house, as long as you’re guilty”. Everyone thinks their own beliefs are accurate; saying “we should get rid of our epistemic immune system if it gets in the way of accurate beliefs” is equivalent to getting rid of it all the time.
Under what circumstances do you get people telling you they are fine? That doesn’t happen to me very much—”I’m fine” as part of normal conversation does not literally mean that they are fine.
“if it’s ok to do A or B then it’s fine to run an experiment on A vs B”
Allowing A and B, and allowing an experiment on A vs. B, may create different incentives, and these incentives may be different enough to change whether we should allow the experiment versus allowing A and B.
Luckily for you, there definitely exists a rule that tells you the best possible move to play for every given configuration of pieces—the rule that tells you the move that maximizes the probability of victory (or, since draws exist and may be acceptable, the move that minimizes the probability of defeat).
If your opponent is a perfect player, each move has a 0% or 100% probability of victory. You can only maximize it in a trivial sense.
If your opponent is an imperfect player, your best move is the one that maximizes the probability of victory given your opponent’s pattern of imperfection. Depending on what this pattern is, this may also mean that each move has a 0% or 100% probability of victory.
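To illustrate (a toy subtraction game stands in for chess, since the point is only about how values propagate under perfect play): backward induction gives every position a definite win or loss, so the “probability of victory” of any move is already 0 or 1.

```python
from functools import lru_cache

# Toy game: a pile of n stones; each player removes 1, 2, or 3 stones;
# whoever takes the last stone wins. Chess is vastly bigger, but the same
# backward induction applies in principle.

@lru_cache(maxsize=None)
def wins(n):
    # True if the player to move wins with perfect play from a pile of n.
    if n == 0:
        return False  # no move available: the previous player took the last stone
    return any(not wins(n - take) for take in (1, 2, 3) if take <= n)

for n in range(1, 11):
    print(n, "win" if wins(n) else "loss")
# Every position is a definite win or loss -- probability 1 or 0, nothing
# in between. (A game with draws just adds a third definite value.)
```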
Your process of deciding what to do may at some point include simulating Omega and Omicron. If so, this means that when Omega and Omicron are simulating you, they are now trying to solve the Halting Problem. I am skeptical that Omega or Omicron can solve the Halting Problem.
I would suggest that this is ameliorated by the following:
- Nobody actually believes that you are to blame for every bad consequence of things you do, no matter how indirect. A conscientious person is expected to research and know some of the indirect consequences of his actions, but this expectation doesn’t go out to infinity.
- While you don’t get credit for unintended good consequences in general, you do get such credit in some situations. Specifically, if the good consequence is associated with a bad consequence, you are allowed to get credit for the good consequence and trade it off against the bad consequence. If I buy a tomato, bad consequences of this (someone else can’t get one) are balanced off against good consequences (the store knows to order extra tomatoes next week) because they are both part of the same process. On the other hand, I can’t offset a murder by saving two drowning victims, because the acts are not entwined and I could do one without doing the other.
How can you (in general) conclude something by examining the source code of an agent, without potentially implicating the Halting Problem?
I think there’s a difference between “Most of the IRS tax code is reasonable” and “Most of the instances where the IRS tax code does something are instances where it does reasonable things.” Not all parts of the tax code are used equally often. Furthermore, most unreasonable instances of a lot of things will be rare as a percentage of the whole because there is a large set of uncontroversial background uses. For instance, consider a completely corrupt politician who takes bribes—he’s not going to be taking a bribe for every decision he makes and most of the ones he does make will be uncontroversial things like “approve $X for this thing which everyone thinks should be approved anyway”.
“I want employees to ask themselves whether they are willing to have any contemplated act appear the next day on the front page of their local paper—to be read by their spouses, children and friends—with the reporting done by an informed and critical reporter.”
Leaving out “parents” gets rid of some of the obvious objections, but even then, I don’t want my children to know about my sexual fetishes. Other objections may include, for instance, letting your friends know that you voted for someone who they think will ruin the country. And I certainly wouldn’t want rationalist-but-unpopular opinions I hold to appear on the front page of the local paper for everyone to see. (Go ahead, see what happens when the front page of the newspaper announces that you think you should kill a fat man to stop a trolley.) This aphorism amounts to “never compartmentalize your life”, which doesn’t seem very justifiable.
Bob does not know X. That’s why Alice is telling Bob in the first place.
Conversational phrases aren’t supposed to be interpreted literally. “Everybody knows” never means “literally every single person knows”. This is about equivalent to complaining that people say “you’re welcome” when the person really wouldn’t be welcome under some circumstances.
Don’t be the literal-minded Internet guy who thinks this way.
I think the word “unbiased” there may be a typo; your statement would make a lot more sense if the word you meant to put there was actually “biased”.
I meant “unbiased” in scare quotes: typical newsfeeds that are claimed to be unbiased in the real world (but may not actually be).
Tell this to the people who named GIMP.