If you want to read the full thing, rather than just the description, you can download the ebook here. I certainly enjoyed it.
kybernetikos
I’ve heard this contrasted as ‘knowledge’, where you intellectually assent to something and can make predictions from it, and ‘belief’, where you order your life according to that knowledge. But this distinction is certainly not made in normal speech.
A common illustration of this distinction (often told by preachers) is that Blondin the tightrope walker asked the crowd if they believed he could safely carry someone across Niagara Falls on a tightrope, and almost the whole crowd shouted ‘yes’. Then he asked for a volunteer to become the first man ever so carried, at which point the crowd fell silent. In the end the only person he could persuade was his manager.
Yeah, that is a problem with the illustration. However, I don’t think it’s completely devoid of use.
Taking a risk based on some knowledge is a very strong sign of having internalised that knowledge.
Imagine a raffle where the winner is chosen by some quantum process. Presumably under the many worlds interpretation you can see it as a way of shifting money from lots of your potential selves to just one of them. If you have a goal you are absolutely determined to achieve and a large sum of money would help towards it, then it might make a lot of sense to take part, since the self that wins will also have that desire, and could be trusted to make good use of that money.
Now, I wonder if anyone would take part in such a raffle if all the entrants who didn’t win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?
I suppose the goal you were going to spend the money on would have to be of sufficient utility if achieved to offset that in order to make the scenario work. Maybe saving the world, or creating lots of happy simulations of yourself, or finding a way to communicate between them.
I have to admire the cunning of your last sentence.
Or have I accidentally defected? I can’t tell.
EDIT: I think the ‘wizened’ correction was intended to be a joke. When I read your piece originally the idea of you ‘wizening up’ made me smile, and I suspect that the corrector just wanted to share that idea with others who may have missed it.
Most of us allocate a particular percentage to charity, despite the fact that most people would say that almost nothing we spend money on is as important as saving children’s lives.
I don’t know whether you think we overestimate how much we value saving children’s lives, or underestimate how important Xbox games, social events, large TVs and tasty food are to us. Or perhaps you think it’s neither, and that we’re simply being irrational.
I doubt that anyone could consistently live as if the difference between renting a nice flat and renting a dive was one life per month, or as if halving normal grocery consumption for a month saved a child’s life that month, etc. If that’s really the aim, we’re going to have to do a significant amount of emotional engineering.
I also want to stick up for the necessity of analysing the way that a charity works, not just what they do. For example, charities that employ local people and local equipment may save fewer people per dollar in the short term, but may be less likely to create a culture of dependence, and may be more sustainable in the long term. These considerations are important too.
The good you do can compound too. If you save a child’s life at $500, that child might go on to save other children’s lives. I think you might well get a higher rate of interest on the good you do than 5%. There will be some savings rate at which you should save instead of give, but I don’t think we’re near it at the moment.
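The comparison above can be sketched as a toy calculation. The rates here (a 10% ‘social return’ on good done, 5% interest on savings) are illustrative assumptions, not real estimates:

```python
# Toy model: donate $500 now vs. save it at interest rate r and donate later.
# If the good done compounds at rate g (e.g. a saved child goes on to save
# others), donating now wins whenever g > r.

def utility_of_donating_now(amount, g, years):
    """Good done today, compounded at the assumed 'social return' rate g."""
    return amount * (1 + g) ** years

def utility_of_saving_then_donating(amount, r, years):
    """Money saved at rate r for the whole period, then donated."""
    return amount * (1 + r) ** years

now = utility_of_donating_now(500, g=0.10, years=20)      # ~3364
later = utility_of_saving_then_donating(500, r=0.05, years=20)  # ~1327
print(now > later)  # True: a 10% social return beats 5% savings
```

The model is crude (it treats ‘good done’ as a single compounding quantity), but it makes the crossover explicit: saving only beats giving when the financial rate exceeds the social one.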
that I started to wonder why adults would want children to believe in Santa Claus, and whether their reasons for it were actually good.
I think that lots of people have a kind of compulsion to lie to anyone they care about who is credulous, particularly children, about things that don’t matter very much. I assume it’s adaptive behaviour, to try to toughen up their reasoning skills on matters that aren’t so important—to teach them that they can’t rely on even good people to tell them stuff that is true.
A parole board considers the release of a prisoner: Will he be violent again?
I think this is the kind of question that Miller is talking about. Just because a system is correct more often, doesn’t necessarily mean it’s better.
For example if the human experts allowed more people out who went on to commit relatively minor violent offences and the SPRs do this less often, but are more likely to release prisoners who go on to commit murder then there would be legitimate discussion over whether the SPR is actually better.
I think this is exactly what he is talking about when he says
Where AIs compete well, generally they beat trained humans fairly marginally on easy (or even most) cases, and then fail miserably at borderline or novel cases. This can make it dangerous to use them if the extreme failures are dangerous.
Whether or not there is evidence that says this is a real effect I don’t know, but to address it what you really need to measure is total utility of outcomes rather than accuracy.
I have all of the English Wikipedia available for offline searching on my phone. It’s big, sure, but it doesn’t fill the memory card by any means (and this is just the default card that came with the phone).
For offline access on a windows computer, WikiTaxi is a reasonable solution.
I’d recommend that everyone who can should carry around an offline version of Wikipedia. I consider it part of my disaster preparedness, not to mention the fun of learning new things by hitting the ‘random article’ button.
I sneeze quite often. When someone says ‘bless you’, my usual response is ‘and may you also be blessed’. I’ve heard a number of people who had apparently never wondered before say ‘why do we say that?’ after receiving that response.
eliminativists want to prove that humans, like the blue-minimizing robot, don’t have anything of the sort until you start looking at high level abstractions.
Just because something only exists at high levels of abstraction doesn’t mean it’s not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can occasionally explain their behaviour—even if their behaviour is truly explained by the interaction of numerous systems) rather than how these preferences are encoded.
The information in a JPEG file that indicates a particular pixel should be red cannot be traced to a single bit that does nothing else, but that doesn’t mean there isn’t a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether you can find a specific neuron or not is completely irrelevant to their reality.
I agree. In particular I often find these discussions very frustrating because people arguing for elimination seem to think they are arguing about the ‘reality’ of things when in fact they’re arguing about the scale of things. (And sometimes about the specificity of the underlying structures that the higher level systems are implemented on). I don’t think anyone ever expected to be able to locate anything important in a single neuron or atom. Nearly everything interesting in the universe is found in the interactions of the parts not the parts themselves. (Also—why would we expect any biological system to do one thing and one thing only?).
I regard almost all these questions as very similar to the demarcation problem. A higher level abstraction is real if it provides predictions that often turn out to be true. It’s acceptable for it to be an incomplete / imperfect model, although generally speaking if there is another that provides better predictions we should adopt it instead.
This is what would convince me that preferences were not real: At the moment I model other people by imagining that they have preferences. Most of the time this works. The eliminativist needs to provide me with an alternate model that reliably provides better predictions. Arguments about theory will not sway me. Show me the model.
Set up questions that require you to assume something odd in the preamble, and then conclude with something unpalatable (and quite possibly false). This tests whether people can apply rationality even when it goes against their emotional involvement and current beliefs. As well as checking that they reach the conclusion demanded (logic), also give them an opportunity, as part of a later question, to flag the premise that they feel caused the odd conclusion.
Something Bayesian—like the medical test questions where the incidence in the general population is really low, though that specific one has been used so much that loads of people know it. Maybe take some stats from newspaper reports and see if appropriate conclusions can be drawn.
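The medical test question mentioned above works like this (the numbers here are the standard textbook ones, not from the post):

```python
# The classic low-base-rate test question, worked with Bayes' theorem:
# a disease affects 1 in 1000 people; the test catches 99% of cases but
# also gives a false positive 5% of the time.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # 0.019: a positive result still means under a 2% chance
```

The intuitive (wrong) answer is something near 99%; the base rate drags the true posterior below 2%, which is what makes it a good rationality probe.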
“When was the last time you changed your mind about something you believed?” tests people’s ability to apply their rationality.
But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we’d be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is “Don’t kill the traveler,”5 and thus the doctor doesn’t kill the traveler.
This doesn’t follow the spirit of the ‘keeping it secret’ part of the setup. If we know the exact mechanism the doctor uses to make decisions, then we would be able to deduce that he probably saved those five patients with the organs from the missing traveller, so it’s no longer secret. To accept the thought experiment fairly, the doctor has to be certain that nobody will be able to deduce what he’s done.
It seems to me that you haven’t really denied the central point, which is that under consequentialism the doctor should harvest the organs if he is certain that nobody will be able to deduce what he has done.
I noticed that if I’m apathetic about doing a task, then I also tend to be apathetic about thinking about doing the task, whereas tasks that I get done I tend to be so enthusiastic about that I have planned them and done them in my head long before I do them in physicality. My conclusion: apathy starts in the mind and the cure for it starts in the mind too.
The successful replication is said to be by a researcher who has previously studied the effect of “geomagnetic pulsations” on ESP, but I could not locate that earlier work online.
Can we have a prejudicial summary of the previous studies of the 6 researchers who failed to replicate the effect too?
Actually, I’m not sure this matters. If the simulated agent knows he’s not getting a reward, he’d still want to choose so that the nonsimulated version of himself gets the best reward.
So the problem is that the best answer is unavailable to the simulated agent: in the simulation you should one box and in the ‘real’ problem you’d like to two box, but you have no way of knowing whether you’re in the simulation or the real problem.
Agents that Omega didn’t simulate don’t have the problem of worrying whether they’re making the decision in a simulation or not, so two boxing is the correct answer for them.
An agent that must make the decision twice, where the first decision affects the payoff of the second, faces a very different problem from an agent that makes the decision only once. So I think that in reality the problem perhaps does collapse down to an ‘unfair’ one, because the TDT agent is presented with an essentially different problem to a non-TDT agent.
There’s a fairly obvious answer to that stuff in my opinion. Ventus by Schroeder (scifi) covers it nicely. It would be a structure set up by the Atlanteans for control of nature, probably before they ascended and left Earth for the stars.
Edit: It occurs to me that the other possibility would be a simulation, originally invented by the Atlanteans to upload themselves into, or perhaps Muggles were supposed to be NPCs.