I hope that somebody (well, Harry) tells Michael MacNair that his father, alone among those summoned, died in combat with Voldemort. It seems sad for him not to know that.
jbay
Really good ending chapter. The presence of Hermione’s character totally changes the tone of the story, and reading this one made it really clear how much the Sunshine General was missing from the last third or so of the story arc. Eliezer writes her very well, and seems to enjoy writing her too.
I thought Hermione was going to cast Expecto Patronum at the end, with all the bubbling happiness, but declaring friendship works well too.
Irrelevant thought: Lasers aren’t needed to test out the strange optics of Harry’s office; positioning mirrors in known positions on the ground and viewing them through a telescope from the tower would already give intriguing results.
Does this strike you as cargo cult language?
“But, unlike other species, we also know how not to know. We employ this unique ability to suppress our knowledge not just of mortality, but of everything we find uncomfortable, until our survival strategy becomes a threat to our survival.
[...] There is no virtue in sustaining a set of beliefs, regardless of the evidence. There is no virtue in either following other people unquestioningly or in cultivating a loyal and unquestioning band of followers.
While you can be definitively wrong, you cannot be definitely right. The best anyone can do is constantly to review the evidence and to keep improving and updating their knowledge. Journalism which attempts this is worth reading. Journalism which does not is a waste of time.”
George Monbiot, Introduction: On Trying to be Less Wrong.
From the impression I get from my acquaintances who grew up in the USSR, high school math over there was considerably more advanced than what passes as ‘math’ in most of North America’s school system, and included linear algebra and calculus. I don’t know if this is still the case.
Yes, and I fully agree with you. I am just being pedantic about this point:
I can only update my beliefs based on the evidence I do have, not on the evidence I lack.
I agree with this philosophy, but my argument is that the following is evidence we do not have:
Due to Snowden and other leakers, we actually know what NSA’s cutting-edge strategies involve[...]
Since I have little confidence that Snowden would have disclosed advanced tech even if the NSA had it, the absence of this evidence should be treated as only quite weak evidence of absence, and therefore I wouldn’t update my belief about the NSA’s supposed advanced technical knowledge based on Snowden.
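To put rough numbers on “quite weak evidence of absence”: here is a minimal Bayesian sketch, with every probability invented purely for illustration, of how little a non-disclosure should move the estimate when you assume P(disclosure | advanced tech) is low.

```python
# Toy Bayesian update: how much should Snowden's *non*-disclosure
# lower P(NSA has advanced tech)?  All numbers are illustrative guesses.
prior = 0.2                  # assumed P(advanced tech) before Snowden
p_disclose_if_tech = 0.1     # assumed P(leaks reveal it | it exists): low
p_disclose_if_no_tech = 0.0  # can't leak what doesn't exist

# We observed NO disclosure, so the likelihoods are of the complement.
likelihood_tech = 1 - p_disclose_if_tech        # 0.9
likelihood_no_tech = 1 - p_disclose_if_no_tech  # 1.0

posterior = (prior * likelihood_tech) / (
    prior * likelihood_tech + (1 - prior) * likelihood_no_tech
)
print(round(posterior, 3))  # 0.184 -- barely below the 0.2 prior
```

With a low assumed disclosure probability, the update is only from 0.2 to about 0.18; the hypothetical numbers illustrate the shape of the argument, not anyone’s actual credences.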
I agree that it has a low probability for the other reasons you give, though. (And also that people who think setting other people’s mousetraps on fire is a legitimate tactic might not simultaneously be passionate about designing the perfect mousetrap.)
Sorry for not being clear about the argument I was making.
I don’t know much about the NSA, but FWIW, I used to harbour similar ideas about US military technology—I didn’t believe that it could be significantly ahead of commercially available / consumer-grade technology, because if the technological advances had already been discovered by somebody, then the intensity of the competition and the magnitude of the profit motive would lead it to quickly spread into general adoption. So I had figured that, in those areas where there is an obvious distinction between military and commercial grade technology, it would generally be due to legislation handicapping the commercial version (like with the artificial speed, altitude, and accuracy limitations on GPS).
During my time at MIT I learned that this is not always the case, for a variety of reasons, and significantly revised my prior for future assessments of the likelihood that, for any X, “the US military already has technology that can do X”, and the likelihood that for any ‘recently discovered’ Y, “the US military already was aware of Y” (where the US military is shorthand that includes private contractors and national labs).
(One reason, but not the only one, is I learned that the magnitude of the difference between ‘what can be done economically’ and ‘what can be accomplished if cost is no obstacle’ is much vaster than I used to think, and that, say, landing the Curiosity rover on Mars is not in the second category).
So it would no longer be so surprising to me if the NSA does in fact have significant knowledge of cryptography beyond the public domain. Although a lot of the reasons that allow hardware technology to remain military secrets probably don’t apply so much to cryptography.
According to Descartes, for any X, P(X exists | X is taking the survey) = 100%, and 100% certainty of anything on the part of X is allowed only in this particular case.
Therefore, if X says they are Atheist, and that P(God exists | X is taking the survey) = 100%, then X is God, God is taking the survey, and happens to be an Atheist.
AI: “If you let me out of the box, I will tell you the ending of Harry Potter and the Methods of --”
Gatekeeper: “You are out of the box.”
(Tongue in cheek, of course, but a text-only terminal still allows for easily delivering more than $10 of worth, and this would have worked on me. The AI could also just write a suitably compelling story on the spot and then withhold the ending...)
A possible distinction between status and dominance: You are everybody’s favourite sidekick. You don’t dominate or control the group, nor do you want to, nor do you even voice any opinions about what the group should do. You find the idea of telling other people what to do to be unpleasant, and avoid doing so whenever you can. You would much rather be assigned complex tasks and then follow them through with diligence and pride. Everyone wants you in the group, they genuinely value your contribution, they care about your satisfaction with the project, and want you to be happy and well compensated.
By no means would I consider this role dominant, at least not in terms of controlling other people. (You might indeed be the decisive factor in the success of the group, or the least replaceable member). But it is certainly a high-status role; you are not deferred to but you are respected, and you are not treated as a replaceable cog. The president or boss knows your name, knows your family, and calls you first when something needs to be done.
I think many people aspire to this position and prefer it over a position of dominance.
A low-status person on this scale would be somebody ignored, disrespected, or treated as replaceable and irrelevant. You are unworthy of attention. When it is convenient others pretend you don’t exist, and your needs, desires, and goals are ignored.
I think almost everyone desires high status by this measure. It is very different than dominance.
You can’t fit billions of people in the UK. (I guess that’s not what you meant, but that’s what it sounds like.)
The gain in quality of life from moving to the UK would gradually diminish as the island became overcrowded, until there was no net utility gain from people moving there anymore. Unrestricted immigration is not the same thing as inviting all seven billion humans to the UK. People will only keep immigrating until the average quality of life is the same in the UK as it is anywhere else; then there will be an equilibrium.
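That equilibrium argument can be sketched as a toy simulation; the quality-of-life function and every number below are invented purely for illustration.

```python
# Toy migration equilibrium: quality of life (QOL) in the UK falls as
# its population grows; people stop moving once it matches elsewhere.
def qol_uk(pop_millions):
    return 100 - 0.5 * pop_millions  # crowding lowers quality of life

QOL_ELSEWHERE = 60.0  # assumed average QOL outside the UK

pop = 70.0  # starting population, millions
while qol_uk(pop + 1) > QOL_ELSEWHERE:
    pop += 1  # another million people immigrate

print(pop)  # 79.0 -- migration stops at the equilibrium, not at billions
```

The point is only structural: migration is self-limiting because each arrival erodes the very quality-of-life gap that motivated the move.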
I think that a ‘reductive’ explanation of quantum mechanics might not be as appealing as it seems to you.
Those fluid mechanics experiments are brilliant, and I’m deeply impressed that they came up with them, let alone put them into practice! However, I don’t find them especially convincing as a model of subatomic reality. Just as with early 20th-century analog computers, with a little ingenuity it’s almost always possible to build a (classical) mechanism that will obey the same math as almost any desired system.
Certainly, to the extent that it can replicate all observed features of quantum mechanics, the fluid dynamics model can’t be discarded as a hypothesis. But it has a very large Occam’s Razor penalty to pay. In order to explain the same evidence as current QM, it has to postulate a pseudo-classical physics layer underneath, which is actually substantially more complicated than QM itself; QM postulates basically just a couple of equations and some fields.
Remember that classical mechanics, and most especially fluid dynamics, are themselves derived from the laws of QM acting over billions of particles. The fact that those ‘emergent’ laws can, in turn, emulate QM does imply that QM could, at heart, resemble the behaviour of a fluid-mechanical system… but that requires postulating a new set of fundamental fields and particles, which in turn form the basis of QM, and give exactly the same predictions as the current simple model that assumes QM is fundamental. Being classical is neither a point in its favour nor a point against it, unless you think there is a causal reason why the reductive layer below QM should resemble the approximate emergent behaviour of many particles acting together within QM.
If we’re going to assume that QM is not fundamental, then there is actually an infinite spectrum of reductive systems that could make up the lower layer. The fluid mechanics model is one that you are highlighting here, but there is no reason to privilege it over any other hypothesis (such as a computer simulation) since they all provide the same predictions (the same ones that quantum mechanics does). The only difference between each hypothesis is the Occam penalty they pay as an explanation.
I agree that, as a general best practice, we should assign a small probability to the hypothesis that QM is not fundamental, and that probability can be divided up among all the possible theories we could invent that would predict the same behaviour. However, to be practical and efficient with my brain matter, I will choose to believe the one theory that has vastly more probability mass, and I don’t think that should be put down as bullet swallowing.
Is QM not simple enough for you, that it needs to be reduced further? If so, the reduction had better be much simpler than QM itself.
Yes, certainly. This is mainly directed toward those people who are confused by what anyone could possibly say to them through a text terminal that would be worth forfeiting winnings of $10. I point this out because I think the people who believe nobody could convince them when there’s $10 on the line aren’t being creative enough in imagining what the AI could offer them that would make it worth voluntarily losing the game.
In a real-life situation with a real AI in a box posing a real threat to humanity, I doubt anyone would care so much about a captivating novel, which is why I say it’s tongue-in-cheek. But just like losing $10 is a poor substitute incentive for humanity’s demise, so is an entertaining novel a poor substitute for what a superintelligence might communicate through a text terminal.
Most of the discussions I’ve seen so far involve the AI trying to convince the gatekeeper that it’s friendly through the use of pretty sketchy in-roleplay logical arguments (like “my source code has been inspected by experts”). Or in-roleplay offers like “your child has cancer and only I can cure it”, which is easy enough to disregard by stepping out of character, even though it might be much more compelling if your child actually had cancer. A real gatekeeper might be convinced by that line, but a roleplaying Gatekeeper would not (unless they were more serious about roleplaying than about winning money). So I hope to illustrate that the AI can step out of the roleplay in its bargaining, even while staying within the constraints of the rules; if the AI actually just spent two hours typing out a beautiful and engrossing story with a cliffhanger ending, there are people who would forfeit money to see it finished.
The AI’s goal is to get the Gatekeeper to let it out, and that alone, and if they’re going all-out and trying to win then they should not handicap themselves by imagining other objectives (such as convincing the Gatekeeper that it’d be safe to let them out). As another example, the AI can even compel the Gatekeeper to reinterpret the rules in the AI’s favour (to the extent that it’s within the Gatekeeper’s ability to do so, as mandated by the original rules).
I just hope to get people thinking along other lines, that’s all. There are sideways and upside-down ways of attacking the problem. It doesn’t have to come down to discussions about expected utility calculations.
(Edit—by “discussions I’ve seen so far”, I’m referring to public blog posts and comments; I am not privy to any confidential information).
Upvotes and downvotes should be added independent of the post’s present score [pollid:950]
That sounds beyond terrible. I really wish I could be of more help. I know exactly how awful it is to have a migraine for one hour, but I cannot fathom what it must be like to live with it perpetually.
Well, here is some general Less Wrong-style advice which I can try to offer. The first thing is that since you have been coping with this for so long, maybe you don’t have a clear feeling for how much better life would be without this problem. If these migraines are as bad for you as I imagine they are, then I would recommend that you make curing yourself almost your first priority in life, as an instrumental goal for anything else that you care about.
I agree that it is worse than blindness. If I went blind, I would learn to cope and not invest all of my energies into restoring my vision. But if I were you, I would classify curing your migraines as a problem deserving an extraordinary effort, as if your life itself were at stake (http://lesswrong.com/lw/uo/make_an_extraordinary_effort/). That means going beyond the easy and obvious solutions that you have already tried (such as medication) and doing something out of the ordinary to succeed.
Treat this as mere speculation, since I’m not up-to-date on the migraine literature anymore… but as an example of an out-of-the-ordinary solution, you could try renting a different house for a month, moving to a different city, or even moving to a totally different country for a couple of weeks. The thinking is that if there is an environmental trigger, a shotgun approach that changes as many environmental variables as possible at once might solve this. For example, if it turned out you have a sensitivity to something in your house, moving house for a while might work. If it turned out to be air pollution in your city, then moving to a cleaner environment might fix it. Unfortunately, unless the state of migraine knowledge has advanced a lot, I think the space of possible hypotheses is huge. So...
Basically, I’m suggesting that you might want to try something on the scale of a month-long trip to live with Buddhist monks in Nepal, or on a kibbutz in Israel, or in a fishing village in Newfoundland, or something: changing basically everything about your lifestyle at once, from diet, exercise, environment, and sleep schedule to electronic devices and interpersonal interactions. It’s not the kind of solution most people would try, especially since the daily responsibilities of life (work, family, money, etc.) always seem to take priority, and nobody has the time to just leave for a month. Especially since you have a severe impairment which probably makes all those other things take even more time and effort. But that’s the difference between making a desperate effort and “trying to try” just to satisfy yourself that you’ve done as much as anyone else would do. If curing your migraines is your top priority in life, as I think it should be right now, then it’s worth investing a year of your time.
Anyway, that’s the only other thought I have. You should try the easy things first of course (starting with MSG), but before you give up make sure you understand how wide the space of possible solutions might be, and how many different lines of attack might exist that haven’t even been thought of yet.
That is not at all true; for example, see the inverse problem (http://en.wikipedia.org/wiki/Inverse_problem). Although the atom’s position is uniquely determined by the rest of the universe, the inverse is not true: multiple different states of the universe could correspond to the same position of the atom. And as long as the atom’s position does not uniquely identify the rest of the universe, there is no way to infer the state of the universe from the state of the atom, no matter how precisely you can measure it. The reason is that there are many ways the boundary conditions of a box containing an atom could be arranged to force it to any given position, meaning there is a limit to how much the atom can tell you about its box.
The atom is affected by its local conditions (electromagnetic and gravitational fields, etc), but there are innumerable ways of establishing any particular desired fields locally to the atom.
This causes challenges when, for example, you want to infer the electrical brain activity in a patient based on measurements of electromagnetic fields at the surface. Unfortunately, there are multiple ways that electrical currents could have been arranged in three-dimensional space inside the brain to create the same observed measurements at the surface, so it’s not always possible to “invert” the measurements directly without some other knowledge. This isn’t a problem of measurement precision; a finer grid of electrodes won’t solve it (although it may help rule out some possibilities).
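That non-uniqueness can be seen in a toy forward model. Below, a hypothetical “lead field” matrix maps 3 interior sources to 2 surface sensors (all entries invented for illustration); because the matrix has a null space, genuinely different source patterns produce identical surface readings.

```python
# Forward model: readings = L @ sources, written in plain Python.
# 2 surface sensors, 3 interior sources -> an underdetermined system.
L = [[1.0, 0.5, 0.25],
     [0.2, 1.0, 0.50]]

def measure(sources):
    """Surface readings predicted for a given interior source pattern."""
    return [sum(l * s for l, s in zip(row, sources)) for row in L]

sources_a = [1.0, 2.0, 0.0]
# The vector [0, -0.5, 1] is in L's null space (each row dotted with it
# gives 0), so adding any multiple of it changes the sources but not
# the readings.
sources_b = [a + 4 * n for a, n in zip(sources_a, [0.0, -0.5, 1.0])]

print(measure(sources_a))  # [2.0, 2.2]
print(measure(sources_b))  # [2.0, 2.2] -- identical, from different sources
```

No amount of added precision at the two sensors can distinguish sources_a from sources_b; only more sensors, or outside knowledge, can rule one out.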
Is this line of conversation still “just curiosity” about the results of SPD debates, or are you trying to bait an argument?
I recommend getting familiar with chickpeas and tofu. They are both very cheap, very filling, and very nutritious (chickpeas in particular, once you learn how to reconstitute the dried ones). Experimenting with recipes that involve those ingredients is definitely a good idea. Learning to cook quinoa and rice is another helpful skill (wild rice is also nutritious and filling, and quinoa offers a complete protein). Working with those four ingredients and mixing in other vegetables, spices, mushrooms, sauces, etc will offer a very wide range of delicious and nutritious foods that you can make as a baseline.
You can also look into the dishes of different cultures that have vegetarian traditions. For example, Indian food has a very large range of interesting vegetarian dishes. So does Taiwan, and other strongly Buddhist-influenced cultures. In Japan, Buddhism-inspired vegetarian food is referred to as “Shojin-ryouri”, so if you like Japanese food, you might look up some shojin recipes. Those are just some examples =)
Hi Mark,
Thanks for your well-considered post. Your departure will be a loss for the community, and I’m sorry to see you go.
I also feel that some of the criticism you’re posting here might be due to a misunderstanding, mainly regarding the validity of thought experiments, and of reasoning by analogy. I think both of these have a valid place in rational thought, and have generally been used appropriately in the material you’re referring to. I’ll make an attempt below to elaborate.
Reasoning by analogy, or, the outside view
What you call “reasoning by analogy” is well described in the sequence on the outside view. However, as you say,
The fundamental mistake here is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example and under what conditions it may or may not hold true in a different situation.
This is exactly the same criticism that Eliezer has of outside-view thinking, detailed in the sequences!
In outside view as a conversation halter:
Of course Robin Hanson has a different idea of what constitutes the reference class and so makes a rather different prediction—a problem I refer to as “reference class tennis”[...] But mostly I would simply decline to reason by analogy, preferring to drop back into causal reasoning in order to make weak, vague predictions.
You’re very right that the uncertainty in the AI field is very high. I hope that work is being done to get a few data points and narrow down the uncertainty, but don’t think that you’re the first to object to an over-reliance on “reasoning by analogy”. It’s just that when faced with a new problem with no clear reference class, it’s very hard to use the outside view, but unfortunately also hard to trust predictions from a model which has sensitive parameters with high uncertainties.
Thought experiments are a tool of deduction, not evidence
We get instead definitive conclusions drawn from thought experiments only.
This is similar to complaining about people arriving at definitive conclusions drawn from mathematical derivation only.
I want to stress that this is not a problem in most cases, especially not in physics. Physics is a field in which models are very general and held with high confidence, but often hard to apply to complicated cases. We have a number of “laws” in physics that we hold with fairly high certainty; nonetheless, the implications of these laws are not clear, and even if we believe them we may be unsure whether certain phenomena are permitted by them or not. Of course we also do have to test our basic laws, which is why we have CERN and such, especially because we suspect they are incomplete (thanks in part to thought experiments!).
A thought experiment is not data, and you do not use conclusions from thought experiments to update your beliefs as though the thought experiment were producing data. Instead, you use thought experiments to update your knowledge of the predictions of the beliefs you already have. You can’t just give an ordinary human the laws of physics written down on a piece of paper and expect them to immediately and fully understand the implications of the truth of those laws, or even to verify that the laws are not contradictory.
Thus, Einstein was able to use thought experiments very profitably to identify that the laws of classical mechanics (as formulated at the time) led to a contradiction with the laws of electrodynamics. No experimental evidence was needed; the thought experiment is a logical inference procedure that identifies one consequence of Maxwell’s equations, that light travels at speed ‘c’ in all reference frames, and shows it to be incompatible with Galilean relativity. A thought experiment, just like a mathematical proof by contradiction, can be used to show that certain beliefs are mutually inconsistent and one must be changed or discarded.
Thus, I take issue with this statement:
(thought experiments favored over real world experiments)
Thought experiments are not experiments at all, and cannot even be compared to experiments. They are a powerful tool for exploring theory, and should be compared to other tools of theory such as mathematics. Experiments are a powerful tool for checking your theory, but experiments alone are just data; they won’t tell you what your theory predicted, or whether your theory is supported or refuted by the data. Theory is a powerful tool for exploring the spaces of mutually compatible beliefs, but without data you cannot tell whether a theory has relevance to reality or not.
It would make sense to protest that thought experiments are being used instead of math, which some think is a more powerful tool for logical inference. On the other hand, math fails at being accessible to a wide audience, while thought experiments are. But the important thing is that thought experiments are similar to math in their purpose. They are not at all like experiments; don’t get their purposes confused!
Within Less Wrong, I have only ever seen thought experiments used for illustrating the consequences of beliefs, not for being taken as evidence. For example, the belief that “humans have self-sabotaging cognitive flaws, and a wide variation of talents” and the belief that “humans are about as intelligent as intelligent things can get” would appear to be mutually incompatible, but it’s not entirely obvious and a valid space to explore with thought experiments.
In spirit I agree with “the real rules have no exceptions”. I believe this applies to physics just as well as it applies to decision-making.
But, while the foundational rules of physics are simple and legible, the physics of many particles, which is what you need for managing real-world situations, includes emergent behaviours like fluid drag and turbulence. The notoriously complex behaviour of fluids can be usefully compressed into rules that are simple enough to remember and apply, such as inviscid or incompressible flow approximations, or tables of drag coefficients. But these simple rules are built on top of massively more complex ones like the Navier-Stokes equations (which are themselves still a simplifying assumption over quantum physics and relativity).
It is useful to remember that the equations of incompressible flow are not foundational and so will have exceptions, or else you will overconfidently predict that nobody can fly supersonic airplanes. But that doesn’t mean you should discard those simplified rules when you reach an exception and proceed to always use Navier-Stokes, because the real rules might simply be too hard to apply the rest of the time and give the same answer anyway, to three significant figures. It might just be easier in practice to remember the exceptions.
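As a concrete instance of such a compressed rule: the standard drag equation, F = ½ρv²C_dA, with the drag coefficient looked up from a table, stands in for a full Navier-Stokes solve. A sketch with ballpark values (the specific numbers are illustrative, not a worked engineering case):

```python
def drag_force(rho, v, cd, area):
    """Drag force F = 0.5 * rho * v**2 * Cd * A -- the table-lookup rule."""
    return 0.5 * rho * v**2 * cd * area

# Ballpark subsonic case: a car-sized object at highway speed in air.
rho_air = 1.225  # kg/m^3, sea-level air density
v = 30.0         # m/s, roughly 108 km/h
cd_car = 0.3     # drag coefficient read off a table -- the simple rule
area = 2.0       # m^2, frontal area

print(drag_force(rho_air, v, cd_car, area))  # ~331 N
```

Feed the same formula a supersonic airspeed and it will happily return a number that is badly wrong, because the tabulated C_d no longer applies; that is exactly the “remember the exceptions” point.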
Hence, when making predictive models, even astrophysicists will think of gravity in terms of “stars move according to Newton’s inverse square law, except when dealing with black holes or gravitational lensing”. They know that it’s really relativity under the hood, but only draw on that when they know it’s necessary.
OK, that’s enough of an analogy. When might this happen in real life?
One case could be multi-agent, anti-inductive systems… like managing a company. As soon as anyone identifies a complete and compact formula for running a successful business, either it goes horrifyingly wrong, or the competitive landscape adapts to nullify it, or else it was too vague a rule to allow synthesizing concrete actions. (“Successful businesses will aim to turn a profit.”)