traditional culture =/= the human brain
uh, are decision theorists really this moronic or is yudkowsky pulling my leg? the only thing even remotely vexing about this lies in judging the accuracy of the predictor’s algorithm. dunno if a table is doable, but here goes nothing: (redoing it, since the table doesn’t display)
Accuracy: perfect
  Judgement perfect: B (1,000,000)
  Judgement imperfect: both (1,000)
Accuracy: imperfect (provided luck prevails)
  Judgement perfect: B (1,000,000*)
  Judgement imperfect: both (1,001,000)
therefore, for high risk & high profit, pick both; for low risk & low profit, pick B. “problem” solved.
(* but with the added self-righteous pleasure of having Defied Reason. blame stupid theistic and classical-rationalist thought processes that abstract away the underlying complexities in matters of reason and faith. the western proletariat (dunno how else to refer to the class that collectively falls for this shit, really) needs to learn the difference between rationalism and rationalizationism, especially as the latter was adopted wholesale from christianity by an alarming number of modern philosophers.)
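(for concreteness, here’s the gamble as a quick expected-value sketch. this is my own framing, not anything from the problem statement: p stands for the predictor’s accuracy, which the problem never actually specifies.)

```python
# hypothetical sketch: expected payoff as a function of predictor accuracy p
def expected_value(choice: str, p: float) -> float:
    """expected payoff for picking box B alone ('one') or 'both' boxes."""
    if choice == "one":
        # B holds the 1,000,000 only if the predictor foresaw one-boxing
        return p * 1_000_000
    # two-boxing: 1,000 guaranteed, plus 1,000,000 if the predictor erred
    return 1_000 + (1 - p) * 1_000_000

for p in (1.0, 0.99, 0.5):
    print(p, expected_value("one", p), expected_value("both", p))
```

(one-boxing wins whenever p exceeds about 0.5005. the entire dispute is over what value to plug in for p.)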
this seems so bloody obvious to me! if i’m wrong, i’d really appreciate an explanation.
but then the claim of the philosophers responsible for working on this problem is simply incorrect. either that, or they have a despicably narrow-minded view of reason, in which case the thing they call “reason” is an uninteresting artifact of 21st century philosophical categorization. they might as well be using “reason n. dog poo,” for all the help this is in charting out the role cold calculation plays in this scenario, and my definition would at least require an interesting defense to justify the novel usage. this is a gamble, plain and simple, with fairly straightforward odds:
accuracy of algorithm: perfect
  judgement of accuracy perfect: 1,000,000
  judgement of accuracy imperfect: 1,000
accuracy of algorithm: imperfect and generous
  judgement of accuracy perfect: 1,000,000
  judgement of accuracy imperfect: 1,001,000
accuracy of algorithm: imperfect and miserly
  judgement of accuracy perfect: 0
  judgement of accuracy imperfect: 1,000
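(the same table as a lookup, in code; my own trivial encoding:)

```python
# payoff by (actual accuracy of the algorithm, your judgement of that accuracy)
PAYOFF = {
    ("perfect", "perfect"): 1_000_000,              # you trust it and take B
    ("perfect", "imperfect"): 1_000,                # you doubt it and take both
    ("imperfect-generous", "perfect"): 1_000_000,
    ("imperfect-generous", "imperfect"): 1_001_000,
    ("imperfect-miserly", "perfect"): 0,
    ("imperfect-miserly", "imperfect"): 1_000,
}
```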
as there can be no question that the entire problem revolves around the nature of the predictor’s algorithm, decision theorists must be morons if this is an important problem to them. the only reason it may appear interesting is that, as they have framed everything, the whole thing defies their narrow definition of “reason”. this is a definition-oriented tug-of-war which dissolves as soon as you apply what eliezer says about splitting words. unfettered reason, being free to transcend the manner in which the problem has been set up, has no use for boundaries as artificial as conventional definitions. substitute “reason” with, say, “calculation”, and i no longer see what’s so special about it.
Something along those lines, but anyway, how does that NOT bring this decision into the realm of calculation?
Thinking about it soberly, the framing of this problem reveals an even deeper lack of scrutiny of its premises. A rational thinker’s first question ought to be: How is it even possible to construct a decision tree that predicts my intentions with near-perfect success before I myself am aware of them? The accuracy of such a system would depend on knowledge of human neurology, time travel, and/or who knows what else that our civilization is nowhere near obtaining, placing the calculation of the odds associated with this problem far beyond the purview of present-day science. (IOW, I believe the failure to reason along lines that combine statistics with real-world scientific understanding is responsible for the problem’s rather mystical overtones at first sight. Pay no attention to the man behind the curtain! And really, rare events are rare, but they do happen, and are no less real on account of their rarity.)
In any case, thanks for the response.
(Actually, I’m not even clear on the direction of causality under the predictor’s hood. Suppose the alien gazes into a crystal ball showing a probable future and notes down my choice. If so, then he can see the course of action he’d probably go with as well! If he changes that choice, does that say anything about my fidelity to the future he saw? Depends on the mechanism of his crystal ball, among many other things. Or does he scan my brain and simply simulate the chemical reactions it will undergo in the next five minutes? How accurate is the model carrying out this simulation? How predictable is the outcome via these techniques in the first place? There are such murky depths here that no matter what method one imagines, the considerations based on which he ultimately places the million dollars are of supreme importance.)
(What, total karma doesn’t reach the negatives? Why not?)
Well, the more I think about this, the more it seems to me that we’re dealing with a classic case of the unspecified problem.
You are standing on the observation deck of Starship Dog Poo orbiting a newly discovered planet. The captain inquires as to its color. What do you answer him?
Uh, do I get to look at the planet?
No.
… Let me look up the most common color of planets across the universe.
In the given account, the ability attributed to our alien friend is not described in terms that are meaningful in any sense, but is instead ascribed to his “superintelligence”, which is totally irrelevant as far as our search for solutions is concerned. And yet, we’re getting distracted from the problem’s fundamentally unspecified nature by these yarn balls of superintelligence and paradoxical choice, which are automatically Clever Things to bring up in our futurist iconography.
If you think I’m mistaken, then I’d really appreciate criticism. Thanks!
(The above problem is actually a more sensible one, since the relationship of the query to our cache of observed data is at least clear. Newcomb’s Problem, OTOH, leaves the domain of well-understood science completely behind. If, with our current scientific knowledge, we find the alien’s ability utterly baffling given what the problem tells us of his methods, then it would be sheer hubris to label either choice “rational”, because if the very basis for such a judgement exists, then I for one cannot see it. What if you pick B and it turns out to be empty? If that is impossible, then what are the details of the guarantee that that outcome could never occur? The problem’s wrappings, so to speak, make this look like an incomprehensible matter of faith to me. If I have misunderstood something, could someone smart please explain it to me?)
(At the very least, it must be admitted that in our current understanding of the universe, a world of chaotic systems and unidirectional causality, a perfect predictor’s algorithm is a near-impossibility, “superintelligence” or no. All this reminds me of what Eliezer said in his autobiographical sequence: If you want to treat a complete lack of understanding of a subject as an unknown variable and shift it around at your own convenience, then there are definitely limits to that kind of thing.)
(Based on a recommendation, I am now reading Yudkowsky’s paper on Timeless Decision Theory. I’m 7 pages in, but before I come across Yudkowsky’s solution, I’d like to note that choosing evidential decision theory over causal decision theory or vice versa, in itself, looks like a completely arbitrary decision to me. Based on what objective standards could either side possibly justify its subjective priorities as being more “rational”?)
But even thought experiments ought to make sense, and I’m not yet convinced this one does, for the reasons I’ve been ranting about. If the problem does not make sense to begin with, what is its “answer” worth? For me, this is like seeing the smartest minds in the world divided over whether 5 + Goldfish = Sky or 0. I’m asking what the operator “+” signifies in this context, but the problem is carefully crafted to make that very question seem like an unfair imposition.
Here, the power ascribed to the alien, without further clarification, appears incoherent to me. Which mental modules, or other aspects of reality, does it read to predict my intentions? Without that being specified, this remains a trick question. Because if it directly reads my future decision, and that decision does not yet exist, then causality runs backwards. And if causality runs backwards, then the money already being in box B or not makes no difference, because your actual decision NOW is going to determine whether it will have been placed there in the past. So if you’re defying causality, and then equating reason with causality, then obviously the “irrational”, i.e. acausal, decision will be rewarded, because the acausal decision is the calculating one. God I wish I could draw a chart in here.
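(In lieu of a chart, a trivial sketch of the backwards-causality case; my own encoding of the standard payoffs:)

```python
# If causality runs backwards, your present choice fixes what was placed
# in box B "in the past", so the payoffs collapse to a simple lookup:
PAYOFF_IF_ACAUSAL = {
    "one box": 1_000_000,   # B will have been filled because you one-box
    "both boxes": 1_000,    # B will have been left empty because you two-box
}
```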
To whoever keeps downvoting my comments: The faster I get to negative infinity, the happier I’ll be, but care to explain why?
Yes, now I have long-term goals within the community! Or will no one read what I say if it gets too low? That’d be lame, but no matter. I could always keep this account for speaking the truth, and another one for posting the stuff I want other people to see.
You don’t seem to understand what I’m getting at. I’ve already addressed this ineptly, but at some length. If causality does not run backwards, then the actual set of rules involved in the alien’s predictive method, the mode of input it requires from reality, its accuracy, etc., become the focus of calculation. If nothing is known about this stuff, then the problem has not been specified in sufficient detail to propose customized solutions, and we can only make general guesses as to the optimal course of action. (lol The hubris of trying to outsmart unimaginably advanced technology as though it were a crude lie detector reminds me of Artemis Fowl. The third book was awesome.) I only mentioned one ungameable system to explain why I ruled it out as being a trivial consideration in the first place. (Sorry, it isn’t Sunday. No incomprehensible ranting today, only tangents involving children’s literature.)
That raises the question of whether anything analogous to “code” exists, whether anything is modifiable simply by willing it, etc. What if my mind looks like it’s going to opt for B when the alien reads me, and I change my mind by the time it’s my turn to choose? If no such thing ever happens, the problem ought to specify why that is the case, because I don’t buy the premise as it stands.
People predict the behavior of other people all the time.
And they’re proved wrong all the time. So what you’re saying is, the alien predicts my behavior using the same superficial heuristics that others use to guess at my reactions under ordinary circumstances, except he uses a more refined process? How well can that kind of thing handle indecision if my choice is a really close call? If he’s going with a best guess informed by everyday psychological traits, the inaccuracies of his method would probably be revealed before long, and I’d go straight to the numbers.
“be the sort of person who picks one box, then pick both boxes”
I agree, I would pick both boxes if that were the case, hoping I’d lived enough of a one-box-picking life before.
but that the way to be the sort of person that picks one box is to pick one box, because your future decisions are entangled with your traits, which can leak information and thus become entangled with other peoples’ decisions.
I beg to differ on this point. Whether or not I knew I would meet Dr. Superintelligence one day, an entire range of more or less likely behaviors that violate this assertion is very much conceivable, from “I had lived a one-box-picking life when comparatively little was at stake” to “I just felt like picking differently that day.” You’re taking your reification of selfhood WAY too far if you think Being a One-Box Picker by picking one box when the judgement is already over makes sense. I’m not even sure I understand what you’re saying here, so please clarify if I’ve misunderstood things. Unlike my (present) traits, my future decisions don’t yet exist, and hence cannot leak anything or become entangled with anyone.
But what this disagreement boils down to is, I don’t believe that either quality is necessarily manifest in every personality with anything resembling steadfastness. For instance, I neither see myself as the kind of person who would pick one box, nor as the kind who would pick both boxes. If the test were administered to me a hundred times, I wouldn’t be surprised to see a 50-50 split. Surely it would be an exaggeration to say that you claim I already belong to one of these two types, merely unaware of my true inner box-picking nature? If my traits haven’t specialized into either category (and I have no rational motive to hasten the process), does the alien place a million dollars or not? I pity the good doctor. His dilemma is incomparably more black and white than mine.
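(A toy sketch of this unspecialized case, entirely my own construction: a coin flip of an agent against a predictor that must still commit to filling box B or leaving it empty.)

```python
import random

# An agent with no stable box-picking trait: a fair coin flip.
def trial(fill_b: bool) -> int:
    choice = random.choice(["one", "both"])
    b = 1_000_000 if fill_b else 0
    return b if choice == "one" else b + 1_000

# Whatever the predictor commits to, half its predictions about me fail.
for fill_b in (True, False):
    avg = sum(trial(fill_b) for _ in range(100_000)) / 100_000
    print(f"B filled={fill_b}: average payout ~{avg:,.0f}")
```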
To summarize, even if I have mostly picked one box in similar situations in the past, how concrete is such a trait? This process comes nowhere near the alien’s implied infallibility, it seems to me. Therefore, either this process or the method’s imputed infallibility has got to go if his power is to be coherent.
Not only that, if that’s all there is to the alien’s ability, what does this thought experiment say, except that it’s indeed possible for a rational agent to reward others for their past irrationality? (to grant the most meaningful conclusion I DO perceive) That doesn’t look like a particularly interesting result to me. Such figures are seen in authoritarian governments, religions, etc.
knowing the value of Current Observation gives you information about Future Decision.
Here I’d just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time… Sorry, I cannot condone this model as presented.)
Perhaps it can observe your neurochemistry in detail and in real time.
I already mentioned this possibility. Fallible models make the situation gameable. I’d get together with my friends, try to figure out when the model predicts correctly, calculate its accuracy, work out a plan for who picks what, and split the profits between ourselves. How’s that for rationality? To get around this, the alien needs to predict our plan and—do what? Our plan treats his mission like total garbage. Should he try to make us collectively lose out? But that would hamper his initial design.
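(A sketch of the plan, with hypothetical data; the trials would come from friends who have already played:)

```python
# Estimate the model's accuracy from pooled (choice, box_b_was_full) pairs.
# The model guessed right when B was full for a one-boxer, or empty for a
# two-boxer.
def estimate_accuracy(trials: list[tuple[str, bool]]) -> float:
    correct = sum((choice == "one") == b_full for choice, b_full in trials)
    return correct / len(trials)

print(estimate_accuracy([("one", True), ("both", False), ("one", False)]))
# ≈ 0.67 here; with a real estimate of its accuracy in hand, the rest is
# plain arithmetic, and we split the winnings.
```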
(Whether it cares about such games or not, what input the alien takes, when, how, and what exactly it does with said input—everything counts in charting an optimal solution. You can’t just say it uses Method A and then replace it with Method B when convenient. THAT is the point: Predictive methods are NOT interchangeable in this context. (Reminder: Reading my brain AS I make the decision violates the original conditions.))
Perhaps land-ape psychology turns out to be really simple if you’re an omnipotent thought-experiment enthusiast.
We’re veering into uncertain territory again… (Which would be fine if it weren’t for the vagueness of mechanism inherent in magical algorithms.)
The reasoning wouldn’t be “this person is a one-boxer” but rather “this person will pick one box in this particular situation”.
Second note: An entity, alien or not, offering me a million dollars, or anything remotely analogous to this, would be a unique event in my life with no precedent whatever. My last post was written entirely under the assumption that the alien would be using simple heuristics based on similar decisions in the past. So yeah, if you’re tweaking the alien’s method, then disregard all that.
It’s very difficult to be the sort of person who would pick one box in the situation you are in without actually picking one box in the situation you are in.
From the alien’s point of view, this is epistemologically non-trivial if my box-picking nature is more complicated than a yes-no switch. Even if the final output must take the form of a yes or a no, the decision tree that generated that result can be as endlessly complex as I want, every step of which the alien must predict correctly (or be a Luck Elemental) to maintain its reputation for infallibility.
If it’s worse, just do the other thing—isn’t that more “rational”?
As long as I know nothing about the alien’s method, the choice is arbitrary. See my second note. This is why the alien’s ultimate goals, algorithms, etc, MATTER.
(If the alien reads my brain chemistry five minutes before The Task, his past history is one of infallibility, and no especially cunning plan comes to mind, then my bet, given the nature of brain chemistry, would be that not going with one box is silly if I want the million dollars. I mean, he’ll read my intentions and place the money (or not) like five minutes before… At least, that’s what I’ll determine to do before the event. Who knows what I’ll end up doing once I actually get there. Since even I am unsure of the strength of my determination to keep to this course of action once I’ve been scanned, both my conscious mind and the alien’s are freed from culpability. Whatever happens next, only the physical stance is appropriate for the emergent scenario. (“At what point, then, does decision theory apply here?” is what I was getting at.) Anyway, enough navel-gazing and back to Timeless Decision Theory.)
i’d say: you don’t have a phd, therefore you’re not qualified to judge whether or not yudkowsky should have a phd.
it doesn’t; it’s a jocular reductio ad absurdum based on the irrationality of the underlying premise. 9_9 and what field are we talking about here, education? phdology?
naw, i’m talking about what field qualifies you to judge how much not having a phd disqualifies you from judging statements on that subject.
In cases like this, I find ethics grounded in utilitarianism to be a despicably manipulative position. You are not treating people as rational agents, but pandering to their lack of virtue so as to recruit them as pawns in your game. If that’s how you’re going to play, why not manufacture evidence in support of your position if you’re Really Sure your assessment is accurate? A clear line of division between “pandering: acceptable” and “evidence manufacture: unacceptable” is nothing but a temporary, culturally contingent consensus caring nothing for reason or consistency. To predict the future, see the direction in which the trend is headed.
No, I would scrupulously adhere to a position of utmost sincerity. Screw the easily offended customers. If this causes my downfall, so be it. That outcome is acceptable because, personally, if my failure is caused by honesty and goodwill rather than incompetence, I would question whether such a world is worth saving to begin with. I mean, if that is what this enlightened society is like and wants to be like, then I can rather easily imagine our species eventually ending up as the aggressors in one of those alien invasion movies like Independence Day. I keep wondering why, if they evolved in a symbiotic ecosystem analogous to ours, one morally committed individual among their number didn’t wipe out their own race and rid the galaxy of this aimless, proliferating evil. Better still to let them be smothered peacefully under their own absence of self-reflection and their practice of rewarding corruption, without going out of your way to help them artificially reach a position of preeminence from which to bully others.
“Would Isaac Newton have remained a mystic, even in that earlier era, if he’d lived the lives of Galileo and Archimedes instead of just reading about them?”
Possibly, depending on your definition of “mystic”. This is not a simple yes-or-no question, because that which commands universal validity in the real world is nonetheless not fundamental to the human psyche. You value the power of intelligence and incessantly work to refine your art of rationality, and yet you complain of low mental energy. I don’t think that’s necessarily because you are a low-mental-energy person. It could just be that you’re an imperfectly “rational” being and are, like the rest of us, ultimately motivated by a complex interplay of sense and emotion that can only be called poetry. (I’m not saying matters shouldn’t be corrected by transhumanist methods, just observing that that’s how they stand at the moment.) If you were to adopt classical rites known to confer such inspiration, you could be bounding from insight to insight, riding a crest of divine frenzy. Such traditions give you access to poetic frameworks refined by generations of thinkers and consequently able to bestow tremendous power. Instead, you choose to immerse yourself in work and socialization. Those are themselves American Protestant rites, and they DO work, but they seem to suit you poorly. Why keep at it regardless, except to fool the eyes of society?
I would ask: would Newton have had the motivation to discover gravity if he hadn’t been inspired by astrological mysteries? If so, what would his incentive have been? What if modern humanism seemed as insipid to him as it does to me? Elsewhere, I believe you spoke of sacredness not being private, but the fact is, sacredness IS private in the sense that different people find different things sacred, and even if you could list every rational pillar supporting your perspective on the sacredness of a thing that is available to your conscious mind, predictably communicating to others a direct taste of your sensation of holiness would still be an immensely difficult endeavor. I say this because, knowing most of the reasons shuttle launches appear sacred to you, I can readily imagine how someone could find them sacred, and yet I do not share this feeling myself. Exhilarating, tense, joyful, among other things, but sacred? Not really, and I don’t think belief and disbelief enter the picture when we’re exploring the domain of sensation. Hence, private and incommunicable.
I confess, I worry you might have fallen prey to Post-Christian rationalizationism. See, Christians loved to leech the joy and meaning out of life wherever they didn’t understand it, leaving a dry and lifeless husk which they proceeded to arbitrarily label “rational”. Not just informally, but as a matter of church doctrine. They then mocked, dismissed, and, when necessary, acted passive-aggressively toward anyone who disagreed with their point of view, which was effective at keeping people in line after centuries of violent evaporative cooling. Is it perhaps possible that everyone acted as though certain modes of behavior are Obviously Rational, and you believed them without systematically questioning their presuppositions? I first suspected the importance of ritual (in a broad sense of the word) in daily life when studying the tenth and final chapter of this book on Neo-Confucian metaphysics: http://faculty.washington.edu/mkalton/ I’m not sure you’d have the desire to read it with as much patience and forbearance as I’ve had to invest in it.
Who am I to accuse you of unthinking assimilationism anyway? I myself don’t practice any traditional rites, though that’s because the Neoplatonic rites (http://www.youtube.com/watch?v=k-PkooJfLRA), which intrigue me the most, are, as far as I’m aware, lost. Thanks again, Christianity! Only the cheap, populist crap, the Christian, Gnostic, and Hermetic rites, survives from classical antiquity, of which the Christian ones are conveniently superior in terms of quality, having received the most attention and polish. Unfortunately, Christianity, at its core, is a constructivist doctrine with a deep distrust of individual self-cultivation (which existed in the West as in the East in Hellenic times) that does not conform to its self-righteous path of ostentatious self-abasement. On the bright side, several important expository texts have come down to us: http://www.scribd.com/doc/31503637/Proclus-on-the-Theology-of-Plato-all-six-books-plus-a-seventh-by-Thomas-Taylor
Sorry, just… no. I realize it’s been four years, but I had to create an account just to register my disapproval. The question remains, what did you want from the blegg? Vanadium or Palladium? Its glow-in-the-dark property? A gestalt effect arising from the combination of certain salient features? What does any of this have to do with consensus-based definitions?