I’m having trouble seeing the relevance of either of those posts. Elga’s article is about static self-locating belief, i.e., which of two individuals I should believe myself to currently be. Eliezer seems only to be questioning the coherence of dynamic self-locating belief, i.e., which individual I should believe to be my future self. And I’m not presently sure how the Boltzmann Brain post touches on this at all.
O.K., if general historical edification for largely ignorant laymen like me counts, then there are two Berkeley undergrad courses on iTunes that I love by a professor named Margaret Anderson. The first is on the Second Reich and the second is a more general survey of modern European history. I learned a ridiculously large amount from both and personally found them to be more fluid and engaging than any college course I took.
I don’t really understand what the problem you’re diagnosing is supposed to be or what it is you’re asking for.
Valuable for what? I like the “History of Rome” podcast.
Could you clarify? Which posts are you referring to?
If you do write that article, I’d be very interested to read it.
Sure, but there are vastly more constraints involved in maximizing E(U3). It’s easy to maximize E(U(A)) in such a way as to let E(U(B)) go down the tubes. But if C is positively correlated with either A or B, it’s going to be harder to max-min A and B while letting E(U(C)) plummet. The more accounted-for sources of utility there are, the likelier a given unaccounted-for source of utility X is to be entangled with one of the former, so the harder it will be for the AI to max-min in such a way that neglects X. Perhaps it gets exponentially harder! And humans care about hundreds or thousands of things. It’s not clear that a satisficer that’s concerned with a significant fraction of those would be able to devise a strategy that fails utterly to bring the others up to snuff.
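To make the correlation intuition slightly more concrete, here's a crude toy simulation (entirely my own construction; it models randomly sampled strategies rather than an optimizing agent, and the number of sources and the correlation value are arbitrary). It just checks how often a strategy that clears the threshold on every modeled source still leaves a correlated, unmodeled source below threshold.

```python
# Crude toy sketch, not a model of an actual optimizer: sample random
# "strategies" whose payoffs on four modeled utility sources are each
# correlated with one unmodeled source X, and see how often clearing the
# threshold on all modeled sources still leaves X below threshold.
import numpy as np

rng = np.random.default_rng(0)
n_strategies = 200_000
n_modeled = 4        # accounted-for sources of utility (arbitrary choice)
rho = 0.4            # assumed correlation between X and each modeled source

# Covariance: modeled sources mutually independent, each correlated rho
# with the unmodeled source X (last coordinate).
dim = n_modeled + 1
cov = np.eye(dim)
cov[-1, :n_modeled] = rho
cov[:n_modeled, -1] = rho

payoffs = rng.multivariate_normal(np.zeros(dim), cov, size=n_strategies)
modeled, x = payoffs[:, :n_modeled], payoffs[:, -1]

threshold = 0.0
meets_all = (modeled > threshold).all(axis=1)   # satisfices every modeled goal
x_neglected = x < threshold                     # X goes down the tubes anyway

print("P(X neglected | all modeled goals met):", x_neglected[meets_all].mean())
print("P(X neglected) overall:               ", x_neglected.mean())
```

The conditional probability comes out well under the unconditional one, which is the (very weak) sense in which correlations make it harder to neglect an unaccounted-for source by accident; whether that survives deliberate optimization pressure is exactly what's in dispute.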
Right, the satisficer will not have an incentive to increase its expected utility by becoming a maximizer when its expected utility (by remaining a satisficer) is already over the threshold. But surely this condition would fail frequently.
Doesn’t follow if an agent wants to satisfice multiple things, since maximizing the amount of one thing could destroy your chances of bringing about a sufficient quantity of another.
I understand that it’s the person who originally voiced the thoughts, but which is more important: the thoughts or the person?
The point is that the thoughts are so diverse that the main thing they have in common is their original promotion by a single individual. But it really isn’t just Yudkowsky’s beliefs that have become currency here. Lots of his idioms pop up everywhere, for example. There was recently a Chuck Norris-inspired thread about how amazingly intelligent he is—tongue-in-cheek, but still telling. And there does seem to be an implicit agreement by many that he’s likely to be an important player in saving the world from a certain looming existential threat if anyone is.
I’m a newly-arrived outsider and am doubtless missing tons of context and information. But from the perspective of an outsider like me, it sure looks like there’s this one guy who most directly shapes thought and language, who’s held in esteem beyond all others and whose initiative provided the glue that holds the whole thing together. That’s not to say no one can or does question him. “Yudkowskian” still feels like the single word that best captures all of this.
Also, I’d like to reiterate that I’m not trying to provoke or offend.
I understand there was a big rift in Objectivism over precisely this, with one group led by David Kelley splitting off because it favored a more intellectually tolerant Objectivism.
I guess I’m using the word “follower” here in the sense that one would describe someone as, e.g., a follower of Ayn Rand. That is, someone who has passionately assimilated an extremely far-reaching set of beliefs that originates largely from a single, high-status thinker, and who seeks to establish novel communities or institutions inspired by/based around that thinker’s thoughts. I’m sure most Objectivists would deny that Ayn Rand had any magical wisdom and insist that they came to their beliefs through critical reflection, but I would happily label the more serious of them followers.
I’m not sure about the Great Leader bit, but I do think lots of people here could be accurately classified as followers of Yudkowsky’s thinking. Again, that’s not to suggest it’s bad to be such a person. But “being interested in refining the art of rationalism” both 1. builds in theory-laden terms the mere adoption of which already marks one as a member of the community, and 2. doesn’t capture the range of interests, opinions and lingo widely shared across the site.
I agree that “Yudkowskian” isn’t a great label politically. And looking back on it, KenChen’s post is about thinking of good political descriptors and not just “apt” ones, so I think I’m probably going off-topic here. However, I do think that the term “Yudkowskian” most accurately and succinctly summarizes the cluster of views tying the members of the community based around this website together. (Then again, I don’t really know anyone here, so maybe my impression is unjustified.) For instance, talking of rationality as an art to be refined feels very Yudkowskian to me. This isn’t meant to be pejorative.
I feel like the most apt term for people here (or, more accurately, people who identify themselves as being part of some community inaugurated by this website) would be “Yudkowskian.”
What is a calibration game?
Here’s a deterministic solution that does at least as well as hiding the coin randomly (I think?). Take the expected amount of time t it would take to find the coin by random search. Write down all the deterministic coin-hiding algorithms you can think of on a piece of paper as fast as you can, starting with the most obvious. Continue until t time units have elapsed, and then use the last algorithm you thought of.
This does assume we’re counting the time it takes your future self to compute your hiding algorithm towards the time it takes him/her to find the coin.
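A rough sketch of the procedure in code, just to pin down what I mean (the function name and the idea of a generator yielding hiding algorithms in order of obviousness are made up for illustration):

```python
import time

def pick_hiding_algorithm(think_of_algorithms, t):
    """Sketch of the procedure above.

    think_of_algorithms: a generator yielding deterministic hiding
        algorithms from most to least obvious (a stand-in for writing
        them down as fast as you can think of them).
    t: expected time, in seconds, for a random search to find the coin.

    Returns the last algorithm reached when t seconds have elapsed.
    """
    deadline = time.monotonic() + t
    last = None
    for algorithm in think_of_algorithms():
        last = algorithm
        if time.monotonic() >= deadline:
            break
    return last
```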
Any chance the talk or any of the ensuing discussion will be recorded?
I think your conclusion there trades on an ambiguity of what “evidence” refers to in your y (= “all books contain equally strong evidence for their respective religion”). The assumption y could mean either:
1. For each book x, x contains really compelling evidence that we’re sure would equally convince us if we were to encounter it in a normal situation (i.e., without knowing about the other books or the AI’s deviousness).
2. For each book x, x contains really compelling evidence even after considering and correctly reasoning about all the facts of the thought experiment.
Obviously the second interpretation is either incoherent or completely trivializes the thought experiment, since it’s an assumption about what the all-things-considered best thing to believe after reading a book is, when that’s precisely the question we’re being posed in the first place. On the other hand, the first interpretation, even if assumed with probability 1, is compatible with a given book lowering the posterior expected strength of evidence of the other books.
I think it’s easy to make my second point without the asymmetry. Let’s re-pose the problem so that we expect in advance not only that each book will produce strong evidence in favor of the religion it advocates, but also strong evidence that none of the other books contain strong counter-evidence or similarly undermining evidence. When you read book Z, you learn individual pieces of evidence z1, z2, …, zn. But z1, …, zn undermine your confidence that the other books contain strong arguments, thus disconfirming your belief that you’d likely find convincing evidence for Zoroastrianism in the book whether or not the religion is true. But then it starts looking like we have evidence for Zoroastrianism. However, if, as you argue, z1, …, zn only support Zoroastrianism through things we expected to see in advance of reading the book, then we shouldn’t have any evidence. So either I’m confused or we still have a problem.
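To put numbers on the underlying Bayesian point (the numbers are invented, just to show where the likelihood ratio does the work): if the convincing-looking arguments were fully expected whether or not Zoroastrianism is true, observing them moves the posterior essentially nowhere; an update appears only to the extent that what you read is likelier if the religion is true than if it's false.

```python
def posterior(p_e_given_h, p_e_given_not_h, prior):
    """Bayes' theorem for a binary hypothesis H given evidence E."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

prior = 0.01  # made-up prior for Zoroastrianism

# Fully anticipated evidence: the AI writes a maximally convincing book
# whether or not the religion is true, so the likelihood ratio is ~1.
print(posterior(0.99, 0.99, prior))  # ~0.01 -- no real update

# Evidence that also undermines the expectation that the other books are
# equally compelling, which you judge somewhat likelier if the religion is
# true: the likelihood ratio exceeds 1, so the posterior rises.
print(posterior(0.99, 0.60, prior))  # noticeably above 0.01
```

Which of those two cases the thought experiment actually puts us in is exactly what I'm unsure about.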
Avoid overuse of italics. Try to write so that the reader can intuit where the emphasis goes.