“An object is grue iff it is first observed before time T, and it is green, or it is first observed after time T, and it is blue.”
I don’t see any reason such an object is likely to eat me when I’m walking around in the dark.
I think this might be the most strongly contrarian post here in a while...
Have you read his paper on CEV? To the best of my knowledge, that’s the clearest place he’s laid out what he wants an AGI to do, and I wouldn’t really label it “take over the world and do what [Eliezer Yudkowsky] wants” except under a use of those terms so broad that they lose their typical connotations.
While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant.
Sum(1/n^2, 1, 3^^^3) < Sum(1/n^2, 1, inf) = (pi^2)/6
So an algorithm like, “order utilities from least to greatest, then sum with a weight of 1/n^2, where n is their position in the list” could pick dust specks over torture while recommending most people not go skydiving (as their benefit is outweighed by the detriment to those less fortunate).
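A minimal sketch of that aggregation rule (the function name and the −1e9 torture utility are my own illustrative choices, not anything from the thread):

```python
# A sketch of the proposed rule: sort utilities from least to greatest,
# then weight the n-th (1-indexed) by 1/n^a. For a > 1 the weights are
# summable, so arbitrarily many small harms have bounded total weight.

def weighted_aggregate(utilities, a=2.0):
    """Sum utilities with weight 1/n**a after sorting least-to-greatest."""
    ranked = sorted(utilities)
    return sum(u / n**a for n, u in enumerate(ranked, start=1))

# One torture at -1e9 (an illustrative magnitude) vs. many -1 dust specks:
torture = weighted_aggregate([-1e9])          # -1e9
specks  = weighted_aggregate([-1] * 10**6)    # > -pi^2/6, about -1.645
# specks > torture, so this rule picks the specks, as described above.
```

Because Sum(1/n^2) < (pi^2)/6, any number of −1 specks aggregates to something above about −1.645, while the lone torture term keeps its full weight of 1, so the specks are preferred no matter how many there are.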
This would mean that scope insensitivity, beyond a certain point, is a feature of our morality rather than a bias; I am not sure what to make of that outcome.
That said, while this gives an answer to the first problem that some seem more comfortable with, and an answer to the second that everyone agrees on, I expect there are clear failure modes I haven’t thought of.
Edited to add:
This of course holds for weights of 1/n^a for any a>1; the most convincing defeat of this proposition would be showing that weights of 1/n (or 1/(n log(n))) drop off quickly enough to lead to bad behavior.
… and how many office chairs we’ve broken?
I guess I am jumping the shark here.
I don’t think that idiom means what you think it means.
I expect this is incorrect.
Specifically, I would guess that you can distinguish the strength of your belief that a lottery ticket you might purchase will win the jackpot from one in a thousand (a.k.a. 0.1%). Am I mistaken?
The problem seems trivially easy.
Each observed emerald is evidence for both “the emerald is green” and “the emerald is grue.” The first is preferred because it is vastly simpler (and picking any particular T, of course, is hugely privileging the hypothesis!). Evidence that is equally strong for two propositions doesn’t change their relative likelihoods: it starts out more likely that the emeralds are green than grue, it ends more likely that the emeralds are green than grue, and both quickly become more likely than the proposition that emeralds are uniformly red.
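To make the odds arithmetic concrete, here’s a toy update (the prior numbers are made up purely for illustration):

```python
# Posterior odds = prior odds * likelihood ratio. A green emerald observed
# before T is equally likely under "green" and "grue", so their ratio is
# fixed; only "red" loses ground.

prior = {"green": 0.90, "grue": 0.09, "red": 0.01}    # made-up priors
likelihood = {"green": 1.0, "grue": 1.0, "red": 0.0}  # P(green obs | H)

def update(prior, likelihood):
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = update(prior, likelihood)
# posterior["green"] / posterior["grue"] is still 10:1, exactly the prior
# ratio, while posterior["red"] has dropped to 0.
```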
What’s weird about this?
Apparently he hasn’t seen many Coen brothers movies...
Beware that many things labeled “adverbs” in dictionaries (particularly older dictionaries) aren’t the adverbs that we want to eliminate from clear writing. A better summation of the whole bit on adjectives and adverbs is a simple application of “Prefer Brevity”: anytime you have a modifier attached to a target word, see if you can replace the whole thing with a single word of the same type as the target that expresses the whole idea. This will usually be shorter, clearer, and more interesting.
Rounding to zero is odd. In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?
It is also in violation of the structure of the thought experiment: a dust speck was chosen as the least bad bad thing that can happen to someone. If you would round it to zero, then you need to choose a slightly worse thing, and I can’t imagine your intuitions will be any less shocked by preferring torture to that slightly worse thing.
If you don’t have that stuff down but try to buy the hat you saw Justin Timberlake wearing, you’re going to look silly.
If you do have that stuff down and try to buy the hat you saw Justin Timberlake wearing, you’re going to look silly.
Edited to elaborate:
Modern entertainment celebrities are a poor choice of role-model for clothing, for several reasons.
1) They are usually accorded higher status, and thus able to “get away with” more.
2) They will often be more interested in attracting overt attention to their clothing than you would be.
3) There is a significant selection bias: they are mostly people who look good in the first place. George Clooney in something awful still looks like George Clooney. Also, they may be dressing to emphasize what you might prefer to de-emphasize.
Better are classic celebrities known for their dress sense, who at least have passed the additional filter of being remembered for it this much later. Also better are politicians and CEOs, who are presumably chosen less for their intrinsic looks (we hope), and for whom (2) probably does not apply.
Best is people around you, with a similar overall look.
In all cases, see if you can figure out what about the item or combination in question is working, consider whether something similar would work for you, and don’t be afraid to ask for help.
[Y]ou should be as ready to drop it to 69% as raising it to 71%.
No, you should be as ready to drop it to 69% as to raise it to ~70.98%. With rounding, obviously, the above isn’t numerically wrong, but that’s not my objection: it encourages the reader to think of probability updates in percentages as additive, which is wrong.
(edited: fixed my wrong numbers...)
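For concreteness, the symmetry lives in log-odds, not percentage points; a quick check (just a sketch, any language would do):

```python
from math import exp, log

def to_logodds(p):
    return log(p / (1 - p))

def from_logodds(x):
    return 1 / (1 + exp(-x))

step = to_logodds(0.70) - to_logodds(0.69)    # size of the 70% -> 69% update
print(from_logodds(to_logodds(0.70) + step))  # ~0.7098, not 0.71
```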
My recollection is that this guy was winning competitions until they kicked him out on the grounds that without force feeding, it’s not really foie gras.
And this is just hiding the complexity, not making it simpler. Complexity isn’t a function of how many words you use, cf. “The lady down the street is a witch; she did it.” If we are writing a program that emits actual features of reality, rather than socially defined labels, the simplest program for green is simpler than the simplest program for grue or bleen. That you can also produce more complex programs that give the same results (defining green in terms of bleen and grue is only one such example) is both trivially true and irrelevant.
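A toy way to see the asymmetry (the wavelength cutoffs and T below are made-up stand-ins for “actual features of reality”): any program that computes grue must contain a green-detector, a blue-detector, and the constant T, so its minimal form can’t be shorter than the green program alone.

```python
# Made-up wavelength cutoffs stand in for "actual features of reality";
# T is the arbitrary changeover date the grue hypothesis must specify.
T = 2030

def is_green(wavelength_nm):
    return 495 <= wavelength_nm <= 570

def is_blue(wavelength_nm):
    return 450 <= wavelength_nm < 495

def is_grue(wavelength_nm, first_observed_year):
    # Everything is_green needs, plus is_blue, plus the constant T.
    if first_observed_year < T:
        return is_green(wavelength_nm)
    return is_blue(wavelength_nm)
```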
This. Either we know nothing about each of the three states, or we know nothing about either of the two states; we can’t claim both at once.
I cannot verify right now, but I believe it was in this TED talk.
We are adaptation executers, not fitness maximizers.
That’s equally the case for other animals.
I expect the problem is not that you are wrong (that’s more or less open), but that there has been similar discussion in many places on this site (one is here), and building another tree with pretty much the same starting point doesn’t really make sense.
Azathoth should probably link here. I think using our jargon is fine, but links to the source help keep it discoverable for newcomers.