I have met people who explicitly say they prefer a smaller gap between themselves and the better-off over a higher absolute level for themselves. IIRC they were more concerned about ‘fairness’ than about what the powerful might do to them. They also believed that most people would agree with them (I believe the opposite).
There is an animated children’s series that explains the human body by personifying bacteria, viruses, etc. Anyone interested in pursuing your idea may want to pick up techniques from the show:
Wikipedia article: http://en.wikipedia.org/wiki/Once_Upon_a_Time..._Life
Example: http://www.youtube.com/watch?v=LIyvrcHnriE&t=1m11s
Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those two somewhat related ideas are quite dissimilar.
I see your point. As an author, though, I would feel I was misdirecting my readers by doing that; “Voldemort has the same deformity as in canon? He’s been playing with Horcruxes!” is the reasoning I would expect from them. Which is why I would, say, remove Quirrell’s turban as soon as my plot had Voldemort not on the back of Quirrell’s head.
An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.
For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values at their sorted position, expanding 2-nodes into 3-nodes when necessary; when a 3-node would have to grow, it splits and its middle value bubbles up into the parent. This keeps the tree balanced.
A 2-3-4 tree just generalises the above.
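To make the node shapes concrete, here is a minimal sketch in Haskell (the type and constructor names are mine, purely for illustration):

```haskell
-- A 2-3-4 tree: every internal node holds 1, 2 or 3 sorted values
-- and always one more subtree than it has values.
data Tree234 a
  = Leaf
  | Node2 (Tree234 a) a (Tree234 a)                               -- 1 value, 2 subtrees
  | Node3 (Tree234 a) a (Tree234 a) a (Tree234 a)                 -- 2 values, 3 subtrees
  | Node4 (Tree234 a) a (Tree234 a) a (Tree234 a) a (Tree234 a)   -- 3 values, 4 subtrees
```

Dropping the `Node4` constructor gives exactly the 2-3 tree described above.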
Now the intuition is that red means “I am part of a bigger node.” That is, a red node holds a value that logically belongs to the black node above it. If the black node represents a 2-node, it has no red children; if it represents a 3-node, it has one red child; and if it represents a 4-node, it has 2 red children.
In this context, the “rules” of red-black trees make complete sense. For instance, we only count black nodes when comparing branch heights because only those correspond to actual B-tree nodes. I’m sure that with a bit of work it’s possible to make complete sense of the insertion/deletion rules through the B-tree lens, but I haven’t done it.
edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.
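For the curious, Okasaki’s well-known functional insertion algorithm makes the correspondence explicit; the sketch below uses my own names, but the four `balance` cases are his. Each case detects a black node that has just accumulated two reds in a row (an overfull logical node) and splits it the way a B-tree would: the middle value is pushed up as a red node, to be merged into the logical node above, while the two outer values become black 2-nodes.

```haskell
data Color = R | B
data RB a  = E | T Color (RB a) a (RB a)

insert :: Ord a => a -> RB a -> RB a
insert x t = blacken (ins t)
  where
    -- A new value always enters as a red node: "I am part of a
    -- bigger node", namely the black node it hangs under.
    ins E = T R E x E
    ins s@(T c l y r)
      | x < y     = balance c (ins l) y r
      | x > y     = balance c l y (ins r)
      | otherwise = s
    blacken (T _ l y r) = T B l y r   -- the root is always black
    blacken E           = E

-- The four cases are the four ways two consecutive reds can hang
-- under a black node. Each is rewritten into a red parent with two
-- black children: the middle value goes up (red, to join the logical
-- node above) and the outer values become black 2-nodes.
balance :: Color -> RB a -> a -> RB a -> RB a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance c l x r                     = T c l x r
```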
I had the exact same argument with my girlfriend (a bad idea) a while ago and asked for references to point her to on the IRC channel. I was given The Simple Truth and The Relativity of Wrong.
So I was about to write a very supportive response when I saw Mitchell Porter’s comment. And this
(...) the children of post-Christian agnostics grow up to be ideologically aggressive posthuman rationalists.
aptly describes recent interactions I’ve had with my father¹. The accusation of narrow-mindedness was present.
So, recurring conflicts with friends and family because of a newfound perspective on, well, everything? Values quickly changing as a consequence of new beliefs on what is true and what is not? Assuming we are in the they-were-right-this-time subgroup of this cliché, there must be smarter ways of dealing with it than making ourselves look crazy in front of the people who care about us.
¹ Except that he’s a raging atheist who has never propagated the consequences of this belief to the rest of his philosophy.
Responding to a point about the rise of absolute wealth since 1916, this article makes a point (not very well) about the importance of relative wealth.
Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.
I’ve had a short discussion about this earlier, and find it very interesting.
In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me this has profound implications for what kind of economic world we should strive for: if most folks are like me, the current system is fine. If they are like some people I have met, a flatter real-wealth distribution, even at the price of a much, much lower mean, could be preferable.
I’m interested in any thoughts you all might have on the topic :)
Or Harry transfigured Hermione’s body into a rock and then the rock into a brown diamond. Unless the story explicitly disallows double transfigurations and I missed it.
“Those who are spoken of in a prophecy, may listen to that prophecy there. Do you see the implication, Harry?”
Shouldn’t Minerva see another implication, that Dumbledore has no reason to wonder whether he is the dark lord of the prophecy?
Thank you for the link! Note that the .pdf version of the article (which is also referenced in dbaupp’s link) has a record of the “hostile-wife” cases over a span of 8 years.
Women don’t like cryonics.
What made you believe this? Is there a pattern to the declared reasons?
Unless its utility function has a maximum, we are at risk. Observing Mandelbrot fractals is probably enhanced by having all the atoms of a galaxy playing the role of pixels.
Would you agree that unless the utility function of a random AI has a (rather low) maximum, and barring the discovery of infinite matter/energy sources, its immediate neighbourhood is likely to get repurposed?
I must say that at least I finally understand why you think botched FAIs are more risky than others.
But consider, as Ben Goertzel mentioned, that nobody is trying to build a random AI. Whatever achieves AGI-level is likely to have a built-in representation for humans and a tendency to interact with them. Check whether I understood you correctly: does the previous sentence make it more probable that a future AGI will be destructive?
You probably already agreed with “Ghosts in the Machine” before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear not to do what it’s supposed to if “supposed” is taken to mean the programmer’s intent.
These statements don’t ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You’re right: we understand (program + parameters learned from dataset) even less than (program). So while the outside view might say: “current machine learning techniques are very powerful, so they are likely to be used for FAI,” that piece of inside view says: “actually, they aren’t. Or at least they shouldn’t be.” (“learn” has a precise operational meaning here, so this is unrelated to whether an FAI should “learn” in some other sense of the word).
Again, the fact that a development has been successful or promising in some field doesn’t mean it will be as successful for FAI, so imitating the human brain isn’t necessarily good here. Reasoning by analogy and thinking about evolution are also unlikely to help; nature may have given us “goals”, but they are not goals in the same sense as: “The goal of this function is to add 2 to its input,” or “The goal of this program is to play chess well,” or “The goal of this FAI is to maximize human utility.”
Congratulations!
The fictional college in the article selects incoming students on price alone.
...people have already set up their fallback arguments once the soldier of ‘...’ has been knocked down.
Is this just good phrasing, or do you actually think that way naturally? If you do it automatically, I would like to learn to do it too.
It often takes me a long time to recognize an argument war. Until that moment, I’m confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you’re not having a discussion but are walking on a battlefield?
As an anecdote, I had a slight opposite tendency to go for what seemed like the worst answer, and I had to switch answers twice because of this.
As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.
The fuzziest starting point for “consciousness” is “something similar to what I experience when I consider my own mind”. But this doesn’t help much. Someone can still claim “So rocks probably have consciousness!”, and another can respond “Certainly not, but brains grown in labs likely do!”. Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.
For some concepts, we disagree on definitions because we don’t actually know what those concepts refer to (this doesn’t include concepts like “art”, etc.). I’m not sure what the best way is to talk about whether an entity possesses such a concept. Are there existing articles/discussions about that?
Being in a situation somewhat similar to yours, I’ve been worrying that my lowered expectations about others’ level of agency (along with elevated expectations as to what constitutes a “good” level of agency) have an influence on those I interact with. If I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me; for instance, I’d be more prone to take initiative if I saw trust in my peers’ eyes.
I understood the introductory question as “Frodo Baggins from the Lord of the Rings is buying pants. Which of these is he most likely to buy?”, and correctly answered (c). I suggest rephrasing your question to ensure that it actually tests the reader’s fictional bias. Also, Szalinski in Journal of Cognitive Minification is a nice one.