Sounds like we need to formalize human morality first, otherwise you aren’t guaranteed consistency. Of course formalizing human morality seems like a hopeless project. Maybe we can ask an AI for help!
Gray_Area
People don’t maximize expectations. Expectation-maximizing organisms—if they ever existed—died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.
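As a toy illustration of the non-robustness claim (the payoff numbers are invented for the example): a single outlier moves the sample mean dramatically, while a robust statistic like the median barely shifts.

```python
import statistics

# Hypothetical payoff samples an organism might observe.
payoffs = [1.0, 1.2, 0.9, 1.1, 1.0]

mean_before = statistics.mean(payoffs)      # 1.04
median_before = statistics.median(payoffs)  # 1.0

# A single extreme outlier appears in the environment...
payoffs.append(1000.0)

mean_after = statistics.mean(payoffs)       # ~167.5: the expectation swings wildly
median_after = statistics.median(payoffs)   # 1.05: the median barely moves

print(mean_before, mean_after)
print(median_before, median_after)
```

An agent whose behavior tracks the mean changes policy drastically after one observation; one tracking the median does not.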
If people’s behavior doesn’t agree with the axiom system, the fault may not be with them, perhaps they know something the mathematician doesn’t.
Finally, the ‘money pump’ argument fails because it changes the rules of the game. The original question was, I assume, whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is that if you asked people to make the original choices a million times, they would, correctly, maximize expectations. When you are talking about a million tries, expectations are the appropriate framework. When you are talking about one try, they are not.
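The one-try versus million-tries point can be simulated directly (the gamble’s stakes and probabilities below are invented for the illustration): a positive-expectation gamble that loses on almost every single play, but whose average payoff over a million plays converges on the expectation.

```python
import random

random.seed(0)

def gamble():
    # Hypothetical stakes: win $100 with probability 0.01, lose $0.50 otherwise.
    # Expected value per play: 0.01 * 100 - 0.99 * 0.5 = +$0.505
    return 100.0 if random.random() < 0.01 else -0.5

# One try: you lose 99% of the time, despite the positive expectation.
single = gamble()

# A million tries: by the law of large numbers, the average payoff
# approaches the per-play expectation of +$0.505.
n = 1_000_000
average = sum(gamble() for _ in range(n)) / n
print(single, average)
```

Expectation maximization is the right policy in the second regime; in the first, the modal outcome (a loss) is arguably the more relevant fact.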
“My definition of an intelligent person is slowly becoming ‘someone who agrees with Eliezer’, so that’s all right.”
That’s not in the spirit of this blog. Status is the enemy; only facts are important.
On further reflection, the wish as expressed by Nick Tarleton above sounds dangerous, because all human morality may either be inconsistent in some sense, or ‘naive’ (failing to account for important aspects of reality we aren’t aware of yet). Human morality changes as our technology and understanding change, sometimes significantly. There is no reason to believe this trend will stop. I am afraid (genuine fear, not a figure of speech) that the quest to properly formalize and generalize human morality for use by a ‘friendly AI’ is akin to properly formalizing and generalizing Ptolemaic astronomy.
This reminds me of teaching. I think good teachers understand short inferential distances at least intuitively if not explicitly. The ‘shortness’ of inference is why good teaching must be interactive.
Eliezer: When you are experimenting with apples and earplugs you are indeed doing empirical science, but the claim you are trying to verify isn’t “2+2=4” but “counting of physical things corresponds to counting with natural numbers.” The latter is, indeed, an empirical statement. The former is a statement about number theory, whose truth is verified with respect to some model (per Tarski’s definition).
Perhaps ‘a priori’ and ‘a posteriori’ are too loaded with historic context. Eliezer seems to associate a priori with dualism, an association which I don’t think is necessary. The important distinction is the process by which you arrive at claims. Scientists use two such processes: induction and deduction.
Deduction is reasoning from premises using ‘agreed upon’ rules of inference such as modus ponens. We call (conditional) claims which are arrived at via deduction ‘a priori.’
Induction is updating beliefs from evidence using rules of probability (Bayes theorem, etc). We call (conditional) claims which are arrived at via induction ‘a posteriori.’
Note: both the rules of inference used in deduction and rules of evidence aggregation used in induction are agreed upon as an empirical matter because it has been observed that we get useful results using these particular rules and not others.
Furthermore: both deduction and induction happen only (as far as we know) in the physical world.
Furthermore: deductive claims by themselves are ‘sterile,’ and making them useful immediately entails coating them with a posteriori claims.
Nevertheless, there is a clear algorithmic distinction between deduction and induction, a distinction which is mirrored in the claims obtained from these two processes.
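The algorithmic distinction above can be sketched in a few lines of code (a toy sketch, not a full logic or probability engine; the example premises and probabilities are made up):

```python
# Deduction: apply an agreed-upon rule of inference (modus ponens)
# to a set of premises until no new conclusions appear.
def deduce(facts, implications):
    """facts: set of atoms; implications: set of (antecedent, consequent) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived

# Induction: update a belief from evidence via Bayes' theorem.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]."""
    evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / evidence

# An 'a priori' (conditional) claim: given A, A->B, and B->C, C follows.
conclusions = deduce({"A"}, {("A", "B"), ("B", "C")})

# An 'a posteriori' claim: a 0.5 prior shifted by 9:1 evidence.
posterior = bayes_update(0.5, 0.9, 0.1)
print(conclusions, posterior)  # {'A', 'B', 'C'} (in some order), 0.9
```

The two procedures are plainly different algorithms, which is the mirror of the distinction between the claims they produce.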
Eliezer said: “I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they’ll suddenly decide it’s “pseudoscience”.”
It may be that the notion of strongly superhuman AI runs into people’s preconceptions they aren’t willing to give up (possibly of religious origins). But I wonder if the ‘Singularians’ aren’t suffering from a bias of their own. Our current understanding of science and intelligence is compatible with many non-Singularity outcomes:
(a) ‘human-level’ intelligence is, for various physical reasons, an approximate upper bound on intelligence
(b) Scaling past ‘human-level’ intelligence is possible but difficult due to extremely poor returns (e.g., logarithmic rather than exponential growth past a certain point)
(c) Scaling past ‘human-level’ intelligence is possible, is not difficult, but runs into an inherent ‘glass ceiling’ far below ‘incomprehensibility’ of the resulting intelligence
and so on
Many of these scenarios seem as interesting to me as a true Singularity outcome, but my perception is they aren’t being given equal time. Singularity is certainly more ‘vivid,’ but is it more likely?
For what it’s worth, I find plenty to disagree with Eliezer about, on points of both style and substance, but on death I think he has it exactly right. Death is a really bad thing, and while humans have diverse psychological adaptations for dealing with death, it seems the burden of proof is on people who do NOT want to make the really bad thing go away in the most expedient way possible.
“The idea that Bayesian decision theory being descriptive of the scientific process is very beautifully detailed in classics like Pearl’s book, Causality, in a way that a blog or magazine article cannot so easily convey.”
I wish people would stop bringing up this book to support arbitrary points, like people used to bring up the Bible. There’s barely any mention of decision theory in Causality, let alone an argument for Bayesian decision theory being descriptive of all scientific process (although Pearl clearly does talk about decisions being modeled as interventions).
What circles do you run in, Eliezer? I meet a fair number of people who work in AI (you could say I “work in AI” myself), and so far I can’t think of a single person who was sure of a way to build general intelligence. Is the attitude you observe common among people who aren’t actually doing AI research, but who think about AI?
Watching myself trying to write (or speak), I am coming to realize what a horrendous hack the language processes of the brain are. It is sobering to contemplate what sorts of noise and bias this introduces to our attempts to think and communicate.
Eliezer, why are you concerned with untestable questions?
The core issue is whether statements in number theory, and more generally mathematical statements, are independent of physical reality or entailed by our physical laws. (This question isn’t as obvious as it might seem; I remember reading a paper claiming to construct a consistent set of physical laws under which 2 + 2 has no definite answer.) At any rate, if the former is true, 2+2=4 is outside the province of empirical science, and applying empirical reasoning to evaluate its ‘truth’ is wrong.
“Eliezer is almost certainly wrong about what a hyper-rational AI could determine from a limited set of observations.”
Eliezer is being silly. People invented computational learning theory, which, among other things, bounds the number of samples needed to learn a concept to within a given error rate.
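For a concrete instance, the standard PAC bound for a finite hypothesis class and a consistent learner states that m ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice to guarantee error at most ε with probability at least 1 − δ. The specific numbers below are purely illustrative:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Samples sufficient for a consistent learner over a finite class H
    to achieve error <= epsilon with probability >= 1 - delta:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((1.0 / epsilon) *
                     (math.log(hypothesis_count) + math.log(1.0 / delta)))

# Illustrative numbers: 2^20 hypotheses, 1% error, 95% confidence.
m = pac_sample_bound(2**20, 0.01, 0.05)
print(m)
```

The point is that sample requirements are lower-bounded as well as upper-bounded; no learner, however rational, can do better than information theory allows on a limited set of observations.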
“Sometimes I can feel the world trying to strip me of my sense of humor.”
If you are trying to be funny, the customer is always right, I am afraid. The post wasn’t productive, in my opinion, and I have no emotional stake in Christianity at all (not born, not raised, not currently).
billswift said: “Prove it.”
I am just saying ‘being unpredictable’ isn’t the same as free will, which I think is pretty intuitive (most complex systems are unpredictable, but presumably very few people will grant them all free will). As far as the relationship between randomness and free will, that’s clearly a large discussion with a large literature, but again it’s not clear what the relationship is, and there is room for a lot of strange explanations. For example some panpsychists might argue that ‘free will’ is the primitive notion, and randomness is just an effect, not the other way around.
I don’t really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn’t make much sense to me.
Nick: you can construct a model where there is a notion of ‘natural number’ and a notion of ‘plus’ except this plus happens to act ‘oddly’ when applied to 2 and 2. I don’t think this model would be particularly interesting, but it could be made.
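As a toy illustration of the point above (not a model of the Peano axioms; the usual laws fail for it), one can literally write down a ‘plus’ that acts oddly on 2 and 2:

```python
# A structure where 'plus' misbehaves on exactly one pair of inputs
# but is otherwise ordinary addition.
def odd_plus(a, b):
    if (a, b) == (2, 2):
        return 5  # the single 'odd' case
    return a + b

print(odd_plus(2, 3))  # 5, as usual
print(odd_plus(2, 2))  # 5, oddly
```

The price of the oddity is that familiar laws break: for instance, `odd_plus(2, 2)` no longer equals `odd_plus(2, 1) + 1`, so this ‘plus’ is not compatible with the successor operation. Such a structure is constructible, just not interesting.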
“Causality” by Judea Pearl is an excellent formal treatment of a subject central to empirical science.
In computer science there is a saying: ‘You don’t understand something until you can program it.’ This may be because programming is unforgiving of the kind of errors Eliezer is talking about. Interestingly, programmers often use the term ‘magic’ (or ‘automagically’) in precisely the same way Eliezer and his colleague did.