I really wish people would stop repeating this claim. Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness.
And yet, all the publicly known MIRI research seems to be devoted to formal proof systems, not to testing, “boxing”, fail-safe mechanisms, defense in depth, probabilistic failure analysis, and so on.
Motte and bailey?
This paragraph is a simplification rather than the whole story, but: Our research tends to be focused on mathematical logic and proof systems these days because those are expressive frameworks with which to build toy models that can give researchers some general insight into the shape of the novel problems of AGI control. Methods like testing and probabilistic failure analysis require more knowledge of the target system than we now have for AGI.
And we do try to be clear about the role that proof plays in our research. E.g. see the tiling agents LW post:

“The paper uses first-order logic (FOL) because FOL has a lot of useful standard machinery for reflection which we can then invoke; in real life, FOL is of course a poor representational fit to most real-world environments outside a human-constructed computer chip with thermodynamically expensive crisp variable states.

“As further background, the idea that something-like-proof might be relevant to Friendly AI is not about achieving some chimera of absolute safety-feeling, but rather about the idea that the total probability of catastrophic failure should not have a significant conditionally independent component on each self-modification, and that self-modification will (at least in initial stages) take place within the highly deterministic environment of a computer chip. This means that statistical testing methods (e.g. an evolutionary algorithm’s evaluation of average fitness on a set of test problems) are not suitable for self-modifications which can potentially induce catastrophic failure (e.g. of parts of code that can affect the representation or interpretation of the goals).”

And later, in an Eliezer comment replying to “My previous understanding had been that MIRI staff think that by default, one should expect to need to solve the Löb problem in order to build a Friendly AI”:

“By default, if you can build a Friendly AI you were not troubled by the Löb problem. That working on the Löb problem gets you closer to being able to build FAI is neither obvious nor certain (perhaps it is shallow to work on directly, and those who can build AI resolve it as a side effect of doing something else), but everything has to start somewhere. Being able to state crisp difficulties to work on is itself rare and valuable, and the more you engage with a problem like stable self-modification, the more you end up knowing about it. Engagement in a form where you can figure out whether or not your proof goes through is more valuable than engagement in the form of pure verbal arguments and intuition, although the latter is significantly more valuable than not thinking about something at all.”
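As background for readers who haven’t met it: the “Löb problem” in this thread refers to the obstacle that Löb’s theorem poses for naive self-trust. For a consistent theory T extending Peano Arithmetic, with provability predicate \mathrm{Prov}_T, the theorem states:

```latex
\text{If } T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P, \text{ then } T \vdash P.
```

In particular, T can only prove the soundness schema \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P for sentences P it already proves outright, so an agent reasoning in T cannot adopt a blanket “whatever my successor proves is true” rule without collapsing into inconsistency — which is the difficulty the tiling agents work engages with.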
My guess is that people hear the words “proof” and “Friendliness” in the same sentence but (quite understandably!) don’t take time to read the actual papers, and end up with the impression that MIRI is working on “provably Friendly AI” even though, as far as I can tell, we’ve never claimed that.
When somebody says they are doing A for reason X, and then, when reason X is criticized, claims they have actually been doing A for reason Y all along, I tend to be wary.
In this case A is “research on mathematical logic and formal proof systems”,
X is “self-improving AI is unboxable and untestable, we need to get it provably right on the first try”
and Y is “Our research tends to be focused on mathematical logic and proof systems these days because those are expressive frameworks with which to build toy models that can give researchers some general insight into the shape of the novel problems of AGI control”.
If Y is better than X, as it seems to me in this case, this is indeed an improvement, but when you modify your reasons and somehow conclude that your previously chosen course of action is still optimal, then I doubt your judgment.
Well… (trigger wa-...)
“And if Novamente should ever cross the finish line, we all die. That is what I believe or I would be working for Ben this instant.”
“I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)”
“Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.”
“Find whatever you’re best at; if that thing that you’re best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. [ … ] Aside from that, though, I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement, and everything else feeds into that.”
But where did somebody from MIRI say “we need to get it provably right on the first try”? Also, what would that even mean? You can’t write a formal specification that includes the entire universe and then formally verify an AI against that formal specification. I couldn’t find any Yudkowsky quotes about “getting it provably right on the first try” at the link you provided.
Why talk about unupdateable UFs and “solving morality” if you are not going for that approach?
Again, a simplification, but: we want a sufficient guarantee of stably friendly behavior before we risk pushing things past a point of no return. A sufficient guarantee plausibly requires having robust solutions for indirect normativity, stable self-modification, reflectively consistent decision theory, etc. But that doesn’t mean we expect to ever have a definite “proof” that a system will be stably friendly.
Formal-methods work on today’s safety-critical software systems never results in a definite proof that a system will be safe, either, but ceteris paribus formal proofs of particular internal properties of the system give you more assurance that the system will behave as intended than you would otherwise have.
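A bit of arithmetic shows why the per-step component of failure matters here (illustrative numbers, not claims about any actual system): if each of n self-modifications carried an independent catastrophic-failure probability p, the chance of surviving all of them would be

```latex
(1 - p)^n \approx e^{-np}
```

so even a seemingly tiny p = 10^{-4} per step yields a survival probability of only e^{-10} \approx 4.5 \times 10^{-5} across n = 10^{5} modifications. Hence the emphasis on driving the conditionally independent component of failure toward zero, rather than merely bounding it statistically.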
Otherwise compared to nothing, or otherwise compared to informal methods?
Are you taking into account that the formal/provable/unupdateable approach has a drawback in the AI domain that it doesn’t have in the non-AI domain, namely that you lose the potential to tell an AI “stop doing that, it isn’t nice”?
How so?
Do you think that would work on Clippie?
For anyone else who only read the link’s main text and couldn’t understand how it’s meant to refute the “MIRI intends to make an AI that is provably friendly” idea: the explicit disclaimer is in footnote 7.
You admit that friendliness is not guaranteed. That means that you’re not wrong, which is a good sign, but it doesn’t fix the problem that friendliness isn’t guaranteed. You have as many tries as you want for intelligence, but only one for friendliness. How do you expect to manage it on the first try?
It also isn’t clear to me that this is the best strategy. In order to get that provably friendly thing to work, you have to deal with an explicit, unchanging utility function, which means that friendliness has to be right from the beginning. If you deal with an implicit utility function that will change as the AI comes to understand itself better, you could program an AI to recognise pictures of smiles, then let it learn that the smiles correspond to happy humans and update its utility function accordingly, until it (hopefully) decides on “do what we mean”.
It seems to me that part of the friendliness proof would require proving that the AI will follow its explicit utility function. This would be impossible. The AI is not capable of perfect Solomonoff induction, and will always have some bias, no matter how small. This means that its implicit utility function will never quite match its explicit utility function. Am I missing something here?
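For context on the Solomonoff point: in one standard formulation, the Solomonoff prior assigns a finite string x the probability

```latex
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

summing over programs p whose output on a universal prefix machine U begins with x. This quantity is only lower-semicomputable, not computable, which is the standard reason no physically realizable agent can perform perfect Solomonoff induction.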
Typo?
Again, I think “provably friendly thing” mischaracterizes what MIRI thinks will be possible.
I’m not sure exactly what you’re saying in the rest of your comment. Have you read the section on indirect normativity in Superintelligence? I’d start there.
Given the apparent misconceptions about MIRI’s work even among LWers, it seems like you need to write a Main post clarifying what MIRI does and does not claim, and does and does not work on.
Fixed.
From what I can gather, there’s still supposed to be some kind of proof, even if it’s just the mathematical kind where you’re not really certain because there might be an error in it. The intent is to have some sort of program that maximizes utility function U, and then explicitly write the utility function as something along the lines of “do what I mean”.
I’m not sure what you’re referring to. Can you give me a link?
Superintelligence is a recent book by Nick Bostrom.
I think this is incorrect. If it isn’t, it at least requires some proof.
For one thing, you’d have to explicitly come up with the utility function before you can prove the AI follows it.
You can either make an AI that will provably do what you mean, or make one that will hopefully figure out what you meant when you said “do what I mean,” and do that.
When I picture what a proven-Friendly AI looks like, I think of something whose goals are 1) using a sample of simulated humans, generalize to unpack ‘do what I mean’, followed by 2) make satisfying that your utility function.
Proving those two steps each rigorously would produce a proven-Friendly AI without an explicit utility function. Proving step 1 to be safe would obviously be very difficult; proving step 2 to be safe would probably be comparatively easy. Both, however, are plausibly rigorously provable.
This is what I mean by an explicit utility function. An implicit one is where it never actually calculates utility, like how humans work.
Those points were excellent, and it is no credit to LW that the comment was on negative karma when I encountered it.
No, the approach based on provable correctness isn’t a 100% guarantee, and, since it involves an unupdateable UF, it has the additional disadvantage that if you don’t get the UF right the first time, you can’t tweak it.
The alternative family of approaches, based on flexibility, training, and acculturation, has often been put forward by MIRI’s critics… and MIRI has never quantified why the one approach is better than the other.