I predict a 10% chance that I win my bet with Eliezer in the next decade (the one about a transhuman intelligence that is created not by Eliezer, is not deliberately created for Friendliness, and does not destroy the world).
I’ll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that the harm is nonrecoverable.
I kinda hope you’re right, but I just don’t see how.
This prediction is technically consistent with mine (although that doesn’t mean I don’t disagree with it anyway).
In other words, one of us did not specify the prediction correctly.
I don’t think it’s me. I deliberately didn’t say it’d destroy the world. Would it be correct to modify yours to say “...and not making the world a worse place”?
No. If you look at the original bet with Eliezer, he was betting that on those conditions, the AI would literally destroy the world. In other words, if both of us are still around, and I’m capable of claiming the money, I win the bet, even if the world is worse off.
Yup. If he lives to collect, he collects.
Assuming that there is, in fact, a correct way to specify the predictions. It’s possible that you weren’t actually disagreeing and that you both assign substantial probability to (world is made worse off but not destroyed | non-FAI is created) while still having a low probability for (non-FAI is created in the next decade).
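A quick numeric sketch (using the probabilities quoted above; the variable names are my own) shows how a high conditional probability of harm can coexist with a low probability of the conditioning event:

```python
# Probabilities quoted in the thread above.
p_ai_decade = 0.10        # P(non-FAI created in the next decade)
p_worse_given_ai = 0.98   # P(world made worse | non-FAI created)
p_unrec_given_ai = 0.80   # P(nonrecoverably worse | non-FAI created)

# The joint probabilities over the decade stay small, so both sets of
# numbers can hold at once without any actual disagreement.
print(p_ai_decade * p_worse_given_ai)  # 0.098
print(p_ai_decade * p_unrec_given_ai)  # 0.080
```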
Considering that the bet includes “not destroying the world”, the only fair way to do this type of bet (for money) is for you to give the other party $X now, and for them to give you $Y later if you turn out to be correct.
That’s exactly what happened; I gave Eliezer $10, and he will pay me $1000 when I win the bet.
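For concreteness, the break-even probability implied by those stakes can be computed directly (a minimal sketch; the function name is my own, not from the thread):

```python
def break_even_probability(paid_now: float, paid_on_win: float) -> float:
    """Probability at which paying `paid_now` up front in exchange for
    `paid_on_win` if the prediction comes true has zero expected value:
    -paid_now + p * paid_on_win = 0, so p = paid_now / paid_on_win."""
    return paid_now / paid_on_win

# $10 now against $1000 later breaks even at exactly 1%, which matches
# the "better than 1%" threshold mentioned later in the thread.
print(break_even_probability(10, 1000))  # 0.01
```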
I’ll put down money on the other side of this prediction provided that we can agree on an objective definition of “transhuman intelligence”.
My bet with Eliezer can be found at http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/.
I said there at the time, “As for what constitutes the AI, since we don’t have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being.” Everyone’s agreement that it is clearly more intelligent would be the “objective” standard.
In any case, I am risk averse, so I don’t really want to bet on the next decade, which according to my prediction would give me a 90% chance of losing the bet. The bet with Eliezer was indefinite, since I already paid; I am simply counting on it happening within our lifetimes.
I like your side of the original bet because I think the probability that the first superintelligent AI will be only slightly smarter than humans, non-goal-driven, and non-self-improving, and therefore non-Singularity-inducing, is better than 1%. The reason I’m willing to bet against you on the above version is that I think 10% is way overconfident for a 10-year timeframe.
Would a sped-up upload count as super-intelligent in your opinion?