I like your side of the original bet because I think the probability that the first superintelligent AI will be only slightly smarter than humans, non-goal-driven, and non-self-improving, and therefore non-Singularity-inducing, is better than 1%. The reason I’m willing to bet against you on the above version is that I think 10% is way overconfident for a 10-year timeframe.