Thanks for posting this. I would definitely enjoy seeing a debate between Deutsch and Yudkowsky.
The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET’s morality will inevitably be superior to our own. And the slogan: “All evils are due to lack of knowledge”. Why does this kind of thing remind me of George W. Bush?
But I agreed with some parts of his argument for the superiority of a Popperian approach over a Bayesian one when ‘unknown unknowns’ regarding the growth of knowledge are involved. For example, at 42:30, he quotes Popper advising us to drop the hopeless search for an inerrant source of knowledge, and instead to search for a fairly reliable method of eliminating error once it has become established. Maybe a good idea.
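To make the ‘unknown unknowns’ point concrete, here is a toy sketch (entirely my own construction, not anything from the talk): a Bayesian agent whose hypothesis space omits the true data-generating process will confidently converge on the least-wrong hypothesis, with nothing in the update rule itself to flag that the space is inadequate.

```python
import random

# Toy sketch of the 'unknown unknowns' problem (my construction, not
# Deutsch's): a Bayesian agent whose hypothesis space omits the true
# data-generating process. The posterior concentrates on the
# least-wrong hypothesis, and nothing in the update rule signals that
# the hypothesis space itself is inadequate.

def likelihood(p_heads, outcome):
    """Probability of one coin flip under a given bias hypothesis."""
    return p_heads if outcome == 1 else 1.0 - p_heads

# Hypothesis space: the coin's bias is 0.4 or 0.6 -- but the true
# bias, 0.9, lies outside the space entirely.
posterior = {0.4: 0.5, 0.6: 0.5}
true_bias = 0.9

random.seed(0)
for _ in range(200):
    outcome = 1 if random.random() < true_bias else 0
    unnorm = {h: p * likelihood(h, outcome) for h, p in posterior.items()}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}

# The agent ends up near-certain of bias 0.6: confidently wrong, with
# no internal warning. The Popperian move would be to treat 0.6 as a
# conjecture, test it against the data severely, and discard it.
print(posterior)
```

Of course this is only a caricature; the point is just that conditionalization is silent about errors in the choice of hypothesis space, which is exactly where the growth of knowledge tends to happen.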
I have mixed feelings, though, about his advocacy of optimism. He argues that Malthus’s pessimistic predictions failed simply because Malthus had no way of foreseeing the positive effects of the growth of knowledge. But by the same token, optimistic predictions of a positive future for mankind are also liable to fail because they attempt to predict that the growth of knowledge will include specific breakthroughs.
The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET’s morality will inevitably be superior to our own.
This seems pretty daft to me too. It looks like a kind of moral realism—according to which being eaten by aliens might well be “good”—since it leads to more “goodness”.
I have some sympathies for the idea that convergent evolution is likely to eventually result in a universal morality—rather than, say, pebble sorters and baby eaters. If true, that might be considered to be a kind of moral realism.
It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later. Plus you probably need some kind of argument that the limit of the convergence is pretty much independent of the starting point.
My own viewpoint on morality is closely related to this. I think that what one morally ought to do now is the same as what one prudentially and pragmatically ought to do in an ideal world in which all agents are rational, communication between agents is cheap, there are few, if any, secrets, and lifetimes are long. In such a society, a strongly enforced “social contract” will come into existence, which will have many of the characteristics of a universal morality. At least within a species. And to some degree, between species.
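For the requirement that the limit of the convergence be independent of the starting point, a bare mathematical analogy (mine, nothing from the thread) is an iterated contraction mapping: every starting value is drawn to the same fixed point.

```python
import math

# Iterating x -> cos(x): a contraction near its fixed point, so every
# starting value converges to the same limit (~0.739085), no matter
# where you begin. Only an analogy for 'convergence independent of the
# starting point', not a model of moral convergence.

def iterate(f, x0, steps=100):
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

limits = [iterate(math.cos, x0) for x0 in (-10.0, 0.0, 3.0, 100.0)]
print(limits)  # four (nearly) identical values near 0.739085
```

The open question, of course, is whether moral development is anything like a contraction; the next comment's worry about symmetry breaking is precisely the worry that it is not.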
It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later.
...or if you think what we ought to be doing is helping to create the thing with the universal moral values.
I’m not really convinced that the convergence will be complete, though. If two advanced alien races meet, they probably won’t agree on all their values—perhaps due to moral spontaneous symmetry breaking—and small differences can become important.
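A toy picture of what that symmetry-breaking worry amounts to (my analogy, not the commenter's claim): an update rule that is perfectly symmetric can still have multiple attractors, so an arbitrarily small initial difference decides which limit is reached.

```python
# An update rule symmetric under x -> -x, with two stable attractors
# at +1 and -1. The rule plays no favorites, yet a tiny difference in
# starting conditions produces maximally different end states.

def settle(x, steps=300):
    for _ in range(steps):
        x = x + 0.1 * x * (1.0 - x * x)
    return x

print(settle(+1e-9), settle(-1e-9))  # one settles near +1.0, the other near -1.0
```

If value convergence works like this, two advanced civilizations could each be internally converged and still disagree with one another.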
And the slogan: “All evils are due to lack of knowledge”.
You should read his book, The Beginning of Infinity. It’s not a slogan but a philosophical position which he explains at length. Learn why he thinks it. He’s not an idiot.
Since you partly agree with him, and have mixed feelings, I think it’d be worth looking into for you, so I wanted to let you know it’s much more than a slogan! And “optimism” to DD does not mean “predicting a positive future”; it’s not about wearing rose-colored glasses.
The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET’s morality will inevitably be superior to our own. And the slogan: “All evils are due to lack of knowledge”. Why does this kind of thing remind me of George W. Bush?
Well, it reminds me of Plato, which is much more damning.
This seems pretty daft to me too. It looks like a kind of moral realism—according to which being eaten by aliens might well be “good”—since it leads to more “goodness”.
Right. But moral realism is not necessarily daft. It only becomes so when you add in universalism and a stricture against self-indexicality.