2) Just to amplify point 1) a bit: you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.
Expected utilities do not work like that. If you’re risk averse, you embody that in the utility function by assigning diminishing returns (and this can indeed lead to a situation where you would take a bet 1000 times but would not take it once); you do not stop maximising expected utility.
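A minimal numerical sketch of this point, assuming a concave utility over final wealth and a 50/50 lose-$100/gain-$110 bet (the wealth level, the piecewise-linear utility, and the bet sizes are illustrative assumptions, not figures from the thread):

```python
# Toy check: an expected-utility maximiser with a concave ("diminishing
# returns") utility over final wealth can decline ONE 50/50 lose-$100 /
# gain-$110 bet yet accept 1000 independent copies of it.
# (All numbers here are illustrative assumptions.)
import numpy as np

rng = np.random.default_rng(0)
WEALTH = 10_000  # hypothetical starting wealth

def utility(w):
    # Concave utility over final wealth: each dollar gained above the
    # starting wealth is worth half as much as each dollar lost below it.
    return np.where(w <= WEALTH, w, WEALTH + 0.5 * (w - WEALTH))

def expected_utility(n_bets, n_samples=1_000_000):
    # Each bet pays +110 or -100 with probability 1/2, so the total
    # outcome is fixed by the number of wins; draw that directly.
    wins = rng.binomial(n_bets, 0.5, size=n_samples)
    final_wealth = WEALTH + 110.0 * wins - 100.0 * (n_bets - wins)
    return utility(final_wealth).mean()

print("no bet:   ", float(utility(WEALTH)))   # 10000.0
print("1 bet:    ", expected_utility(1))      # ~9977.5  -> lower, so decline
print("1000 bets:", expected_utility(1000))   # ~12450   -> higher, so accept
```

Under these assumptions the single bet lowers expected utility (about 9977.5 versus 10000) while the 1000-fold repetition raises it (roughly 12450), so the same expected-utility maximiser declines the one-off bet and accepts the sequence; the risk aversion lives in the curvature of the utility function, not in abandoning expectation.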
If a mathematician like John Baez can be that wrong, doesn’t that mean the topic needs further attention? Not necessarily in the sense of more research, but in the sense of giving people specific resources to read so that they don’t make similar mistakes in the future.
I suspect that John Baez and the people from GiveWell are capable of understanding what you understand about this topic. All of them have read a lot of LW and interviewed the SIAI. If you take that into account, along with their intelligence and their knowledge of the positions held by the SIAI, is there a way to figure out what went wrong and what we can improve so that those and other people understand how they are wrong?
I am just trying to locate the problem. What do you think is the cause of their disagreement?
Baez: … you shouldn’t always maximize expected utility if you only live once.
BenElliot: [Baez is wrong] Expected utilities do not work like that.
XiXiDu: If a mathematician like John Baez can be that wrong …
A mathematician like Baez can indeed be that wrong, when he discusses technical topics that he is insufficiently familiar with. I’m sure Baez is quite capable of understanding the standard position of economists on this topic (the position echoed by BenElliot). But, as it apparently turns out, Baez has not yet done so. No big deal. Treat Baez as an authority on mathematical physics, category theory, and perhaps saving the environment. He is not necessarily an authority on the foundations of microeconomics.
A mathematician like Baez can indeed be that wrong, when he discusses technical topics that he is insufficiently familiar with.
What about Robin Hanson? See for example his post here and here. What is it that he is insufficiently familiar with? Or what about Katja Grace, who has been a visiting fellow of the SIAI? See her post here (there are many other posts by her).
And the people from GiveWell even knew about Pascal’s Mugging; what is it that they are insufficiently familiar with?
I mean, those people might disagree for different reasons. But I think that too often the argument that people just don’t know what they are talking about is used instead of trying to find out why else they might disagree. As I said in the OP, none of them doubts that there are risks from AI; they just think we don’t know enough to take those risks too seriously at this moment. The SIAI, on the other hand, says that the utility associated with AI-related matters outweighs those doubts. So if we were going to pinpoint the exact nature of the disagreement, would it maybe all come down to how seriously we should take vague possibilities?
And if you are right that the whole problem is that they are insufficiently familiar with the economics of existential risks, then isn’t that something that could be improved by putting some effort into raising awareness of why it is rational not to disregard risks from AI even if one believes they are very unlikely?
For the record, I never said I disagreed with the people from GiveWell. I don’t; my charity of choice is currently Village Reach. I merely disagree with Baez when he says we should not maximise expected utility. I would be very surprised to find Robin Hanson making the same mistake (if I did, I would seriously re-think my own position, and possibly lower my respect for Hanson significantly).
Please stop trying to view the world as having just two sides. Hanson’s arguments are arguments that the probability of a singularity (as Eliezer sees it) is low enough that an expected utility maximiser would not spend much time worrying about it (at least, I think that’s his point; all he explicitly argues is that the probability is low). His point is not, even slightly, an argument against expected utility maximisation.
Sheesh! Please don’t assume that everyone who disagrees with one point you made is doing so because he disagrees with the whole thrust of your thinking.
He isn’t wrong; he’s just used to using different language than you are. And I might add that the language he is using reflects, as far as I can tell, the far more commonly accepted notion of utility, rather than VNM utility, which is what I assume you are talking about. By “commonly accepted” I mean that the average technical person who uses the word utility is probably not thinking about VNM utility. So if you want to write Baez’s views off, you should at least first agree on the same definition and then ask the same question.
See my other comment here. I originally misattributed the Baez quote to XiXiDu, so the reply was addressed to him directly.
What benelliot said.
Doesn’t seem to agree with Baez on the subject of utility maximisation. Baez was making no sense—he does seem to be “that wrong” on the topic.
Scope insensitivity.