So it seems to follow that once you know Eliezer’s beliefs about the future, whether those beliefs are right or wrong is irrelevant to you: that just affects what actually happens in the future, which you systematically discount anyway.
But Eliezer’s beliefs about the future continue to change—as he gains new information and completes new deductions. And there is no way that he can practically keep me informed of his beliefs—neither he nor I would be willing to invest the time required for that communication. But Eliezer’s beliefs about the future impact his actions in the present, and those actions have consequences both in the near and distant future. From my point of view, therefore, his actions have essentially random effects on the only thing that matters to me—the near future.
Absolutely. But who isn’t that true of? At least Eliezer has extensively documented his putative beliefs at various points in time, which gives you some data points to extrapolate from.
I have no complaints regarding the amount of information about Eliezer’s beliefs that I have access to. My complaint is that Eliezer, and his fellow non-discounting act utilitarians, are morally driven by the huge differences in utility which they see as arising from events in the distant future—events which I consider morally irrelevant because I discount the future. No realistic amount of information about beliefs can alleviate this problem. The only fix is for them to start discounting. (I would have added “or for me to stop discounting” except that I still don’t know how to handle the infinities.)
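The "infinities" here are the familiar problem that an undiscounted sum of utilities over an unbounded future need not converge. A minimal sketch (my illustration, not anything from the discussion — the constant-utility stream and the names are assumptions): with exponential discounting at rate gamma < 1, the present value of a bounded utility stream is finite, whereas with gamma = 1 the partial sums grow without bound.

```python
# Illustrative sketch: exponential discounting tames an infinite
# stream of bounded utilities; no discounting leaves a divergent sum.

def discounted_sum(utility, gamma, horizon):
    """Partial sum of gamma**t * utility(t) for t in [0, horizon)."""
    return sum(gamma ** t * utility(t) for t in range(horizon))

u = lambda t: 1.0  # a constant utility of 1 per period

# gamma < 1: partial sums converge toward 1 / (1 - gamma) = 20.0
print(discounted_sum(u, 0.95, 1000))  # ~20.0

# gamma = 1 (no discounting): partial sums just track the horizon
print(discounted_sum(u, 1.0, 1000))   # 1000.0
```

This is why "stop discounting" is not a free move: without some device like a discount factor (or a bounded utility function), comparisons between infinite futures can become ill-defined.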
Given that they predominantly care about things I don’t care about, and that I predominantly care about things they don’t worry about, we can only consider each other to be moral monsters.
You and I seem to be talking past each other now. It may be time to shut this conversation down.
Given that they predominantly care about things I don’t care about, and that I predominantly care about things they don’t worry about, we can only consider each other to be moral monsters.
Ethical egoists are surely used to this situation, though. The world is full of people who care about extremely different things from one another.
Yes. And if they both mostly care about modest-sized predictable things, then they can do some rational bargaining. Trouble arises when one or more of them has exquisitely fragile values—when they believe that switching a donation from one charity to another destroys galaxies.
I expect your decision algorithm will find a way to deal with people who won’t negotiate on some topics—or who behave in a manner you have a hard time predicting. Some trouble for you, maybe—but probably not THE END OF THE WORLD.
From my point of view, therefore, his actions have essentially random effects on the only thing that matters to me—the near future.
Looking at the last 10 years, there seems to have been some highly predictable fund-raising activity, and a lot of philosophising about the importance of machine morality.
I see some significant patterns there. It is not remotely like a stream of random events. So: what gives?