I’d like to ask the people who downvoted this post for their reasons. I thought this was a reasonable antiprediction to the claims made regarding the value of a future galactic civilisation. Based on economic and scientific evidence, it is reasonable to assume that the better part of the future, namely the time from 10^20 to 10^100 years (and beyond), will be undesirable.
If you spend money and resources on the altruistic effort of trying to give birth to this imagined galactic civilisation, why don’t you take into account the more distant and much larger part of the future that lacks the resources to sustain such a civilisation? You are deliberately causing suffering by putting short-term interests over those of the far larger part of the future.
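A rough back-of-the-envelope illustration of the asymmetry being claimed here, using only the order-of-magnitude figures from the post (10^20 and 10^100 years are placeholders, not established cosmology):

    # A minimal sketch of the timescale asymmetry in the argument above.
    # The figures are the order-of-magnitude placeholders used in the post,
    # not claims about real cosmology.
    flourishing_years = 10**20   # assumed era with ample resources
    total_years = 10**100        # assumed total span over which intelligence could persist

    fraction = flourishing_years / total_years
    print(f"Flourishing era as a fraction of the whole span: {fraction:.1e}")
    # -> 1.0e-80: on these assumptions the resource-rich era is a
    #    vanishingly small slice of the future.

On these assumptions the resource-rich era occupies roughly one part in 10^80 of the whole span, which is the sense in which “the better part of the future” is claimed to be the era of decay.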
I didn’t downvote the post—it is thought-provoking, though I don’t agree with it.
But I had a negative reaction to the title (which seems borderline deliberately provocative to attract attention), and the disclaimer—as thomblake said, “Please write posts such that they can be interpreted literally, so the gist follows naturally from the literal reading.”
As for the disclaimer: I was rather annoyed at all the comments on my other post. People claimed I said things that, to my understanding, I never said, and if what I actually wrote were analyzed, I’m sure nobody could show how to arrive at such conclusions. As was obvious, not even EY read my post; he simply took something out of context and ran with it.
The future is the stuff you build goodness out of. The properties of the stuff don’t matter; what matters is the quality and direction of the decisions made about arranging it properly. If you suggest a plan with obvious catastrophic problems, chances are it’s not what will actually be chosen by rational agents (that, or your analysis is incorrect).
The analysis is incorrect? Well, ask the physicists.
Moral analysis.
Yes, I think so too. But I haven’t seen any good arguments against Negative utilitarianism in the comments yet. (More here)
You lost the context. Try not to drift.
Is this really worth your time (or Carl Shulman’s)? Surely you guys have better things to do?
If you tell me where my argumentation differs from arguments like this, I’ll know whether it is a waste of your time or not. I can’t figure it out.
Since XiXiDu and multifoliaterose’s posts have all been made during the Singularity Summit, when everyone at SIAI is otherwise occupied and so cannot respond, I thought someone familiar with the issues should engage rather than leave a misleading appearance of silence. And giving a bit of advice that I think has a good chance of improving XiXiDu’s contributions seemed reasonable and not too costly.
There is not enough stuff to sustain a galactic civilization for very long (relative to the expected time over which the universe can sustain intelligence). And there is no way to alter the quality or direction of the fundamental outcome to overcome this problem (given what we know right now).
That’s what I am inquiring about: is it rational, given that we adopt a strategy of minimizing suffering? Or are we going to create trillions of beings to have fun for a relatively short period, and then have them suffer or commit suicide over a much longer one?
It’s a worthwhile question, but probably fits better on an open thread for the first round or two of comments, so you can refine it into a specific proposal or core disagreement.
My first response to what I think you’re asking is that this question applies to you as an individual just as much as it does to humans (or human-like intelligences) as a group. There is a risk of sadness and torture in your future. Why keep living?
I don’t believe that is a reasonable prediction. You’re dealing with timescales so far beyond human lifespans that assuming those future beings will never think of the things you are thinking of is entirely implausible.
In this horrendous future of yours, why do people keep reproducing? Why doesn’t the last viable generation (knowing they’re the last viable generation) cease reproduction?
If you think that this future civilisation will be incapable of understanding the concepts you’re trying to convey, what makes you think we will understand them?
It is not about reproduction, but about the fact that by that time there will already be many more entities than ever before, and they will all have to die. Right now, only a few have to die or suffer.
And it is not my future; it is based on far more evidence than the near-term future talked about on LW.
Ah, I get it now: you believe that all life is necessarily a net negative, that existing is less of a good than dying is of a bad.
I disagree, and I suspect almost everyone else here does too. You’ll have to provide some justification for that belief if you wish us to adopt it.
I’m not sure I disagree, but I’m also not sure that dying is a necessity. We don’t understand physics yet, much less consciousness; it’s too early to treat death as a certainty, which means I have a significantly nonzero confidence that life is an infinite good.
Doesn’t that make most expected-utility calculations nonsensical?
A problem with the math, not with reality.
There are all kinds of mathematical tricks to deal with infinite quantities. Renormalization is something you’d be familiar with from physics; from my own CS background, I’ve got asymptotic analysis (which can’t see the fine details but easily handles the large ones). Even something as simple as taking the derivative of your utility function would often be enough to tell which alternative is best.
I’ve also got a significantly nonzero confidence of infinite negative utility, mind you. Life isn’t all roses.
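A minimal sketch of the kind of trick gestured at here, with invented utility functions: even when two options both accumulate unboundedly much utility, comparing their asymptotic growth rates, rather than their divergent totals, can still rank them.

    # A minimal, invented illustration of comparing two unbounded utility
    # streams by their asymptotic growth rather than their divergent totals.
    # u_a(t) and u_b(t) are hypothetical cumulative utilities at time t.
    import math

    def u_a(t):
        return 3.0 * t               # grows linearly

    def u_b(t):
        return t * math.log(t + 1)   # grows slightly faster than linearly

    # Both totals diverge as t grows, but the ratio settles the comparison:
    for t in (1e3, 1e6, 1e9):
        print(t, u_b(t) / u_a(t))
    # The ratio keeps increasing, so over long enough horizons option B
    # dominates option A even though neither has a finite total.

The same idea works for unboundedly negative streams: compare how fast the disutility grows instead of trying to sum it.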
We already donate based on the assumption that superhuman AI is possible and that it is right to base our decisions on the extrapolated utility of such an AI and a possible galactic civilisation. Why can we not make decisions based on the better-evidenced economic and physical assumption that the universe is unable to sustain a galactic civilisation for most of its lifespan, and on the extrapolated suffering that follows from this prediction?
Well, first off...
What kind of decisions were you planning to take? You surely wouldn’t want to make a “friendly AI” that’s hardcoded to wipe out humanity; you’d expect it to come to the conclusion that that’s the best option by itself, based on CEV. I’d want it to explain its reasoning in detail, but I might even go along with that.
My argument is that it’s too early to take any decisions at all. We’re still in the data collection phase, and the state of reality is such that I wouldn’t trust anything but a superintelligence to be right about the consequences of our various options anyway.
We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.
True, I have to read up on CEV and see whether there is a possibility that a friendly AI could decide to kill us all to reduce suffering in the long term.
The whole idea in the OP stems from the kind of negative utilitarianism that suggests it is not worth torturing 100 people infinitely to make billions happy. So I thought I would extrapolate this and ask: what if we figure out that, in the long run, most entities will be suffering?
Negative utilitarianism is... interesting, but I’m pretty sure it implies an immediate requirement to collectively commit suicide no matter what (unless continued existence, inevitably(?) ended by death, is somehow less bad than suicide, which seems unlikely). Am I wrong?
That’s not at all similar to your scenario, which rests on the much more reasonable assumption that the future might be a net negative even after counting the positives.
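A toy comparison of the two aggregation rules being contrasted in this exchange; all numbers are invented placeholders. A strict negative-utilitarian rule counts only the suffering, so the “torture 100 to make billions happy” trade is rejected, while a simple total-utilitarian sum can endorse it.

    # Toy illustration of the two aggregation rules discussed above.
    # All quantities are invented placeholders, not claims about real welfare.
    tortured = 100
    happy = 3_000_000_000
    suffering_per_tortured = -1000.0   # hypothetical disutility per tortured person
    happiness_per_happy = 1.0          # hypothetical utility per happy person

    total_utilitarian = tortured * suffering_per_tortured + happy * happiness_per_happy
    negative_utilitarian = tortured * suffering_per_tortured   # positives don't offset suffering

    print("total utilitarian sum:   ", total_utilitarian)     # > 0: the trade looks acceptable
    print("negative utilitarian sum:", negative_utilitarian)  # < 0: the trade is rejected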
In my opinion, the post doesn’t warrant −90 karma points. That’s pretty harsh. I think you have plenty to contribute to this site—I hope the negative karma doesn’t discourage you from participating, but rather, encourages you to refine your arguments (perhaps get feedback in the open thread first?)
In my opinion, the bad karma I get here is the result of bias. People just don’t realize that I’m basing extrapolated conclusions on some shaky premises, just as LW does all the time when talking about the future galactic civilization and risks from AI. The difference is that my predictions are based on much more evidence.
The post is a mockery of all that is wrong with this community. I already thought I’d get bad karma for my other post but was surprised not to. I’ll probably get really bad karma now for saying this. Oh well :-)
To be clear, this is a thought experiment asking what we can and should do if we are ultimately prone to cause more suffering than happiness. It’s nothing more than that. People suspect that I’m making strong arguments, that this is my opinion, that I’m asking for action. That is all wrong; I’m not the SIAI. I can argue for things I don’t support and don’t even think are sound.
Note that multifoliaterose’s recent posts and comments have been highly upvoted: he’s gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.
You are one of the few people here whose opinion I actually take seriously, after many insightful and polite comments. What is the bone of contention in the OP? I took a few different ingredients: Robin Hanson’s argumentation about resource problems in the far future (the economic argument); questions based on negative utilitarianism (the ethical argument); and the most probable fate of the universe given current data (the basic premise). Then I extrapolated from there and created an antiprediction; that is, I said that a good outcome is too unlikely for us to believe it possible. Our responsibility is to prevent a vast amount of suffering over 10^100 years.
I never said I support this conclusion or think that it is sound. But I think it is very similar to other arguments within this community.
On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which “reads” angry, and doesn’t fit with norms of politeness and discourse here).
Substantively, I’ll consider the major pieces individually.
The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar’s arguments and made your points more explicit, but instead simply stated the conclusion.
The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline was lacking in supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce’s Hedonistic Imperative is relevant here: with access to self-modification capacities entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals.
For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the “lifeboat ethics” scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer’s doesn’t work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.
In several places throughout the post you use “what if” language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.
“Might it be better to believe that winning is impossible, than that it’s likely, if the actual probability is very low?”
Edit: I misread the “likely” in this sentence and mistakenly objected to it.
I think that spending more time reading the sequences, and the posts of highly upvoted Less Wrongers such as Yvain and Kaj Sotala, will help you to improve your sense of the norms of discourse around here.
I copied that sentence from here (last sentence).
Thanks. I’ll quit making top-level posts, as I doubt I’ll ever be able to exhibit the attitude required for the level of thought and elaboration you demand. That was actually my opinion before making my last and my first post. But all this, in my opinion laughable, attitude around Roko’s post made me sufficiently annoyed that I wanted to signal my incredulity.
ETA
“In several places throughout the post you use ‘what if’ language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.”
The SIAI = What If?
I think you should probably read more of the Less Wrong sequences before you make more top-level posts. Most of the highly upvoted posts are by people who have the knowledge background from the sequences.
I’m talking about this kind of statement: http://www.vimeo.com/8586168 (5:45)
“If you confront it rationally, full on, then you can’t really justify trading off any part of galactic civilization for anything that you could get nowadays.”
So why, I ask you directly, am I not allowed to argue that we can’t really justify balancing the happiness and utility of a galactic civilization against the MUCH longer time of decay? There is this whole argument about how we have to give rise to the galactic civilization and have to survive now. But I predict that suffering will prevail, that the outcome is too unlikely to be positive. What is wrong with that?