Maybe some people are, and some people are not?
Not sure if we are talking about the same thing, but I think that there are many people who just “play it safe”, and in a civilized society that generally means following the rules and avoiding unnecessary conflicts. The same people can behave differently if you give them power (even on a small scale, e.g. when they have children).
But I think there are also people who try to do good even when the incentives point the other way, and people who can’t resist hurting others even when that predictably gets them punished.
Given more information about someone, your capacity for having {commune, love, compassion, kindness, cooperation} for/with them increases more than your capacity for {hatred, adversariality} towards them increases.

Knowing more about people allows you to have a better model of them. So if you started with the assumption that, e.g., people who don’t seem sufficiently similar to you are bad, then knowing them better will improve your attitude towards them. On the other hand, if you started from some kind of Pollyanna perspective, knowing people better can make you disappointed and bitter. Finally, if you are a psychopath, knowing people better just gives you more efficient ways to exploit them.
Right. Presumably, maybe. But I am interested in considering quite extreme versions of the claim. Maybe there are only 10,000 people who would, as emperor, make a world that is, after 1,000,000 years, net negative according to us. Maybe there are literally 0? I’m not even sure that there aren’t literally 0, though quite plausibly someone else could know this confidently. (For example, someone could hypothetically have solid information suggesting that someone could remain so truly delusionally and disorganizedly psychotic and violent that they never get bored and never grow, while still being functional enough to give an AI directions that specify world domination for 1,000,000 years.)
Sounds to me like wishful thinking. You basically assume that in 1,000,000 years people will get bored of doing the wrong thing and start doing the right thing. My perspective is that “good” is a narrow target in the possibility space, and if someone already keeps missing it now, then expanding their possibility space by making them a God-emperor only decreases the chance of converging to that narrow target.
Basically, for your model to work, kindness would need to be the only attractor in the space of human (actually, post-human) psychology.
A simple example of how things could go wrong is for Genghis Khan to set up an AI to keep everyone else in horrible conditions forever, and then (on purpose, or accidentally) wirehead himself.
Another example is the God-emperor editing their own brain to remove all empathy, e.g. because they consider it a weakness at the moment. Once all empathy is uninstalled, there is no incentive to reinstall it.
EDIT: I see that Thane Ruthenis already made this argument, and didn’t convince you.
No, I ask the question, and then I present a couple hypothesis-pieces. (Your stance here seems fairly though not terribly anti-thought AFAICT, so FYI I may stop engaging without further warning.)
I’m seriously questioning whether it’s a narrow target for humans.
Curious to hear other attractors, but your proposals aren’t really attractors. See my response here: https://www.lesswrong.com/posts/Ht4JZtxngKwuQ7cDC/tsvibt-s-shortform?commentId=jfAoxAaFxWoDy3yso
Ah I see you saw Ruthenis’s comment and edited your comment to say so, so I edited my response to your comment to say that I saw that you saw.
Well, if we assume that humans are fundamentally good / will inevitably converge to kindness given enough time… then, yeah, giving someone God-emperor powers is probably going to be good in the long term. (If they don’t accidentally make an irreparable mistake.)
I just strongly disagree with this assumption.
It’s not an assumption; it’s the question I’m asking and discussing.
Ah, then I believe the answer is “no”.
On the time scale of the current human lifespan, I guess I could point out that some old people are unkind, or that some criminals keep re-offending, so it doesn’t seem like time automatically translates into more kindness.
But an obvious objection is “well, maybe they need 200 years of time, or 1000”, and I can’t provide empirical evidence against that. So I am not sure how to settle this question.
On average, people get less criminal as they get older, so that would point towards human kindness increasing over time. On the other hand, they also get less idealistic, on average, so maybe a simpler explanation is that as people get older, they get less active in general. (Also, some of the reduction in crime is caused by criminals getting killed as a result of their lifestyle.)
There is probably a significant impact of hormone levels, which means we need to make an assumption about how the God-emperor would regulate their own hormones. For example, if he decides to keep a 25-year-old human male body, maybe his propensity for violence will match the body?
tl;dr: what kinds of arguments should even be used in this debate?
Ok, now we have a reasonable question. I don’t know, but I provided two argument-sketches that I think are of a potentially relevant type. At an abstract level, the answer would be “mathematico-conceptual reasoning”, just like in all previous instances where there’s a thing that has never happened before and yet we reason somewhat successfully about it, of which there are plenty of examples if you think about it for a minute.
When I read Tsvi’s OP, I was imagining something like a (trans-/post-, but not too post-)human civilization where everybody by default has an unbounded lifespan and healthspan, and possibly somewhat boosted intelligence and need for cognition / open intellectual curiosity. (In which case “people tend to X as they get older”, where X is mostly an effect of default human aging, doesn’t apply.)
Now start it as a modern-ish democracy or a cluster of (mostly) democracies, run it for 1e4 to 1e6 years, and see what happens.