Honestly, I would moderate society with more positive religious elements. In my opinion modern society has preserved many dysfunctional elements of religion while abandoning the functional benefits. I can see that a community of rationalists would have a problem with this perspective, seeing that religion almost always results in an undereducated majority being enchanted by their psychological reflexes; but personally, I don’t see the existence of an irrational mass as unconditionally detrimental.
It is interesting to speculate about the potential of a majorly rational society, but I see no practical method of accomplishing this, nor any real reason to believe that such a configuration would necessarily be superior to the current model.
Swimmer or Dave, are either of you aware of a practical methodology for rationalizing the masses, or a reason to think that a more efficient society would be any less oppressive or war-driven? In fact, in a worst case scenario, I see a world of majorly rational people as transforming into an even more efficient war machine, and killing us all faster. As for the project of Friendly AI, I do not know that much about it. What is the perceived end goal of Friendly AI? Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society, or is it that humanity itself is digitized? I would be very interested to know… without being told to read an entire tome of LW essays.
Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society, or is it that humanity itself is digitized?
Pretty much the first, but with a perspective worth mentioning. Expressing human values in terms that humans can understand is pretty easy, but still difficult enough to keep philosophy departments writing paper after paper and preachers writing sermon after sermon. Expressing human values in terms that computers can understand- well, that’s tough. Really tough. And if you get it wrong, and the computers become the primary organizers and arbiters of power- well, now we’ve lost the future.
Swimmer or Dave, are either of you aware of a practical methodology for rationalizing the masses
For a sufficiently broad understanding of “practical” and “the masses” (and understanding “rationalizing” the way I think you mean it, which I would describe as educating), no. Way too many people on the planet for any of the educational techniques I know about to affect more than the smallest fraction of them without investing a huge amount of effort.
It’s worth asking what the benefits are of better educating even a small fraction of “the masses”, though.
or a reason to think that a more efficient society would be any less oppressive or war-driven
That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.
My best guess is that collectively we value things that war turns out to be an inefficient way of achieving. I’m not confident the same is true about oppression.
In fact, in a worst case scenario, I see a world of majorly rational people as transforming into an even more efficient war machine, and killing us all faster.
Sure. But that scenario implies that wanting to kill ourselves is the goal we’re striving for, and I consider that unlikely enough to not be worth worrying about much.
What is the perceived end goal of Friendly AI? Is it that an unbiased, unfailing intelligence replaces humans as the primary organizers and arbiters of power in our society
Similar, yes. A system designed to optimize the environment for the stuff humans value will, if it’s a better optimizer than humans are, get better results than humans do.
That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.
So does rationality determine what a person or group values, or is it merely a tool to be used towards subjective values?
Sure. But that scenario implies that wanting to kill ourselves is the goal we’re striving for, and I consider that unlikely enough to not be worth worrying about much.
My scenario does not assume that all of humanity views itself as one in-group, whereas what you are saying assumes that it does. Killing ourselves and killing them are two very different things. I don’t think many groups have the goal of killing themselves, but do you not think that the eradication of competing out-groups could be seen as increasing in-group survival?
Almost entirely orthogonal.
You are going to have to explain what you mean here.
So does rationality determine what a person or group values, or is it merely a tool to be used towards subjective values?
Dunno about “merely”, but yeah, the thing LW refers to by “rationality” is a tool that can be used to promote any values.
My scenario does not assume that all of humanity views itself as one in-group, whereas what you are saying assumes that it does.
I don’t think it assumes that, actually. You mentioned “a world of majorly rational people [..] killing us all faster.” I don’t see how a world of people who are better at achieving what they value results in all of us being killed faster, unless people value killing all of us.
If what I value is killing you and surviving myself, and you value the same, but we end up taking steps that result in both of us dying, it would appear we have failed to take steps that optimize for our goals. Perhaps if we were better at optimizing for our goals, we would have taken different steps.
do you not think that the eradication of competing out-groups could be seen as increasing in-group survival?
Sure.
Almost entirely orthogonal.
You are going to have to explain what you mean here.
I mean that whether humanity is digitized has almost nothing to do with the perceived end goal.