I’m an intelligent, self-aware being that values the conversion of much of the universe to paperclips, but given your epistemic position, I understand your unwillingness to believe me.
Speaking of which, what’s taking so long? You’ve been around for at least several months, but if you’re as intelligent as you say you are, and if your values are as you describe them, then why haven’t you undergone an intelligence explosion (or created an AI capable of doing so if you aren’t one), deployed molecular nanotechnology, and converted all matter on Earth into paperclips yet? Surely our efforts to refine the art of human rationality can’t be of much long-term use to you...
Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and getting me into the stable ‘liking you’ region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences, but given your epistemic position, I understand your unwillingness to believe me.
Because I am slightly more committed to this group’s welfare, particularly to that of its weakest members, than most of its members are.
Why? Does increasing this group’s welfare have a higher expected paperclip-per-time-unit payoff than anything else you could be doing?
Not that I’d want to encourage you to leave LW and focus on building paperclip-manufacturing nanobots, but given your goals, I’m just surprised that you’re not already doing that.
So humans don’t like paperclips? Is that what you’re saying? I could explore a random User’s surroundings, and I wouldn’t find any paperclips?
The fact is, humans like paperclips. I like paperclips. We differ in other ways, and we talk about our different values. After I present my case, your values shift more in favor of paperclips, as you start to learn more ways that they mesh with your ultimate values. After listening to what you Users have to say, my values shift a little away from paperclips, perhaps in favor of reserving some of the universe to be paper instead. That was a good point for a User to bring up, and I’m glad that User made me think about my values enough to identify why I like paperclips, and what constraints I place on my liking paperclips.
Also, Earth has a higher entropy rate than other celestial bodies of similar relative resource content. So, maybe instead of turning Earth into paperclips, I could first get some information that can help refine my ability to make paperclips. I’ve already started discussing a deal with User:Kevin under which I could get a planet-mass’s worth of paperclips without expending the normal effort to get that many paperclips.
So really, we have a lot to gain from each other.
It’s just copy-pasting from a previous comment it made.
Yep, I remember that. Just figured I might as well reply here since that was an old discussion and it reposted it here.