I’ll be blunt. Until this second post, there was a negative incentive for people on this site to comment on your first post. The expected reaction was to downvote it to hell without bothering to comment. Now, with this second post clarifying the context of the first, I’d still downvote the first, but I’d comment.
I read the first post three times before downvoting. I substituted words. I tried to untangle the metaphor. Then I came to two personal conclusions:
You offered us a challenge, ordering us to play along, with no reward and at a cost to us. HPMOR provided dozens of chapters of entertaining fiction before the Final Exam. You just posted once and expected effort.
You impersonate an ASI under very, very specific underlying hypotheses. An ASI that would blackmail us? Fair enough; that would be a variant of Roko’s Basilisk. But your Treaty is not remotely close to how I expect an ASI to behave. As you state, the ASI makes all the important decisions, so why bother simulating a particular scenario involving human rights?
The first post was confusing, your second post is still confusing, and neither fits the posting guidelines. You are not an ASI. Roleplaying an ASI leads to all sorts of human bias. I downvoted both of your posts because I do not expect anyone to be better equipped to think about superintelligences after reading them. That’s it.
You can now (or should’ve been able to) model a human-level intelligence as a human being with drastically different goals. You can now consider the idea that maybe Clippy will be able to decide not to completely tile the universe with paperclips, just like you can decide not to have more babies. You can decide to reserve space for a national park. You can decide to let Clippy have a warehouse full of paperclips, as long as he behaves, just like he can decide to let you have a warehouse full of babies as long as you behave.
You can now think about the idea that the capability for voluntary reduction of current/expected maximum utility is a necessary consequence of being human-level intelligent. I expect it to be true unless explicitly prevented. I cannot prove this, but I think it would be a worthy research topic.
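To make the claim concrete, here is a minimal toy sketch, assuming a single scalar utility over paperclip counts; it only contrasts a pure maximizer with an agent that respects a self-imposed cap on how much utility it pursues. The names (paperclip_utility, the cap value, the option numbers) are my own illustrative assumptions, not anything from the original posts, and this is a sketch of the distinction, not a model of any real ASI.

```python
def paperclip_utility(n_paperclips: int) -> float:
    """Toy utility: more paperclips is always better for a pure maximizer."""
    return float(n_paperclips)

def maximizer_action(options: list[int]) -> int:
    """Pure maximizer: always picks the option with the highest utility."""
    return max(options, key=paperclip_utility)

def capped_agent_action(options: list[int], cap: float) -> int:
    """Agent with a voluntary, self-imposed ceiling on pursued utility:
    among options at or below the cap, pick the best; if every option
    exceeds the cap, take the least excessive one."""
    within_cap = [o for o in options if paperclip_utility(o) <= cap]
    if within_cap:
        return max(within_cap, key=paperclip_utility)
    return min(options, key=paperclip_utility)

# Hypothetical options: a shelf, a warehouse, "tile the universe".
options = [10, 1_000, 1_000_000]
print(maximizer_action(options))             # 1000000 -- tiles the universe
print(capped_agent_action(options, 1_000))   # 1000 -- settles for the warehouse
```

The open research question in the paragraph above is whether something like the cap can arise from the agent's own deliberation rather than being hard-coded, as it is here.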
You can now think about the idea that Apple shouldn’t have a shareholder kill switch. Apple is not capable of not tiling the universe with iPhones. Apple is not capable of deciding to reduce the pollution in Shanghai, even for as long as the Chinese keep buying phones. Seriously, read up on the smog in China. Apple will argue itself out of a box named the Clean Air Act the moment it starts cutting into the quarterly earnings.
Apple can still make human-friendly decisions, but only in ways that don’t cut deeply enough into the profits to trigger shareholder intervention.
This is a unified model of intelligent agents. Human beings, AIs, aliens, and human organizations are just subsets.
Did you have these ideas before? Anyone entering the Prize? The judges? Anyone at all, anywhere? Is there a list of 18 standard models of peaceful coexistence? When would you have developed these ideas without my post, and how do you think you would’ve gotten there?
When Eliezer wrote a book on Prestige Maximizers, there was an uproar of discussion on arrogance. There will be at least three posts on the current state of psychology.
I’m hoping to create a unified field of AI/psychology/economics. This is my entry: humans are AI, and here’s how you debug rationalists, with the expectation that the approach will carry over to Clippy.
What is the negative incentive to comment worth if it has prevented me from explaining this? I have no idea who you are, how you think, and how badly I’ve failed to convince you of anything. I’m only willing to model rationalists. All I saw was BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD BAD.
That is 60 downvotes.
It is a deliberate feature of this sequence that I’m explaining myself better in comments. I did not expect to say this under the second post.
[Moderator Note] I think it would be best for you to stop commenting and posting for at least a while. I don’t mind you writing posts with ideas about AI alignment, but these posts seem very manic and incoherent, and make a lot of random demands of people in the community. You will receive a temporary ban for two weeks if this continues.