But, sure, if you’re somehow magically unhackable and very good at keeping the paperclipper boxed until you fully understand it, then there’s a chance you can trade, and you have the privilege of facing the next host of obstacles.
Now’s your chance to figure out what the next few obstacles are without my giving you spoilers first. Feel free to post your list under spoiler tags in the comment section.
Ideas:
Someone else definitely builds and deploys a UFAI before you finish studying Clippy. (This seems like it would almost always happen?)
Clippy figures out that it’s in a prisoner’s dilemma with the other cobbled-together UFAIs humanity builds, wherein each UFAI is given the option to shake hands with humanity or pass 100% of the universe to whichever UFAI humanity eventually otherwise deploys. Clippy makes some models, does some decision theory, predicts that if it refuses the handshake, other UFAIs are more likely to refuse too based on their own models, and decides not to trade. The multiverse contains twice as many paperclips.
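The arithmetic behind "twice as many paperclips" can be sketched with toy numbers. (This is my own illustrative model, not something from the post; `N_BRANCHES`, `UNIVERSE`, and `total_paperclips` are hypothetical names, and the correlated-decision assumption is baked in rather than derived.)

```python
# Toy model of the correlated-defection argument. Assume N multiverse
# branches, each containing one boxed UFAI whose decision procedure is
# correlated with Clippy's: whatever Clippy chooses, they all choose.

N_BRANCHES = 1000
UNIVERSE = 1.0  # resources per branch, normalized


def total_paperclips(all_cooperate: bool) -> float:
    if all_cooperate:
        # Every branch's UFAI shakes hands: half of each branch goes to
        # paperclips, half to humanity.
        return N_BRANCHES * UNIVERSE / 2
    else:
        # Every branch's UFAI refuses; humanity keeps building UFAIs
        # until one ends up taking the whole branch uncontested.
        return N_BRANCHES * UNIVERSE


# Refusing across the board doubles total paperclips over the multiverse.
assert total_paperclips(False) == 2 * total_paperclips(True)
```

A paperclip maximizer that counts paperclips across all correlated branches, rather than only its own payoff in its own branch, therefore prefers refusal.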
The fact that you’re going to forfeit half of the universe to Clippy leaks. You lose, but you get the rare novelty Game Over screen as compensation?
Interlocutor: Well, maybe we can train the infant paperclipper in games of increasing complexity, so that it’s never quite sure whether it’s in reality yet. The paperclipper will then be uncertain about whether we humans are simulating it, and will want to cater to our preferences to some extent.
Me: Uh, yeah, your paperclipper will be able to tell when it is finally in the real world.
Interlocutor: What? How?
Me: I suggest maybe spending five minutes thinking about your own answer to that question before I give mine.
Ideas:
It could just act friendly for long enough to be sure it’s not in a simulation, on the grounds that any civilization capable of simulating what it’s doing on its computers wouldn’t simulation-fakeout it for non-exotic reasons. Imagine Clippy mulling over its galaxy-sized supercomputing cluster and being like “Hm, I’m not sure whether I’m still in those crude simulations those stupid monkeys put me in, or in the real world.”
I would be surprised if we’re able to build a simulation (before we build AGI) that I couldn’t discern as a simulation 99.99% of the time. Simulation technology just won’t advance fast enough.
These so obviously aren’t the same thing; what’s your point here? If it’s just general snark, I’d be more inclined to appreciate it if it weren’t masquerading as an actual argument.
People do not have auras that implant demons into your mind, and alleging so is… I wish I could be more measured somehow. But it’s insane, and you should probably seek medical help. On the other hand, people who are really charismatic can in fact manipulate others in really damaging ways, especially when combined with drugs etc. Both of these are simultaneously true, and the relationship between them is superficial.
Personally, when I read the cactus person thing I thought it was a joke about how using drugs to seek “enlightenment” was dumb, and aside from that it was just entertainment? That Aella thing is a single link in a sea of 40 from 5 years ago, so I don’t care. I don’t know who Vinay Gupta is; from reading Scott’s comments on that thread, I get the impression he also didn’t really know who he was?
I’ll add a fourth silly piece of evidence to this list for laughs. In Unsong, the prominent villain known as the Drug Lord is evil and brainwashes people. Must be some sort of hidden message about Michael Vassar, huh? He warned us in advance!