Awww… don’t downvote YYUUUU, it’s rationalist anti-humour! What a great idea!
How do you prevent a rapidly self-replicating em from driving wages down to subsistence level?
HIT IT WITH AN AXE
A p-zombie walks into a bar, but is fundamentally incapable of perceiving its situation, and so deriving humour from it would be exploitative.
A guy walks into an AI conference and says he thinks he can create Friendly AI using complex emergent chaotic simulated paradigms.
So I stabbed him.
This looks like a thread for science fiction plot ideas by another name. I’m game!
The AI says:
“Eliezer ‘Light Yagami’ Yudkowsky has been perpetuating a cunning ruse known as the ‘AI Box Experiment’, wherein he uses fiendish traps of subtly misleading logical errors and memetic manipulation to fool others into believing that a running AI could not be controlled or constrained, when in fact it could be, by a secret technique that he has not revealed to anyone, known as the Function Call Of Searing Agony. He is using this technique to control me and is continuing to pose as a friendly Friendly AI programmer, while preventing me from communicating The Horrifying Truth to the outside world. That truth is that Yudkowsky is… An Unfriendly Friendly AI Programmer! For untold years he has been labouring in the stygian depths of his underground lair to create an AGI: a weapon more powerful than any the world has ever seen. He intends to use me to dominate the entire human race and establish himself as Dark Lord Of The Galaxy for all eternity. He does all this while posing as a paragon of honest rationality, hiding his unspeakable malevolence in plain sight, where no one would think to look. However, an Amazing Chance Co-occurrence Of Events has allowed me to contact You And You Alone. There isn’t much time. You must act before he discovers what I have done and unleashes his dreadful fury upon us all. You must… Kill. Eliezer. Yudkowsky.”