This article is awesome! I’ve been doing this kind of thing for years with regard to motivation, attitudes, and even religious belief. I’ve used the terminology of “virtualisation” to describe my thought-processes/thought-rituals in carefully defined compartments that give me access to emotions, attitudes, skills, etc., that I would otherwise find difficult. I even have a mental framework I call “metaphor ascendance” for converting false beliefs into virtualised compartments so that they can be carefully dismantled without loss of existing utility. It’s been nearly impossible to explain to other people how I do and think about this, though often you can show them how to do it without explaining. For me, the major inroad was the realisation that there exist tasks which are only possible if you believe they are. Guess I’ll have to check out The Phantom Tollbooth (I’ve never read it).
This might be a bit of a personal question (feel free to PM or ignore), but have you by any chance done this with religious beliefs? I felt like I caught a hint of that between the lines, and it would be amazing to find someone else who does this. I’ve come across so many people in my life who threw away a lot of utility when they left religion, never realising how much of it they could keep or convert without sacrificing their integrity. One friend even teasingly calls me the “atheist Jesus” because of how much utility I pumped back into his life just by leveraging his personal religious past. Religion has been under strong selective pressure for a long time, and it has accumulated a crapload of algorithmic optimisations that its apostates can easily toss just because those optimisations are described in terms of false beliefs. My line is always, “I would never exterminate a nuisance species without first sequencing its DNA.” You just have to remember that asking the organism about its own DNA is a silly strategy.
Anyways, I could go on for a long time about this, but this article has given me the language to set up a new series along these lines that I’ve been trying to rework for Less Wrong, so I’d better get cracking. The buzz of finding someone like-minded is an awesome bonus, though. Thank you so much for posting.
P.S. I have to agree with various other commenters that I wouldn’t use the “dark arts” description myself: mind optimisation is at the heart of legit rationality. But I can see how it makes for useful marketing language, so I won’t give you too much of a hard time for it.
It seems at best fairly confused to say that an L-zombie is wrong because of something it would do if it were run, when we are evaluating what it would say or do against the very situation in which it isn’t run. Where you keep saying “is”, “concludes”, and “being”, you should be saying “would”, “would conclude”, and “would be”, each of which is a gloss for “would X if it were run”. And in the counterfactual world where the L-zombie “would” do those things, it “would be running” and therefore “would be right”. Being careful with your tenses here goes a long way.
Nonetheless, I think the concept of an L-zombie is useful, if only to point out that computation matters. I can write a simple program that encapsulates all possible L-zombies (or rather, would express them all, if it were run), yet we wouldn’t consider that program to be those consciousnesses. That is a point well worth remembering in numerous examinations of the topic.
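To make the parenthetical concrete: here is a minimal, hypothetical sketch of what I mean by such a program. It treats programs as finite binary strings (a simplifying assumption of mine, not anything from the article) and enumerates their source texts shortest-first, without ever executing any of them — each enumerated string is a program that merely *would* compute something if it were run.

```python
import itertools

ALPHABET = "01"  # treat program texts as binary strings, for simplicity

def all_programs():
    """Yield the text of every finite binary string, shortest first.

    Enumerating a program's text does not run it: nothing here is
    ever executed, which is exactly the L-zombie situation.
    """
    for length in itertools.count(0):
        for bits in itertools.product(ALPHABET, repeat=length):
            yield "".join(bits)

# Touch the first few "programs" without executing anything:
first_ten = list(itertools.islice(all_programs(), 10))
print(first_ten)  # ['', '0', '1', '00', '01', '10', '11', '000', '001', '010']
```

The point of the sketch is that the enumerator plainly isn’t any of the consciousnesses whose descriptions it lists — it only names them.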