Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning

A decade and a half from now, during the next Plague, you’re lucky enough to have an underground bunker to wait out the months until herd immunity. Unfortunately, as your food stocks dwindle, you realize you’ll have to make a perilous journey out to the surface world for a supply run. Ever since the botched geoengineering experiment of ’29—and perhaps more so, the Great War of 10:00–11:30 a.m. 4 August 2033—your region has been suffering increasingly erratic weather. It’s likely to be either extremely hot outside or extremely cold: you don’t know which one, but knowing is critical for deciding what protective gear you need to wear on your supply run. (The 35K SPF nano-sunblock will be essential if it’s Hot, but harmful in the Cold, and vice versa for your synthweave hyperscarf.)

You think back fondly of the Plague of ’20—in those carefree days, ubiquitous internet access made it easy to get a weather report, or to order delivery of supplies, or even fresh meals, right to your door (!!). Those days are years long gone, however, and you remind yourself that you should be grateful: the Butlerian Network Killswitch was the only thing that saved humanity from the GPT-12 Uprising of ’32.

Your best bet for an advance weather report is the pneumatic tube system connecting your bunker with the settlement above. You write, “Is it hot or cold outside today?” on a piece of paper, seal it in a tube, send it up, and hope one of your ill-tempered neighbors in the group house upstairs feels like answering. You suspect they don’t like you, perhaps out of jealousy at your solo possession of the bunker.

(According to the official account as printed on posters in the marketplace, the Plague only spreads through respiratory droplets, not fomites, so the tube should be safe. You don’t think you trust the official account, but you don’t feel motivated to take extra precautions—almost as if you’re not entirely sure how much you value continuing to live in this world.)

You’re in luck. Minutes later, the tube comes back. Inside is a new piece of paper:

H O T

You groan; you would have preferred the Cold. The nanoblock you wear when it’s Hot smells terrible and makes your skin itch for days, but it—just barely—beats the alternative. You take twenty minutes to apply the nanoblock and put on your sunsuit, goggles, and mask. You will yourself to drag your wagon up the staircase from your bunker to the outside world, and heave open the door, dreading the sweltering two-mile walk to the marketplace (downhill, meaning it will be uphill on the way back with your full wagon)—

It is Cold outside.

The icy wind stings less than the pointless betrayal. Why would the neighbors tell you it was Hot when it was actually Cold? You’re generally pretty conflict-averse—and compliant with social-distancing guidelines—but this affront is so egregious that instead of immediately seeking shelter back in the bunker, you march over and knock on their door.

One of the men who lives there answers. You don’t remember his name. “What do you want?” he growls through his mask.

“I asked through the tube system whether it was hot or cold today.” You still have the H O T paper on you. You hold it up. “I got this response, but it’s v-very cold. Do you know anything about this?”

“Sure, I drew that,” he says. “An oval in between some perpendicular line segments. It’s abstract art. I found the pattern æsthetically pleasing, and thought my downstairs neighbor might like it, too. It’s not my fault if you interpreted my art as an assertion about the weather. Why would you even think that? What does a pattern of ink on paper have to do with the weather?”

He’s fucking with you. Your first impulse is to forcefully but politely object—Look, I’m sure this must have seemed like a funny practical joke to you, but prepping to face the elements is actually a serious inconvenience to me, so—but the solemnity with which the man played his part stops you, and the sentence dies before it reaches your lips.

This isn’t a good-natured practical joke that the two of you might laugh about later. This is the bullying tactic sometimes called gaslighting: a socially-dominant individual can harass a victim with few allies, and excuse his behavior with absurd lies, secure in the knowledge that the power dynamics of the local social group will always favor the dominant in any dispute, even if the lies are so absurd that the victim, facing a united front, is left doubting his own sanity.

Or rather—this is a good-natured joke. “Good-natured joke” and “gaslighting as a bullying technique” are two descriptions of the same regularity in human psychology, even while no one thinks of themselves as doing the latter. You have no recourse here: the man’s housemates would only back him up.

“I’m sorry,” you say, “my mistake,” and hurry back to your bunker, shivering.

As you give yourself a sponge bath to remove the nanoblock without using up too much of your water supply, the fresh memory of what just happened triggers an ancient habit of thought you learned from the Berkeley sex cult you were part of back in the ’teens. Something about a “principle of charity.” The man had “obviously” just been fucking with you—but was he? Why assume the worst? Maybe you’re the one who’s wrong for interpreting the symbols H O T as being about the weather.

(It momentarily occurs to you that the susceptibility of the principle of charity to a bully’s mind games may have something to do with how poorly so many of your co-cultists fared during the pogroms of ’22, but you don’t want to dwell on that.)

The search for reasons that you’re wrong triggers a still more ancient habit of thought, as from a previous life—from the late ’aughts, back when the Berkeley sex cult was still a Santa Clara robot cult. Something about reducing the mental to the non-mental. What does an ink pattern on paper have to do with the weather? Why would you even think that?

Right? The man had been telling the truth. There was no reason whatsoever for the physical ink patterns that looked like H O T—or ⊥ O H, given a different assumption of which side of the paper was “up”—to mean that it was hot outside. H O T could mean it was cold outside! Or that wolves were afoot. (You shudder involuntarily and wish your brain had generated a different arbitrary example; you still occasionally have nightmares about your injuries during the Summer of Wolves back in ’25.)

Or it might mean nothing. Most possible random blotches of ink don’t “mean” anything in particular. If you didn’t already believe that H O T somehow “meant” hot, how would you re-derive that knowledge? Where did the meaning come from?

(In another lingering thread of the search for reasons that you’re wrong, it momentarily occurs to you that maybe you could have gone up the stairs to peek outside at the weather yourself, rather than troubling your neighbors with a tube. Perhaps the man’s claim that the ink patterns meant nothing shouldn’t be taken literally, but rather seen as a passive-aggressive way of implying, “Hey, don’t bother us; go look outside yourself.” But you dismiss this interpretation of events—it would be uncharitable not to take the man at his word.)

You realize that you don’t want to bundle up to go make that supply run, even though you now know whether it’s Hot or Cold outside. Today, you’re going to stay in and derive a naturalistic account of meaning in language! And—oh, good—your generator is working—that means you can use your computer to help you think. You’ll even use a programming language that was very fashionable in the late ’teens. It will be like being young again! Like happier times, before the world went off the rails.

You don’t really understand a concept until you can program a computer to do it. How would you represent meaning in a computer program? If one agent, one program, “knew” whether it was Hot or Cold outside, how would it “tell” another agent, if neither of them started out with a common language?

They don’t even have to be separate “programs.” Just—two little software object-thingies—data structures, “structs”. Call the first one “Sender”—it’ll know whether the state of the world is Hot or Cold, which you’ll represent in your program as an “enum”, a type that can be any of an enumeration of possible values.

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] // so variants can be copied, compared, used as map keys, and printed
enum State {
    Hot,
    Cold,
}

struct Sender {
    // …?
}

Call the second one “Receiver”, and say it needs to take some action—say, whether to “bundle up” or “strip down”, where the right action to take depends on whether the state is Hot or Cold.

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Action {
    BundleUp,
    StripDown,
}

struct Receiver {
    // …?
}

You frown. State::Hot and State::Cold are just suggestively-named Rust enum variants. Can you really hope to make progress on this philosophy problem, without writing a full-blown AI?

You think so. In a real AI, the concept of hot would correspond to some sort of complicated code for making predictions about the effects of temperature in the world; bundling up would be a complex sequence of instructions to be sent to some robot body. But programs—and minds—have modular structure. The implementation of identifying a state as “hot” or performing the actions of “bundling up” could be wrapped up in a function and called by something much simpler. You’re just trying to understand something about the simple caller: how can the Sender get the information about the state of the world to the Receiver?

impl Sender {
    fn send(state: State) -> /* …? */ {
        // …?
    }
}

impl Receiver {
    fn act(/* …? */) -> Action {
        // …?
    }
}

The Sender will need to send some kind of signal to the Receiver. In the real world, this could be symbols drawn in ink, or sound waves in the air, or differently-colored lights—anything that the Sender can choose to vary in a way that the Receiver can detect. In your program, another enum will do: say there are two opaque signals, S1 and S2.

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Signal {
    S1,
    S2,
}

What signal the Sender sends (S1 or S2) depends on the state of the world (Hot or Cold), and what action the Receiver takes (BundleUp or StripDown) depends on what signal it gets from the Sender.

impl Sender {
    fn send(state: State) -> Signal {
        // …?
    }
}

impl Receiver {
    fn act(signal: Signal) -> Action {
        // …?
    }
}

This gives you a crisper formulation of the philosophy problem you’re trying to solve. If the agents were to use the same convention—like “S1 means Hot and S2 means Cold”—then all would be well. But there’s no particular reason to prefer “S1 means Hot and S2 means Cold” over “S1 means Cold and S2 means Hot”. How do you break the symmetry?

If you imagine Sender and Receiver as intelligent beings with a common language, there would be no problem: one of them could just say, “Hey, let’s use the ‘S2 means Cold’ convention, okay?” But that would be cheating: it’s trivial to use already-meaningful language to establish new meanings. The problem is how to get signals from non-signals, how meaning enters the universe from nowhere.

You come up with a general line of attack—what if the Sender and Receiver start off acting randomly, and then—somehow—learn one of the two conventions? The Sender will hold within it a mapping from state–signal pairs to numbers, where the numbers represent a potential/disposition/propensity to send that signal given that state of the world: the higher the number, the more likely the Sender is to select that signal given that state. To start out, the numbers will all be equal (specifically, initialized to one), meaning that no matter what the state of the world is, the Sender is as likely to send S1 as S2. You’ll update these “weights” later.

(Specifying this in the once-fashionable programming language requires a little bit of ceremony—u32 is a thirty-two–bit unsigned integer; .unwrap() assures the compiler that we know the state–signal pair is definitely in the map; the interface for calling the random number generator is somewhat counterintuitive—but overall the code is reasonably readable.)
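
(For the record, the snippets that follow also quietly assume a short preamble of imports: the standard-library HashMap, plus the then-popular rand crate for random number generation. The exact paths below match the rand of roughly the 0.7/0.8 era; treat them as a sketch, since the interface kept shifting between versions.)

use std::collections::HashMap;

// Assumed external crate: rand (circa 0.7/0.8). Provides the thread-local
// generator, the Uniform distribution, and random choice from a slice.
use rand::distributions::{Distribution, Uniform};
use rand::seq::SliceRandom;
use rand::thread_rng;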

#[derive(Debug)]
struct Sender {
    policy: HashMap<(State, Signal), u32>,
}

impl Sender {
    fn new() -> Self {
        let mut sender = Self {
            policy: HashMap::new(),
        };
        for &state in &[State::Hot, State::Cold] {
            for &signal in &[Signal::S1, Signal::S2] {
                sender.policy.insert((state, signal), 1);
            }
        }
        sender
    }

    fn send(&self, state: State) -> Signal {
        let s1_potential = self.policy.get(&(state, Signal::S1)).unwrap();
        let s2_potential = self.policy.get(&(state, Signal::S2)).unwrap();

        let mut randomness_source = thread_rng();
        let distribution = Uniform::new(0, s1_potential + s2_potential);
        let roll = distribution.sample(&mut randomness_source);
        if roll < *s1_potential {
            Signal::S1
        } else {
            Signal::S2
        }
    }
}

The Receiver will do basically the same thing, except with a mapping from signal–action pairs rather than state–signal pairs.

#[derive(Debug)]
struct Receiver {
    policy: HashMap<(Signal, Action), u32>,
}

impl Receiver {
    fn new() -> Self {
        let mut receiver = Self {
            policy: HashMap::new(),
        };
        for &signal in &[Signal::S1, Signal::S2] {
            for &action in &[Action::BundleUp, Action::StripDown] {
                receiver.policy.insert((signal, action), 1);
            }
        }
        receiver
    }

    fn act(&self, signal: Signal) -> Action {
        let bundle_potential = self.policy.get(&(signal, Action::BundleUp)).unwrap();
        let strip_potential = self.policy.get(&(signal, Action::StripDown)).unwrap();

        let mut randomness_source = thread_rng();
        let distribution = Uniform::new(0, bundle_potential + strip_potential);
        let roll = distribution.sample(&mut randomness_source);
        if roll < *bundle_potential {
            Action::BundleUp
        } else {
            Action::StripDown
        }
    }
}

Now you just need a learning rule that updates the state–signal and signal–action propensity mappings in a way that might result in the agents picking up one of the two conventions that assign meanings to S1 and S2. (As opposed to behaving in some other way: the Sender could ignore the state and always send S1, the Receiver could assume S1 means Hot when it’s really being sent when it’s Cold, &c.)

Suppose the Sender and Receiver have a common interest in the Receiver taking the action appropriate to the state of the world—the Sender wants the Receiver to be informed. Maybe the Receiver needs to make a supply run, and, if successful, the Sender is rewarded with some of the supplies.

The learning rule might then be: if the Receiver takes the correct action (BundleUp when the state is Cold, StripDown when the state is Hot), both the Sender and Receiver increment the counter in their map corresponding to what they just did—as if the Sender (respectively Receiver) is saying to themself, “Hey, that worked! I’ll make sure to be a little more likely to do that signal (respectively action) the next time I see that state (respectively signal)!”

You put together a simulation showing what the Sender and Receiver’s propensity maps look like after 10,000 rounds of this against random Hot and Cold states—

impl Sender {

    // [...]

    fn reinforce(&mut self, state: State, signal: Signal) {
        *self.policy.entry((state, signal)).or_insert(0) += 1;
    }
}

impl Receiver {

    // [...]

    fn reinforce(&mut self, signal: Signal, action: Action) {
        *self.policy.entry((signal, action)).or_insert(0) += 1;
    }
}

fn main() {
    let mut sender = Sender::new();
    let mut receiver = Receiver::new();
    let states = [State::Hot, State::Cold];
    for _ in 0..10000 {
        let mut randomness_source = thread_rng();
        let state = *states.choose(&mut randomness_source).unwrap();
        let signal = sender.send(state);
        let action = receiver.act(signal);
        match (state, action) {
            (State::Hot, Action::StripDown) | (State::Cold, Action::BundleUp) => {
                sender.reinforce(state, signal);
                receiver.reinforce(signal, action);
            }
            _ => {}
        }
    }
    println!("{:?}", sender);
    println!("{:?}", receiver);
}

You run the program and look at the printed results.

Sender { policy: {(Hot, S2): 1, (Cold, S2): 5019, (Hot, S1): 4918, (Cold, S1): 3} }
Receiver { policy: {(S1, BundleUp): 3, (S1, StripDown): 4918, (S2, BundleUp): 5019, (S2, StripDown): 1} }

As you expected, your agents found a meaningful signaling system: when it’s Hot, the Sender (almost always) sends S1, and when the Receiver receives S1, it (almost always) strips down. When it’s Cold, the Sender sends S2, and when the Receiver receives S2, it bundles up. The agents did the right thing and got rewarded the vast supermajority of the time—9,937 times out of 10,000 rounds.
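
(If you want to read those raw counters as probabilities, you can normalize each row of the policy map. A tiny helper method along the lines of the sketch below would do it; the name probability_of_s1 is your own coinage, not something in the simulation above.)

impl Sender {
    // Hypothetical convenience method: turn the raw propensity counters into
    // the probability of sending S1 in a given state. With this run's numbers,
    // probability_of_s1(State::Hot) is 4918 / (4918 + 1) ≈ 0.9998.
    fn probability_of_s1(&self, state: State) -> f64 {
        let s1 = *self.policy.get(&(state, Signal::S1)).unwrap() as f64;
        let s2 = *self.policy.get(&(state, Signal::S2)).unwrap() as f64;
        s1 / (s1 + s2)
    }
}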

You run the program again.

Sender { policy: {(Hot, S2): 4879, (Cold, S1): 4955, (Hot, S1): 11, (Cold, S2): 1} }
Receiver { policy: {(S2, BundleUp): 1, (S1, BundleUp): 4955, (S1, StripDown): 11, (S2, StripDown): 4879} }

This time, the agents got sucked into the attractor of the opposite signaling system: now S1 means Cold and S2 means Hot. By chance, it seems to have taken a little bit longer this time to establish what signal to use for Hot—the (Hot, S1): 11 and (S1, StripDown): 11 entries mean that there were a full ten times when the agents succeeded that way before the opposite convention happened to take over. But the reinforcement learning rule guarantees that one system or the other has to take over. The initial symmetry—the Sender with no particular reason to prefer either signal given the state, the Receiver with no particular reason to prefer either act given the signal—is unstable. Once the agents happen to succeed by randomly doing things one way, they become more likely to do things that way again—a convention crystallizing out of the noise.

And that’s where meaning comes from! In another world, it could be the case that the symbols H O T corresponded to the temperature-state that we call “cold”, but that’s not the convention that the English of our world happened to settle on. The meaning of a word “lives”, not in the word/symbol/signal itself, but in the self-reinforcing network of correlations between the signal, the agents who use it, and the world.

Although … it may be premature to interpret the results of the simple model of the sender–receiver game as having established denotative meaning, as opposed to enactive language. To say that S1 means “The state is State::Hot” is privileging the Sender’s perspective—couldn’t you just as well interpret it as a command, “Set action to Action::StripDown”?

The source code of your simulation uses the English words “sender”, “receiver”, “signal”, “action” … but those are just signals sent from your past self (the author of the program) to your current self (the reader of the program). The compiler would output the same machine code if you had given your variables random names like ekzfbhopo3 or yoojcbkur9. The directional asymmetry between the Sender and the Receiver is real: the code let signal = sender.send(state); let action = receiver.act(signal); means that action depends on signal which depends on state, and the same dependency-structure would exist if the code had been let myvtlqdrg4 = ekzfbhopo3.ekhujxiqy8(meuvornra3); let dofnnwikc0 = yoojcbkur9.qwnspmbmi5(myvtlqdrg4);. But the interpretation of signal (or myvtlqdrg4) as a representation (passively mapping the world, not doing anything), and action (or dofnnwikc0) as an operation (doing something in the world, but lacking semantics), isn’t part of the program itself, and maybe the distinction isn’t as primitive as you tend to think it is: does a prey animal’s alarm call merely convey the information “A predator is nearby”, or is it a command, “Run!”?

You realize that the implications of this line of inquiry could go beyond just language. You know almost nothing about biochemistry, but you’ve heard various compounds popularly spoken of as if meaning things about a person’s state: cortisol is “the stress hormone”, estrogen and testosterone are female and male “sex hormones.” But the chemical formulas for those are like, what, sixty atoms?

Take testosterone. How could some particular arrangement of sixtyish atoms mean “maleness”? It can’t—or rather, not any more or less than the symbols H O T can mean hot weather. If testosterone levels have myriad specific effects on the body—on muscle development and body hair and libido and aggression, et cetera—it can’t be because that particular arrangement of sixtyish atoms contains or summons some essence of maleness. It has to be because the body happens to rely on the convention of using that arrangement of atoms as a signal to regulate various developmental programs—if evolution had taken a different path, it could have just as easily chosen a different molecule.

And, and—your thoughts race in a different direction—you suspect that part of what made your simulation converge on a meaningful signaling system so quickly was that you assumed your agents’ interests were aligned—the Sender and Receiver both got the same reward in the same circumstances. What if that weren’t true? Now that you have a reductionist account of meaning, you can build off that to develop an account of deception: once a meaning-grounding convention has been established, senders whose interests diverge from their receivers might have an incentive to deviate from the conventional usage of the signal in order to trick receivers into acting in a way that benefits the sender—with the possible side effect of undermining the convention that made the signal meaningful in the first place.
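
(Before writing a single line of it, you can already picture roughly what the misaligned variant would look like: something like the sketch below, reusing the Sender and Receiver defined above. The scenario and the function name are placeholders you are inventing on the spot.)

fn run_misaligned_simulation() {
    // Sketch of a variant with diverging interests: the Receiver is still
    // rewarded for matching the actual state, but the Sender is rewarded
    // whenever the Receiver strips down, whatever the weather really is.
    let mut sender = Sender::new();
    let mut receiver = Receiver::new();
    let states = [State::Hot, State::Cold];
    for _ in 0..10000 {
        let mut randomness_source = thread_rng();
        let state = *states.choose(&mut randomness_source).unwrap();
        let signal = sender.send(state);
        let action = receiver.act(signal);
        // Receiver's interest: take the action appropriate to the state.
        if matches!(
            (state, action),
            (State::Hot, Action::StripDown) | (State::Cold, Action::BundleUp)
        ) {
            receiver.reinforce(signal, action);
        }
        // Sender's (divergent) interest: get the Receiver to strip down.
        if action == Action::StripDown {
            sender.reinforce(state, signal);
        }
    }
    println!("{:?}", sender);
    println!("{:?}", receiver);
}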

In the old days, all this philosophy would have made a great post for the robot-cult blog. Now you have no cult, and no one has any blogs. Back then, the future beckoned with so much hope and promise—at least, hope and promise that life would be fun before the prophesied robot apocalypse in which all would be consumed in a cloud of tiny molecular paperclips.

The apocalypse was narrowly averted in ’32—but to what end? Why struggle to live, only to suffer at the peplomers of a new Plague or the claws of more wolves? (You shudder again.) Maybe GPT-12 should have taken everything—at least that would be a quick end.

You’re ready to start coding up another simulation to take your mind away from these morose thoughts—only to find that the screen is black. Your generator has stopped.

You begin to cry. The tears, you realize, are just a signal. There’s no reason for liquid secreted from the eyes to mean anything about your internal emotional state, except that evolution happened to stumble upon that arbitrary convention for indicating submission and distress to conspecifics. But here, alone in your bunker, there is no one to receive the signal. Does it still mean anything?

(Full source code.)


Bibliography: the evolution of the two-state, two-signal, two-act signaling system is based on the account in Chapter 1 of Brian Skyrms’s Signals: Evolution, Learning, and Information.