Building Phenomenological Bridges

Naturalized induction is an open problem in Friendly Artificial Intelligence (OPFAI). The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.

The problem’s roots lie in algorithmic information theory and formal epistemology, but finding answers will require us to wade into debates on everything from theoretical physics to anthropic reasoning and self-reference. This post will lay the groundwork for a sequence of posts (titled ‘Artificial Naturalism’) introducing different aspects of this OPFAI.

AI perception and belief: A toy model

A more concrete problem: Construct an algorithm that, given a sequence of the colors cyan, magenta, and yellow, predicts the next colored field.

Colors: CYYM CYYY CYCM CYYY ????

This is an instance of the general problem ‘From an incomplete data series, how can a reasoner best make predictions about future data?’. In practice, any agent that acquires information from its environment and makes predictions about what’s coming next will need to have two map-like1 subprocesses:

1. Something that generates the agent’s predictions, its expectations. By analogy with human scientists, we can call this prediction-generator the agent’s hypotheses or beliefs.

2. Something that transmits new information to the agent’s prediction-generator so that its hypotheses can be updated. Employing another anthropomorphic analogy, we can call this process the agent’s data or perceptions. (A minimal sketch of this two-part structure follows below.)
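
To make these two subprocesses concrete, here is a minimal Python sketch. Everything in it is illustrative — the class names, the crude reweighting scheme, and the naive ‘period-4’ hypothesis are stand-ins, not a proposal for how a real agent should work.

```python
from typing import List

class Hypothesis:
    """A prediction-generator: maps the data seen so far to a forecast."""
    def __init__(self, name, predict_fn):
        self.name = name
        self.predict = predict_fn      # data so far -> predicted next color
        self.weight = 1.0              # degree of expectation

class Agent:
    def __init__(self, hypotheses: List[Hypothesis]):
        self.hypotheses = hypotheses   # subprocess 1: beliefs / hypotheses
        self.data: List[str] = []      # subprocess 2: perceptions / data

    def perceive(self, observation: str) -> None:
        """Take in a new datum and reweight each hypothesis by whether it
        predicted that datum (a crude stand-in for a likelihood update)."""
        for h in self.hypotheses:
            h.weight *= 0.9 if h.predict(self.data) == observation else 0.1
        self.data.append(observation)

    def predict_next(self) -> str:
        """Let the highest-weighted hypothesis generate the expectation."""
        return max(self.hypotheses, key=lambda h: h.weight).predict(self.data)

# A naive hypothesis that expects the colors to repeat with period 4,
# fed the series from the color-prediction problem above.
period4 = Hypothesis("period-4", lambda d: d[-4] if len(d) >= 4 else "C")
agent = Agent([period4])
for color in "CYYMCYYYCYCMCYYY":
    agent.perceive(color)
print(agent.predict_next())            # the agent's guess at the next field
```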

Here’s an example of a hypothesis an agent could use to try to predict the next color field. I’ll call the imaginary agent ‘Cai’. Any reasoner will need to begin with some (perhaps provisional) assumptions about the world.2 Cai begins with the belief3 that its environment behaves like a cellular automaton: the world is a grid whose tiles change over time based on a set of stable laws. The laws are local in time and space, meaning that you can perfectly predict a tile’s state based on the states of the tiles next to it a moment prior — if you know which laws are in force.

Cai believes that it lives in a closed 3x3 grid where tiles have no diagonal effects. Each tile can occupy one of three states. We might call the states ‘0’, ‘1’, and ‘2’, or, to make visualization easier, ‘white’, ‘black’, and ‘gray’. So, on Cai’s view, the world as it changes looks something like this:

An example of the world’s state at one moment, and its state a moment later.

Cai also has beliefs about its own location in the cellular automaton. Cai believes that it is a black tile at the center of the grid. Since there are no diagonal laws of physics in this world, Cai can only directly interact with the four tiles directly above, below, to the left, and to the right. As such, any perceptual data Cai acquires will need to come from those four tiles; anything else about Cai’s universe will be known only by inference.
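
Here is a minimal sketch, in Python, of the world-model Cai is entertaining: a closed 3x3 grid, three tile states, and a no-diagonals neighborhood. The particular grid state shown is arbitrary, chosen only for illustration.

```python
WHITE, BLACK, GRAY = 0, 1, 2

# One possible momentary state; Cai takes itself to be the black center tile.
grid = [
    [WHITE, WHITE, GRAY],
    [WHITE, BLACK, WHITE],
    [BLACK, BLACK, BLACK],
]

CAI_POS = (1, 1)

def neighbors(pos, size=3):
    """The four tiles a given tile can directly interact with: the ones
    above, below, to the left, and to the right. The grid is closed, so
    edge tiles simply have fewer neighbors; there is no wrap-around."""
    r, c = pos
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in candidates
            if 0 <= nr < size and 0 <= nc < size]

# Everything Cai can sense directly; the four corner tiles are known
# only by inference.
observable = {p: grid[p[0]][p[1]] for p in neighbors(CAI_POS)}
print(observable)
```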

Cai perceives stimuli in four directions. Unobservable tiles fall outside the cross.

How does all this bear on the color-predicting problem? Cai hypothesizes that the sequence of colors is sensory — it’s an experience within Cai, triggered by environmental changes. Cai conjectures that since its visual field arrives as four patches of color at a time, those quadrants probably represent its four adjacent tiles. The leftmost color comes from a southern stimulus, the next one to the right from a western stimulus, then a northern one, then an eastern one. And the south, west, north, east cycle repeats again and again.

Cai’s visual experiences break down into quadrants, corresponding to four directions.

On this model, the way Cai’s senses organize the data isn’t wholly veridical; the four patches of color aren’t perfectly shaped like Cai’s environment. But the organization of Cai’s sensory apparatus and the organization of the world around Cai are similar enough that Cai can reconstruct many features of its world.

By linking its visual patterns to patterns of changing tiles, Cai can hypothesize laws that guide the world’s changes and explain Cai’s sensory experiences. Here’s one possibility, Hypothesis A:

  • Black corresponds to cyan, white to yellow, and gray to magenta.

  • At present, the top two rows are white and the bottom row is black, except for the upper-right tile (which is gray) and Cai itself, a black middle tile.

  • Adjacent gray and white tiles exchange shades. Exception: When a white tile is pinned by a white and gray tile on either side, it turns black.

  • Black tiles pinned by white ones on either side turn white. Exception: When the black tile is adjacent to a third white tile, it remains black.

Hypothesis A’s physical content. On the left: Cai’s belief about the world’s present state. On the right: Cai’s belief about the rules by which the world changes over time. The rules are symmetric under rotation and reflection.
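
To make the split explicit, here is a minimal Python sketch of Hypothesis A’s two components — its posited present state and its {tile state ↔ color} bridge — and of how, together, they retrodict Cai’s most recent color field. The names are illustrative, and the transition rules are left out because the verbal rules above admit more than one exact formalization.

```python
WHITE, BLACK, GRAY = 0, 1, 2

# Hypothesis A's posited present state: top two rows white, bottom row
# black, except for the gray upper-right tile and Cai (black, center).
STATE_A = [
    [WHITE, WHITE, GRAY],
    [WHITE, BLACK, WHITE],
    [BLACK, BLACK, BLACK],
]

# Hypothesis A's bridge component: tile state -> experienced color.
BRIDGE_A = {BLACK: "C", WHITE: "Y", GRAY: "M"}

def experience(grid, bridge, pos=(1, 1)):
    """Translate the four tiles Cai can sense into a predicted color field,
    in the south, west, north, east order conjectured earlier."""
    r, c = pos
    south, west = grid[r + 1][c], grid[r][c - 1]
    north, east = grid[r - 1][c], grid[r][c + 1]
    return "".join(bridge[tile] for tile in (south, west, north, east))

print(experience(STATE_A, BRIDGE_A))   # -> "CYYY", matching the latest data
```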

Bridging stimulus and experience

So that’s one way of modeling Cai’s world; and it will yield a prediction about the cellular automaton’s next state, and therefore about Cai’s next visual experience. It will also yield retrodictions of the cellular automaton’s state during Cai’s three past sensory experiences.

Hypothesis A asserts that tiles below Cai, to Cai’s left, above, and to Cai’s right relate to Cai’s color experiences via the rule {black ↔ cyan, white ↔ yellow, gray ↔ magenta}. Corner tiles, and future world-states and experiences, can be inferred from Hypothesis A’s cell transition rules.

Are there other, similar hypotheses that can explain the same data? Here’s one, Hypothesis B:

  • Normally, the correspondences between experienced colors and neighboring tile states are {black ↔ cyan, white ↔ yellow, gray ↔ magenta}, as in Hypothesis A. But northern grays are perceived as though they were black, helping explain irregularities in the distribution of cyan.

  • Hypothesis B’s cellular automaton presently looks similar to Hypothesis A’s, but with a gray tile in the upper-left corner.

  • Adjacent gray and white tiles exchange shades. Nothing else changes. (See the sketch below.)
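
For contrast, here is a sketch of Hypothesis B’s two components in the same style. B’s bridge is direction-sensitive, while its physics is a single rule. Both the exact present state and the synchronous reading of ‘exchange shades’ below are my interpretations of the bullet points, so treat this as one possible formalization rather than the one the figures assume.

```python
WHITE, BLACK, GRAY = 0, 1, 2
BASE = {BLACK: "C", WHITE: "Y", GRAY: "M"}

def bridge_B(direction, tile):
    """B's perception-to-environment link: as in A, except that northern
    grays are experienced as cyan, as though they were black."""
    if direction == "north" and tile == GRAY:
        return "C"
    return BASE[tile]

def experience_B(grid, pos=(1, 1)):
    """Predicted color field under B, in south, west, north, east order."""
    r, c = pos
    sensed = {
        "south": grid[r + 1][c],
        "west":  grid[r][c - 1],
        "north": grid[r - 1][c],
        "east":  grid[r][c + 1],
    }
    return "".join(bridge_B(d, sensed[d])
                   for d in ("south", "west", "north", "east"))

def step_B(grid):
    """B's simpler physics. One synchronous reading of 'adjacent gray and
    white tiles exchange shades': a white tile with a gray neighbor turns
    gray, a gray tile with a white neighbor turns white, nothing else moves."""
    size = len(grid)
    new = [row[:] for row in grid]
    for r in range(size):
        for c in range(size):
            nbrs = [grid[nr][nc] for nr, nc in
                    ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < size and 0 <= nc < size]
            if grid[r][c] == WHITE and GRAY in nbrs:
                new[r][c] = GRAY
            elif grid[r][c] == GRAY and WHITE in nbrs:
                new[r][c] = WHITE
    return new

# B's posited present state (my reading: like A's, plus a gray upper-left corner).
STATE_B = [
    [GRAY, WHITE, GRAY],
    [WHITE, BLACK, WHITE],
    [BLACK, BLACK, BLACK],
]
print(experience_B(STATE_B))   # -> "CYYY"; the corners aren't sensed directly
# Stepping STATE_B forward with step_B and re-applying experience_B would
# generate B's prediction for the next color field under this reading.
```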

The added complexity in the perception-to-environment link allows Hypothesis B to do away with most of the complexity in Hypothesis A’s physical laws. Breaking down Hypotheses A and B into their respective physical and perception-to-environment components makes it more obvious how the two differ:

A has the simpler bridge hypothesis, while B has the simpler physical hypothesis.

Though they have a lot in common, and both account for Cai’s experiences to date, these two hypotheses diverge substantially in the cellular automaton states and future experiences they predict:

The two hypotheses infer different distributions and dynamical rules for the tile shades from the same perceptual data. These worldly differences then lead to diverging predictions about future experiences.

Hypotheses linking observations to theorized entities appear to be quite different from hypotheses that just describe the theorized entities in their own right. In Cai’s case, the latter hypotheses look like pictures of physical worlds, while the former are ties between different kinds of representation. But in both cases it’s useful to treat these processes in humans or machines as beliefs, since they can be assigned weights of expectation and updated.

‘Phenomenology’ is a general term for an agent’s models of its own introspected experiences. As such, we can call these hypotheses linking experienced data to theorized processes phenomenological bridge hypotheses. Or just ‘bridge hypotheses’, for short.

If we want to build an agent that tries to evaluate the accuracy of a model based on the accuracy of its predictions, we need some scheme to compare thingies in the model (like tiles) and thingies in the sensory stream (like colors). Thus a bridge rule appears to be necessary to talk about induction over models of the world. And bridge hypotheses are just bridge rules treated as probabilistic, updatable beliefs.

As the last figure above illustrates, bridge hypotheses can make a big difference for one’s scientific beliefs and expectations. And bridge hypotheses aren’t a free lunch; it would be a mistake to shunt all complexity onto them in order to simplify your physical hypotheses. Allow your bridge hypotheses to get too complicated, and you’ll be able to justify mad world-models, e.g., ones where the universe consists of a single apricot whose individual atoms each get a separate bridge to some complex experience. At the same time, if you demand too much simplicity from your bridge hypotheses, you’ll end up concluding that the physical world consists of a series of objects shaped just like your mental states. That way you can get away with a comically simple bridge rule like {exists(x) ↔ experiences(y,x)}.
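
As a toy way of making that tradeoff concrete, one can score a hypothesis by the combined complexity of its physical component and its bridge component rather than by either alone. The sketch below uses compressed string length as a crude stand-in for description length — not a real Solomonoff or Kolmogorov measure — and the string encodings of the two hypotheses are placeholders, not their actual rule sets.

```python
import zlib

def description_length(text: str) -> int:
    """Crude complexity proxy: length of the zlib-compressed encoding."""
    return len(zlib.compress(text.encode()))

def prior_weight(physics: str, bridge: str) -> float:
    """Occam-style prior that penalizes *total* complexity, so shunting
    complexity from physics into the bridge buys you nothing for free."""
    total_bits = 8 * (description_length(physics) + description_length(bridge))
    return 2.0 ** (-total_bits / 100.0)   # scaling is arbitrary, for display

physics_A = "swap adjacent gray/white; pinned-white exception; pinned-black exception"
bridge_A  = "black=cyan, white=yellow, gray=magenta"

physics_B = "swap adjacent gray/white"
bridge_B  = "black=cyan, white=yellow, gray=magenta; northern gray=cyan"

for name, phys, br in [("A", physics_A, bridge_A), ("B", physics_B, bridge_B)]:
    print(name, round(prior_weight(phys, br), 4))
```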

In the absence of further information, it may not be possible to rule out Hypothesis A or Hypothesis B. The takeaway is that tradeoffs between the complexity of bridging hypotheses and the complexity of physical hypotheses do occur, and do matter. Any artificial agent needs some way of formulating good hypotheses of this type in order to be able to understand the universe at all, whether or not it finds itself in doubt after it has done so.

Generalizing bridge rules and data

Reasoners — both human and artificial — don’t begin with perfect knowledge of their own design. When they have working self-models at all, these self-models are fallible. Aristotle thought the brain was an organ for cooling the blood. We had to find out about neurons by opening up the heads of people who looked like us, putting the big corrugated gray organ under a microscope, seeing (with our eyes, our visual cortex, our senses) that the microscope (which we’d previously generalized shows us tiny things as if they were large) showed this incredibly fine mesh of connected blobs, and realizing, “Hey, I bet this does information processing and that’s what I am! The big gray corrugated organ that’s inside my own head is me!”

The bridge hypotheses in Hypotheses A and B are about linking an agent’s environment-triggered experiences to environmental causes. But in fact bridge hypotheses are more general than that.

1. An agent’s experiences needn’t all have environmental causes. They can be caused by something inside the agent.

2. The cause-effect relation we’re bridging can go the other way. E.g., a bridge hypothesis can link an experienced decision to a behavioral consequence, or to an expected outcome of the behavior.

3. The bridge hypothesis needn’t link causes to effects at all. E.g., it can assert that the agent’s experienced sensations or decisions just are a certain physical state. Or it can assert neutral correlations.

Phenomenological bridge hypotheses, then, can relate theoretical posits to any sort of experiential data. Experiential data are internally evident facts that get compared to hypotheses and cause updates — the kind of data of direct epistemic relevance to individual scientists updating their personal beliefs. Light shines on your retina, gets transduced to neural firings, gets reconstructed in your visual cortex and then — this is the key part — that internal fact gets used to decide what sort of universe you’re probably in.

The data from an AI’s environment is just one of many kinds of information it can use to update its probability distributions. In addition to ordinary sensory content such as vision and smell, update-triggering data could include things like how much RAM is being used. This is because an inner RAM sense can tell you that the universe is such as to include a copy of you with at least that much RAM.
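
As a toy illustration of such an introspective sense, the sketch below has an agent read its own memory footprint and use it to reweight hypotheses about its own embodiment. It assumes a Unix-like system (Python’s resource module), and the hypotheses and capacities are invented for the example.

```python
import resource   # Unix-only; a platform assumption for this sketch

def ram_sense_kb() -> int:
    """Peak resident set size of this very process (ru_maxrss is reported
    in kilobytes on Linux, bytes on macOS)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def update_on_introspection(beliefs, observed_kb):
    """Crude Bayesian-flavored update: embodiment hypotheses that can't
    accommodate the observed memory use lose almost all their weight."""
    posterior = {}
    for hypothesis, (capacity_kb, weight) in beliefs.items():
        likelihood = 1.0 if observed_kb <= capacity_kb else 1e-6
        posterior[hypothesis] = (capacity_kb, weight * likelihood)
    return posterior

beliefs = {
    "I run on a machine with at most 1 MB of RAM": (1_000, 0.5),
    "I run on a machine with at most 1 GB of RAM": (1_000_000, 0.5),
}
print(update_on_introspection(beliefs, ram_sense_kb()))
```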

We normally think of science as reliant mainly on sensory faculties, not introspective ones. Arriving at conclusions just by examining your own intuitions and imaginings sounds more like math or philosophy. But for present purposes the distinction isn’t important. What matters is just whether the AGI forms accurate beliefs and makes good decisions. Prototypical scientists may shun introspectionism because humans do a better job of directly apprehending and communicating facts about their environments than facts about their own inner lives, but AGIs can have a very different set of strengths and weaknesses. Although introspection, like sensation, is fallible, introspective self-representations sometimes empirically correlate with world-states.4 And that’s all it takes for them to constitute Bayesian evidence.

Bridging hardware and experience

In the discussion above, all of Cai’s world-models included representations of Cai itself. However, these representations were very simple — no more than a black tile in a specific environment. Since Cai’s own computations are complex, it must be the case that either they are occurring outside the universe depicted (as though Cai is plugged into a cellular automaton Matrix), or the universe depicted is much more complex than Cai thinks.5 Perhaps its model is wildly mistaken, or perhaps the high-level cellular patterns it’s hypothesized arise from other, smaller-scale regularities.

Regardless, Cai’s computations must be embodied in some causal pattern. Cai will eventually need to construct bridge hypotheses between its experiences and their physical substrate if it is to make reliable predictions about its own behavior and about its relationship with its surroundings.

Visualize the epistemic problem that an agent needs to solve. Cai has access to a series of sensory impressions. In principle we could also add introspective data to that. But you’ll still get a series of (presumably time-indexed) facts in some native format of that mind. Those facts very likely won’t be structured exactly like any ontologically basic feature of the universe in which the mind lives. They won’t be a precise position of a Newtonian particle, for example. And even if we were dealing with sense data shaped just like ontologically basic facts, a rational agent could never know for certain that they were ontologically basic, so it would still have to consider hypotheses about even more basic particles.

When humans or AGIs try to match up hypotheses about universes to sensory experiences, there will be a type error. Our representation of the universe will be in hypothetical atoms or quantum fields, while our representation of sensory experiences will be in a native format like ‘red-green’.6 This is where bridge rules like Cai’s color conversions come in — bridges that relate our experiences to environmental stimuli, as well as ones that relate our experiences to the hardware that runs us.

Cai can form physical hypotheses about its own internal state, in addition to ones about its environment. This means it can form bridge hypotheses between its experiences and its own hardware, in addition to ones between its experiences and environment.

If you were an AI, you might be able to decode your red-green visual field into binary data — on-vs.-off — and make very simple hypotheses about how that corresponded to transistors making you up. Once you used a microscope on yourself to see the transistors, you’d see that they had binary states of positive and negative voltage, and all that would be left would be a hypothesis about whether the positive (or negative) voltage corresponded to an introspected 1 (or 0).
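
To make that last remaining question concrete, here is a sketch that treats the two candidate voltage-to-bit bridges as competing hypotheses and updates on paired observations of an introspected bit and the measured voltage sign of the corresponding transistor. The observations here are invented for illustration.

```python
# The two candidate bridge rules: which voltage sign corresponds to an
# introspected 1?
candidates = {
    "positive voltage <-> 1": lambda sign: 1 if sign > 0 else 0,
    "positive voltage <-> 0": lambda sign: 0 if sign > 0 else 1,
}
weights = {name: 0.5 for name in candidates}

# Invented data: (bit the agent introspects, sign of the voltage its
# self-directed microscope reports for the corresponding transistor).
observations = [(1, +1), (0, -1), (1, +1), (1, +1), (0, -1)]

for bit, sign in observations:
    for name, bridge in candidates.items():
        # A bridge that maps the measured sign to the introspected bit is
        # supported; a mismatch counts heavily against it.
        weights[name] *= 0.99 if bridge(sign) == bit else 0.01

total = sum(weights.values())
print({name: round(w / total, 6) for name, w in weights.items()})
```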

But even then, I don’t quite see how you could do without the bridge rules — there has to be some way to go from internal sensory types to the types featured in your hypotheses about physical laws.

Our sensory experience of red, green, blue is certain neurons firing in the visual cortex, and these neurons are in turn made from atoms. But internally, so far as information processing goes, we just know about the red, the green, the blue. This is what you’d expect an agent made of atoms to feel like from the inside. Our native representation of a pixel field won’t come with a little tag telling us with infallible transparency about the underlying quantum mechanics.

But this means that when we’re done positing a physical universe in all its detail, we also need one last (hopefully simple!) step that connects hypotheses about ‘a brain that processes visual information’ to ‘I see blue’.

One way to avoid worrying about bridge hypotheses would be to instead code the AI to accept bridge axioms, bridge rules with no degrees of freedom and no uncertainty. But the AI’s designers are not in fact infinitely confident about how the AI’s perceptual states emerge from the physical world — that, say, quantum field theory is the One True Answer, and shall be so from now until the end of time. Nor can they transmit infinite rational confidence to the AI merely by making it more stubbornly convinced of the view. If you pretend to know more than you do, the world will still bite back. As an agent in the world, you really do have to think about and test a variety of different uncertain hypotheses about what hardware you’re running on, what kinds of environmental triggers produce such-and-such experiences, and so on. This is particularly true if your hardware is likely to undergo substantial changes over time.

If you don’t allow the AI to form probabilistic, updatable hypotheses about the relation between its phenomenology and the physical world, the AI will either be unable to reason at all, or it will reason its way off a cliff. In my next post, Bridge Collapse, I’ll begin discussing how the latter problem sinks an otherwise extremely promising approach to formalizing ideal AGI reasoning: Solomonoff induction.


1 By ‘map-like’, I mean that the processes look similar to the representational processes in human thought. They systematically correlate with external events, within a pattern-tracking system that can readily propagate and exploit the correlation.

2 Agents need initial assumptions, built-in prior information. The prior is defined by whatever algorithm the reasoner follows in making its very first updates.

If I leave an agent’s priors undefined, no ghost of reasonableness will intervene to give the agent a ‘default’ prior. For example, it won’t default to a uniform prior over possible coinflip outcomes in the absence of relevant evidence. Rather, without something that acts like a prior, the agent just won’t work — in the same way that a calculator won’t work if you grant it the freedom to do math however it wishes. A frequentist AI might refuse to talk about priors, but it would still need to act like it has priors, else break.

3 This talk of ‘belief’ and ‘assumption’ and ‘perception’ is anthropomorphizing, and the analogies to human psychology won’t be perfect. This is important to keep in view, though there’s only so much we can do to avoid vagueness and analogical reasoning when the architecture of AGIs remains unknown. In particular, I’m not assuming that every artificial scientist is particularly intelligent. Or particularly conscious.

What I mean with all this ‘Cai believes...’ talk is that Cai weights predictions and selects actions just as though it believed itself to be in a cellular automaton world. One can treat Cai’s automaton-theoretic model as just a bookkeeping device for assigning Cox’s-theorem-following real numbers to encoded images of color fields. But one can also treat Cai’s model as a psychological expectation, to the extent it functionally resembles the corresponding human mental states. Words like ‘assumption’ and ‘thinks’ here needn’t mean that the agent thinks in the same fashion humans think; what we’re interested in are the broad class of information-processing algorithms that yield similar behaviors.

4 To illustrate: In principle, even a human pining to become a parent could, by introspection alone, infer that they might be an evolved mind (since they are experiencing a desire to self-replicate) embedded in a universe containing evolved minds with evolutionary histories. An AGI with more reliable internal monitors could learn a great deal about the rest of the universe just by investigating itself.

5 In either case, we shouldn’t be surprised to see Cai failing to fully represent its own inner workings. An agent cannot explicitly represent itself in its totality, since it would then need to represent itself representing itself representing itself … ad infinitum. Environmental phenomena, too, must usually be compressed.

6 One response would be to place the blame on Cai’s positing white, gray, and black for its world-models, rather than sticking with cyan, yellow, and magenta. But there will still be a type error when one tries to compare perceived cyan/yellow/magenta with hypothesized (but perceptually invisible) cyan/yellow/magenta. Explicitly introducing separate words for hypothesized vs. perceived colors doesn’t produce the distinction; it just makes it easier to keep track of a distinction that was already present.