Modelling Model Comparisons

Define a model as a set of objects and their relationships. For example, when discussing a model of music, “notes” would be the objects and a possible relationship would be “harmony”. [Technically, the objects would be (frequency, volume, time) triples, and from there you could define the relationships “harmony”, “tempo”, “crescendo”, etc.]
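To make this concrete, here’s a minimal Python sketch of a model as objects plus named relationships. The class and the example values are my own invention, just one way to encode the idea:

```python
from dataclasses import dataclass, field

# A minimal sketch: a model is a set of objects plus named relationships
# over those objects. All names and values here are my own invention.
@dataclass
class Model:
    name: str
    objects: set = field(default_factory=set)
    relationships: dict = field(default_factory=dict)  # name -> tuples of objects

# The music example: notes as (frequency in Hz, volume, time) triples,
# with "harmony" as a relationship over simultaneous notes.
c4, e4, g4 = (261.63, 0.8, 0.0), (329.63, 0.8, 0.0), (392.00, 0.8, 0.0)
music = Model(name="music", objects={c4, e4, g4})
music.relationships["harmony"] = [(c4, e4, g4)]  # a C major triad
```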

Using this, there are two different ways to compare models: similar relationships, and (Object, Relationship) = (Object).

1. Similar Relationships [Type 1 comparison]

Using the relationship “repetition”: there is repetition in musical phrases (Für Elise), in poetic phrases (Annabel Lee), in song choruses, in movie themes, etc. Even though these four examples contain different objects, we’re able to find the similar relationship and compare them.
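Building on the sketch above, a Type 1 comparison can be read as intersecting the relationship names of two models. Matching relationships by name is a big simplification (really you’d want to match their structure), and the toy models here are my own:

```python
def shared_relationships(a: Model, b: Model) -> set:
    """Type 1 comparison: relationships both models contain,
    even though their objects are completely different."""
    return set(a.relationships) & set(b.relationships)

music = Model(name="music", relationships={"repetition": [], "harmony": []})
poetry = Model(name="poetry", relationships={"repetition": [], "rhyme": []})

print(shared_relationships(music, poetry))  # {'repetition'}
```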

I can think of two uses for this type of comparison. The first is in metaphors. The second is in generalization. Elaborating on the latter: I find a piece of music annoying if it’s too repetitive, and the same is true of poems, songs, and movies; however, I very much enjoy a song if it strikes a good balance between repetition and novelty. Learning to strike that balance when writing music has generalized to doing the same in writing lyrics and dance choreography.

The same generalization can happen in non-art fields as well, such as realizing that two different functions are both convex or that two different problems can be solved recursively, and so on.

2. (Object, Relationship) = (Object) [Type 2 comparison]

A good example to start with: a set of lower-level objects and a relationship between them can equal a single higher-level object, e.g. (quarks/electrons, bound together) = atom.

But this is more than just connecting quarks/electrons to atoms, or atoms to objects in language. You can connect models at various levels together. Within the model of language, we can connect the low-level object “quark” to the high-level object “diamond”. This has helped my understanding of how Multi-level Models relate to transparency: if an AI can find the correct Type 2 comparison between what it’s doing and our human language, then transparency is solved; however, human language is very complex.
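One way to sketch a Type 2 comparison is as a mapping from a (lower-level objects, relationship) pair to a single higher-level object. The tiny lookup table below is my own stand-in for whatever the real mapping would be:

```python
# A sketch of a Type 2 comparison: a bundle of lower-level objects plus a
# relationship maps onto one higher-level object. The entries here are my
# own toy illustrations of the quarks -> atom and atoms -> diamond levels.
LEVELS = {
    (frozenset({"up quark", "down quark", "electron"}), "bound state"): "atom",
    (frozenset({"carbon atom"}), "tetrahedral lattice"): "diamond",
}

def type2(objects: frozenset, relationship: str) -> str:
    return LEVELS[(objects, relationship)]

print(type2(frozenset({"carbon atom"}), "tetrahedral lattice"))  # diamond
```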

Human Language and Communicating Models

Let’s say you’re talking with Alice. Two failures in communicating models are when (1) you’re not discussing the same objects (Alice’s definition of “sound” differs from your definition) or (2) you don’t know how Alice believes those objects relate, or vice versa.

One might naively say that the model of language is simply “words” and how they relate; however, it’s more like “words, your models that relate to those words, and your model of Alice’s models that relate to those words”. The two failures above arise from thinking in the naive model of language, and can be alleviated by the more complex model. Two helpful thoughts when talking to Alice are:

1. “Are we discussing the same objects?”

2. “How does Alice think these objects relate?”

and of course asking Alice the relevant questions to figure that out.
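Here’s a toy sketch of the richer model of language and the two checks above; the “sound” entry (acoustic vibrations vs. the experience of hearing) is my own illustration of failure (1):

```python
# A sketch of the richer model of language: each word carries my model of
# the object and my model of Alice's model of it. The "sound" entry is my
# own example of two people attaching different objects to the same word.
rich_language = {
    "sound": {
        "my_model": "acoustic vibrations in the air",
        "my_model_of_alices_model": "the subjective experience of hearing",
    },
}

def check_word(word: str) -> None:
    entry = rich_language[word]
    # Question 1: are we discussing the same objects?
    if entry["my_model"] != entry["my_model_of_alices_model"]:
        print(f"We may mean different objects by {word!r}; ask Alice to define it.")
    # Question 2: how does Alice think these objects relate?
    print(f"Ask Alice how she thinks {word!r} relates to the other objects.")

check_word("sound")
```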

Future Work:

1. Type 1 comparisons:

  • What are the very useful relationships used to solve past problems, and could they be used to solve current problems (e.g., using the same math technique to solve an entire class of math problems)?

  • Is there a way to find those relationships systematically?

2. Type 2 comparisons with human language/transparency

3. Creating lots of examples of Type 1 and Type 2 comparisons to un-confuse myself (I predict that what I wrote in this post is confused, but at least points in the right direction)

4. A better word for Type 1 and Type 2 comparisons