Could simulacra as free-floating Schelling points actually be good, if they represent mathematical truths about coordination between agents within a reference class, intended to create better outcomes in the world?
But if a simulacrum comes to correspond to truth because people conform their future behavior to its meaning in the spirit of cooperation, does it still count as a simulacrum?
It feels like you’re trying to implicitly import all of good intent, in its full potential, stuff it into the word “truth”, and claim it’s incompatible with the use of Schelling points, via these distortions:

- the idea that the symbol had an original meaning, and that any change involving voluntary conformance to a new meaning would inherently be malicious
- using an example (job title) which is already a simulacrum, just one initially used cooperatively
- assuming that people lagging at stages 1–3 would be exploited/arbitraged by people at stage 4
- cooperative simulacra (e.g. maps) being less contentious, and so not salient examples of the word
In other words, I think you’re assuming:
good intent = truth = in-principle CDT-verifiable truth (fair)
> From the inside, this is an experience that in-the-moment is enjoyable/satisfying/juicy/fun/rewarding/attractive to you/thrilling/etc etc.
- People’s preferences change across contexts, since they are implicitly always trying to comply with what they think is permissible/safe before trying to get what they want, up to some level of stake outweighing this, along many different axes of things one can have a stake in.
- To see people’s intrinsic preferences, we have to account for the fact that people often aren’t getting what they want, and are tricked into wanting things that are suboptimal with respect to some of their long-suppressed wants, because of social itself.
- This has to be really rigorous, because it’s competing against anti-inductive memes.
- This is really important to model, because if we know anything about people’s terminal preferences modulo social, then we know we are confused about social anytime we can’t explain why they aren’t pursuing opportunities they should know about, or anytime they are internally conflicted even though they know all the consequences of their actions relative to their real, ideal-to-them terminal preferences.
> Social sort of exists here, but only in the form that if an agent can give something you want, such as snuggles, then you want that interaction.
Is it social if a human wants another human to be smiling because the perception of smiles is itself good?