An analogy that might be banal, but might be interesting:
One reason (the main reason?) that computers use discrete encodings is to make error correction easier. A continuous signal will gradually drift over time. In contrast, if the signal is frequently rounded to the nearest discrete value, then it can remain error-free for a long time. (I think this is also the reason why the two most complicated biological information-processing systems use discrete encodings: DNA base pairs and neural spikes. EDIT: Neural spikes may seem continuous in the time dimension, but the concept of “brain waves” makes me suspect that the time intervals between them are better understood as discrete.)
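A toy illustration of the rounding point (a pure sketch, nothing here models real hardware): a stored value nudged by small noise at every step drifts arbitrarily far if left continuous, but stays exactly recoverable if it is snapped back to the nearest discrete level each step, as long as the per-step noise is smaller than half the gap between levels.

```python
import random

random.seed(0)

LEVELS = [0.0, 1.0]   # the discrete alphabet (e.g. a stored bit)
NOISE = 0.02          # per-step perturbation, well under half the level gap
STEPS = 1000

def snap(x):
    """The 'error correction' step: round to the nearest discrete level."""
    return min(LEVELS, key=lambda level: abs(x - level))

continuous = 1.0  # signal left to drift freely
corrected = 1.0   # signal re-quantized after every step

for _ in range(STEPS):
    nudge = random.uniform(-NOISE, NOISE)
    continuous += nudge
    corrected = snap(corrected + nudge)

print(continuous)  # has wandered away from 1.0
print(corrected)   # still exactly 1.0: each nudge was erased before it could accumulate
```

The snapping only works because the noise per step is small relative to the spacing of the levels; once drift between corrections can exceed half the gap, rounding starts locking in errors instead of erasing them.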
Separately, agents tend to define discrete boundaries around themselves—e.g. countries try to have sharp borders rather than fuzzy borders. One reason (the main reason?) is to make themselves easier to defend: with sharp borders there’s a clear Schelling point for when to attack invaders. Without that, invaders might “drift in” over time.
The logistics of defending oneself vary by type of agent. For physical agents, perhaps fuzzy boundaries are just not possible to implement (e.g. humans need to literally hold the water inside us). However, many human groups (e.g. social classes) have initiation rituals which clearly demarcate who’s in and who’s out, even though in principle it’d be fairly easy for them to have a gradual/continuous metric of membership (like how many “points” members have gotten). We might be able to explain this as a way of giving them social defensibility.
A further potential extension here is to point out that modern hiveminds (Twitter / X / Bsky) changed group membership in many political groups from something explicit (“We let this person write in our [Conservative / Liberal / Leftist / etc] magazine / published them in our newspaper”) to something very fuzzy and indeterminate (“Well, they call themselves a [Conservative / Liberal / Leftist / etc], and they’re huge on Twitter, and they say some of the kinds of things [Conservative / Liberal / Leftist / etc] people say, so I guess they’re a [Conservative / Liberal / Leftist / etc].”)
I think this is a really big part of why the free market of ideas has stopped working in the US over the last decade or two.
Yet more speculative is a preferred solution of mine: intermediate groups within hiveminds, such that no person can post in the hivemind without being part of such a group, and such that both person and group are clearly associated with each other. This permits:
- Membership to be explicit
- Bad actors (according to group norms) to be actually kicked out proactively, rather than degrading norms
- Multi-level selection between group norms, where you can just block large groups that do not adopt truthseeking norms
- More conscious shaping of the egregore.
But all of this solution-sketching is more speculative than the problem itself.
This probably generalizes beyond ~agent boundaries to general partitioning of “raw information blobs” into concepts/abstractions. You choose a way of making sense of the world that minimizes the expected future amount of [runtime correction by re-computing the abstraction from “raw sense data”] (minimally leaky abstractions).
(Well, this actually feels like a restatement of Markov/causal blankets, but the connection to your point stands, and it seems like an interesting restatement to me.)
yes! this is an important point. I don’t quite know how to cash it out yet but I suspect I will eventually converge towards viewing concepts as “agents” which are trying to explain as much sensory data as possible while also cooperating/competing with each other.
I suspect I will eventually converge towards viewing concepts as “agents”
What is an “agent” in your ontology?
In one convo (at ILIAD 1), you named situational awareness as the criterion for distinguishing a part of an agent as a subagent. Is this also an important factor for calling something an “agent” simpliciter?
Say more on this? I don’t see the argument. It’s also not clear why this would only affect the US, for instance.
@1a3orn Could you also elaborate on why you think the ideas marketplace has become more dysfunctional over the last 10–20 years?
Certainly simpler to police if you have a clear rule for in vs out
have been thinking about this since i saw it
the neural spike discreteness is important, but i immediately thought about how much information also gets encoded in neural spike frequency… and how it’s perhaps a bit strange that bio spent all this effort getting a discrete encoding only to immediately turn around and generate a new continuous signal
One thing I had in an earlier draft of this shortform: the concept of “brain waves” makes me suspect that the timings of neural spikes are also best understood as discrete. But I don’t know enough about how brain waves actually work (or what they even are) to say anything substantive here.
wait, neuron stuff is discrete?
a neural spike either happens or not, you don’t get partial spikes
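The all-or-none point, together with the frequency-coding observation upthread, can be caricatured in a few lines (hypothetical parameters, not a biophysical model): input accumulates continuously, but the output at each step is a binary spike-or-no-spike, and the continuous quantity reappears only as spike rate.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Toy integrate-and-fire neuron: continuous input in, binary spikes out.

    Parameters are hypothetical; this is a caricature, not a biophysical model.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i          # membrane variable integrates input, leaks
        if v >= threshold:
            spikes.append(1)      # all-or-none: a spike either happens or not
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak steady input still fires once enough charge accumulates; a stronger
# input fires more often -- the continuous drive reappears as a firing rate.
weak = integrate_and_fire([0.3] * 20)
strong = integrate_and_fire([0.6] * 20)
print(sum(weak), sum(strong))  # prints: 5 10
```

Every output element is 0 or 1 (discrete), yet the spike count over a window recovers a graded quantity, which is roughly the tension the comment above is pointing at.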