First, thank you so much for taking the time to reply! It sucks to write into the void, and your thorough comment gives me much-needed feedback.
It’s unclear to me that the model you construct has much relationship to your tested idea
I must have been super unclear, yes. The model I suggested as a first approximation is very basic:
- People with similar views attract: they talk and their views converge. People with divergent views repel: they talk, dislike what the other party says, and their views drift apart even more.
- The amount of interaction between two people does not depend on how close their views are. This is not a great approximation in general, but gotta start somewhere. Also, in an online world it is hard to avoid interactions with those you disagree with, so the assumption does not seem totally without merit. But it can definitely be improved upon.
- The shape of the attraction/repulsion as a function of the distance between views is largely arbitrary, just something simple that reflects the first point above.
- The model is memoryless, i.e. you don't keep tabs on past interactions; at each step, each interaction between two people is evaluated on its own merits.
I am not sure if this answers the question about the grounding, I am most likely missing something.
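To make the points above concrete, here is a minimal sketch of such a model. All the names, parameters, and functional forms are my own guesses for illustration, not taken from the original posts: agents hold scalar opinions, every pair interacts at every step regardless of distance, nearby opinions attract, distant ones repel, and nothing is remembered between steps.

```python
import random

def step(opinions, threshold=1.0, rate=0.01):
    """One memoryless round: every pair interacts, regardless of distance."""
    n = len(opinions)
    delta = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            d = opinions[j] - opinions[i]
            if abs(d) < threshold:        # similar views: converge
                delta[i] += rate * d
                delta[j] -= rate * d
            elif d != 0:                  # divergent views: drift further apart
                s = d / abs(d)
                delta[i] -= rate * 0.5 * s
                delta[j] += rate * 0.5 * s
    return [o + dl for o, dl in zip(opinions, delta)]

random.seed(0)
opinions = [random.uniform(-5, 5) for _ in range(50)]
for _ in range(200):
    opinions = step(opinions)

# Group final opinions separated by more than the interaction threshold.
clusters = []
for o in sorted(opinions):
    if not clusters or o - clusters[-1][-1] > 1.0:
        clusters.append([o])
    else:
        clusters[-1].append(o)
print(len(clusters))
```

Running something like this, the population collapses into a handful of well-separated opinion clusters rather than staying spread out, which is the qualitative behavior I was after.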
FWIW I went into this expecting a very different sort of model, one more like a simulation using simple bots that interact in the simplified ways you describe, and then we could see how they end up clustering, maybe by each bot keeping an affinity score for the others and finding results about the affinity of the bots forming clusters.
I am not sure I follow. The bots indeed do end up clustering into 4 to 5 different clusters, where each cluster represents a certain convergent view. By “keeping the affinity score”, do you mean they keep track of the past interactions, not just compare current views at each step? That would be an interesting improvement, adding memory to the model, but that would be, well, an improvement, not necessarily something you put into a toy model from the beginning. Maybe you mean something else? I’m confused.
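For what it's worth, a memory-keeping variant along those lines is easy to bolt on. In this sketch (again, my own naming and parameters, purely illustrative, not from the posts), each pair keeps a running affinity score that is reinforced or eroded by past interactions, and positive affinity scales how strongly the pair attracts:

```python
import random

def step_with_memory(opinions, affinity, threshold=1.0, rate=0.01, memory=0.9):
    """One round where each pair remembers past interactions via an affinity score."""
    n = len(opinions)
    delta = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            d = opinions[j] - opinions[i]
            agreed = abs(d) < threshold
            # Exponentially decay old memory and reinforce with the current
            # interaction; the score stays within [-1, 1].
            affinity[i][j] = memory * affinity[i][j] + (1 - memory) * (1 if agreed else -1)
            w = max(affinity[i][j], 0.0)   # only positive affinity attracts
            if agreed:
                delta[i] += rate * w * d
                delta[j] -= rate * w * d
            elif d != 0:
                s = d / abs(d)
                delta[i] -= rate * 0.5 * s
                delta[j] += rate * 0.5 * s
    return [o + dl for o, dl in zip(opinions, delta)], affinity

random.seed(1)
n = 30
ops = [random.uniform(-3, 3) for _ in range(n)]
aff = [[0.0] * n for _ in range(n)]
for _ in range(100):
    ops, aff = step_with_memory(ops, aff)
```

The qualitative difference memory adds: newly agreeing pairs only start attracting after affinity builds up, so early disagreements leave a lasting mark on the dynamics.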
Oh, this paragraph seems to suggest your model has a lot more going on than I got from reading this post. Maybe if I followed your links I would find some more details (it sounded like they were just extra details that could be skipped)? I got the impression you found a function that has a shape illustrative of what you want and that was it, but this sounds like there's a lot more going on that is not described in the text of this post!
Right, this is an honest dynamical model; the curves in the follow-up post are the opinion bots converging or diverging as they interact. I thought I explained it, and I think it's in one of the blog posts, but looking back, apparently not on this site. Thanks!