A Numerical Model of View Clusters: Results


As promised in the last post, here are some results of the simulation.

So now that the hard work is done, let's see what the model gives us. I cheated a bit and adjusted the defaults to show something reasonable. You can see the hand-fudged coefficients in the code, if you feel like it.

You can also play with the live model yourself here.

So, let’s have a look:

50 people, views starting uniformly distributed 20 points apart; equilibrium separation is about 220 points.

To recap the original setup, we have a single dimension of views, starting with the uniform distribution. This corresponds to the horizontal axis on the top two graphs ("potential" and "force"), and to the vertical axis on the bottom graph (the time evolution of clustering and alienation of views). Note that each time step on the graph corresponds to 1000 steps in the code, i.e. the whole length of the simulation is 100,000 steps.
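For readers who skipped the previous post, the loop can be sketched in a few lines of Python. The pairwise force below is a hypothetical two-well form chosen to reproduce the behavior described here (attraction between nearby views, a repulsive band, a stable separation near 220 points); it is not the live model's actual code or coefficients:

```python
import numpy as np

def pair_force(d, gain=1.0):
    """Hypothetical pairwise force (positive = repulsion) from a potential
    with wells at separation 0 and ~220: close views attract and merge,
    a repulsive band keeps distinct clusters apart, and the force decays
    at large distances. This form is an assumption, not the post's code."""
    a, b, far_depth, sep = 60.0, 80.0, 0.5, 220.0
    dU = (2 * d / a**2) * np.exp(-((d / a) ** 2)) \
        + far_depth * (2 * (d - sep) / b**2) * np.exp(-(((d - sep) / b) ** 2))
    return -gain * dU

def simulate(n=50, spacing=20.0, steps=5000, dt=5.0, gain=1.0):
    """Overdamped dynamics: every step, each view moves along the net
    force exerted on it by all the other views."""
    x = np.arange(n) * spacing              # 50 views, 20 points apart
    for _ in range(steps):
        diff = x[:, None] - x[None, :]      # signed pairwise separations
        f = pair_force(np.abs(diff), gain)  # magnitude of each pairwise force
        np.fill_diagonal(f, 0.0)            # no self-interaction
        x = x + dt * np.sum(f * np.sign(diff), axis=1)
    return x

final = simulate()
```

The step count is truncated here for speed; the full run uses 100,000 steps.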

The basic results are not very surprising:

  • People with the views closest to each other converge, with those on the fence taking a while to pick a side, but eventually being pulled (and pushed) toward one of the view clusters.

  • The separation between prevalent views corresponds to the equilibrium between repulsion from an outgroup and attraction toward what one may think of as universal values.

  • The number of view clusters roughly corresponds to the initial separation spread (1000 in this example) divided by the equilibrium distance.
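That last estimate can be sanity-checked numerically. With any force curve of roughly this shape (the two-well form below is an assumption, tuned so the stable zero sits near 220 points; it is not the live model's code), the equilibrium separation is just the stable root of the force, and the expected cluster count is the initial spread divided by it:

```python
import numpy as np

def pair_force(d, gain=1.0):
    # Assumed two-well form (not the live model's actual coefficients):
    # attraction near zero separation, a repulsive band in between,
    # and a stable zero crossing placed near 220 by construction.
    a, b, far_depth, sep = 60.0, 80.0, 0.5, 220.0
    dU = (2 * d / a**2) * np.exp(-((d / a) ** 2)) \
        + far_depth * (2 * (d - sep) / b**2) * np.exp(-(((d - sep) / b) ** 2))
    return -gain * dU

# Bisection for the stable equilibrium: the force is repulsive (positive)
# just below it and attractive (negative) just above it.
lo, hi = 150.0, 300.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pair_force(mid) > 0 else (lo, mid)
equilibrium = 0.5 * (lo + hi)               # close to 220

spread = 1000.0                             # initial separation spread
estimated_clusters = spread / equilibrium   # about 4.5
```

Here 1000 / 220 ≈ 4.5, consistent with runs landing on 4 or 5 clusters.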

Here is another couple of runs with the same parameters:

It is interesting to notice that, while we eventually end up with just 4 clusters, sometimes there is an intermediate metastable equilibrium with 5 view clusters, and eventually the closest two merge together.
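When eyeballing the graphs gets tedious, the cluster count can be pulled out of the final positions directly, e.g. by splitting at gaps wider than half the equilibrium separation (a hypothetical helper, not part of the model):

```python
import numpy as np

def count_clusters(positions, gap=110.0):
    """Count clusters by splitting sorted positions at gaps larger than
    `gap` (here half the ~220-point equilibrium separation)."""
    xs = np.sort(np.asarray(positions))
    return int(1 + np.sum(np.diff(xs) > gap))

# e.g. four tight groups of views:
n_clusters = count_clusters([0, 5, 230, 228, 455, 460, 690])  # -> 4
```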

Let's play around with the parameters a little more. Upping the gain models how strong the attraction/repulsion feelings are. Here are a couple of runs with the "feelings" being twice as intense:

It looks like the clustering happens faster when the feelings are more intense, which is not very surprising. The number of clusters does not seem to be affected, though. What if, instead of changing the intensity, we play with the "width", meaning how quickly the repulsive feelings subside after enough separation? Why should they subside? Scott Alexander discussed it in I Can Tolerate Anything Except The Outgroup:

You wouldn't celebrate Osama's death, only Thatcher's. And you wouldn't call ISIS savages, only Fox News. Fox is the outgroup, ISIS is just some random people off in a desert. You hate the outgroup, you don't hate random desert people.

So, let's see what changes if we play with the width ("distance") of the attraction/repulsion feelings. Double vs. half:

Double the width

So, unsurprisingly, the separation of the view clusters increases to match the width increase. And, given that we didn't touch the initial spread, we ended up with 3 clusters instead of 4 or 5. Now to see what happens if we reduce the width. A priori one would expect the final separation to narrow, and the number of view clusters to increase to roughly the ratio of the initial separation to the equilibrium separation. Here is one run:

Half the width (note the gain increase to keep the "strength" of the feelings the same)

Note that I had to crank up the overall gain to keep the force roughly the same. And the separation between clusters did indeed get smaller, as predicted. But! Something I didn't expect happened: we only ended up with 4 clusters instead of 8 or so. Let's try another couple of runs:

This time we ended up with 5 clusters, then 4 again. A few more runs show that, just like before, we mostly end up with 4 clusters, but sometimes with 5. Why? I have no good explanation for it at this point. It contradicts my initial intuition. Is my intuition wrong? Is there a bug in the model? That's the beauty of doing numerical simulations: sometimes you see something unexpected, and then have to figure out why. And, unlike in real life, it is easy to play with the models and parameters, and see what affects what and how, instead of committing a cardinal philosophical sin: using open-loop logic with no feedback.
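Some of that playing can even be done without rerunning the simulation. For a pairwise force of the form gain × g(d / width) (an assumed form; the live model may differ), the gain scales the force's magnitude but not its zero crossings, while the width scales every zero crossing, and hence the equilibrium separation, proportionally:

```python
import numpy as np

def pair_force(d, gain=1.0, width=1.0):
    # Assumed two-well form (not the live model's actual coefficients):
    # every length scale is multiplied by `width`, the whole force by `gain`.
    a, b, far_depth, sep = 60.0 * width, 80.0 * width, 0.5, 220.0 * width
    dU = (2 * d / a**2) * np.exp(-((d / a) ** 2)) \
        + far_depth * (2 * (d - sep) / b**2) * np.exp(-(((d - sep) / b) ** 2))
    return -gain * dU

def equilibrium(gain=1.0, width=1.0):
    # Bisection for the stable zero: repulsive just below it, attractive above.
    lo, hi = 150.0 * width, 300.0 * width
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pair_force(mid, gain, width) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Doubling the gain leaves the equilibrium untouched, while halving the width halves it, so the naive cluster-count prediction doubles; why the simulation stops short of that prediction is exactly the open question.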

So, this was an exercise in quantitative philosophy, if you wish. Which, in my opinion, should be an essential complement to the usual qualitative philosophy.
