Social effects of algorithms that accurately identify human behaviour and traits

Related to: Could auto-generated troll scores reduce Twitter and Facebook harassments?, Do we underuse the genetic heuristic? and Book review of The Reputation Society (part I, part II).

Today, algorithms can accurately identify personality traits and levels of competence from computer-observable data. FiveLabs and YouAreWhatYouLike, for instance, can reliably infer your personality traits from what you’ve written and liked on Facebook. Similarly, algorithms can now fairly accurately identify how empathetic counselors and therapists are, and can identify online trolls. Automatic grading of essays is getting increasingly sophisticated. Recruiters increasingly rely on algorithms, which are, for instance, better than human recruiters at predicting job retention among low-skilled workers.
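To give a concrete sense of the kind of system being described, here is a minimal sketch of trait prediction from text. It is not the method used by FiveLabs or any of the tools above; the example texts and "high/low" labels are invented placeholders, and real systems use far larger labelled corpora and richer features than simple word counts.

```python
# A minimal, hypothetical sketch: predict a trait rating from word use.
# The texts and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = rated high on the trait, 0 = rated low.
texts = [
    "I completely understand how hard that must have been for you.",
    "That sounds really painful; thank you for telling me about it.",
    "Just get over it, everyone has problems.",
    "I don't see why you keep complaining about this.",
]
labels = [1, 1, 0, 0]

# Superficial word-use features (TF-IDF) feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new, unseen utterance.
new_text = ["I'm sorry you're going through this; how can I help?"]
print(model.predict_proba(new_text)[0][1])  # estimated probability of a "high" rating
```

The point of the sketch is only that such classifiers operate on readily observable surface features of language, which is part of why this task looks easier than adjudicating the truth of claims.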

These sorts of algorithms will no doubt become more accurate, and cheaper to train, in the future. With improved speech recognition, it will presumably become possible to assess both IQ and personality traits by letting your device overhear longer conversations. This could be extremely useful to, e.g. intelligence services or recruiters.

Because such algorithms could identify competent and benevolent people, they could provide a means to better social decisions. An alternative route to better decisions is to identify, e.g. factual claims as true or false, or arguments as valid or invalid. Numerous companies are working on such problems, with some measure of success, but especially when it comes to more complex and theoretical facts or arguments, this seems quite hard. It seems unlikely to me that we will have algorithms able to point out subtle fallacies anytime soon. By comparison, it would be much easier for algorithms to assess people’s IQ or personality traits by looking at superficial features of word use and other readily observable behaviour. As we have seen, algorithms are already able to do this to some extent, and significant improvements in the near future seem possible.

Thus, rather than improving our social decisions by letting algorithms adjudicate the object-level claims and arguments, we would instead use them to provide reliable ad hominem arguments against the participants in the debate. To wit, rather than letting our algorithms show that a certain politician’s claims are false and his arguments invalid, we would let them point out that he is less than brilliant and has sociopathic tendencies. The latter seems to me significantly easier (even though it will by no means be easy: it might take a long time before we have such algorithms).

Now, for these algorithms to lead to better social decisions, it is of course not enough that they are accurate: they must also be perceived as such by the relevant decision-makers. In recruiting and the intelligence services, it seems likely that they increasingly will be, even though there will of course be some resistance. The resistance will probably be higher among voters, many of whom might prefer their own judgements of politicians to deferring to an algorithm. However, if the algorithms were sufficiently accurate, it seems unlikely that they wouldn’t have profound effects on election results. Whoever the algorithms favoured would scream the results from the rooftops, and it seems likely that this would affect undecided voters.

Besides better political decisions, these algorithms could also lead to more competent rule in other areas of society. This might affect, e.g. GDP and the rate of progress.

What would the impact be on existential risk? It seems likely to me that if these algorithms led to the rule of the competent and the benevolent, that would lead to more efforts to reduce existential risks, more co-operation in the world, and better rule in general, all of which would reduce existential risk. However, there might also be countervailing considerations. These technologies could have a large impact on society and lead to chains of events that are very hard to predict. My initial hunch, however, is that they would mostly play a positive role for X-risk.

Could these technologies be held back for reasons of privacy? It seems that secret use of them to assess someone during everyday conversation could potentially be outlawed. It seems to me far less likely that it would be prohibited to use them to assess, e.g. a politician’s intelligence, trustworthiness and benevolence. However, these things, too, are hard to predict.