Like Raemon, I want to echo the point that following your intellectual curiosity is probably the best way to do research work, and generally make the most of your energy/time budget. But some specific considerations:
1. What seems important to Vaniver.
I expect that voting systems mostly won’t matter for AI outcomes. It seems like the primary question is whether or not the AI system we make does anything like what we like/endorse (i.e. whether or not existential accidents happen), and the secondary question is whether or not teams coordinate to form a coalition to build such a safe system (or otherwise prevent the creation of unsafe systems). Voting seems mostly useful for aggregating preferences over scarce joint decisions in a bandwidth-sensitive way (“where should the group go to lunch?” as opposed to “what do you personally want to eat?”, or “which of these four candidates should be president?” as opposed to “what are your complete views on politics?”). The coalition-building problem will likely look more like negotiation (see this paper by Critch as an example of the sort of thing that seems useful to me in that space), and the preference-satisfaction solution in the glorious transhuman future will likely look more like telling Alexa how you want your personal environment to be, without having to worry much about scarcity or joint decision-making.
It’s possible that government policy will be important, and the health of public discourse will be important, but it seems quite unlikely to me that election reforms will have the desired effects in time.
---
2. Whether it’s the core problem with discourse, or whether fixing it would be sufficient to overcome modern challenges.
It seems like the forces pushing towards political polarization are considerably stronger than just the pressures from electoral systems, and mostly have to do with changes in communication media. Basically, current media technologies push the creation and curation of media closer to the consumer, who has different (and worse) incentives than elites, which leads to a general dumbing-down and coarsening of discourse. Superior election technology seems likely to help broadly liked centrists defeat people who manage to eke out 51% support and 49% hate, but that doesn’t seem like it’ll fix discussions of cultural hot spots. (Will broadly liked centrists cause American politics to be more sensible on climate change, or the weird mix of negotiations about border security, or so on?)
Figuring out what’s upstream of worsening discourse and pushing on that (or seeking to create more good discourse, or so on) is probably more effective if better public conversations are actually the goal; and even if this effort helps, if it can’t help enough, it may be better to write off the thing it would help.
---
3. Whether or not it matters that it seems important to Vaniver.
There’s a claim in Inadequate Equilibria, specifically at the end of Moloch’s Toolbox, that lots of problems don’t get solved because there aren’t all that many people who are unbiased and will float to whatever problem seems most important (the ‘maximizing altruists’) relative to the number of problems, and so you get problems that seem ‘quite serious’ but are nonetheless neglected, because solving them is more costly than human civilization can support at present. (This dynamic is common; when I worked in industry, there were many improvements that could be made to the system that weren’t being made because they weren’t the most important improvement to be making at the time.)
But also this sort of meta-work has its own costs. Compare Alice, who views LessWrong on her phone and notices a bug, and then fixes the bug and submits a pull request, and then moves on, with Beatrice, who considers all the bugs on LessWrong and decides which is most important, and then fixes that one and submits a pull request. Then compare both of them with Carol, who also considers all the different projects and tries to figure out which of them is most important, which also maybe requires considering all the different metrics of project importance, which also maybe requires considering all the different decision theories, which also maybe requires...
It seems good for Alice to not pay the costs of optimizing, and just do the local improvements, especially if the alternative is that Alice doesn’t make any improvements. Beatrice will do more important work, but is ‘paying twice’ for it, and in situations where the bugs are roughly equally important this means Beatrice is perhaps less effective than someone less reflective. I think that people who are naturally interested in this sort of maximizing altruism should do it, and people who aren’t (and want to just be Alice instead) should be Alice without worrying about it too much (or trying to convince themselves that, no, they are doing the maximizing altruism thing).