I don’t think this quite works, but I like the attempt. The problem I see here is that this is likely to create filter bubbles. One of my strategies for avoiding filter bubbles is to specifically seek out media that is unpopular and then simply try to get through a lot of it fast, because what I want rarely correlates very strongly with what others want. Also, upvoting someone’s comments doesn’t mean that I agree with them, and agreeing with them doesn’t mean that I trust them to recognize what’s good in the same situations I would. I would suggest that a key problem with karma is in fact that there’s only a single up/down axis, but I think there’s something more fundamentally funky about the very idea of an “upvote so others can see” view, even as it exists now. I’d personally suggest that votes should be at the same level as comments: votes should be seen as reviews, in the same sense as scientific reviews. And even scientific review has serious problems. (Index of the last time I did a search on this.)
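To make the “votes as reviews” suggestion concrete, here is a minimal sketch of what that data model might look like. All names here are hypothetical illustrations, not part of any existing karma system: the key idea is just that a vote gets the same structure as a comment, so it can carry a rationale and be reviewed in turn.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a vote has the same shape as a comment, so it can
# be attributed, disputed, and itself reviewed (like scientific review).
@dataclass
class Review:
    reviewer: str          # who cast it
    target_id: str         # the comment (or review) being reviewed
    verdict: str           # e.g. "endorse", "dispute", "unclear"
    rationale: str = ""    # optional text, just like a comment body
    reviews: list = field(default_factory=list)  # reviews of this review

# A bare upvote is just a review with no rationale:
upvote = Review(reviewer="alice", target_id="comment-42", verdict="endorse")

# A substantive review adds reasoning others can respond to:
critique = Review(
    reviewer="bob",
    target_id="comment-42",
    verdict="dispute",
    rationale="The cited statistic doesn't support the conclusion.",
)

# Reviews can themselves be reviewed, collapsing the vote/comment split:
upvote.reviews.append(critique)
```

The point of the sketch is that nothing distinguishes an “upvote” from a “review” except how much rationale it carries, which is what putting votes at the same level as comments would mean in practice.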
In general, I think what we’d want would have some degree of intentional partitioning as new nodes get added, and some degree of intentional anti-partitioning; the graph should probably sit near the edge of criticality in some key aspect, as most highly effective systems turn out to, but figuring out which feature should be at the edge of criticality is left an open question by that claim.
It might make sense to intentionally separate simulacrum levels 1, 2, and 3 - fact, manipulation, and belonging - if possible; getting them to stay separated, or to start out separated even for a new user, is not trivial. How could something like EigenKarma be adapted to do this? Dunno.
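One way the adaptation could go, sketched very tentatively: keep a separate vote graph per simulacrum level and propagate trust independently on each axis, so “I trust you on facts” never bleeds into “I trust you on belonging.” The axis names and the EigenTrust-style power iteration below are my assumptions for illustration, not the actual EigenKarma design.

```python
import numpy as np

# Hypothetical multi-axis variant: one vote graph per simulacrum level,
# with trust propagated separately on each axis.
AXES = ["fact", "manipulation", "belonging"]

def propagate_trust(votes: np.ndarray, seed: int, alpha: float = 0.85,
                    iters: int = 50) -> np.ndarray:
    """Personalized trust scores from `seed`'s point of view.

    votes[i, j] = weight of votes user i gave user j on this axis.
    Rows are normalized, then trust flows like personalized PageRank.
    """
    n = votes.shape[0]
    row_sums = votes.sum(axis=1, keepdims=True)
    P = np.divide(votes, row_sums,
                  out=np.zeros_like(votes, dtype=float),
                  where=row_sums > 0)
    restart = np.zeros(n)
    restart[seed] = 1.0
    t = restart.copy()
    for _ in range(iters):
        t = alpha * (t @ P) + (1 - alpha) * restart
    return t

# Three users; user 0 trusts user 1 on facts only, user 2 on belonging only.
graphs = {axis: np.zeros((3, 3)) for axis in AXES}
graphs["fact"][0, 1] = 1.0
graphs["belonging"][0, 2] = 1.0

scores = {axis: propagate_trust(g, seed=0) for axis, g in graphs.items()}
# User 1 ends up with trust on the "fact" axis but none on "belonging",
# and vice versa for user 2: the levels stay partitioned.
```

Whether real usage would keep the axes separated, rather than users voting on all three axes at once out of tribal affiliation, is exactly the “getting them to stay separated” problem above; the mechanism only makes the separation representable, not self-enforcing.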