A wonderful vision of a world where you don’t need a job because you can make money by full-time arguing with people online!
However, any objection to the various karma systems (e.g. that you can get upvotes by posting clickbait) would apply here too, only more strongly, because now there would be a financial incentive.
I think Reddit tried something like that; you could award people “Reddit gold”, not sure how it worked.
Prediction markets in forums and systems that support them, naturally giving rise to (or themselves serving as) refutation bounties.
You need to have a way to evaluate the outcome. For example, you couldn’t use a prediction market to ask whether people have free will, or what the meaning of life is. Probably not even whether Trump won the 2020 election, unless you specify exactly how the answer will be determined, because simply asking people won’t work.
A subscription model with fees being distributed to artists depending on post-watch user evaluations, allowing outsized rewards for media that’s initially hard for the consumer to appreciate the value of, but turns out to have immense value after they’ve fully understood it. (media economics by default are terminally punishing to works like that)
The details matter, because they determine how people will try to game this. I could imagine a system where you e.g. upvote the articles you liked, and then one year later it shows you those articles again, and you can specify whether you still like them on reflection. And, um, maybe 10% of your subscription is distributed to the articles you liked immediately, and 90% to those you liked on reflection? I just made this up; I’m not sure what the weakness is, other than authors having to wait a year before the rewards for meaningful content start coming.
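A minimal sketch of how that split could be computed, assuming a 10%/90% ratio, even per-article shares, and that the one-year delay is tracked elsewhere (all of those are just placeholder choices, not a worked-out design):

```python
# Sketch of the delayed-reward split described above. The ratios, the even
# per-article shares, and the data shapes are illustrative assumptions only.

def distribute_subscription(fee, immediate_likes, reflective_likes,
                            immediate_share=0.10, reflective_share=0.90):
    """Split one user's subscription fee between the articles they upvoted
    right away and the articles they still endorsed a year later."""
    payouts = {}

    if immediate_likes:
        per_article = fee * immediate_share / len(immediate_likes)
        for article in immediate_likes:
            payouts[article] = payouts.get(article, 0.0) + per_article

    if reflective_likes:
        per_article = fee * reflective_share / len(reflective_likes)
        for article in reflective_likes:
            payouts[article] = payouts.get(article, 0.0) + per_article

    return payouts

# Example: a $10/month subscriber, three quick upvotes, one article that still
# held up a year later. The clickbait gets pennies; the durable essay gets most.
print(distribute_subscription(
    10.0,
    immediate_likes=["clickbait-a", "clickbait-b", "essay-x"],
    reflective_likes=["essay-x"],
))
# ≈ {'clickbait-a': 0.33, 'clickbait-b': 0.33, 'essay-x': 9.33}
```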
I think Reddit tried something like that; you could award people “Reddit gold”, not sure how it worked.
It didn’t do anything systemically, just made the comment look different.
You need to have a way to evaluate the outcome
What I plan on doing is evaluating comments partly based on the expected eventual findings of deeper discussion of those comments. You can’t resolve a prediction market about whether free will is real, but you can make a prediction market about what kind of consensus or common ground might be reached if you had Keith Frankish and Michael Edward Johnson undertake 8 hours of podcasting, because that is a test that can actually be run.
Or you can make it about resolutions of investigations undertaken by clusters of the scholarly endorsement network.
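To make the “resolvable question” constraint concrete, here is one way a market question could be paired with an explicit resolution procedure; the field names, the deadline, and the wording of the test are illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class MarketQuestion:
    """A prediction-market question plus the concrete test that resolves it.
    Without a resolution_test, the question isn't something you can trade on."""
    claim: str            # what the market is nominally about
    resolution_test: str  # the procedure that actually decides the outcome
    resolves_by: str      # when the procedure must have been run

# Not resolvable as stated: no procedure settles it, so no market can exist.
metaphysics = MarketQuestion(
    claim="Free will is real",
    resolution_test="(none specified)",
    resolves_by="never",
)

# Resolvable: the market is about the outcome of a specified discussion,
# not about the underlying metaphysical question itself.
podcast = MarketQuestion(
    claim="Frankish and Johnson reach substantial common ground",
    resolution_test="8 hours of recorded discussion; outcome judged from the "
                    "summary statement the participants endorse afterwards",
    resolves_by="2026-12-31",  # illustrative deadline
)
```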
The details matter, because they determine how people will try to game this.
The best way to game that is to submit your own articles to the system and then allocate all of your gratitude to them, so that you get back the entirety of your subscription fee. But it’d be a small amount of money (well, ideally it wouldn’t be, since access to good literature is tremendously undervalued, but at first it would be), and you’d have to be especially malignant to do it after spending a substantial amount of time reading and being transformed by other people’s work.
But I guess the manifestation of this that’s hardest to police is: will a user endorse a work even if they know the money will go entirely to a producer they dislike, especially when that producer has since fired all of the creatives who made the work?