I feel that intuitively as well—but the hard question for me is: how do I square “the maximal utility of existence is related to diversity and uniqueness” with “the utility of a probability distribution is the probability-weighted sum of its outcomes, even when the existence(s) within that distribution aren’t diverse or unique”?
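To make the tension explicit, the expected-utility rule the question refers to can be written in standard notation (my notation, not the original commenter’s):

```latex
\mathbb{E}[U] = \sum_i p_i \, u(x_i)
```

Each outcome $x_i$ contributes in proportion to its probability $p_i$, and nothing in the sum cares whether the $x_i$ are diverse and unique or identical duplicates—which is exactly the clash with the diversity intuition.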
Either way, see source:
Because the two individuals created by transporter duplication are identical to the person who existed prior to beaming, the term “transporter clone” could apply to either of them
Was so surprised that nobody’s raised this point that I made an account just to make it.
Large organizations and highly placed individuals who solve coordination problems can make lots of money for reasons other than market efficiency. Most obviously, because they are in the best position to be rent-seeking. Coordinators take advantage of network effects to make themselves indispensable, then have every incentive to enshittify—to use their position as a coordinator to extract rent and dictate what activities can be coordinated (picking up rideshare customers) and which cannot (coordinating contract negotiation, selling products the coordinator disapproves of, etc).
This isn’t so relevant to your point about freelancers, and I agree with the general point that coordination is enormously valuable to society. But coordination would also be well-paid and ‘taut’ in a world where coordinators’ incentives push toward net-negative rent extraction rather than economic efficiency.
Proof-of-stake and proof-of-work are both often implemented cryptographically because, in cryptographic domains, verification can be much easier than generation. I think another option is to apply that principle to the problem directly, where possible. The best example: formalizing a math theorem in Lean makes it much easier to verify than a prose proof is to read. CS and ML papers can sometimes (now that making software is much easier) be implemented as toy examples, sized appropriately for reviewers (and reviewers’ AI instances) to check for hardcoding or cheating. Tough data-analysis domains could be reduced to raw data from a reputable source plus a minimal, non-steering prompt for reviewers’ models to re-discover what the author wanted to publish. Eventually, I think automated or simulated biology/chemistry labs could be funded to attempt to reproduce new papers’ results, putting their reputations on the line.
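The generation/verification asymmetry this leans on can be made concrete with a toy hashcash-style proof-of-work sketch (function names and the difficulty parameter are illustrative, not from any particular system):

```python
import hashlib
import itertools

def generate_pow(data: bytes, difficulty: int = 3) -> int:
    # Generation: search nonces until the hash has `difficulty` leading
    # zero hex digits -- costly, ~16**difficulty attempts on average.
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(data: bytes, nonce: int, difficulty: int = 3) -> bool:
    # Verification: a single hash computation, no matter how hard
    # the nonce was to find.
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = generate_pow(b"submitted-paper", difficulty=2)
print(verify_pow(b"submitted-paper", nonce, difficulty=2))  # True
```

The same asymmetry is what a Lean-checked proof or a reviewer-runnable toy implementation buys: the author pays the generation cost once, and each verifier pays only the cheap check.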
This is not sufficient to protect against motivated bullshitters, especially not in all domains, and I think in the near future institutions may be forced to fall back further on reputation. But I think it’s workable. Academic proof-of-work is, at best, a proxy for avoiding overload with verification work, but I think we can make verification easier in other ways.