Trying to be original may be justifiable if people will buy a NEW!! product even if it’s inferior.
I appreciated your choice of examples. Conformist-nonconformism is about the most annoying thing in the world to me, in addition to making a lot of smart people useless (or worse).
An example, to point out that this isn’t necessarily a market failure caused by imperfect information/biases: fiction. Something new has a lower bar than something old. You can’t surprise me with the same plot twists, and can’t give me the same novel speculation twice (especially for the most important parts of the work, which I forget less).
Likewise if I have a way of detecting errors in e.g. code, I may want a completely-different-paradigm tester even if it’s on average worse, in hopes of catching the places where my first tester failed—likewise for emergency preparedness and backup techniques generally, where you want to minimize positive correlation in error so that something is very likely to work at all.
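The tester point can be made concrete with toy numbers (the detection rates here are made up for illustration): a second tester that is worse on average but fails independently of the first adds more coverage than a clone of the first tester would.

```python
# Made-up detection rates, purely for illustration.
p_first = 0.9   # main tester catches 90% of bugs
p_other = 0.7   # different-paradigm tester, worse on average

# A clone of the first tester misses exactly the bugs it misses,
# so combined coverage stays at 0.9.
clone_coverage = p_first

# If the second tester's misses are independent of the first's,
# a bug slips through only when BOTH miss it.
pair_coverage = 1 - (1 - p_first) * (1 - p_other)

print(f"first + clone:       {clone_coverage:.2f}")   # 0.90
print(f"first + independent: {pair_coverage:.2f}")    # 0.97
```

The "worse" tester wins here precisely because its errors are uncorrelated with the first tester's, which is the whole point of backup techniques.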
Sub-likewise, generally if you are willing to take a hit to the mean in favor of increasing variance (because you care about the positive heavy tails more than the negative ones, e.g. if you can take the max of your attempts, or if you need a Hail Mary in football to win) you will have an example of wanting worse but different.
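The max-of-attempts point can be sketched with a toy simulation (distributions and parameters here are invented for illustration): when you keep only your best attempt, a lower-mean, higher-variance strategy can beat a higher-mean, low-variance one.

```python
import random

random.seed(0)

def best_of(mean, sd, attempts, trials=20000):
    """Monte Carlo estimate of E[max of `attempts` draws] from N(mean, sd)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(mean, sd) for _ in range(attempts))
    return total / trials

# "Better on average" strategy: higher mean, little variance.
safe = best_of(mean=0.6, sd=0.05, attempts=10)
# "Worse but different" strategy: lower mean, lots of variance.
wild = best_of(mean=0.5, sd=0.5, attempts=10)

print(f"safe best-of-10: {safe:.2f}")
print(f"wild best-of-10: {wild:.2f}")
```

With these made-up parameters the wild strategy's best attempt comes out well ahead, because the max operation only rewards the right tail.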