Generalized Efficient Markets and Academia
The generalized efficient markets (GEM) hypothesis says that low-hanging fruit has already been picked. If there were some easy way to reliably make a fortune on the stock market, it would already have been done, and the opportunity would be gone. If some obvious theory explained all the known data on some phenomenon better than the current mainstream theory, that theory would already have been published and widely adopted. If there were some simple way to build a better, cheaper mousetrap, it would already be on the market.
Look back—one of those three is not like the others.
One nice property of markets is that it only takes a small number of people to remove an inefficiency. Once the better, cheaper mousetrap hits the market, the old mousetraps become obsolete; the new mousetrap is adopted even if there are initially many more old-mousetrap-makers than new-mousetrap-makers. Same with the stock market: even if most investors don’t notice a pattern, it only takes a handful to notice the opportunity and remove it. Competition drives market-beating profits to zero with only a handful of competitors, even if everyone else in the market misses the opportunity.
But academia is a different story.
In order for a theory to be adopted within an academic field, most people in the field must recognize the theory’s advantages. If a theory has a fatal flaw which most people in the field won’t understand (e.g. because of insufficient mathematical fluency), then it doesn’t matter how comprehensively fatal the flaw is: the theory won’t be dropped until some easier-to-understand evidence comes along (e.g. direct experimental refutation of the theory). A handful of people who understand the problem might sound the alarm, but that won’t be enough social signal to stand out from the constant social noise of people arguing for/against the theory in other ways.
Some examples of ways this might happen:
Most people in the field don’t understand statistical significance
Most people in the field don’t understand confounding
Most people in the field can’t distinguish a predictive theory from phlogiston
Most people in the field don’t have a notion of gears/causality, or don’t see why it matters
Most people in the field don’t understand equilibrium reasoning
Now, as long as people in the field recognize the importance of correctly predicting experimental outcomes, the right theory will probably win out eventually. But we should still expect to see low-hanging fruit in the meantime, among theories not yet fully nailed down by experiment.
People who do understand these sorts of things should expect to see obvious problems in the theories of fields where most people don’t understand these things. Of course, that doesn’t mean we can win fame in the field just by pointing out these errors—quite the opposite. The whole point is that the structure of academia means that one person can’t correct the errors of a field, when that field does not have the background needed to understand the error. But it does mean that, if we want to understand the topic of such a field for reasons other than making a career as an academic, then we should not be surprised if we can theorize better than the supposed experts. We don’t need to outperform the best in order to outperform the field; we just need to outperform the average.
The mousetrap thing seems pretty analogous: if I build a better mousetrap but can’t convince people it’s better, that seems similar to building a better theory and not being able to convince people it’s better. Although perhaps it’s easier in the second case, since the people I have to convince there are experts, who are presumably cheaper to educate than laymen (on the other hand, I may have to spend less on un-educating laymen).
Within the constraints of GEM, a ‘better mousetrap’ is one that customers agree is better.
The best theory isn’t the one that most accurately predicts reality, it’s the one that the academics agree is best.
Just as situations change and the best mousetrap for securing a clay granary is not the best mousetrap for a nursery, the best theory is not constant over time. But when a new theory becomes best, that is coincident with it becoming the academic standard.
This is similar to saying “Within the constraints of GEM, it only counts as a $10 bill lying on the ground if people know it’s there.”
The EMH, in theory, applies to the value people get from a thing, whether or not they know the value. It may be that there’s a mousetrap that makes it less likely other mice would come, and saves me money in the long run.
If I understood and believed that, I would buy the mousetrap no question, even though it’s more expensive. However, there’s a cost to educating me on top of the cost of just building the mousetrap.
The EMH has all sorts of weaknesses and holes, but the biggest weaknesses are around “innovation”—both finding new innovations, and propagating them through the marketplace. See the issues of asymmetric information, the innovator’s dilemma, etc.
In other words, I think @johnwentsworth was being too optimistic in his assessment: innovations of any kind will run into this issue, whether they have a market attached or not.
Yes, GEM does not permit there to be money on the ground. It’s one of the limitations of the model.
Which is a great reason to not use it to model situations where there is money on the ground.
Mhmmm, I think we’re both in agreement that GEM is a good model neither for new mousetraps nor new scientific discoveries… The interesting question for me is: Does the model break down in the same way for both?
No, and yes.
For mousetraps, it has the implicit problems with ignoring the effects of marketing on customer decision.
For academic theories, it has the implicit problem that the *consumers* (the scientific community, which is presented with multiple theories and forms a consensus around zero or more of them) are being misstated as the producer.
You can get paid for seeing an error in someone’s mousetrap design and making a design that lacks that error, even if the error is “bad marketing”. You can’t get paid for seeing an error in a mousetrap buyer, especially if that error is “ignores the exemplary marketing or otherwise makes decisions based on the wrong factors”.
Academia pays in prestige, but it doesn’t pay in prestige for “doesn’t join or change the consensus”; it pays a little bit to join the consensus and a fair bit more for publishing results that change the consensus, but joining it is much easier than changing it.
If someone has a theory that’s badly marketed in academia, you can earn prestige for appropriating the theory and pushing it with better marketing/framing.
Maybe we could remove some confusion by calling it “a more addictive mousetrap” instead? :D
That they agree is better before they actually use it. Plenty of customers buy microwaves with a lot more programs than they end up using. It’s one of many examples from Norman’s The Design of Everyday Things where companies produce devices with poor usability.
If a microwave is built faulty in a way where it irradiates the room around it and causes health damage, that’s a bad microwave, but customers might not know about the problem.
The microwave that irradiates its customers is similar to the flawed scientific theory that gets believed because an academic field lacks the mathematical ability to see that the theory is flawed.
Buying an academic theory by spending attention and reputation on it is just like buying a product with money. The buying decision depends in both cases on the qualities of the product that the recipients can perceive.
Yes, and to fit the lawsuit against the manufacturer of a faulty microwave or the journal retracting a flawed paper into GEM requires adding epicycles.
The problem with the mousetrap market, even under your framing of the problem, is that people are merely incentivized not to make incorrect decisions; there’s no way to profit off of other peoples’ poor choice of mousetrap. Mousetraps are to financial markets as Metaculus is to PredictIt. A better example would be the market for zero-day exploits; there, not only are people incentivized to find exploitable bugs, but also to search under rocks other people are leaving unturned. Academia is simply that much worse than the mousetrap market, because in that case the real “consumers” are the engineers/randos who read and use the scientific literature to inform decisions, and scientists are poorly aligned with their needs.
Yes, this is certainly a big issue. However, there are other fundamental issues with innovation that it’s upstream of.
Take, for example, the scarcity of information. Under normal market assumptions, if I do a poor job of marketing my product, someone else will step in to do that for me (profiting off of their ability to market combined with the knowledge that the product is more valuable). However, if I’m the only one that KNOWS about my product, that assumption breaks down. This applies both to new types of mousetraps and to new theories.
In addition, if I had some sort of IP around the innovation it may mean that even if someone DID know about it, they would not be incentivized to spread it because I would get all the value. This is analogous to someone creating a theory and other scientists not being incentivized to spread it. As you said, this is a problem of misaligned incentives between the people who spread ideas/innovations and the people who get value from them.
There’s no way to profit off of people preferring to buy mousetraps that are less effective at trapping mice, because, in that framing, the best mousetrap is *not* the one that is most effective at trapping mice.
You also can’t profit off of people buying fake 0-day exploits, unless you’re selling the best 0-days.
Your tautologically efficient market is not what economists or anyone else in the world is talking about when they talk about a market being efficient. In the trivial sense in which you’re posing the problem, you would describe your market as satisfying people’s needs even if people had parasitic worms in their brains that tricked them into thinking that mousetraps cured cancer. In the real world, people buy mousetraps because they *expect* those mousetraps to be more valuable than the price, for some definition of value that does not include “they bought the mousetrap”. A pillar of one such value system might be “effectiveness at catching mice”. In this scenario, people might be mistaken—they could have bounded rationality, they could have exploitable cognitive biases abused by advertisers, etc. The thing that makes financial markets resilient against this sort of problem is the nice property that rational actors are incentivized to take advantage of other peoples’ missteps. Mousetrap buyers cannot do this—you can’t buy a hundred mousetraps from an underrated dealer to correct the market, and you can’t short a mousetrap dealer’s traps because they are running deceptive advertising.
Yes. That is a limitation of the model.
No, it’s not *my* model.
But within the model, marketing is one of the factors that determines the quality of a mousetrap.
Reminiscent of cases where it’s hard to short something: you may be able to tell a theory is bunk, but not have any easy way to “short” (provide an intuitive explanation / design a simple refuting experiment)
Not even. An analogy there would imply you could take advantage of people’s lack of support for certain theories and collect against their underpricing of, say, TDT. That would make it more efficient than the mousetrap market. Academics are normally just motivated wildly perpendicular to whatever consequences or value their work produces.
This is very very wrong.
Knowing that there’s an inefficiency is only part of the battle. In order of importance to counter your point:
You need enough volume to remove the inefficiency. For a trading strategy, this is the capital you can allocate to this strategy specifically. It’s not obvious how many hedge funds, for example, are capital-constrained. In the mousetrap example, this means actually producing a sufficient volume of mousetraps.
The opportunity costs matter. The people best positioned to notice the inefficiency are also best positioned to notice others. Maybe there is a good trading strategy, but it makes *less* than your other strategies would. So you wouldn’t actually trade it, leaving it for others to pick up. Same with the mousetrap: maybe you go build a cat trap instead, or maybe you don’t build anything and make money consulting.
The inefficiency removal needs to stick. If you’re trading a strategy and then stop, the inefficiency will likely come back. You might stop for any number of reasons.
The execution is important. Let’s say you can analyze the data perfectly and come up with a clever algorithm. You’d still need to figure out how to execute the actual trading. Part of that is not introducing too many exploitable opportunities for other traders (e.g. market makers).
Overall, I’d say even if an inefficiency is publicly known, it usually takes a non-trivial amount of people*effort to remove all of it.
None of this actually invalidates the intended point: it only takes a small fraction of investors to remove an opportunity.
If the opportunity is actually market-beating, then those who notice it will gain capital over time, relative to those who don’t. The opportunity won’t disappear right away, but it will be removed eventually.
Obviously if those who notice an opportunity don’t find it worth exploiting, then the opportunity will persist. But it’s still true that it only takes a small fraction of investors actually exploiting the opportunity to remove it.
We do need more than just one exploiter to remove an opportunity. Some may stop sometimes, but there should be an equilibrium in which the opportunity yields just enough that people bother to exploit it. At that point, the market is efficient.
If an opportunity can’t actually be executed, then it isn’t an opportunity. It’s just a pattern in the data.
Bear in mind that when we talk about “removing” an opportunity, that doesn’t mean zero dollars can be made off of it. It means that the benefit from the opportunity does not exceed the opportunity cost.
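The “capital accumulates to those who notice” dynamic above can be made concrete with a toy compounding model. All the numbers here are illustrative assumptions (a 1% minority earning 12% against a 7% market return), and the edge is held fixed even as the exploiters grow, so treat this as a cartoon rather than a forecast:

```python
def exploiter_share(n_years, r_market=0.07, r_edge=0.12, initial_frac=0.01):
    """Toy model: a small fraction of investors compounds at a
    market-beating rate while everyone else earns the market return.
    Returns the exploiters' share of total capital after n_years."""
    e = initial_frac * (1 + r_edge) ** n_years        # exploiters' capital
    m = (1 - initial_frac) * (1 + r_market) ** n_years  # everyone else's
    return e / (e + m)
```

With these made-up rates, the 1% minority holds under 10% of capital after 50 years but roughly half after 100: the opportunity doesn’t vanish overnight, yet the capital that notices it eventually dominates the pricing, matching the “removed eventually, not right away” point.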
Two of these things are not like the others. People can have cognitive biases or bounded rationality that prevent them from making good decisions about mouse trap purchases despite it costing money. Unlike academia there is a direct incentive for these people to change their behavior; but that’s as far as it goes. There’s no good way to correct their decisions for profit like there is in a financial market.
How confident would you be that a new company that produces a mouse-trap that’s 10% better at catching mice would be able to articulate the advantage in a way to get their new mouse-trap design to spread?
I’m very doubtful that’s the case. It’s very hard for regular customers standing in front of a shelf of different mousetraps for sale to know about that advantage.
It took the market a long time to adopt inventions as great as sliced bread. Donald Norman laid out in The Design of Everyday Things how easy it is to build doors where it’s clear which side has to be pushed and which side has to be pulled. That still largely gets ignored, and plenty of doors get built in ways where it’s not easy for the user to know whether to push or pull.
The market isn’t even strong enough to clear other obviously bad door designs like door knobs, to the extent that Australia went so far as to forbid them in its new building codes.
I actually do expect the better mousetrap would spread, but it would be very slow. We’re talking metis timescales, not company-turns-a-profit timescales. A cheaper mousetrap would spread faster, although that wouldn’t hold for goods with a large price-signal-of-quality component.
Spelling it out a bit more, here are two models of how a better mousetrap spreads:
Producer successfully communicates their advantage to consumers, so consumers switch to the new mousetrap immediately.
Whenever a consumer has a mouse, they buy a random mousetrap. If the mouse isn’t caught after some period of time, they switch to a different one. When a mousetrap works, they stick with it and maybe recommend it to their friends.
With only a 10% advantage, the second model likely won’t operate fast enough for the original inventor to benefit much from it, but it does drive adoption eventually.
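For what it’s worth, the second model can be simulated as a simple trial-and-stick process. A minimal sketch, where all the numbers are illustrative assumptions (a 50% vs. 55% per-period catch rate, i.e. a 10% relative advantage, and uniform choice among consumers with no current loyalty):

```python
import random

def simulate_adoption(n_consumers=2000, n_rounds=300,
                      p_old=0.50, p_new=0.55, seed=0):
    """Trial-and-stick model: each consumer with a mouse uses their
    preferred trap if they have one, otherwise picks one at random.
    A trap that catches the mouse earns loyalty; one that fails sends
    the consumer back to picking at random next time."""
    rng = random.Random(seed)
    loyalty = [None] * n_consumers  # None = no preferred design yet
    for _ in range(n_rounds):
        for i in range(n_consumers):
            choice = loyalty[i] or rng.choice(["old", "new"])
            works = rng.random() < (p_new if choice == "new" else p_old)
            loyalty[i] = choice if works else None
    old = sum(1 for c in loyalty if c == "old")
    new = sum(1 for c in loyalty if c == "new")
    return old, new
```

In this toy version the loyal customer base does drift toward the better trap (at stationarity the loyal-new group is roughly 20% larger than the loyal-old group, despite the trap being only 10% better at catching mice), but only over many rounds, which fits the metis-timescales point above.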
People don’t buy random mousetraps. Incumbents in a market have various advantages over newcomers. They have high-volume supply chains and an established brand for which they do marketing. Most purchasing decisions for mousetraps don’t come through recommendations from friends.
When it comes to many everyday things, the competition between them is mostly a battle over marketing, and as a result our everyday things are a lot worse than they could be if innovations spread effectively.
Random doesn’t mean uniformly random. As long as there’s some randomness, and people are more likely to stick with products which worked for them before, we expect drift toward the new design.
Marketing is important for any particular decision, but usually we wouldn’t expect one mousetrap design to have an inherent relative advantage in marketing over another; the marketing-relevant aspects are mostly orthogonal to the mouse-catching aspects. An incumbent company probably has a marketing advantage, but in the long run incumbent companies will adopt the new design, if they find that it sells slightly better.
With doors, the problem is quite different, because the person deciding what kind of door to install is often not the end consumer—especially for commercial properties—so the adopt-what-works force isn’t there.
Phlogiston is predictive. It’s much better at prediction than thinking in terms of the four elements.
In the book Mere Thermodynamics by Don S. Lemons published in 2009, I was quite surprised to read: “While the caloric theory of heat is plausible and to this day remains useful in limited circumstances …”
This is the only book on thermodynamics I’ve ever read, so I can’t really elaborate on those limited circumstances, unfortunately.
Your claim about (science) academia is that it is generally unable to do some things that it clearly should be able to do if it is doing its job correctly. Would you care to provide some evidence for that?
Hold that thought. This post is mainly intended as background for an upcoming post, which will have a bunch of examples in it.
Efficient markets mostly work only in theory. Traders and investors who find spots to exploit market inefficiencies keep them as secret as possible. Those market inefficiencies often also don’t allow high volumes of money. So it’s often closed private funds that truly discover and use such spots for years.
Open investment funds often try to sell exactly this idea, but they are more salespeople than money managers.
Another problem arises if the smart money is too small to make a difference. E.g. in January many smart investors started to play the Covid-19 trade, longing companies like Zoom and shorting the S&P 500. Zoom stock went up immediately, because that handful of smart money was enough to move the price. But the S&P 500 continued to make new all-time highs, since the handful of smart-investor money was laughable against the mass of auto-invest strategies flowing into the S&P 500.
Another distinction is that in science, participants go a long way toward explaining to the other party why their theory is superior; there is a direct incentive of recognition and of contributing to the field. In investing, the truly smart money does the opposite: they try to stay as secretive as possible. The public discussions about economics by traders, investors and economists that we often witness are either pure marketing for their fund (traders/investors) or hand-waving reasoning (economists).