Don’t let people buy credit with borrowed funds
I.
Most large-scale fraud follows basically the same pattern:
1. Some trader or executive gets in a position where they can use a bunch of other people’s resources (either via borrowing them, or being given custody over them)
2. They spend some of those resources to increase their perceived creditworthiness
3. They use this to gain control over more resources
4. They use those additional resources to buy more creditworthiness, which they then use to get more resources, and so on
5. Eventually some market shock or similar event causes people to re-evaluate the creditworthiness of the trader or executive, at which point the whole thing collapses and their debts get called in (often in the literal form of a margin call, sometimes in the form of a criminal conviction)[1]
The exact mechanism by which each one of those steps is achieved is different from case to case, but the overall result is the same. Everyone is sad, and society updates how we evaluate the trustworthiness of others.
Going through a few concrete examples:
FTX
FTX builds a crypto exchange into which other people deposit their money (and attracts investment)
Using that money they fund huge marketing campaigns to position themselves as “the trustworthy crypto exchange”
This causes people to trust them more and deposit more money into the exchange and invest more into them
FTX uses more of that money to cover up their losses and invest more into huge marketing campaigns
Crypto prices collapse, people try to withdraw their money from FTX, they fail, Sam goes to prison
Enron
Enron takes investor money promising above-market returns
Enron uses the money from recent investors to pay out earlier investors with huge returns
This attracts more money invested in Enron
They use that additional money to pay out even larger dividends
The whole thing collapses under pressure as Enron has basically exhausted the market of interested investors
Theranos
Elizabeth Holmes takes investor money, promising to produce revolutionary blood-testing devices
Using that money, Elizabeth Holmes recruits a bunch of people with strong reputations who vouch for her
This attracts more investment
She uses that greater investment to hire more people, run more marketing campaigns, etc.
Eventually Theranos fails to deliver the promised products, investigations start, and the whole thing collapses
Some other case studies, which are left as an exercise for the reader: WeWork, Wirecard, Lehman Brothers, Ponzi, Madoff, and many cases of academic fraud including Amy Cuddy and “power posing”.
II.
The key vulnerability being exploited in the examples above is the existence of some way to convert a dollar’s worth of resources into more than a dollar of stewardship over other people’s resources. When this is possible at a large scale, at least some individuals will see the opportunity to leverage up on other people’s resources, bet them big, and hope they get to walk away with the winnings (and, if they lose, leave the empty bag to the people whose resources they borrowed).
The emphasis here is on spend. To distinguish what is going on from ordinary marketing expenses, or from simply producing valuable assets which can then serve as collateral for greater loans, I am focusing on situations where resources are used to purchase pure perceived creditworthiness, without creating assets or knowledge or skills that produce genuinely higher expected future returns for investors and creditors.
This of course makes the whole exercise fundamentally deceptive, because the reason you extend someone a line of credit, or invest in someone, or deposit your funds with someone, is that you expect to make returns on what you gave them. But in many cases an adversary can spend money more effectively on distorting your beliefs about the future returns they can provide than on doing things that produce genuinely higher returns, and when the cost of doing so becomes cheaper than the additional resources they can thus extract, you have a positive feedback loop.
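As a minimal sketch of this loop (all numbers are invented for illustration; the spend fraction and credit multiplier are assumptions, not estimates of any real case):

```python
# Toy model of the feedback loop: each round, burn a fraction of the
# resources under your control on perceived creditworthiness, and receive
# some multiple of the burned amount as new credit from other people.

def run_loop(rounds: int, spend_fraction: float, credit_multiplier: float):
    controlled = 1.0  # resources currently under the actor's stewardship
    hole = 0.0        # cumulative burned resources: what creditors would
                      # find missing if every claim were called in at once
    for _ in range(rounds):
        spent = spend_fraction * controlled
        controlled += credit_multiplier * spent - spent  # new credit minus the burn
        hole += spent
    return controlled, hole

for m in (0.8, 1.0, 1.5):  # dollars of new credit per dollar burned
    controlled, hole = run_loop(rounds=10, spend_fraction=0.2, credit_multiplier=m)
    print(f"multiplier {m}: controls {controlled:.2f}, hole {hole:.2f}")
```

The loop only runs away when a burned dollar buys more than a dollar of new credit, and the hole compounds alongside the resources controlled, which is part of why these schemes tend to be at their largest right before they collapse.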
This phenomenon extends beyond the realm of large-scale fraud. The Stanford president who resigned after leveraging decades of academic fraud into one of the most powerful positions in academia is one such interesting case.
Early in his career, Tessier-Lavigne publishes a number of high-profile papers in neuroscience journals. These papers already contain significant data issues and are probably fraudulent, but this is not discovered until much later.
Using the trust and prestige gained from those fraudulent papers, Tessier-Lavigne receives a number of large government grants and high-prestige faculty positions
Using those resources, Tessier-Lavigne hires dozens of postdocs and research assistants and announces multiple (later retracted and found fraudulent) breakthroughs in neuroscience and Alzheimer’s prevention
Those breakthroughs, in turn, lead to him receiving even more prestigious positions, and to Genentech hiring him as CSO
As Genentech CSO and a highly acclaimed academic, his ability to discredit anyone concerned about his results, and to hype up his own papers, continuously increases, until he is eventually offered the position of president of Stanford University (possibly the most prestigious academic position in the world)
The whole thing eventually collapses when journalists investigate his old work and find many cases of scientific misconduct
Similarly, a common corporate story is someone squeezing short-term profits out of some assets they are managing while (unbeknownst to upper management) lowering long-run returns (also known as “milking an asset”). Then, hailed as a success, they move on to a bigger project where they can repeat the same playbook, having used up less than a dollar of the resources under their stewardship to end up with more than an additional dollar of resources under their control.
Excerpts from the book Moral Mazes summarizing this dynamic:
Both Covenant Corporation and Weft Corporation, for instance, place a great premium on a division’s or a subsidiary’s return on assets (ROA); managers who can successfully squeeze assets are first in line, for instance, for the handsome rewards allotted through bonus programs. One good way for business managers to increase their ROA is to reduce assets while maintaining sales. Usually, managers will do everything they can to hold down expenditures in order to decrease the asset base at the end of a quarter or especially at the end of the fiscal year. The most common way of doing this is by deferring capital expenditures, everything from maintenance to innovative investments, as long as possible. Done over a short period, this is called “starving a plant”; done over a longer period, it is called “milking a plant.”
[...]
For instance, I could negotiate a contract that might have a phrase that would trigger considerable harm to the company in the event of the occurrence of some set of circumstances. The chances are that no one would ever know. But if something did happen and the company got into trouble, and I had moved on from that job to another, it would never be traced to me. The problem would be that of the guy who presently has responsibility. And it would be his headache. There’s no tracking system in the corporation. Some managers argue that outrunning mistakes is the real meaning of “being on the fast track,” the real key to managerial success. The same lawyer continues: In fact, one way of looking at success patterns in the corporation is that the people who are in high positions have never been in one place long enough for their problems to catch up with them. They outrun their mistakes. That’s why to be successful in a business organization, you have to move quickly.
[...]
At the very top of organizations, one does not so much continue to outrun mistakes as tough them out with sheer brazenness. In such ways, bureaucracies may be thought of, in C. Wright Mills’s phrase, as vast systems of organized irresponsibility.
III.
Now, the issue is of course that there are many different ways people evaluate track records and many different chains in the great web of reputational deference. Most resources can somehow be traded for other resources, and so it’s hard to guarantee that pure perceived creditworthiness itself is never for sale. Or more generally, the process that allocates creditworthiness is often much dumber than the most competent individuals, and in the resulting information warfare, it’s hard to guarantee that nobody can be duped out of more than one dollar worth of stuff with less than one dollar worth of investment.
That said, paying attention to the specific mechanism of “purchased creditworthiness” is IMO often good enough to catch a non-trivial fraction of social dysfunction, to shut down fraud early on before it gets too big, and to help you stay away from things that will likely explode in violent and destructive ways later on.
Some maybe non-obvious heuristics I have for determining whether someone might actually be leveraged up on a bunch of creditworthiness purchases and is likely to explode in the future:
Don’t trust young organizations that hire PR agencies. PR agencies are the obvious mechanism by which you can translate money into reputation. As such, spending on PR agencies is a pretty huge flag! Not everyone who works with PR agencies is doing illegitimate things, but especially if an organization has not yet done anything else legible that isn’t traceable to their PR agency or other splashy PR efforts, it should be an obvious red flag.
Charity is a breeding ground for this kind of scheme. Many charities are good! Nevertheless, a lot of charities do just use most of their money to do more marketing to get more money, with basically no feedback loop that is routed through actually helping anyone. The absence of any need to provide market value makes the fundraising feedback loops here particularly tempting.
Pay a lot of attention if an organization is quickly ramping up its PR spending. If an organization becomes overleveraged like this, the value of maintaining creditworthiness becomes greater and greater. The cost of purchasing additional dollars of creditworthiness also often goes up over time, as the most credulous creditors have been exhausted, or suspicion mounts. This means many organizations in the throes of a cycle like this will ramp up their spending on PR a lot.
Beware of organizations that have many accolades for being “the most trustworthy” or “the most innovative” or “the most revolutionary”. On a competitive level, organizations that optimize for appearing trustworthy are often doing so because they have no other business proposition to optimize for. Of course, most of the time the most trustworthy institutions are indeed trustworthy, but seeing an organization that is a big outlier in its perceived trustworthiness, or where the actions of the CEO seem centrally oriented around optimizing for trustworthiness or reputation, often indicates this kind of runaway leveraged game.
IV.
At the institutional design level, the lesson here is “don’t sell creditworthiness”. If you, at the individual, community, or institutional level, have a vulnerability where someone can use resources under their stewardship in a way that results in them being extended a bigger line of credit, without actually increasing expected future returns or the security of the assets under their stewardship, someone will probably find some way to exploit that at some point.
This can often be quite tricky! As one example of where this kind of vulnerability can come in but is often hard to spot: Mutual reputation protection alliances are one of the most common ways in which creditworthiness ends up for sale: “A powerful potential ally with many resources approaches you with an offer: I say good things about you, you say good things about me, everyone is happy”.
Of course, what you are doing when agreeing to this deal is to fuck over everyone who was using your word to determine who is creditworthy. Often this enables exactly the kind of runaway dynamic described in this post, playing out in social capital instead of dollars.
As is common for adversarial situations like this, I doubt there is a generic silver bullet that solves this problem. Ultimately every credit allocation mechanism will have vulnerabilities, and those vulnerabilities will be easier to exploit from a position of greater trust and reputation. All we can do for now is to be vigilant, see when the mechanisms go wrong, and try to build incrementally more robust mechanisms and institutions for determining creditworthiness.
Postscript.
Another aspect of this whole dynamic, which is related to yesterday’s post about paranoia, is that this is one of the most common ways in which you end up with actors exercising strong direct optimization pressure on your beliefs, and which can cause you to end up in environments where paranoia is the appropriate response. Of course there are often resources to be gained by duping and deceiving others, but in the case of a creditworthiness bubble you have two things that rarely happen at the same time:
A feedback loop in which someone is gaining more and more resources
Control over those resources being highly sensitive to people believing false things about their expected future returns
This produces actors who need very tight control over what other people believe and say about them, and for whom the consequences of failing to maintain that control are catastrophic, which produces a much greater willingness to spend large amounts of resources to achieve those aims.
In addition, in many of these cases the personal costs of the scheme falling apart have long since become insensitive to the size of the damage. It is genuinely unclear what Sam Bankman-Fried or Elizabeth Holmes could have done to not end up in prison for decades by the time they were in their overleveraged positions, and trying to somehow keep things going for longer while hoping for a big market surge was, from a purely selfish perspective, possibly the best thing for them to do. Society does not punish you with more than prison or death, even if you caused much more harm than one person’s life can repay, and so by the time someone is in the middle of something like this, trying to de-incentivize this kind of behavior is very hard.
FAQ
OK, but shouldn’t I be happy if I give money to a charity that can raise more than a dollar from other people if I give it a dollar?
I like to think through this case via the lens of public good funding. Public goods are legitimately often underfunded, because the benefits are diffuse, and it’s hard to coordinate to all pay into the commons appropriately.
In those cases, you can provide real surplus value by using money to raise more money from other people if ultimately the total funds you raised are less valuable than the benefit you produce to society via the real services you (eventually) provide.
Because coordination problems loom large in public goods funding, good public goods projects often look like a creditworthiness-purchasing-scheme early on, but actually provide real value by solving a difficult coordination problem among public good funders, using those funds.
Does this really always collapse? I feel like sometimes it just happens, and everything is fine and normal?
In some situations, creditworthiness and trustworthiness are evaluated in an environment that has a lot of Keynesian beauty contest nature. I.e. a large amount of resources and power accrues to whoever people think will be the most popular target for those resources. Coups and more broadly political elections tend to have a lot of this nature, especially when conducted using insane voting systems like first-past-the-post voting.
In those situations someone’s creditworthiness might genuinely increase the more investment they have attracted, as the fact that they have attracted more investment is indeed a very strong predictor of their likelihood of being the receiver of the Keynesian beauty contest prize. This still often explodes and causes lots of issues, but in a way that seems more fundamental to the dynamics of Keynesian beauty contests than any inherent deception going on.
In the cases of military control or elections, the key thing that resolves the inherent instability and overleveraged nature of this situation is that in filling the role of leader, a truly important and difficult coordination problem will have been solved, and from that position all the people who invested in the winner can be made whole. This is not the case if you are e.g. running a straightforward ponzi scheme with no payout on the horizon.
How is this different from just Ponzi schemes?
Ponzi schemes are just one instance of this general dynamic. Yes, Ponzi schemes rely on being able to purchase more than one dollar of creditworthiness for less than one dollar, in the form of paying out your early investors and promising your later investors the same. But many other situations I list above are not the same as Ponzi schemes. I certainly wouldn’t call the Stanford President situation a straightforward “Ponzi scheme” and also don’t really think it fits what happened with FTX or Theranos.
I think the broader category is more useful for making a broader range of accurate predictions about the world.
Ok, but how is this different from “marketing”?
Marketing, as a broad term for “distributing information about you and your organization widely” can certainly be used for this purpose! But it is not centrally what marketing is used for.
The normal context of marketing is to pay someone to get information about your product out to potential buyers. They then use that information to evaluate whether your product is worth more than its cost to them, and offer you a trade if they think so. In this world, marketing solves a real problem, marketing spending genuinely increases your expected future returns, and lots of surplus value is created.
Of course, if you are a business like FTX, you might run marketing campaigns for your product that emphasize its nature as a safe place to deposit your funds. This is specifically targeting your creditworthiness as a receiver of those deposits, which you can then use to attract more deposits. This only becomes an issue when you are funding this not with the fees you collect on the deposits, but with the deposits themselves, since that is when the purchase becomes a pure purchase of perceived creditworthiness, and not a genuine signal that you can maintain positive returns for the people who gave you their money.
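As a toy illustration of that last distinction (all numbers invented; the fee rate and the ratio of new deposits attracted per marketing dollar are assumptions):

```python
# Toy exchange: `assets` is what it actually holds, `owed` is what
# customers are owed. Each round it earns fees on custodied funds, spends
# on marketing, and attracts new deposits proportional to that spend.

def run(rounds: int, marketing: float, fee_rate: float = 0.01, pull: float = 2.0):
    assets, owed = 100.0, 100.0
    for _ in range(rounds):
        assets += fee_rate * owed    # fee income
        assets -= marketing          # marketing is paid out of assets
        new_deposits = pull * marketing
        assets += new_deposits       # new deposits arrive...
        owed += new_deposits         # ...but are owed back in full
    return assets - owed             # negative = hole in customer funds

print(run(rounds=10, marketing=0.5))   # spend within fee income: small surplus
print(run(rounds=10, marketing=10.0))  # spend far beyond fee income: growing hole
```

The pull ratio only determines how fast the obligations scale; whether a hole opens depends entirely on whether the marketing spend is covered by genuine fee revenue.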
[1] In some rare cases the scheme might also never fully collapse, but simply result in someone more permanently taking ownership of the resources others have given them stewardship over. See the FAQ for some of my thoughts on this.
This complaint about an important problem is expressed in a way that confuses me; it’s not clear whom the “should” formulation is advising, and about what. By contrast, when I wrote Effective Altruism is Self-Recommending, I think it was clear whom I was advising (people persuaded that EA was creditable and effective) and what the advice was (don’t count claims that you will do X as evidence that someone has done X; check whether job 1 got completed adequately before using it as a track record for job 2).
The process you’re describing seems to involve type errors about credit, and corruption of the currency. Making these upstream problems more explicit could help people avoid being suckered by them.
Money IS a form of creditability; it’s evidence that you had the capacities needed to get the money, and in relatively just societies it’s evidence that you did something for money that would otherwise have earned you a corresponding amount of gratitude.[1]
In the case of financial creditworthiness, using money or other liquid assets as collateral seems unobjectionable, so it seems like the problem is in the currency conversion between money and nonfinancial credit. But staking money on claims is proepistemic.
The difference between staking money on one’s claims, and what you describe, is the difference between making bets or offering insurance—effectively buying credence for specific propositions—and buying a vaguer sort of credibility. A less vague way to describe this is the difference between buying insurance and bribing a third party evaluator.
So there’s one piece of usable advice: check whether someone’s bribed the third party evaluators you’re relying on.
But we shouldn’t expect that sort of problem to persist in an otherwise just system. So we need to explain why and how corruption is persistently subsidized, as I’ve tried to begin to do in There Is a War, The Debtors’ Revolt, and The Domestic Product, which advance the claim that much of the political dark matter of the 20th century is explained by correlated defaults as a mechanism for debtors to extract from creditors.
[1] Talents, Oppression and production are competing explanations for wealth inequality.
Thank you for the comment! I have found much of your writing helpful when thinking about these things.
The “should” is aimed in a relatively broad sense at people who care about how well societies and groups of people function. I am also not super happy with the title, which is approximately the only place where I use “should”. If you have better title suggestions, I would actually greatly appreciate that.
I am not actually trying to talk that much about a “vaguer sort of credibility”. I am trying to talk about the specific case of “whatever causes other people to then extend you a greater line of credit[1] in whatever resource you used up to cause them to do so”. This is pretty concrete. In the case of FTX, it was the case that FTX found many opportunities to translate their agency and money into more people giving them more deposits, which they could then use to get more deposits.
I was indeed hoping to avoid a broader treatment of trustworthiness by focusing on the specific case of creditworthiness, as I think the former is a lot trickier and less well-defined. I use the former still a few times, which is maybe a mistake, but largely in an attempt at helping people understand that creditworthiness extends into more domains than just dollars.
Yep, I agree this is helpful, but it does fail to cover the Theranos and FTX cases, where I don’t think anyone was expecting the presence of a third-party evaluator; nevertheless, the FTX case especially seems like a central example of the dynamic I characterize in my post.
No, I argue the issue is specifically in the currency conversion between any asset, and creditworthiness for that asset. In the case of money, it’s specifically the conversion between money and financial credit. The very basic mechanism I am trying to point to is that if you are in a situation where you can borrow a dollar, then spend that dollar, and end up now in a position to borrow more than one additional dollar, and you can do that over and over again, then bad things are going to happen[2].
ETA: I have updated the OP with a substantial number of new paragraphs and edits trying to make my point clearer. Copied here to save you re-reading the whole post:
Hopefully this clarifies my models here a bit more. I think in writing these sections I have clarified my model a bit more myself, which has been helpful.
[1] “Line of credit” is defined a bit more broadly here to include any transfer of resources into someone else’s stewardship, which includes e.g. deposits and investments
[2] And this is specifically talking about the situation where you did not end up with assets you could just use as collateral for the next creditor, but where you spent/lost/burned the money in order to convince the next creditor. Of course, many assets are intangible, which makes distinguishing e.g. legitimate marketing use-cases from adversarial attacks on creditworthiness tricky.
Thanks for trying to highlight the changes, but I’m a bit confused by this response.
“Whatever causes other people to then extend you a greater line of credit in whatever resource you used up to cause them to do so” seems to me straightforwardly vaguer than credence in a specific propositional claim, such as an insurance claim.
FTX seems to have straightforwardly bribed third parties with independent reputations to testify to its creditworthiness in ads. The case with Theranos is only slightly subtler, but to name a couple clear examples, Bill Frist and Henry Kissinger had independent reputations that they lent Theranos in exchange for anticipated financial gain.
You say you tried to narrow the scope to “creditworthiness” rather than “trustworthiness,” but I don’t know what that means. I guess an example of trust that isn’t credit in the relevant sense might be loyalty; the examples you gave all involved some sort of system of more or less explicit accounting in which somewhat quantifiable progress can be made, which isn’t how loyalty works. I think I was also assuming the narrower definition; can you point to some specific way in which I seem to be responding irrelevantly by wrongly assuming the broader definition?
As you point out, “many assets,” such as the ones in the academic example, are intangible, so it’s not clear that your “no collateral” qualifier helps much; whether there’s an intangible asset or verified capacity corresponding to the promise is exactly the controversy at issue.
Overall your reply seems substantially unresponsive; the level of disconnect is such that it’s not clear to me what I could even say to get a better response.
By creditworthiness, in this post, I mean the literal degree to which you are happy to transfer some specific resource that you own into someone else’s stewardship with the expectation that you will get it back (or make a positive return in expectation). Creditworthiness is here specific to a resource that is transferrable. Dollars are the most obvious case. Social capital can sometimes also be modelled this way, though it gets more tricky. Creditworthiness does not need to extend into trustworthiness in general.
For example, as investors face very limited liability for investing in fraudulent institutions, seeing someone willing to break the law (or be generally untrustworthy) can sometimes increase expected returns! In those situations creditworthiness (which I here try to measure in expectations of good stewardship or expected future profit) and trustworthiness (which would be measured in a broader propensity to not fuck people over) come strongly apart.
I think I am confused what “controversy” you are talking about here. I agree with you that in-practice, the line here is very hard to identify (as one would expect in a high-level adversarial information game).
My main aim with this post is largely to create a model that explains some situations where in-retrospect there is IMO little uncertainty that something of this shape went wrong.
Like, the specific sentence I objected to was: “so it seems like the problem is in the currency conversion between money and nonfinancial credit”.
And I think in the model and situations I outline, I am confused how you could end up with this impression? Like, I think the central dynamic with FTX was their ability to translate money[1] into more financial credit (in the form of customer deposits). Yes, there might have been some nonfinancial credit intermediary steps, and of course they also did lots of other things that are worth analyzing, but the thing that produced a positive feedback loop is the step where they could convert funds in their stewardship into more creditworthiness, which resulted in them getting more and more assets in their custody.
Trying to think harder about what you were saying, I thought your objection might be that there are too many legitimate cases in which you of course want to translate assets under your management into more assets under your management, i.e. by producing assets that are more valuable than the resources you were given stewardship over. So I tried to clarify that I was talking about a dynamic where you spend/irrecoverably lose resources to increase perceived creditworthiness, not where you make good use of resources that actually increase future expected returns.
Bribing third-party evaluators is of course an example of what I am talking about, but it strikes me as too narrow, and most importantly it doesn’t capture the central feedback loop of this creditworthiness bubble that I think explains many of the relevant dynamics that I go into in my new last section. Yes, I agree you should pay attention to someone bribing third-party evaluators, but even in that situation, one of the key variables that determines how bad it is to bribe third-party evaluators is whether, by bribing the third-party evaluator with a dollar, you end up with more than one additional dollar under your stewardship. That returns ratio really matters and is what I am trying to draw attention to, and I am not sure whether you are objecting to it as a thing, or just don’t find it interesting, or have some other objection.
[1] Broadly construed here to include cryptocurrency
How does this narrow definition of creditworthiness apply to Tessier-Lavigne? Who were the owners of what assets that were transferred to him & what would ROI have looked like?
Applying my model to the Tessier-Lavigne situation suggests the key exploit he used was that you could produce academic prestige more cheaply, and at a higher ROI, via optimizing papers purely for prestige and ignoring factual accuracy, than via writing papers with the constraint of only saying true things and aiming for informativeness.
Furthermore, the so gained academic prestige can then be translated into two things:
More labor available to produce a greater volume of prestige-optimized papers, and to market existing papers more aggressively, as many additional PhD students and postdocs and professors want to work with you. Funding comes into play a bit at this point as well, but the central unit by which labor gets allocated in academia is prestige.
Greater ability to direct the enforcement mechanisms of academia towards any potential leakers, auditors or investigators, which, even as the scale grows, prevents information about the deceptively overleveraged prestige from becoming widely known
Both of these allowed Tessier-Lavigne to re-invest the academic prestige gained by the publishing of their first papers into an actually large-scale academic fraud.
If either of the above were not the case, a very large-scale fraud like Tessier-Lavigne’s wouldn’t be possible. Of course, someone can sometimes live out a career of doing academic fraud at a constant rate, but in order for the fraud to become actually substantially big, the mechanism that produced the fraud needs to gain resources as a result of the fraud, which then get used to produce more of the fraud.
If Tessier-Lavigne had not been able to recruit more talent as a result of the prestige, their academic output would have been limited to the outputs of a single scholar, which would be much less substantial. Similarly, if Tessier-Lavigne hadn’t been able to use the prestige to suppress investigations into their research, the likelihood they could have kept up such a long career of fraud would have also been much lower.
The owners of that academic prestige were the existing members of the academic community. A lot of it was transferred from Stanford to Tessier-Lavigne, much of it was transferred from the journals the fraudulent papers were published in, and much of it was transferred from the people who endorsed Tessier-Lavigne across his career. Social capital and prestige don’t fully behave like normal assets; for example, there is no clear ledger of who owns what at any given point in time, and they are not as owner-independent as most things we think of as “assets” (i.e. more of academic prestige is non-transferrable, or transferrable only at greater cost), but I think it still works well enough as the unit of analysis here.
I am not overwhelmingly confident my model of what happened here is right, but it’s my current best understanding of the situation, and it fits the model I am trying to explain pretty well.
Your overall model of the Tessier-Lavigne situation seems plausible. But it seems like a stretch to use the narrow “creditworthiness” framework of investors and assets. The “owners” of academic prestige (Stanford, journals, endorsers) aren’t really in the same position as owners of financial assets. They didn’t “transfer” prestige to Tessier-Lavigne in the way depositors transferred money to FTX. There’s no clear ROI calculation because there’s no actual stewardship relationship—nobody gave Tessier-Lavigne their prestige to manage with expectation of returns.
If anything, academia seems more in the position of a central bank managing a fiat currency—trying to maintain an aggregate level of activity, as well as the perceived value of credit within the system, by adjusting the aggregate level of credit extended—than in the position of the owner of a rivalrous asset like money investing it in a specific venture. Obviously individuals within academia face different problems and incentives, as do individuals within a fiat economy, but there doesn’t seem to be a clear analogue in academia to the financial investor.
I think we disagree somewhat to the degree to which social capital does actually follow rules largely analogous to normal capital. For example, I do think that endorsing someone can largely be seen as analogous to investing some of your social credit into them. If you endorse many people, your endorsement is worth less, so there is some kind of conserved quantity, and if the people you endorse go on and become more widely respected, your investment pays off, so there is something like a market price that goes up.[1]
I do agree there are really crucial disanalogies! I think the specific disanalogies don’t happen to break the model I propose in this post, but I am not enormously confident.
I would love to have better language that describes the actual dynamics of social capital/reputation/status, with the cleanliness and precision of the language that we have for financial currency. But of course, that’s a big ask, and IMO worthy of being one of the great big projects of humanity akin to the whole study of economics in its own right. In the meantime, I think there is a lot of mileage that can be gotten by applying existing models of financial terms to social capital, even if they don’t perfectly fit, and even if I have to handwave a bit to make it work out.
This doesn’t mean critiques that point out the disanalogies aren’t important! Indeed, I find myself wanting to write a follow-up post that’s just something like “ways social capital does not behave like financial capital”, to improve my ability to notice when it’s inappropriate to apply a financial capital lens to social dynamics.
I agree that academia at large has central bank dynamics, but I think the specific institutions and individuals that were duped by Tessier-Lavigne and extended their credit to him were not in very much of a central bank position. I think Stanford just lost a bunch of prestige, as did a lot of the people who worked with Tessier-Lavigne and endorsed him, and Stanford does not consider itself responsible for the reputation of all of academia, or the flow of credit within all of academia.
While there are some dynamics where academia has a more centrally planned status-economy, I think most social capital gets allocated by the choices of specific individuals who want to get ahead in the social status game of academia.
One of the things that I think doesn’t have a great analogue is the process of selling/”exiting your market position”. Like, in a market you have a clear point of selling your asset, and in a social capital market you can start removing your endorsement from someone, or start calling them overrated, but the connection does not feel as clean.
We can imagine prestige very imperfectly as an asset with a quantifiable value, but while this is fairly (but not entirely) accurate for tournament structures like organized sports, in academia it’s more like being a central location in a canonical reference map; not the sort of thing that’s easy to use in ROI calculations.
If we can operationalize it well I’d likely bet against the claim that Stanford lost a lot of prestige. The centrality of the biggest institutions is hard to dislodge, as they’re sufficiently mutually entangled that problems like this seem to do more to demoralize academia generally, than to specifically discredit any one institution. Nor do I think academia’s losing credit in any straightforward sense, as it’s widely considered too big to fail even by many dissenters, who e.g. are extremely disappointed with standards in scientific academia but still automatically equate academia with science in general.
What happens as a result of the kinds of failures you describe is not at all like a decline in price, a little bit like a decline in the aggregate purchasing power of money, somewhat more like increased vulnerability to speculative attack, and most similar to a decrease in transaction volume as people see fewer and fewer opportunities for profitable transactions within the system. E.g. publishing papers seems less appealing as a way to inform others, reading papers seems less effective as a way to be informed, giving and receiving grants seems less effective as a way to organize efforts to figure things out.
Huh, I do think our world models must differ here. My current sense is societal trust and reliance on academia is dropping pretty sharply, partially though not centrally as a result of things like this, and I similarly expect the market value of things like PhDs to drop relatively intensely in the coming decade (barring major AI disruption making that question moot). I would be happy to bet on this, if you disagree.
I found this set of potential analogies helpful! I do think I still disagree about the relative appropriateness for each one of these analogies to the situation. Not sure how much value I would provide by going through them all in this comment thread, though I might take the opportunity and do it in a top-level post.
I don’t think the central-case valuable PhDs can be bought or sold so I’m not sure what you mean by market value here. If you can clarify, I’ll have a better idea whether it’s something I’d bet against you on.
I would bet a fair amount at even odds that Stanford academics won’t decline >1 sigma YOY in collective publication impact score like h-index, Stanford funding won’t decrease >1 sigma vs Ivy League + MIT + Chicago, Stanford new-PhD aggregate income won’t decline >1 sigma vs overall aggregate PhD income, and overall aggregate US PhD income won’t decline >1 sigma. I think 1 sigma is a reasonable threshold for signal vs noise.
I think that if these kinds of crises caused academia to be devalued, then when the Protestant Reformation and Enlightenment revealed the rot in late-medieval scholastic “science,” clerical institutions in the Roman Catholic model like Oxford and Cambridge would have become irrelevant or even collapsed, rather than continuing to be canonical intellectual centers in the new regime.
TBTF institutions usually don’t collapse outside strong outside conquest or civilizational collapse, or Maoist Cultural Revolution levels of violence directed at such change, since they specialize in creating loyalty to the institution. So academia losing value would look more like the Mandarin exam losing value by the civilization it was embedded in collapsing, than like Dell Computer losing value via its share price declining.
I was thinking of the salary premium that having a PhD provides (i.e. how much more people with PhDs make compared to people without PhDs), which of course is measuring a mixture of real signaling value, and simply just measuring correlations in aptitude, but I feel like it would serve as a good enough proxy here at least directionally.
What’s the sigma here? Like, what population are we measuring the variance over? Top 20 universities? All universities? I certainly agree that Stanford won’t lose one sigma of status/credibility/etc. as measured across all universities; that would require dropping Stanford completely from the list of top universities. Losing 1 sigma of standing among the top 20 universities, i.e. Stanford moving from something like “top 3” to “top 8”, seems plausible to me, though my guess is that’s a bit too intense.
To be clear, my offered bet was more about you saying that academia at large is “too big to fail”. I do think Stanford will experience costs from this, but at that scale I do think noise will drown out almost any signal.
Hmm, I don’t currently believe this, but it’s plausible enough that I would want to engage with it in more detail. Do you have arguments for this? I currently expect more of a gradual devaluing of the importance of academic status in society, together with more competition over the relevant signifiers of status creating more noise, resulting in a relationship to academia somewhat more similar (though definitely not all the way there) to the one pre-WW2 society had to academia (which, to my understanding, played a much less central role in government and societal decision-making).
I would expect PhD value to mostly be affected by underlying demographic factors; they’re already structurally on an inflationary trajectory and I expect that to be more important than whether they’re understood to be fake or real. No one thinks Bitcoins contain powerful knowledge but they still have exchange value.
If there’s a demographic model of PhD salary premium with a good track record (not just backtested, has to have been a famous model before the going-forward empirical validation) I might bet strongly against deviation from that. If not, too noisy.
Variance (and thus sigma) for funding could be calculated on basis of historical YOY % variation in funding for all US universities, weighted by either # people enrolled or by aggregate revenue of the institution. Can do something similar for h-index. Obviously many details to operationalize but the level of confusion you’re reporting seems surprising to me. Maybe you can try to tell me how you would operationalize your “dropping pretty sharply” / “drop relatively intensely” claim.
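A minimal sketch of that calculation, on an entirely made-up funding series:

```python
# Hypothetical funding series; flag whether the latest year-over-year
# change exceeds one standard deviation of the historical YOY changes.
from statistics import mean, stdev

funding = [100, 104, 101, 107, 110, 108, 115, 118, 116, 121]  # made-up numbers
yoy = [(b - a) / a for a, b in zip(funding, funding[1:])]     # YOY % changes
mu, sigma = mean(yoy[:-1]), stdev(yoy[:-1])                   # baseline years
latest = yoy[-1]
print(f"latest YoY change {latest:+.1%}, baseline sigma {sigma:.1%}")
print("exceeds 1 sigma" if abs(latest - mu) > sigma else "within 1 sigma")
```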
Less than a sigma seems like it can’t really be a clear quantitative signal unless most of the observed variance is very well explained (in which case it should be more than a sigma of remaining variance). Events as big as Stanford moving from top 3 to top 8 have happened multiple times in the last few decades without any major crises of confidence.
I agree the disagreement about academia at large is important enough to focus on, thanks for clarifying that that’s where you see the main disagreement.
One argument for the TBTF paragraph was in the immediately prior paragraph. The posts I linked to at the end of the first comment in this thread are also in large part arguments in support of this thesis. Pre-WWII the US had a much weaker state. Hard to roll that back without constituting a regime collapse.
At this point I feel that I’m repeating myself enough that I don’t see how to continue this conversation productively; I don’t expect saying the same things again will lead to engagement, and I don’t expect that complaining about the problem procedurally will get a constructive response either. If you propose a well-operationalized bet and an adjudicator and escrow arrangement I will accept or reject the proposal.
I wasn’t trying to say that you had provided no argument for it, sorry! I was just curious whether you had written about this previously with a handy link. It feels like a theme in a bunch of your writing, but you seemed in a better position to remember any specific essay or section.
I’ll think about it over the next day or two and see whether I can find something. I am currently skeptical we can find something given that I don’t expect shifts at the scale of “Stanford stop being a top university at all”. But I’ll try for a bit.
I agree that the kinds of pathological feedback loops you describe exist, are bad, and are important. I don’t think the emphasis on financial returns is helpful, though; one of your main examples is nonfinancial and hard to quantify, and the thing that makes these processes bad is what’s going on outside the financial substrate: recruiting people into complicity.
You seem to be treating the question of whether the money is being “burned” to raise more money, or made productive use of (thus justifying further investment) as the easy part, but that’s the whole problem! Without an understanding of how the conversion process works, we don’t understand anything about this, we just have a black box causing nominal ROI >1, which could be either very good or very bad.
I don’t think allowing financial fraud is a thing current institutions mostly want? The difficulty is more in figuring out how to stop it without stopping legitimate activity as well (A lot of successful entrepreneurship will look kind of a lot like this, I think). If you are calling for normal speculative investment to be banned, it’s very likely not worth the loss of innovation. (It may make sense perhaps to be more strict about what level of falsehood leads to fraud prosecution, but I would keep it to banning false claims).
Sorry, I think I am failing to parse this comment. I agree that financial fraud is a thing people don’t want. This post is telling them about one dynamic that tends to cause a bunch of it. I agree that of course all the difficulty of stopping fraud lies in the difficulty of distinguishing fraud from non-fraud. This post tries to help you distinguish fraud from non-fraud, and e.g. the FAQ section addresses some specific ways in which the dynamic here can be distinguished from entrepreneurship and marketing.
You might disagree that this is possible, or have some other logical issue with the post, but I feel like you are largely just saying things that are true and said in the post, but then say “if you are calling for normal speculative investment to be banned”, which like, I am of course not doing and the post is not implying, and I feel like I have a bunch of paragraphs in there in clarifying that I am not calling for speculative investment to be banned.
In my view:
facilitation of stag-hunt-like cooperation is really useful, because cooperating to do stuff beyond the capabilities of individuals is useful but hard.
the dynamic you discuss in the post applies to stag hunt facilitation because their success depends on willingness of others to provide more resources (up to some point where they can generate more)
the difference between, e.g., Theranos and standard entrepreneurship does not lie in the dynamic you discuss in the post. It lies in how egregiously Elizabeth Holmes was lying relative to the standard level of misleadingness. (and of course, more honesty would be better...)
It would of course be very valuable to determine if a stag hunt will pay off or will fail! But the difference between the two does not lie in the dynamic you discuss in the post (which applies to both ultimately successful and unsuccessful stag hunts).
Cool, that’s not a crazy view. I might engage with it more, but I feel like I understand where you are coming from now.
This is not necessarily true? You could maintain a strict standard of only taking this implicit deal with people who you actually respect, and who you are honestly talking up. Much like an “influencer”, who only promotes products that they actually like and use.
I agree that if you have two people who mutually already respect each other then such an alliance would be a null-operation, but like, why then make such alliances in the first place? Can it really be said that the alliance is therefore fine to make? Doesn’t such an alliance bind you to not say something if your opinion changes?
I agree this dynamic seems fishy, and I’m suspicious that on a detailed analysis, it will turn out that an agreement like this is useful at all to exactly the extent that it involves misleading others.
That said...
There might be lots of people who you respect, but you’ll make a special point to promote the reputation of people who you expect will reciprocate the favor to you.
Small business owners (in different industries) sometimes form associations in which they explicitly direct clients to each other, e.g. the therapist directs customers to the mechanic (if it seems like they need a mechanic) and vice versa. This can be beneficial to the customer, because if they have a professional that they like in one domain, they might trust that person’s recommendations in other domains, and prefer that to trying to evaluate marketing and third-party reviews (which are out to get them).
If the professional association has some basic standards for who they let in, such that their recommendations are good (or at least good enough to outweigh the cost of needing to identify skilled/trustworthy professionals for yourself), there’s a mutually beneficial trade to be had.
Yeah, I agree. Thinking more about this, you can think about it a bit as a mechanism for splitting the surplus of spreading accurate information fairly. Like, you are creating positive externalities by telling people that this person you respect is someone they should work with, which they get to capture. The person you respect thinks the same about you, but they are not putting in the effort to share that with others. This seems a bit unfair! It seems reasonable to be like “hey mate, I am investing in the commons in this way, and you are not, can we please both do our part?”.
It still seems a bit dicey, but like, in-principle this seems good and like it improves the world.
In The Submarine, Paul Graham talks about his startup ViaWeb hiring a PR firm back during the dotcom bubble:
Agree! I found that section quite interesting to read and ran into it when researching for this post.
I think I disagree with Paul Graham’s lax-seeming relationship to telling selective truths and leveraging that for your company’s success. Separately, I do also think Viaweb running a non-trivial fraction of the storefronts on the internet makes it ambiguous whether it still counted as a young company. I would be curious to learn how close this was to Yahoo’s acquisition of Viaweb, which could sway me either way on this being a counterexample or not.
Separately, I am pretty sure Paul Graham has said things in other places where he warns startups not to hire PR companies, or something to that effect, but I can’t find it. This HN comment refers to it.
Later you say that this is just one ingredient, but in the beginning you describe a rigid structure of compounding and shoehorn examples into bullet points. I think that structure is too rigid and I object to the examples. I think Theranos and the academic Tessier-Lavigne case fit it well. Indeed, for research, pretty much all people have is their reputation.
You describe Enron as a simple Ponzi scheme. I think this is just wrong. My understanding is that the main thing with Enron was a one-time shift from brokering energy today and owning the physical capital to deliver that energy, to brokering long-term deals that were purely virtual. By having unmoored forecasts, they could make arbitrary claims about the state of the company. It is not clear how much the senior executives knew they were doing this; perhaps they just accidentally designed poor incentives by allowing the salesmen to price their own deals and get an immediate bonus with no real-world feedback on how the deal turned out years later. You can certainly say that this is an example of extending credit from Enron’s old business to its new business, but I think it was a one-time change and not repeated compounding of credit. It is true that the new business, having no physical capital, was able to scale very far on financial credit. (Enron is my hobby horse. “Everybody knows” that you’re supposed to hold it up as an example, but it did 3 bad things and no one notices that other people are talking about different things. The main problem was that their whole business was mispriced. When the senior executives discovered this, they did something more like a Ponzi scheme, but that was just a couple billion, a rounding error. And the third was manipulating California energy prices.)
Similarly, FTX made a one-time pivot from trader to bank. As I understand it, it stole mainly to cover the trading losses and not to pay for advertising. If it had accepted the trading losses and wound down the trading, it wouldn’t have looked very different from the outside. Trying to convert its credit as a trader into being trusted as a steward might be an example of the credit conversion you’re talking about, but not a compounded phenomenon. If that was central to its strategy, then maybe that would make it hard to accept the loss and wind down that business.
What scale is “quickly ramping up their PR spending”? Theranos lasted 10 years. Enron lasted 15 years from the merger of much older companies. I don’t think a heuristic about speed would identify either of them. It sounds like you’re only talking about FTX.
Advertising spending ramped up as losses increased, as did political spending (FTX spent much more on political lobbying in the months before it collapsed than in previous months, IIRC).
I actually think any analysis of FTX on this dimension without looking at FTT has a hole in it, and I might update this post, or write a follow-up one. Patio11 and Matt Levine have written about FTX and FTT and how this resulted in crazy leverage in almost the exact way I document here.
FTX at various points, via indirect channels like Alameda, used FTT, basically its own stock, as collateral to get more loans. The value of its own stock was largely determined by the investment it was taking on, which functionally constituted more debt. This was, as Matt Levine called it, “deep dark magic” of the kind I am talking about in this post.
This full coverage:
What channels other than Alameda? If this was entirely about Alameda, how is FTT adding anything to the story above saying that they stole to give to Alameda? Who are they fooling other than their own internal accounts? The Coindesk article is very late in the story because no one saw the accounts before they tried to get a bailout from Binance, who wasn’t fooled.
If a customer puts up FTT as collateral to short bitcoin, then FTX is confused about how much collateral it has. But this is the customer exploiting FTX, not vice versa! This is FTX exploiting all the other customers by falsely claiming that it has hedged risk. But this isn’t what took it down. It did manage to largely liquidate shorts before they ran out of collateral.
The problem is that Alameda then used that FTT as collateral for other loans from external parties, while of course presenting the FTT as an uncorrelated asset!
Caroline said directly:
I agree that if Alameda had just borrowed from FTX, then only FTX would have been fooled. But Alameda borrowing billions of dollars from Genesis using the FTT as collateral means external parties were fooled in the way I am trying to describe in this post!
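To sketch the shape of that trade with purely made-up numbers (the real figures, haircuts, and float differed; this only shows the mechanism):

```python
# A self-issued token held in size far beyond its traded float, marked at
# the thin-float market price and posted as collateral for real-dollar loans.

tokens_held = 100_000_000    # insider's stash of the self-issued token
float_traded = 5_000_000     # tokens actually changing hands on the market
market_price = 20.0          # $/token, set by that thin float

stated_collateral = tokens_held * market_price
loan = 0.5 * stated_collateral   # assume the lender applies a 50% haircut

# If the lender ever tried to liquidate, dumping 20x the float would crater
# the price; assume (again, made up) only 10% of the marked value is real.
recoverable = stated_collateral * 0.1

print(f"stated collateral: ${stated_collateral:,.0f}")
print(f"loan extended:     ${loan:,.0f}")
print(f"recoverable value: ${recoverable:,.0f}")
```

The haircut looks conservative against the marked value, but the collateral’s value is itself a function of the borrower’s perceived creditworthiness, which is exactly the circularity at issue.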
Young organizations that want to succeed are usually unknown, so logically, they will benefit from buying attention. If the leadership is product- rather than marketing-focused, engaging a professional firm is a sign of additional credibility, isn’t it?
An alternative to paying for creditworthiness would be reputation within a small community. That seems like a breeding ground for collusion through an ‘old boys network’. Military bureaucracies are notorious for this kind of corruption: mutual participation in conflict, as a way of developing cohesion, keeps outsiders out and motivates protecting insiders from outside scrutiny.
Buy my attention and hire the best auditors you can find, early please.
Totally! I think I will add a section to the FAQ that’s something like “so are you saying all marketing spending is bad?”. Because I am really not centrally talking about “marketing”. Marketing is usually good! Getting information about your product out to more people who need to hear it genuinely increases your future returns and makes you legitimately a better investment target.
The thing I am talking about is marketing, or persuasion, or pressure, specifically targeted at the “creditworthiness” dimension. FTX marketing that propagated the information that they were the most trustworthy crypto exchange, Theranos media campaigns that tried to squash any doubt about their product working, Enron’s payouts to shareholders combined with the explicit assurance that future investors would receive the same: those are not straightforward marketing actions, they are expenditures narrowly aimed at increasing creditworthiness specifically, not at increasing expected future realized returns. Marketing is usually the latter, and so totally fine!
I think it would be helpful to list information that you can spread via marketing that isn’t about creditworthiness.
What problem your product is trying to solve
How much your product costs
What your product looks like
Who your product is for
These are all different from
The company is trustworthy and reliable
Did you read the relevant section of the FAQ I added? I could list more examples, but I feel like that section is relatively clear.
Yeah. For my taste your paragraph is still written on a slightly higher level abstraction than what is helpful for people who have never actually done any marketing of things before (i.e. says “The normal context of marketing is to pay someone to get information about your product out to potential buyers” but doesn’t follow up with “For instance, how big your product is, or what shops they can buy it in”). But it is just a matter of taste.
It isn’t obvious to me that “creditworthiness for sale” is bad on net. There are the high-publicity cases of people committing fraud by way of purchasing creditworthiness in some respect, but there are also (and I’d guess more widespread, less exciting) legitimate purchases of creditworthiness.
For example, it might not be optimal for very competent newer investors/inventors/organizations to only slowly accumulate capital investment or other support. If you think of the marketing campaigns, sponsorships, etc. as a bond showing they believe in their performance, and believe their purchase will pay off from success, then it is somewhat less concerning, and also valuable to do some amount of sponsorship/marketing.
Someone very close to me is a very competent real estate investor, who has had exceptional returns (and I believe exceptional risk-adjusted returns), much higher than what most other real estate operators are making for their investments. It is not efficient for him to slowly accumulate investors while others who have had longer to collect investors get more capital for lower-returning projects; it makes sense for him to “buy creditworthiness” in a sense, selling some of his share of his investments to entice people to refer investors to him.
Buying creditworthiness allows competent new entrants to get investors/support and grow quicker than they otherwise would. Scams will exist, but that is honestly part of the creative destruction of the market (not that they shouldn’t be rooted out). Those that don’t do their due diligence will have a smaller say in how capital is allocated.
I totally agree that marginal sales of creditworthiness are often totally fine. A system does not become badly exploitable just because on some occasion someone can buy some creditworthiness (and indeed, as I try to describe in some of the FAQ, there are often legitimate coordination problems that can be solved with money, which should increase expected future returns, and as such legitimately increase your creditworthiness, which can look like direct purchases of perceived creditworthiness, but actually increase your underlying creditworthiness in a way that makes it fine).
The issue is when you can keep pressing the perceived creditworthiness buy button without actually increasing future expected total returns, and when you can do so a lot of times even as you grow.
I think you are conflating here, as I try to clarify in the previous part of this comment, the concept of “buying creditworthiness” and “marketing”. Marketing is not centrally about buying creditworthiness (though a bit of it is). It’s mostly about information exchange. Referral programs are about incentivizing people to cause important information about opportunities to be exchanged, not for people to vouch for you. Referral programs are totally fine, unless they extend into misleading people about your actual creditworthiness.
Relevant Patio11 tweet: https://x.com/patio11/status/1933975792721207316