This complaint about an important problem is expressed in a way that confuses me; it’s not clear whom the “should” formulation is advising and about what. (By contrast, when I wrote Effective Altruism is Self-Recommending, I think it was clear who I was advising (people persuaded that EA was creditable and effective) and what the advice was (don’t count claims that you will do X as evidence that someone has done X, check whether job 1 got completed adequately before using it as a track record for job 2).)
The process you’re describing seems to involve type errors about credit, and corruption of the currency. Making these upstream problems more explicit could help people avoid being suckered by them.
Money IS a form of creditability; it’s evidence that you had the capacities needed to get the money, and in relatively just societies it’s evidence that you did something for money that would otherwise have earned you a corresponding amount of gratitude.[1]
In the case of financial creditworthiness, using money or other liquid assets as collateral seems unobjectionable, so it seems like the problem is in the currency conversion between money and nonfinancial credit. But staking money on claims is proepistemic.
The difference between staking money on one’s claims, and what you describe, is the difference between making bets or offering insurance—effectively buying credence for specific propositions—and buying a vaguer sort of credibility. A less vague way to describe this is the difference between buying insurance and bribing a third-party evaluator.
So there’s one piece of usable advice: check whether someone’s bribed the third-party evaluators you’re relying on.
But we shouldn’t expect that sort of problem to persist in an otherwise just system. So we need to explain why and how corruption is persistently subsidized, as I’ve tried to begin to do in There Is a War, The Debtors’ Revolt, and The Domestic Product, which advance the claim that much of the political dark matter of the 20th century is explained by correlated defaults as a mechanism for debtors to extract from creditors.
Talents, oppression, and production are competing explanations for wealth inequality.
Thank you for the comment! I have found much of your writing helpful when thinking about these things.
it’s not clear whom the “should” formulation is advising and about what.
The “should” is aimed in a relatively broad sense at people who care about how well societies and groups of people function. I am also not super happy with the title, which is approximately the only place where I use “should”. If you have better title suggestions, I would actually greatly appreciate that.
and buying a vaguer sort of credibility
I am not actually trying to talk that much about a “vaguer sort of credibility”. I am trying to talk about the specific case of “whatever causes other people to then extend you a greater line of credit[1] in whatever resource you used up to cause them to do so”. This is pretty concrete. In the case of FTX, they found many opportunities to translate their agency and money into more people giving them more deposits, which they could then use to get more deposits.
I was indeed hoping to avoid a broader treatment of trustworthiness by focusing on the specific case of creditworthiness, as I think the former is a lot trickier and less well-defined. I still use the former a few times, which is maybe a mistake, but largely in an attempt to help people understand that creditworthiness extends into more domains than just dollars.
So there’s one piece of usable advice: check whether someone’s bribed the third-party evaluators you’re relying on.
Yep, I agree this is helpful, but it fails to cover the Theranos and FTX cases, where I don’t think anyone was expecting the presence of a third-party evaluator; nevertheless, the FTX case especially seems like a central example of the dynamic I characterize in my post.
so it seems like the problem is in the currency conversion between money and nonfinancial credit
No, I argue the issue is specifically in the currency conversion between any asset and creditworthiness for that asset. In the case of money, it’s specifically the conversion between money and financial credit. The very basic mechanism I am trying to point to is that if you are in a situation where you can borrow a dollar, then spend that dollar, and now end up in a position to borrow more than one additional dollar, and you can do that over and over again, then bad things are going to happen[2].
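To make that mechanism concrete, here is a minimal sketch of the loop (the conversion ratio and round count are made-up illustrative numbers, not estimates of any real case); the only load-bearing quantity is whether the ratio exceeds one:

```python
# Toy model of the borrow-spend-borrow-more loop described above.
# `conversion` = dollars of new credit attracted per dollar spent.

def run_credit_loop(conversion: float, rounds: int) -> float:
    outstanding = 1.0  # start by borrowing one dollar
    for _ in range(rounds):
        # spend everything borrowed to attract the next round of credit
        outstanding *= conversion
    return outstanding

for conversion in (0.8, 1.0, 1.5):
    print(conversion, round(run_credit_loop(conversion, rounds=10), 2))
# 0.8 -> 0.11  (the scheme fizzles out)
# 1.0 -> 1.0   (credit stays flat; no bubble)
# 1.5 -> 57.67 (liabilities compound until something breaks)
```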
ETA: I have updated the OP with a substantial number of new paragraphs and edits trying to make my point clearer. Copied here to save you re-reading the whole post:
The key vulnerability that is being exploited in the above is that there was some way to spend a dollar of resources to gain control of more than a dollar of stewardship over other people’s resources, and to do so repeatedly. When this becomes possible, at least some individuals will see the opportunity to leverage up on other people’s resources, bet them big, and hope they get to walk away with the winnings (and, if they lose, leave the empty bag to the people whose resources they borrowed).
The emphasis here is on spend. To distinguish what is going on here from marketing expenses, or from simply producing valuable assets which can now serve as collateral for greater loans, we are focusing on situations where resources are used to purchase pure perceived creditworthiness, without assets or knowledge or skills that produce genuinely higher expected returns for investors and creditors in the future.
This of course makes this all a fundamentally deceptive exercise, because the reason why you extend someone a line of credit, or invest in someone, or deposit your funds with someone, is that you expect to make returns on what you gave them. But in many cases an adversary can spend money more effectively on distorting your beliefs about the future returns they can provide than on doing things that produce genuinely higher returns, and when the cost of doing so becomes cheaper than the additional resources they can extract this way, you have a positive feedback loop.
Postscript.
Another aspect of this whole dynamic, which is related to yesterday’s post about paranoia, is that this is one of the most common ways in which you end up with actors exercising strong direct optimization pressure on your beliefs, which can cause you to end up in environments where paranoia is the appropriate response. Of course there are often resources to be gained by duping and deceiving others, but in the case of a creditworthiness bubble you have two things that rarely happen at the same time:
A feedback loop in which someone is gaining more and more resources
The control over those resources is very highly sensitive to people believing false things about their expected future returns
This produces actors who need very tight control over what other people believe about them and say about them, for whom the consequences of failing to maintain that control are catastrophic, which produces a much greater willingness to spend large amounts of resources to achieve those aims.
In addition to that, in many of these cases, the personal costs of the scheme falling apart have long since become insensitive to the size of the damage. It is genuinely unclear what Sam Bankman-Fried or Elizabeth Holmes could have done to not end up in prison for decades by the time they ended up in their overleveraged positions, and trying to somehow keep things going for longer while hoping for a big market surge was, from a purely selfish perspective, possibly the best thing for them to do. Society does not punish you with more than prison or death, even if you caused much more harm than one person’s life can answer for, and so by the time you are in the middle of something like this, trying to de-incentivize this kind of behavior is very hard.
Ok, but how is this different from “marketing”?
Marketing, as a broad term for “distributing information about you and your organization widely”, can certainly be used for this purpose! But it is not centrally what marketing is used for.
The normal context of marketing is to pay someone to get information about your product out to potential buyers. They then use that information to evaluate whether your product is worth more than its cost to them, and they offer you a trade if they think so. In this world, marketing solves a real problem, and marketing spending genuinely increases your expected future returns and creates lots of surplus value in the world.
Of course, if you are a business like FTX, you might run marketing campaigns for your product that emphasize its nature as a safe place to deposit your funds. This specifically targets your creditworthiness as a receiver of those deposits, which you can then use to attract more deposits. This only becomes an issue when you are not using the fees you collect on the deposits to do this, but the deposits themselves, since that is when the purchase becomes a pure purchase of perceived creditworthiness, and not a genuine signal that you can maintain positive returns for the people who gave you their money.
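To illustrate that distinction with a toy contrast (all parameters invented for illustration, not modelled on FTX’s actual books), the same marketing multiplier behaves very differently depending on whether it is funded out of fees or out of the deposits themselves:

```python
# Same marketing campaign, two funding sources (all numbers invented).

def simulate(spend_deposits: bool, rounds: int = 8):
    deposits, shortfall = 100.0, 0.0
    fee_rate = 0.02    # fees earned on deposits each round
    multiplier = 1.5   # new deposits attracted per marketing dollar
    for _ in range(rounds):
        budget = deposits * fee_rate
        if spend_deposits:
            dipped = deposits * 0.10  # spend customer funds on marketing
            budget += dipped
            shortfall += dipped       # those funds can no longer be paid back
        deposits += budget * multiplier
    return round(deposits, 1), round(shortfall, 1)

print(simulate(spend_deposits=False))  # (126.7, 0.0): slower growth, fully backed
print(simulate(spend_deposits=True))   # (375.9, 153.3): faster growth, growing hole
```

In the second case the purchase of perceived creditworthiness grows the liabilities faster than anything backing them.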
Hopefully this clarifies my models here a bit more. I think in writing these sections I have clarified my model a bit more myself, which has been helpful.
“Line of credit” is defined a bit more broadly here to include any transfer of resources under someone else’s stewardship, which includes e.g. deposits and investments.
And this is specifically talking about the situation where you did not end up with assets you could just use as the collateral for the next creditor, but where you spent/lost/burned the money in order to convince the next creditor. Of course many assets are intangible, which makes legitimate marketing use-cases and adversarial attacks on creditworthiness tricky to distinguish.
Thanks for trying to highlight the changes, but I’m a bit confused by this response.
“Whatever causes other people to then extend you a greater line of credit in whatever resource you used up to cause them to do so” seems to me straightforwardly vaguer than credence in a specific propositional claim, such as an insurance claim.
FTX seems to have straightforwardly bribed third parties with independent reputations to testify to its creditworthiness in ads. The case with Theranos is only slightly subtler, but to name a couple clear examples, Bill Frist and Henry Kissinger had independent reputations that they lent Theranos in exchange for anticipated financial gain.
You say you tried to narrow the scope to “creditworthiness” rather than “trustworthiness,” but I don’t know what that means. I guess an example of trust that isn’t credit in the relevant sense might be loyalty; the examples you gave all involved some sort of system of more or less explicit accounting in which somewhat quantifiable progress can be made, which isn’t how loyalty works. I think I was also assuming the narrower definition; can you point to some specific way in which I seem to be responding irrelevantly by wrongly assuming the broader definition?
As you point out, “many assets,” such as the ones in the academic example, are intangible, so it’s not clear that your “no collateral” qualifier helps much; whether there’s an intangible asset or verified capacity corresponding to the promise is exactly the controversy at issue.
Overall your reply seems substantially unresponsive; the level of disconnect is such that it’s not clear to me what I could even say to get a better response.
You say you tried to narrow the scope to “creditworthiness” rather than “trustworthiness,” but I don’t know what that means.
By creditworthiness, in this post, I mean the literal degree to which you are happy to transfer some specific resource that you own into someone else’s stewardship with the expectation that you will get it back (or make a positive return in expectation). Creditworthiness is here specific to a resource that is transferrable. Dollars are the most obvious case. Social capital can sometimes also be modelled this way, though it gets more tricky. Creditworthiness does not need to extend into trustworthiness in general.
For example, since investors face very limited liability for investing in fraudulent institutions, seeing someone willing to break the law (or be generally untrustworthy) can sometimes increase expected returns! In those situations creditworthiness (which I here try to measure in expectations of good stewardship or expected future profit) and trustworthiness (which would be measured in a broader propensity to not fuck people over) come strongly apart.
whether there’s an intangible asset or verified capacity corresponding to the promise is exactly the controversy at issue.
I think I am confused about what “controversy” you are talking about here. I agree with you that in practice, the line here is very hard to identify (as one would expect in a high-level adversarial information game).
My main aim with this post is largely to create a model that explains some situations where, in retrospect, there is IMO little uncertainty that something of this shape went wrong.
Like, the specific sentence I objected to was: “so it seems like the problem is in the currency conversion between money and nonfinancial credit”.
And I think in the model and situations I outline, I am confused how you could end up with this impression? Like, I think the central dynamic with FTX was their ability to translate money[1] into more financial credit (in the form of customer deposits). Yes, there might have been some nonfinancial credit intermediary steps, and of course they also did lots of other things that are worth analyzing, but the thing that produced a positive feedback loop is the step where they could convert funds in their stewardship into more creditworthiness, which resulted in them getting more and more assets in their custody.
Trying to think harder about what you were saying, I thought your objection might be that there are too many legitimate cases in which you of course want to translate assets under your management into more assets under your management, i.e. by producing assets that are more valuable than the resources you were given stewardship over. So I tried to clarify that I was talking about a dynamic where you spend/irrecoverably lose resources to increase perceived creditworthiness, not where you make good use of resources that actually increase future expected returns.
Bribing third-party evaluators is of course an example of what I am talking about, but it strikes me as too narrow, and most importantly it doesn’t capture the central feedback loop of this creditworthiness bubble that I think explains many of the relevant dynamics that I go into in my new last section. Yes, I agree you should pay attention to someone bribing third-party evaluators, but even in that situation, one of the key variables that determines how bad it is to bribe third-party evaluators is whether, by bribing the third-party evaluator with a dollar, you end up with more than one additional dollar under your stewardship. That returns ratio really matters and is what I am trying to draw attention to, and I am not sure whether you are objecting to it as a thing, or just don’t find it interesting, or have some other objection.
Broadly construed here to include cryptocurrency.
How does this narrow definition of creditworthiness apply to Tessier-Lavigne? Who were the owners of what assets that were transferred to him & what would ROI have looked like?
Applying my model to the Tessier-Lavigne situation suggests the key exploit he used was that you could produce academic prestige more cheaply, and at a higher ROI, via optimizing papers purely for prestige and ignoring factual accuracy, than via writing papers with the constraint of only saying true things and aiming for informativeness.
Furthermore, the academic prestige gained this way can then be translated into two things:
More labor available to produce a greater volume of prestige-optimized papers, and to market existing papers more aggressively, as many additional PhD students and postdocs and professors want to work with you. Funding comes into play a bit at this point as well, but the central unit by which labor gets allocated in academia is prestige.
Greater ability to direct the enforcement mechanisms of academia towards any potential leakers, auditors or investigators, which, even as the scale grows, prevents information about the deceptively overleveraged prestige from becoming widely known.
Both of these allowed Tessier-Lavigne to re-invest the academic prestige gained by publishing his first papers into an actually large-scale academic fraud.
If either of the above had not been the case, a very large-scale fraud like Tessier-Lavigne’s wouldn’t have been possible. Of course, someone can sometimes sustain a whole career of academic fraud at a constant rate, but in order for the fraud to become actually substantially big, the mechanism that produced the fraud needs to gain resources as a result of the fraud, which then get used to produce more of the fraud.
If Tessier-Lavigne had not been able to recruit more talent as a result of the prestige, his academic output would have been limited to that of a single scholar, which would be much less substantial. Similarly, if Tessier-Lavigne hadn’t been able to use the prestige to suppress investigations into his research, the likelihood that he could have kept up such a long career of fraud would also have been much lower.
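A toy comparison of those two trajectories (the output rate, growth rate, and time horizon are invented for illustration): constant-rate fraud stays bounded by one scholar’s output, while fraud that reinvests prestige into recruiting compounds:

```python
# Constant-rate fraud vs. fraud that reinvests prestige into recruiting
# (the 3 papers/year, 20% growth rate, and 20-year horizon are invented).

years = 20
solo_output = 3 * years  # fixed output: one scholar, 3 papers/year

lab_size, compounding_output = 1.0, 0.0
for _ in range(years):
    compounding_output += 3 * lab_size  # output scales with recruited labor
    lab_size *= 1.2  # prestige from that output recruits ~20% more labor

print(solo_output)                # 60 papers
print(round(compounding_output))  # ~560 papers: scale requires the feedback loop
```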
The owners of that academic prestige were the existing members of the academic community. A lot of it was transferred from Stanford to Tessier-Lavigne, much of it was transferred from the journals the fraudulent papers were published in, and much of it was transferred from the people who endorsed Tessier-Lavigne across his career. Social capital and prestige don’t fully behave like normal assets: for example, there is no clear ledger of who owns what at any given point in time, and prestige is not as owner-independent as most things we think of as “assets” are (i.e. more of academic prestige is non-transferrable, or transferrable only at greater cost), but I think it still works well enough as the unit of analysis here.
I am not overwhelmingly confident my model of what happened here is right, but it’s my current best understanding of the situation, and it fits the model I am trying to explain pretty well.
Your overall model of the Tessier-Lavigne situation seems plausible. But it seems like a stretch to use the narrow “creditworthiness” framework of investors and assets. The “owners” of academic prestige (Stanford, journals, endorsers) aren’t really in the same position as owners of financial assets. They didn’t “transfer” prestige to Tessier-Lavigne in the way depositors transferred money to FTX. There’s no clear ROI calculation because there’s no actual stewardship relationship—nobody gave Tessier-Lavigne their prestige to manage with expectation of returns.
If anything, academia seems more in the position of a central bank managing a fiat currency—trying to maintain an aggregate level of activity, as well as the perceived value of credit within the system, by adjusting the aggregate level of credit extended—than in the position of the owner of a rivalrous asset like money investing it in a specific venture. Obviously individuals within academia face different problems and incentives, as do individuals within a fiat economy, but there doesn’t seem to be a clear analogue in academia to the financial investor.
I think we disagree somewhat about the degree to which social capital actually follows rules largely analogous to normal capital. For example, I do think that endorsing someone can largely be seen as analogous to investing some of your social credit into them. If you endorse many people, your endorsement is worth less, so there is some kind of conserved quantity, and if the people you endorse go on and become more widely respected, your investment pays off, so there is something like a market price that goes up.[1]
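A minimal sketch of that analogy (names and numbers are hypothetical): endorsing spends a conserved stock of credit, and an endorsee’s later rise in status revalues the stake, like a market price:

```python
# Toy endorsement-as-investment ledger (names and numbers hypothetical).

from dataclasses import dataclass, field

@dataclass
class Endorser:
    credit: float
    stakes: dict = field(default_factory=dict)  # endorsee -> credit staked

    def endorse(self, endorsee: str, fraction: float) -> None:
        staked = self.credit * fraction
        self.credit -= staked  # conserved: endorsing many people dilutes you
        self.stakes[endorsee] = staked

    def realize(self, endorsee: str, status_multiple: float) -> None:
        # the endorsee's rise (or fall) in status revalues your stake
        self.credit += self.stakes.pop(endorsee) * status_multiple

alice = Endorser(credit=10.0)
alice.endorse("bob", fraction=0.5)         # stake 5.0 of credit on bob
alice.realize("bob", status_multiple=3.0)  # bob becomes widely respected
print(alice.credit)                        # 20.0: the "investment" paid off
```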
I do agree there are really crucial disanalogies! I think the specific disanalogies don’t happen to break the model I propose in this post, but I am not enormously confident.
I would love to have better language that describes the actual dynamics of social capital/reputation/status with the cleanliness and precision of the language that we have for financial currency. But of course, that’s a big ask, and IMO worthy of being one of the great big projects of humanity, akin to the whole study of economics in its own right. In the meantime, I think there is a lot of mileage that can be gotten by applying existing financial models to social capital, even if they don’t perfectly fit, and even if I have to handwave a bit to make it work out.
This doesn’t mean critiques that point out the disanalogies aren’t important! Indeed, I find myself wanting to write a follow-up post that’s just something like “ways social capital does not behave like financial capital”, to improve my ability to notice when it’s inappropriate to apply a financial-capital lens to social dynamics.
If anything, academia seems more in the position of a central bank managing a fiat currency—trying to maintain an aggregate level of activity, as well as the perceived value of credit within the system, by adjusting the aggregate level of credit extended—than in the position of the owner of a rivalrous asset like money investing it in a specific venture.
I agree that academia at large has central bank dynamics, but I think the specific institutions and individuals that were duped by Tessier-Lavigne and extended their credit to him were not in very much of a central bank position. I think Stanford just lost a bunch of prestige, as did a lot of the people who worked with Tessier-Lavigne and endorsed him, and Stanford does not consider itself responsible for the reputation of all of academia, or the flow of credit within all of academia.
While there are some dynamics where academia has a more centrally planned status-economy, I think most social capital gets allocated by the choices of specific individuals who want to get ahead in the social status game of academia.
One of the things that I think doesn’t have a great analogue is the process of selling/”exiting your market position”. Like, in a market you have a clear point of selling your asset, and in a social capital market you can start removing your endorsement from someone, or start calling them overrated, but the connection does not feel as clean.
We can imagine prestige very imperfectly as an asset with a quantifiable value, but while this is fairly (but not entirely) accurate for tournament structures like organized sports, in academia it’s more like being a central location in a canonical reference map; not the sort of thing that’s easy to use in ROI calculations.
If we can operationalize it well I’d likely bet against the claim that Stanford lost a lot of prestige. The centrality of the biggest institutions is hard to dislodge, as they’re sufficiently mutually entangled that problems like this seem to do more to demoralize academia generally, than to specifically discredit any one institution. Nor do I think academia’s losing credit in any straightforward sense, as it’s widely considered too big to fail even by many dissenters, who e.g. are extremely disappointed with standards in scientific academia but still automatically equate academia with science in general.
What happens as a result of the kinds of failures you describe is not at all like a decline in price, a little bit like a decline in the aggregate purchasing power of money, somewhat more like increased vulnerability to speculative attack, and most similar to a decrease in transaction volume as people see fewer and fewer opportunities for profitable transactions within the system. E.g. publishing papers seems less appealing as a way to inform others, reading papers seems less effective as a way to be informed, giving and receiving grants seems less effective as a way to organize efforts to figure things out.
Nor do I think academia’s losing credit in any straightforward sense, as it’s widely considered too big to fail even by many dissenters, who e.g. are extremely disappointed with standards in scientific academia but still automatically equate academia with science in general.
Huh, I do think our world models must differ here. My current sense is that societal trust in and reliance on academia are dropping pretty sharply, partially though not centrally as a result of things like this, and I similarly expect the market value of things like PhDs to drop relatively intensely in the coming decade (barring major AI disruption making that question moot). I would be happy to bet on this, if you disagree.
What happens as a result of the kinds of failures you describe is not at all like a decline in price, a little bit like a decline in the aggregate purchasing power of money, somewhat more like increased vulnerability to speculative attack, and most similar to a decrease in transaction volume as people see fewer and fewer opportunities for profitable transactions within the system.
I found this set of potential analogies helpful! I do think I still disagree about the relative appropriateness for each one of these analogies to the situation. Not sure how much value I would provide by going through them all in this comment thread, though I might take the opportunity and do it in a top-level post.
I don’t think the central-case valuable PhDs can be bought or sold so I’m not sure what you mean by market value here. If you can clarify, I’ll have a better idea whether it’s something I’d bet against you on.
I would bet a fair amount at even odds that Stanford academics won’t decline >1 sigma YOY in collective publication impact score like h-index, Stanford funding won’t decrease >1 sigma vs Ivy League + MIT + Chicago, Stanford new-PhD aggregate income won’t decline >1 sigma vs overall aggregate PhD income, and overall aggregate US PhD income won’t decline >1 sigma. I think 1 sigma is a reasonable threshold for signal vs noise.
I think that if these kinds of crises caused academia to be devalued, then when the Protestant Reformation and Enlightenment revealed the rot in late-medieval scholastic “science,” clerical institutions in the Roman Catholic model like Oxford and Cambridge would have become irrelevant or even collapsed, rather than continuing to be canonical intellectual centers in the new regime.
TBTF institutions usually don’t collapse absent outside conquest, civilizational collapse, or Maoist Cultural Revolution levels of violence directed at such change, since they specialize in creating loyalty to the institution. So academia losing value would look more like the Mandarin exam losing value through the collapse of the civilization it was embedded in, than like Dell Computer losing value via its share price declining.
I don’t think the central-case valuable PhDs can be bought or sold so I’m not sure what you mean by market value here. If you can clarify, I’ll have a better idea whether it’s something I’d bet against you on.
I was thinking of the salary premium that having a PhD provides (i.e. how much more people with PhDs make compared to people without PhDs), which of course measures a mixture of real signaling value and mere correlations in aptitude, but I feel like it would serve as a good enough proxy here, at least directionally.
I would bet a fair amount at even odds that Stanford academics won’t decline >1 sigma in collective publication impact score like h-index, Stanford funding won’t decrease >1 sigma vs Ivy League + MIT + Chicago, Stanford new-PhD aggregate income won’t decline >1 sigma vs overall aggregate PhD income, and overall aggregate US PhD income won’t decline >1 sigma. I think 1 sigma is a reasonable threshold for signal vs noise.
What’s the sigma here? Like, what population are we measuring the variance over? Top 20 universities? All universities? I certainly agree that Stanford won’t lose one sigma of status/credibility/etc. as measured across all universities; that would require dropping Stanford completely from the list of top universities. Losing 1 sigma of standing among the top 20 universities, i.e. Stanford moving from something like “top 3” to “top 8”, seems plausible to me, though my guess is that’s a bit too intense.
To be clear, my offered bet was more about you saying that academia at large is “too big to fail”. I do think Stanford will experience costs from this, but at that scale I do think noise will drown out almost any signal.
TBTF institutions usually don’t collapse absent outside conquest, civilizational collapse, or Maoist Cultural Revolution levels of violence directed at such change, since they specialize in creating loyalty to the institution. So academia losing value would look more like the Mandarin exam losing value, than like Dell Computer losing value.
Hmm, I don’t currently believe this, but it’s plausible enough that I would want to engage with it in more detail. Do you have arguments for this? I currently expect more of a gradual devaluing of the importance of academic status in society, together with more competition over the relevant signifiers of status creating more noise, resulting in a relationship to academia somewhat more similar (though definitely not all the way there) to the one pre-WW2 society had, in which academia to my understanding played a much less central role in government and societal decision-making.
I would expect PhD value to mostly be affected by underlying demographic factors; they’re already structurally on an inflationary trajectory and I expect that to be more important than whether they’re understood to be fake or real. No one thinks Bitcoins contain powerful knowledge but they still have exchange value.
If there’s a demographic model of PhD salary premium with a good track record (not just backtested, has to have been a famous model before the going-forward empirical validation) I might bet strongly against deviation from that. If not, too noisy.
Variance (and thus sigma) for funding could be calculated on basis of historical YOY % variation in funding for all US universities, weighted by either # people enrolled or by aggregate revenue of the institution. Can do something similar for h-index. Obviously many details to operationalize but the level of confusion you’re reporting seems surprising to me. Maybe you can try to tell me how you would operationalize your “dropping pretty sharply” / “drop relatively intensely” claim.
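For concreteness, a sketch of one way that calculation could go, assuming a hypothetical table `df` with one row per (university, year); the column names, enrollment weighting, and pooling choices are my own illustration rather than a settled operationalization:

```python
import numpy as np
import pandas as pd

def funding_sigma(df: pd.DataFrame) -> float:
    """Enrollment-weighted std of year-over-year % changes in funding.

    Expects columns ['university', 'year', 'funding', 'enrollment'].
    """
    wide = df.pivot(index="year", columns="university", values="funding")
    yoy = wide.pct_change().iloc[1:]  # YoY % change per university
    weights_by_uni = df.groupby("university")["enrollment"].mean()
    changes = yoy.stack()             # pool all (year, university) changes
    w = changes.index.get_level_values("university").map(weights_by_uni).to_numpy(float)
    x = changes.to_numpy(float)
    mean = np.average(x, weights=w)
    return float(np.sqrt(np.average((x - mean) ** 2, weights=w)))

# The bet would then resolve on whether Stanford's observed YoY change
# exceeds one such sigma:
#   abs(stanford_yoy_change) > funding_sigma(df)
```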
Less than a sigma seems like it can’t really be a clear quantitative signal unless most of the observed variance is very well explained (in which case it should be more than a sigma of remaining variance). Events as big as Stanford moving from top 3 to top 8 have happened multiple times in the last few decades without any major crises of confidence.
I agree the disagreement about academia at large is important enough to focus on, thanks for clarifying that that’s where you see the main disagreement.
One argument for the TBTF paragraph was in the immediately prior paragraph. The posts I linked to at the end of the first comment in this thread are also in large part arguments in support of this thesis. Pre-WWII the US had a much weaker state. Hard to roll that back without constituting a regime collapse.
At this point I feel that I’m repeating myself enough that I don’t see how to continue this conversation productively; I don’t expect saying the same things again will lead to engagement, and I don’t expect that complaining about the problem procedurally will get a constructive response either. If you propose a well-operationalized bet and an adjudicator and escrow arrangement I will accept or reject the proposal.
I wasn’t trying to say that you had provided no argument for it, sorry! I was just curious whether you had written about this previously with a handy link. It feels like a theme in a bunch of your writing, but you seemed in a better position to remember any specific essay or section.
If you propose a well-operationalized bet and an adjudicator and escrow arrangement I will accept or reject the proposal.
I’ll think about it over the next day or two and see whether I can find something. I am currently skeptical we can find something, given that I don’t expect shifts at the scale of “Stanford stops being a top university at all”. But I’ll try for a bit.
I agree that the kinds of pathological feedback loops you describe exist, are bad, and are important. I don’t think the emphasis on financial returns is helpful, though; one of your main examples is nonfinancial and hard to quantify, and the thing that makes these processes bad is what’s going on outside the financial substrate: recruiting people into complicity.
You seem to be treating the question of whether the money is being “burned” to raise more money, or made productive use of (thus justifying further investment) as the easy part, but that’s the whole problem! Without an understanding of how the conversion process works, we don’t understand anything about this, we just have a black box causing nominal ROI >1, which could be either very good or very bad.
aimed in a relatively broad sense at people who care about how well societies and groups of people function.
I don’t think allowing financial fraud is a thing current institutions mostly want? The difficulty is more in figuring out how to stop it without stopping legitimate activity as well (A lot of successful entrepreneurship will look kind of a lot like this, I think). If you are calling for normal speculative investment to be banned, it’s very likely not worth the loss of innovation. (It may make sense perhaps to be more strict about what level of falsehood leads to fraud prosecution, but I would keep it to banning false claims).
Sorry, I think I am failing to parse this comment. I agree that financial fraud is a thing people don’t want. This post is telling them about one dynamic that tends to cause a bunch of it. I agree that of course all the difficulty of stopping fraud lies in the difficulty of distinguishing fraud from non-fraud. This post tries to help you distinguish fraud from non-fraud, and e.g. the FAQ section addresses some specific ways in which the dynamic here can be distinguished from entrepreneurship and marketing.
You might disagree that this is possible, or have some other logical issue with the post, but I feel like you are largely just saying things that are true and said in the post, and then say “if you are calling for normal speculative investment to be banned”, which, like, I am of course not doing and the post is not implying, and I feel like I have a bunch of paragraphs in there clarifying that I am not calling for speculative investment to be banned.
Facilitation of stag-hunt-like cooperation is really useful, because cooperating to do stuff beyond the capabilities of individuals is useful but hard.
The dynamic you discuss in the post applies to stag hunt facilitation, because its success depends on the willingness of others to provide more resources (up to some point where it can generate more).
The difference between, e.g., Theranos and standard entrepreneurship does not lie in the dynamic you discuss in the post. It lies in how egregiously Elizabeth Holmes was lying relative to the standard level of misleadingness. (And of course, more honesty would be better…)
It would of course be very valuable to determine if a stag hunt will pay off or will fail! But the difference between the two does not lie in the dynamic you discuss in the post (which applies to both ultimately successful and unsuccessful stag hunts).
This complaint about an important problem is expressed in a way that confuses me; it’s not clear whom the “should” formulation is advising and about what. (By contrast, when I wrote Effective Altruism is Self-Recommending, I think it was clear who I was advising (people persuaded that EA was creditable and effective) and what the advice was (don’t count claims that you will do X as evidence that someone has done X, check whether job 1 got completed adequately before using it as a track record for job 2).
The process you’re describing seems to involve type errors about credit, and corruption of the currency. Making these upstream problems more explicit could help people avoid being suckered by them.
Money IS a form of creditability; it’s evidence that you had the capacities needed to get the money, and in relatively just societies it’s evidence that you did something for money that would otherwise have earned you a corresponding amount of gratitude.[1]
In the case of financial creditworthiness, using money or other liquid assets as collateral seems unobjectionable, so it seems like the problem is in the currency conversion between money and nonfinancial credit. But staking money on claims is proepistemic.
The difference between staking money on one’s claims, and what you describe, is the difference between making bets or offering insurance—effectively buying credence for specific propositions—and buying a vaguer sort of credibility. A less vague way to describe this is the difference between buying insurance and bribing a third party evaluator.
So there’s one piece of usable advice: check whether someone’s bribed the third party evaluators you’re relying on.
But we shouldn’t expect that sort of problem to persist in an otherwise just system. So we need to explain why and how corruption is persistently subsidized, as I’ve tried to begin to do in There Is a War, The Debtors’ Revolt, and The Domestic Product, which advance the claim that much of the political dark matter of the 20th century is explained by correlated defaults as a mechanism for debtors to extract from creditors.
Talents, Oppression and production are competing explanations for wealth inequality.
Thank you for the comment! I have found much of your writing helpful when thinking about these things.
The “should” is aimed in a relatively broad sense at people who care about how well societies and groups of people function. I am also not super happy with the title, which is approximately the only place where I use “should”. If you have better title suggestions, I would actually greatly appreciate that.
I am not actually trying to talk that much about a “vaguer sort of credibility”. I am trying to talk about the specific case of “whatever causes other people to then extend you a greater line of credit[1] in whatever resource you used up to cause them to do so”. This is pretty concrete. In the case of FTX, it was the case that FTX found many opportunities to translate their agency and money into more people giving them more deposits, which they could then use to get more deposits.
I was indeed hoping to avoid a broader treatment of trustworthiness by focusing on the specific case of creditworthiness, as I think the former is a lot trickier and less well-defined. I use the former still a few times, which is maybe a mistake, but largely in an attempt at helping people understand that creditworthiness extends into more domains than just dollars.
Yep, I agree this is helpful, but does fail to cover the Theranos and FTX cases, where I don’t think anyone was expecting the presence of a third-party evaluator, but nevertheless especially the FTX case seems like a central example of a dynamic I characterize in my post.
No, I argue the issue is specifically in the currency conversion between any asset, and creditworthiness for that asset. In the case of money, it’s specifically the conversion between money and financial credit. The very basic mechanism I am trying to point to is that if you are in a situation where you can borrow a dollar, then spend that dollar, and end up now in a position to borrow more than one additional dollar, and you can do that over and over again, then bad things are going to happen[2].
ETA: I have updated the OP with a substantial number of new paragraphs and edits trying to make my point clearer. Copied here to save you re-reading the whole post:
Hopefully this clarifies my models here a bit more. I think in writing these sections I have clarified my model a bit more myself, which has been helpful.
“Line of credit” a bit more broadly defined to here include any transfer of resources under someone else’s stewardship, which includes e.g. deposits and investments
And this is specifically talking about the situation where you did not end up with assets you could just use as the collateral for the next creditor, but where you spent/lost/burned the money in order to convince the next creditor. Of course many assets are intangible, which makes distinguishing e.g. legitimate marketing use-cases and adversarial attacks on creditworthiness tricky to distinguish.
Thanks for trying to highlight the changes, but I’m a bit confused by this response.
“Whatever causes other people to then extend you a greater line of credit in whatever resource you used up to cause them to do so” seems to me straightforwardly vaguer than credence in a specific propositional claim, such as an insurance claim.
FTX seems to have straightforwardly bribed third parties with independent reputations to testify to its creditworthiness in ads. The case with Theranos is only slightly subtler, but to name a couple clear examples, Bill Frist and Henry Kissinger had independent reputations that they lent Theranos in exchange for anticipated financial gain.
You say you tried to narrow the scope to “creditworthiness” rather than “trustworthiness,” but I don’t know what that means. I guess an example of trust that isn’t credit in the relevant sense might be loyalty; the examples you gave all involved some sort of system of more or less explicit accounting in which somewhat quantifiable progress can be made, which isn’t how loyalty works. I think I was also assuming the narrower definition; can you point to some specific way in which I seem to be responding irrelevantly by wrongly assuming the broader definition?
As you point out, “many assets,” such as the ones in the academic example, are intangible, so it’s not clear that your “no collateral” qualifier helps much; whether there’s an intangible asset or verified capacity corresponding to the promise is exactly the controversy at issue.
Overall your reply seems substantially unresponsive; the level of disconnect is such that it’s not clear to me what I could even say to get a better response.
By creditworthiness, in this post, I mean the literal degree to which you are happy to transfer some specific resource that you own into someone else’s stewardship with the expectation that you will get them back (or make a positive return in-expectation). Creditworthiness is here specific to a resource that is transferrable. Dollars are the most obvious case. Social capital sometimes can also be modelled this way, though it gets more tricky. Creditworthiness does not need to extend into trustworthiness in general.
For example, as investors face very limited liability for investing in fradulent institutions, seeing someone willing to break the law (or be generally untrustworthy) can sometimes increase expected returns! In those situations creditworthiness (which I here try to measure in expectations of good stewardship or expected future profit) and trustworthiness (which would be measured in a broader propensity to not fuck people over) come strongly apart.
I think I am confused what “controversy” you are talking about here. I agree with you that in-practice, the line here is very hard to identify (as one would expect in a high-level adversarial information game).
My main aim with this post is largely to create a model that explains some situations where in-retrospect there is IMO little uncertainty that something of this shape went wrong.
Like, the specific sentence I objected to was: “so it seems like the problem is in the currency conversion between money and nonfinancial credit”.
And I think in the model and situations I outline, I am confused how you could end up with this impression? Like, I think the central dynamic with FTX was their ability to translate money[1] into more financial credit (in the form of customer deposits). Yes, there might have been some nonfinancial credit intermediary steps, and of course they also did lots of other things that are worth analyzing, but the thing that produced a positive feedback loop is the step where they could convert funds in their stewardship into more creditworthiness, which resulted in them getting more and more assets in their custody.
Trying to think harder about what you were saying, I thought your objection might be that there are too many legitimate cases in which you of course want to translate assets under your management into more assets under your management, i.e. by producing assets that are more valuable than the resources you were given stewardship over. So I tried to clarify that I was talking about a dynamic where you spend/irrecoverably lose resources to increase perceived creditworthiness, not where you make good use of resources that actually increase future expected returns.
Bribing third-party evaluators is of course an example of what I am talking about, but it strikes me as too narrow, and most importantly it doesn’t capture the central feedback loop of this creditworthiness bubble that I think explains many of the relevant dynamics that I go into in my new last section. Yes, I agree you should pay attention to someone bribing third-party evaluators, but even in that situation, one of the key variables that determines how bad it is to bribe third-party evaluators is whether by bribing the third party evaluator with a dollar, you end up with more than one additional dollar under your stewardship. That returns ratio really matters and is what I am trying to draw attention to, and I am not sure whether you are objecting to is as a thing, or just don’t find it interesting, or have some other objection.
Broadly construed here to include cryptocurrency
How does this narrow definition of creditworthiness apply to Tessier-Lavigne? Who were the owners of what assets that were transferred to him & what would ROI have looked like?
Applying my model to the Tessier-Lavigne situation suggests the key exploit he used was that you could produce academic prestige more cheaply, and at a higher ROI, via optimizing papers purely for prestige and ignoring factual accuracy, than via writing papers with the constraint of only saying true things and aiming for informativeness.
Furthermore, the so gained academic prestige can then be translated into two things:
More labor available to produce a greater volume of prestige-optimized papers, and to market existing papers more aggressively, as many additional PhD students and postdocs and professors want to work with you. Funding comes into play a bit at this point as well, but the central unit by which labor gets allocated in academia is prestige.
Greater ability to direct the enforcement mechanisms of academia towards any potential leakers, auditors or investigators, which even as the scale grows, prevents information about the deceptively overleveraged prestige from becoming widely known
Both of these allowed Tessier-Lavigne to re-invest the academic prestige gained by the publishing of their first papers into an actually large-scale academic fraud.
If either of the above was not the case, a very large-scale fraud like Tessier-Lavigne wouldn’t be possible. Of course, sometimes someone can live a career of doing academic fraud at a constant rate, but in order for the rate of fraud to become actually substantially big, the mechanism that produced the fraud needs to gain resources as a result of the fraud that then get used to produce more of the fraud.
If Tessier-Lavigne had not been able to recruit more talent as a result of the prestige, their academic output would have been limited to the outputs of a single scholar, which would be much less substantial. Similarly, if Tessier-Lavigne hadn’t been able to use the prestige to suppress investigations into their research, the likelihood they could have kept up such a long career of fraud would have also been much lower.
The owners of that academic prestige were the existing members of the academic community. A lot of it was transferred from Stanford to Tessier-Lavigne, and much of it was transferred from the journals the fraudulent papers were published in, and much of it was transferred from the people who endorsed Tessier-Lavigne across their career. Social capital and prestige don’t fully behave quite like normal assets, for example, there is no clear ledger of who owns what at any given point in time, and it’s not as owner-independent as most things we think of as “assets” are (i.e. more of academic prestige comes is non-transferrable, or transferrable at greater cost), but I think it still works well-enough as the unit of analysis here.
I am not overwhelmingly confident my model of what happened here is right, but it’s my current best understanding of the situation, and it fits the model I am trying to explain pretty well.
Your overall model of the Tessier-Lavigne situation seems plausible. But it seems like a stretch to use the narrow “creditworthiness” framework of investors and assets. The “owners” of academic prestige (Stanford, journals, endorsers) aren’t really in the same position as owners of financial assets. They didn’t “transfer” prestige to Tessier-Lavigne in the way depositors transferred money to FTX. There’s no clear ROI calculation because there’s no actual stewardship relationship—nobody gave Tessier-Lavigne their prestige to manage with expectation of returns.
If anything, academia seems more in the position of a central bank managing a fiat currency—trying to maintain an aggregate level of activity, as well as the perceived value of credit within the system, by adjusting the aggregate level of credit extended—than in the position of the owner of a rivalrous asset like money investing it in a specific venture. Obviously individuals within academia face different problems and incentives, as do individuals within a fiat economy, but there doesn’t seem to be a clear analogue in academia to the financial investor.
I think we disagree somewhat to the degree to which social capital does actually follow rules largely analogous to normal capital. For example, I do think that endorsing someone can largely be seen as analogous to investing some of your social credit into them. If you endorse many people, your endorsement is worth less, so there is some kind of conserved quantity, and if the people you endorse go on and become more widely respected, your investment pays off, so there is something like a market price that goes up.[1]
I do agree there are really crucial disanalogies! I think the specific disanalogies don’t happen to break the model I propose in this post, but I am not enormously confident.
I would love to have better language that describes the actual dynamics of social capital/reputation/status, with the cleanliness and precision of the language that we have for financial currency. But of course, that’s a big ask, and IMO worthy of being one of the great big projects of humanity akin to the whole study of economics in its own right. In the meantime, I think there is a lot of mileage that can be gotten by applying existing models of financial terms to social capital, even if they don’t perfectly fit, and even if I have to handwave a bit to make it work out.
This doesn’t mean critiques that point out the disanalogies aren’t important! Indeed, I find myself wanting to write a follow-up post that’s just something like “ways social capital does not behave like financial capital” that improves my ability to better notice when it’s inappropriate to apply a financial capital lense to social dynamics.
I agree that academia at large has central bank dynamics, but I think the specific institutions and individuals that were duped by Tessier-Lavigne and extend their credit to him were not in very much of a central bank position. I think Stanford just lost a bunch of prestige, as did a lot of the people who worked with Tessier-Lavigne and endorsed him, and Stanford does not consider itself responsible for the reputation of all of academia, or the flow of credit within all of academia.
While there are some dynamics where academia has a more centrally planned status-economy, I think most social capital gets allocated by the choices of specific individuals who want to get ahead in the social status game of academia.
One of the things that I think doesn’t have a great analogue is the process of selling/”exiting your market position”. Like, in a market you have a clear point of selling your asset, and in a social capital market you can start removing your endorsement from someone, or start calling them overrated, but the connection does not feel as clean.
We can imagine prestige very imperfectly as an asset with a quantifiable value, but while this is fairly (but not entirely) accurate for tournament structures like organized sports, in academia it’s more like being a central location in a canonical reference map; not the sort of thing that’s easy to use in ROI calculations.
If we can operationalize it well I’d likely bet against the claim that Stanford lost a lot of prestige. The centrality of the biggest institutions is hard to dislodge, as they’re sufficiently mutually entangled that problems like this seem to do more to demoralize academia generally, than to specifically discredit any one institution. Nor do I think academia’s losing credit in any straightforward sense, as it’s widely considered too big to fail even by many dissenters, who e.g. are extremely disappointed with standards in scientific academia but still automatically equate academia with science in general.
What happens as a result of the kinds of failures you describe is not at all like a decline in price, a little bit like a decline in the aggregate purchasing power of money, somewhat more like increased vulnerability to speculative attack, and most similar to a decrease in transaction volume as people see fewer and fewer opportunities for profitable transactions within the system. E.g. publishing papers seems less appealing as a way to inform others, reading papers seems less effective as a way to be informed, giving and receiving grants seems less effective as a way to organize efforts to figure things out.
Huh, I do think our world models must differ here. My current sense is societal trust and reliance on academia is dropping pretty sharply, partially though not centrally as a result of things like this, and I similarly expect the market value of things like PhDs to drop relatively intensely in the coming decade (barring major AI disruption making that question moot). I would be happy to bet on this, if you disagree.
I found this set of potential analogies helpful! I do think I still disagree about the relative appropriateness for each one of these analogies to the situation. Not sure how much value I would provide by going through them all in this comment thread, though I might take the opportunity and do it in a top-level post.
I don’t think the central-case valuable PhDs can be bought or sold so I’m not sure what you mean by market value here. If you can clarify, I’ll have a better idea whether it’s something I’d bet against you on.
I would bet a fair amount at even odds that Stanford academics won’t decline >1 sigma YOY in collective publication impact score like h-index, Stanford funding won’t decrease >1 sigma vs Ivy League + MIT + Chicago, Stanford new-PhD aggregate income won’t decline >1 sigma vs overall aggregate PhD income, and overall aggregate US PhD income won’t decline >1 sigma. I think 1 sigma is a reasonable threshold for signal vs noise.
I think that if these kinds of crises caused academia to be devalued, then when the Protestant Reformation and Enlightenment revealed the rot in late-medieval scholastic “science,” clerical institutions in the Roman Catholic model like Oxford and Cambridge would have become irrelevant or even collapsed, rather than continuing to be canonical intellectual centers in the new regime.
TBTF institutions usually don’t collapse outside strong outside conquest or civilizational collapse, or Maoist Cultural Revolution levels of violence directed at such change, since they specialize in creating loyalty to the institution. So academia losing value would look more like the Mandarin exam losing value by the civilization it was embedded in collapsing, than like Dell Computer losing value via its share price declining.
I was thinking of the salary premium that having a PhD provides (i.e. how much more people with PhDs make compared to people without PhDs). That premium of course measures a mixture of real signaling value and mere correlations in aptitude, but I feel like it would serve as a good enough proxy here, at least directionally.
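As a rough sketch of the proxy I have in mind (the figures below are hypothetical placeholders; real numbers would come from labor-market statistics):

```python
# Directional proxy: how much more PhD holders earn than non-holders.

def phd_salary_premium(median_phd_income, median_non_phd_income):
    """PhD salary premium as a fraction of non-PhD income."""
    return median_phd_income / median_non_phd_income - 1

# Hypothetical medians of $95k vs $70k give a ~36% premium; the drop I
# expect would show up as this number shrinking over the coming decade.
print(f"{phd_salary_premium(95_000, 70_000):.0%}")  # 36%
```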
What’s the sigma here? Like, what population are we measuring the variance over? Top 20 universities? All universities? I certainly agree that Stanford won’t lose one sigma of status/credibility/etc. as measured across all universities; that would require dropping Stanford from the list of top universities entirely. Losing 1 sigma of standing among the top 20 universities, i.e. Stanford moving from something like “top 3” to “top 8”, seems plausible to me, though my guess is that even that is a bit too intense.
To be clear, my offered bet was more about your claim that academia at large is “too big to fail”. I do think Stanford will experience costs from this, but at that scale I think noise will drown out almost any signal.
Hmm, I don’t currently believe this, but it’s plausible enough that I would want to engage with it in more detail. Do you have arguments for this? I currently expect a more gradual devaluing of the importance of academic status in society, together with more competition over the relevant signifiers of status creating more noise, resulting in a relationship to academia somewhat closer (though definitely not all the way there) to the one pre-WW2 society had, when academia to my understanding played a much less central role in government and societal decision-making.
I would expect PhD value to be driven mostly by underlying demographic factors; PhDs are already structurally on an inflationary trajectory, and I expect that to matter more than whether they’re understood to be fake or real. No one thinks Bitcoins contain powerful knowledge, but they still have exchange value.
If there’s a demographic model of the PhD salary premium with a good track record (not just backtested; it has to have been a well-known model before its going-forward empirical validation), I might bet strongly against deviation from that. If not, it’s too noisy.
Variance (and thus sigma) for funding could be calculated on the basis of historical YOY % variation in funding for all US universities, weighted either by the number of people enrolled or by the aggregate revenue of the institution. We can do something similar for h-index. Obviously there are many details to operationalize, but the level of confusion you’re reporting seems surprising to me. Maybe you can try to tell me how you would operationalize your “dropping pretty sharply” / “drop relatively intensely” claims.
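To sketch the funding version of this (the dataset below is made up, and weighting by revenue would be analogous):

```python
# Weighted sigma of historical YOY % funding changes across universities,
# weighted by enrollment. All data below are hypothetical.
import math

def weighted_yoy_sigma(funding_by_school, weights):
    """Weighted mean and std dev of YOY % funding changes.

    funding_by_school: {name: [funding in year 1, year 2, ...]}
    weights: {name: weight}, e.g. enrollment counts.
    """
    changes, w = [], []
    for school, series in funding_by_school.items():
        for a, b in zip(series, series[1:]):
            changes.append((b - a) / a)
            w.append(weights[school])
    total = sum(w)
    mean = sum(wi * c for wi, c in zip(w, changes)) / total
    var = sum(wi * (c - mean) ** 2 for wi, c in zip(w, changes)) / total
    return mean, math.sqrt(var)

# Hypothetical three-school dataset (funding in $M; weights = enrollment):
funding = {"A": [500, 520, 560], "B": [300, 290, 310], "C": [800, 850, 880]}
enrollment = {"A": 15_000, "B": 7_000, "C": 20_000}
mean, sigma = weighted_yoy_sigma(funding, enrollment)
print(f"mean YOY change {mean:.1%}, sigma {sigma:.1%}")
```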
Less than a sigma can’t really serve as a clear quantitative signal unless most of the observed variance is very well explained (in which case the effect should amount to more than a sigma of the remaining variance). Events as big as Stanford moving from top 3 to top 8 have happened multiple times in the last few decades without any major crisis of confidence.
I agree the disagreement about academia at large is important enough to focus on, thanks for clarifying that that’s where you see the main disagreement.
One argument for the TBTF paragraph was in the immediately prior paragraph. The posts I linked to at the end of the first comment in this thread are also in large part arguments in support of this thesis. Pre-WWII the US had a much weaker state. Hard to roll that back without constituting a regime collapse.
At this point I feel that I’m repeating myself enough that I don’t see how to continue this conversation productively; I don’t expect saying the same things again will lead to engagement, and I don’t expect that complaining about the problem procedurally will get a constructive response either. If you propose a well-operationalized bet and an adjudicator and escrow arrangement I will accept or reject the proposal.
I wasn’t trying to say that you had provided no argument for it, sorry! I was just curious whether you had written about this previously with a handy link. It feels like a theme in a bunch of your writing, but you seemed in a better position to remember any specific essay or section.
I’ll think about it over the next day or two and see whether I can find something. I am currently skeptical we can find something, given that I don’t expect shifts at the scale of “Stanford stops being a top university at all”. But I’ll try for a bit.
I agree that the kinds of pathological feedback loops you describe exist, are bad, and are important. I don’t think the emphasis on financial returns is helpful, though; one of your main examples is nonfinancial and hard to quantify, and the thing that makes these processes bad is what’s going on outside the financial substrate: recruiting people into complicity.
You seem to be treating the question of whether the money is being “burned” to raise more money, or put to productive use (thus justifying further investment), as the easy part, but that’s the whole problem! Without an understanding of how the conversion process works, we don’t understand anything about this; we just have a black box producing nominal ROI >1, which could be either very good or very bad.
I don’t think allowing financial fraud is something current institutions mostly want? The difficulty is more in figuring out how to stop it without also stopping legitimate activity (a lot of successful entrepreneurship will look quite a lot like this, I think). If you are calling for normal speculative investment to be banned, that’s very likely not worth the loss of innovation. (It may make sense to be stricter about what level of falsehood leads to fraud prosecution, but I would keep it to banning false claims.)
Sorry, I think I am failing to parse this comment. I agree that financial fraud is a thing people don’t want. This post is telling them about one dynamic that tends to cause a bunch of it. I agree that of course all the difficulty of stopping fraud lies in the difficulty of distinguishing fraud from non-fraud. This post tries to help you distinguish fraud from non-fraud, and e.g. the FAQ section addresses some specific ways in which the dynamic here can be distinguished from entrepreneurship and marketing.
You might disagree that this is possible, or have some other logical issue with the post, but I feel like you are largely saying things that are true and already said in the post, and then saying “if you are calling for normal speculative investment to be banned”, which, like, I am of course not doing and the post is not implying; I have a bunch of paragraphs in there clarifying that I am not calling for speculative investment to be banned.
In my view:
- Facilitation of stag-hunt-like cooperation is really useful, because cooperating to do stuff beyond the capabilities of individuals is valuable but hard.
- The dynamic you discuss in the post applies to stag-hunt facilitation, because its success depends on the willingness of others to provide more resources (up to some point where it can generate more).
- The difference between, e.g., Theranos and standard entrepreneurship does not lie in the dynamic you discuss in the post. It lies in how egregiously Elizabeth Holmes was lying relative to the standard level of misleadingness. (And of course, more honesty would be better...)
- It would of course be very valuable to determine whether a stag hunt will pay off or fail! But the difference between the two does not lie in the dynamic you discuss in the post (which applies to both ultimately successful and unsuccessful stag hunts).
Cool, that’s not a crazy view. I might engage with it more, but I feel like I understand where you are coming from now.