Is there any coherent defense for using price volatility as a proxy for risk?
To me, this move just seems… stupid? Like not tracking what matters at all? I’ve USED this math in practice as a data scientist, when the product manager wanted to see financial statistics, but I didn’t BELIEVE it while I was using it. (My own hunch is that “actual risk” is always inherently subjective, and based on what “you” can predict and how precisely you can predict it, and when you know you can’t predict something very well you call that thing “risky”.)
If we treat this “attempt at a proxy for risk” as a really, really terrible proxy, such that its relation to actual risk is essentially random, then you should expect that doing statistics on “noise vs payout” would show that whatever the average payout is in general is also the average payout for each bucket whose members are defined by having a certain random quantity of this essentially noisy variable.
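The bucketing argument above is easy to check by simulation. This is a toy sketch with made-up numbers: every “asset” draws its payout from the same distribution, and its “risk score” is pure noise with no relation to the payout. Sorting into quintiles by the noise score then gives bucket averages that all sit near the overall average, as the argument predicts.

```python
import random
import statistics

random.seed(0)

# Toy setup (all numbers are assumptions for illustration): every asset's
# payout comes from the same distribution, and its "risk score" is pure
# noise, unrelated to the payout.
n = 100_000
payouts = [random.gauss(1.0, 0.5) for _ in range(n)]
noise_scores = [random.random() for _ in range(n)]

# Bucket assets into quintiles of the noise score.
buckets = [[] for _ in range(5)]
for payout, score in zip(payouts, noise_scores):
    buckets[min(int(score * 5), 4)].append(payout)

overall = statistics.mean(payouts)
bucket_means = [statistics.mean(b) for b in buckets]

# Since the "risk proxy" here really is noise, every bucket mean sits
# close to the overall mean.
for m in bucket_means:
    assert abs(m - overall) < 0.02
```

So “it is all basically flat” is exactly what a pure-noise proxy would produce; the interesting question is whether real volatility buckets look like this.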
If I understand correctly, this “it is all basically flat and similar” result is a numerical result in search of an explanation… I wonder if there is some clever reason that “this is basically just noise” doesn’t count as a valid answer?
I don’t think that gilch answered the question correctly. His two games A and B are both “additive” games (unless I’m misunderstanding him). The wagers are not a percent of bankroll but are instead a constant figure each time. His mention of the Kelly criterion is relevant to questions about the effect of leverage on returns, but is relevant neither to his example games nor to your question of why volatility is used as a “proxy” for risk.
I’d say that to a large extent you are right to be suspicious of this decision to use variance as a proxy for risk. The choice to use volatility as a risk proxy was definitely a mathematical convenience that works almost all of the time, except when it absolutely doesn’t. And when it fails, it can fail in ways that negate all the gains from the times it did work. The most commonly used model of a stock’s movements is Geometric Brownian Motion, which has only two parameters, µ and σ. Since σ is the sole determinant of the standard deviation of the next minute/day/month/year’s move, it is used as the “risk” parameter: it determines the distribution of how much you can expect to make or lose.
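To make the “σ is the sole determinant” point concrete, here is a minimal simulation of Geometric Brownian Motion with assumed parameters (µ and σ below are placeholders, not fitted values). The standard deviation of the daily log returns it produces is σ·√dt, so σ alone sets the scale of the moves:

```python
import math
import random
import statistics

random.seed(1)

# Geometric Brownian Motion step: S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt
# + sigma*sqrt(dt)*Z), with Z standard normal.
mu, sigma = 0.08, 0.20   # hypothetical annual drift and volatility
dt = 1 / 252             # one trading day
price = 100.0
log_returns = []
for _ in range(100_000):
    z = random.gauss(0.0, 1.0)
    step = (mu - sigma**2 / 2) * dt + sigma * math.sqrt(dt) * z
    log_returns.append(step)
    price *= math.exp(step)

# The sample std of daily log returns recovers sigma * sqrt(dt):
# sigma is the only parameter controlling the size of the moves.
daily_vol = statistics.stdev(log_returns)
```

Under this model, calling σ “the risk” is tautological: there is no other parameter left that could carry the risk.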
But to get to the heart of the matter (i.e. why people accept and use this model despite its failure to take into account “real” risk), I refer you to this stackexchange post.
Suppose I offer you two games:
A) You put up ten dollars. I flip a fair coin. Heads, I give it back and pay you one cent. Tails, I keep it all.
B) You put up $100,000. I flip a fair coin. Heads, I give it back and pay you $100. Tails, I keep it all.
You have the edge, right? Which bet is riskier? The only difference is scale.
What if we iterate? With game A, we trade some tens back and forth, but you accumulate one cent per head. It’s a great deal. With game B, I’ll probably have to put up some Benjamins, but eventually I’ll get a streak of enough tails to wipe you out. Then I keep your money because you can’t ante up.
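The “eventually a streak of tails wipes you out” claim for the iterated game B is easy to simulate. The dollar amounts come from the example; the starting bankroll of three antes is my assumption. With each tails costing $100,000 and each heads paying only $100, every simulated trial ends in ruin:

```python
import random

random.seed(2)

# Iterated game B from the example: $100,000 ante, $100 payout on heads,
# lose the ante on tails. Starting bankroll of three antes is an assumption.
def rounds_until_ruin(bankroll=300_000, ante=100_000, win=100, cap=100_000):
    """Play until we can no longer post the ante (or hit the round cap)."""
    for rounds in range(1, cap + 1):
        if bankroll < ante:
            return rounds        # wiped out: cannot ante up
        if random.random() < 0.5:
            bankroll += win      # heads: ante returned plus $100
        else:
            bankroll -= ante     # tails: the house keeps the ante
    return None  # survived the whole cap (practically never happens here)

results = [rounds_until_ruin() for _ in range(100)]
# Every trial ends in ruin: a streak of tails always arrives before the
# tiny heads payouts can rebuild the bankroll.
assert all(r is not None for r in results)
```

The same code with game A’s numbers takes far longer to ruin you relative to the same bankroll, which is the scale point being made.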
The theoretically optimal investing strategy is Kelly, which accounts for this effect. The amount to invest is a function of your payoff distribution and the current size of your bankroll. Your bankroll size is known, but the payoff distribution is more difficult to calibrate. We could start with the past distribution of returns from the asset. Most of the time this looks roughly like a normal distribution, but with much more kurtosis (fatter tails) and negative skew.
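For the simplest case, a binary bet, the Kelly fraction has a closed form: with win probability p and a win that pays b dollars of profit per dollar staked (losing the full stake otherwise), the optimal fraction of bankroll to wager is f* = p − (1 − p)/b. A minimal sketch with assumed numbers:

```python
# Kelly fraction for a binary bet: win probability p, profit multiple b
# per dollar staked on a win, full stake lost otherwise.
def kelly_fraction(p: float, b: float) -> float:
    return p - (1 - p) / b

# Example (assumed numbers): 60% chance to double your stake (b = 1).
f = kelly_fraction(0.60, 1.0)
assert abs(f - 0.20) < 1e-12

# The bet size scales with the current bankroll, as described above.
bankroll = 10_000.0
bet = f * bankroll  # wager 20% of bankroll
```

Note f* can be negative (don’t bet) and never exceeds p, which is why Kelly bettors survive streaks that ruin fixed-size bettors.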
The size of your risk isn’t the number of dollars you have invested. It’s how much you stand to lose and with what probability.
Volatility is much more predictable in practice than price. One can forecast it with much better accuracy than chance using e.g. a GARCH model.
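As a sketch of what such a forecast looks like, here is the GARCH(1,1) recursion with assumed (not fitted) parameters; in practice you would estimate ω, α, β from data rather than hard-code them:

```python
import random
import statistics

random.seed(3)

# GARCH(1,1) one-step variance forecast:
#   sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t
# The parameters below are assumptions for illustration, not fitted values.
omega, alpha, beta = 1e-6, 0.09, 0.90

def garch_forecast(returns, omega=omega, alpha=alpha, beta=beta):
    sigma2 = statistics.pvariance(returns[:20])  # crude initialization
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2  # forecast variance for the next period

# Synthetic daily returns stand in for real data here.
returns = [random.gauss(0, 0.01) for _ in range(500)]
next_var = garch_forecast(returns)
```

The key property is that recent large moves (big r²) push the forecast up, which is exactly the volatility clustering that makes volatility more forecastable than price.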
Given these parameters, you can adjust your bet size for the forecast variance from your volatility model.
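One common form of this adjustment is volatility targeting: scale the position so its forecast volatility matches a level you are willing to tolerate. A minimal sketch with assumed numbers:

```python
# Volatility targeting (all numbers are assumptions for illustration):
# scale the position so forecast portfolio volatility hits a chosen target.
target_vol = 0.10        # annualized volatility you are willing to accept
forecast_vol = 0.25      # annualized vol from your volatility model
bankroll = 100_000.0

# Cap the scaling at 1.0 so we never lever up past the full bankroll here.
position = bankroll * min(target_vol / forecast_vol, 1.0)
assert abs(position - 40_000.0) < 1e-6
```

When the model forecasts high volatility you hold less, so the dollar risk taken per period stays roughly constant.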
So volatility is most of what you need to know. There’s still some black swan risk unaccounted for. Outliers that are both extreme and rare might not have had time to show up in your past distribution data. But in practice, you can cut off the tail risk using insurance like put options, which cost more the higher the forecast volatility is. So volatility is still the main parameter here.
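The “insurance costs more when volatility is higher” point falls straight out of the Black–Scholes put formula, since option prices increase monotonically in σ. A self-contained sketch (all inputs are assumed example values):

```python
import math

# Black-Scholes European put price, to show that tail insurance costs
# more at higher forecast volatility. All inputs are assumed examples.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(spot, strike, rate, vol, t):
    d1 = (math.log(spot / strike) + (rate + vol**2 / 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return strike * math.exp(-rate * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

cheap = bs_put(100, 90, 0.02, 0.15, 0.5)   # low forecast volatility
dear = bs_put(100, 90, 0.02, 0.45, 0.5)    # high forecast volatility
assert dear > cheap  # same protection costs more when volatility is high
```

So even the price of hedging away the tail is itself driven by the volatility parameter, which is the sense in which volatility remains “the main parameter.”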
Given this, for a given edge size, it makes sense to set the bet size based on forecast volatility and to pick assets based on the ratio of expected edge to forecast volatility. So something like a Sharpe ratio.
I have so far neglected the benefits of diversification. The noise for uncorrelated bets will tend to cancel out, i.e. reduce volatility. You can afford to take more risk on a bet, i.e. allocate more dollars to it, if you have other uncorrelated bets that can pay off and make up for your losses when you get unlucky.
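The cancellation claim is just the 1/√N law for uncorrelated bets, which a quick simulation reproduces (bet count, trial count, and per-bet volatility below are arbitrary choices):

```python
import random
import statistics

random.seed(4)

# Equal-weight portfolio of n_bets uncorrelated bets: its volatility
# should fall like 1/sqrt(n_bets) relative to a single bet.
def portfolio_outcomes(n_bets, trials=20_000, vol=0.10):
    outcomes = []
    for _ in range(trials):
        outcomes.append(sum(random.gauss(0.0, vol) for _ in range(n_bets)) / n_bets)
    return outcomes

single = statistics.stdev(portfolio_outcomes(1))
diversified = statistics.stdev(portfolio_outcomes(25))
# 25 uncorrelated bets -> roughly 5x lower portfolio volatility.
assert diversified < single / 3
```

That lower portfolio volatility is what lets you allocate more dollars to each individual bet for the same total risk.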