Sam Bankman-Fried did what he did primarily for the sake of “Effective Altruism,” as he understood it. Even though his actions were negative in expectation from a purely utilitarian perspective, he justified the fraud to himself because it was “for the greater good.” As such, poor messaging on our part[2] may be partially at fault for his downfall.
Without knowing his calculation, it’s hard to know whether his actions were negative or positive in expectation given his values.
If you believe that each future person is as valuable as each present person and that there will be 10^100 people in the future lightcone, the number of people who were hurt by FTX blowing up is a rounding error.
In his 80,000 Hours interview, Sam Bankman-Fried talks about how valuable he thinks a high-risk, high-upside approach is. Alameda investing billions of dollars of FTX customers’ money was a high-upside bet.
Being certain at this point that his actions were negative in expectation looks to me like highly motivated reasoning by people who don’t like to look at the ethics underlying effective altruism. They are neither willing to say that maybe Sam Bankman-Fried did things right nor willing to criticize the underlying ethical assumptions.
His 80,000 Hours interview suggests that he thought the chance of FTX blowing up was somewhere between 1% and 10%. There he gives 50% odds of making more than $50 billion that could be donated to EA causes.
If someone says that his actions were negative in expectation, do they mean that Sam Bankman-Fried lied about his expectations? Do they mean that a 10% chance of this happening should have been enough to tilt the expectation negative, even under the ethical assumptions of longtermism, which place most of the utility produced in the far future? Or do they mean something else?
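Put as a back-of-the-envelope calculation, those numbers are enough to sketch the naive direct-dollar expectation. This is only an illustration: the 50% / $50 billion and 1–10% figures are the interview numbers as paraphrased above, and the harm figure is a placeholder assumption, not an estimate of actual FTX losses.

```python
def naive_ev(p_win: float, donation: float, p_blowup: float, harm: float) -> float:
    """Naive direct-dollar expectation: upside of the donation minus
    expected harm from a blowup (ignores all indirect effects)."""
    return p_win * donation - p_blowup * harm

# Interview numbers as paraphrased above; harm is a placeholder assumption.
for p_blowup in (0.01, 0.05, 0.10):
    ev = naive_ev(p_win=0.50, donation=50e9, p_blowup=p_blowup, harm=10e9)
    print(f"P(blowup) = {p_blowup:.0%}: naive EV = ${ev / 1e9:.1f}B")
```

On these naive terms the expectation stays positive even at a 10% blowup chance, which is why the real disagreement has to be about the indirect harms and the underlying ethical assumptions rather than the direct arithmetic.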
I wish I had any sort of trustworthy stats about the success rate of things in the reference class of “steal from one pool of money to cover up losses in another pool of money, in the hope of making (and winning) big bets with the second pool to eventually make the first pool whole.” I would expect the success rate to be very low (I would be extremely surprised if it were as high as 10%, and somewhat surprised if it were as high as 1%), but it’s also the sort of thing where, if you do it successfully, probably nobody finds out.
Do Ponzi schemes ever become solvent again? What about insolvent businesses that are hiding their insolvency?
Zombie banks would be one type of organization in that reference class.
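One way to get intuition for why the success rate in that reference class should be low: even in a stylized best case, fair double-or-nothing bets played boldly, the chance of filling the hole is only about `own_funds / (hole + own_funds)`, and any house edge makes it worse. The toy Monte Carlo below is an invented model, not data about real frauds.

```python
import random

def cover_the_hole(hole: float, own_funds: float, p_win: float,
                   trials: int = 50_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the chance of filling a hidden `hole`
    starting from `own_funds`, using bold play (bet as much as needed,
    up to the whole bankroll) on repeated double-or-nothing bets."""
    rng = random.Random(seed)
    target = hole + own_funds        # bankroll needed to be whole again
    wins = 0
    for _ in range(trials):
        bankroll = own_funds
        while 0 < bankroll < target:
            stake = min(bankroll, target - bankroll)
            bankroll += stake if rng.random() < p_win else -stake
        wins += bankroll >= target
    return wins / trials

# With perfectly fair bets, covering a 10x hole succeeds about
# own_funds / target of the time (~0.09 here); with an edge, less.
print(cover_the_hole(hole=10.0, own_funds=1.0, p_win=0.50))
print(cover_the_hole(hole=10.0, own_funds=1.0, p_win=0.45))
```

The fair-bet case follows from the optional stopping theorem: the bankroll is a martingale, so the success probability equals the starting fraction of the target, regardless of betting strategy.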
If you believe that each future person is as valuable as each present person and that there will be 10^100 people in the future lightcone, the number of people who were hurt by FTX blowing up is a rounding error.
But you have to count the effect of the indirect harms on the future lightcone too. There’s a longtermist argument that SBF’s (alleged and currently very likely) crimes plausibly did more harm than all the wars and pandemics in history if...
Governments are now 10% less likely to cooperate with EAs on AI safety
The next 2 EA mega-donors decide to pass on EA
(Had he not been caught:) The EA movement drifted towards fraud and corruption
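The arithmetic behind this point is worth making explicit. Under the 10^100 assumption, even a microscopic shift in extinction probability swamps any direct harm; every number below is invented purely for scale, not an estimate.

```python
# Illustrative only: under the "10^100 future people" assumption, even a
# tiny shift in extinction probability dwarfs any direct harm.
FUTURE_PEOPLE = 10**100
DIRECT_VICTIMS = 1_000_000            # rough order of magnitude of FTX creditors

# Suppose reduced government cooperation raises AI extinction risk by
# one part in a trillion (a made-up number for scale, not an estimate).
delta_p_doom = 1e-12
expected_future_lives_lost = delta_p_doom * FUTURE_PEOPLE

ratio = expected_future_lives_lost / DIRECT_VICTIMS
print(ratio)  # ≈ 1e82: the indirect term dominates by a vast margin
```

This is why, on longtermist premises, the sign of the overall expectation is driven almost entirely by these speculative indirect terms rather than by the direct creditor losses.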
You are, however, only counting one side here. SBF appearing successful was a motivating example for others to start projects that would have made them mega-donors.
Governments are now 10% less likely to cooperate with EAs on AI safety
I don’t think that’s likely to be the case.
The next 2 EA mega-donors decide to pass on EA
It’s unclear here what “pass on EA” means. Zvi wrote about the Survival and Flourishing Fund not being an EA fund.
How to model all the related factors is complicated. Saying that you can easily know whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.
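To make that concrete, here is a toy Monte Carlo in which every distribution is invented for illustration: once you allow wide uncertainty over the indirect effects, a substantial fraction of draws come out positive and a substantial fraction negative, so the sign of the expectation genuinely depends on the numbers you choose.

```python
import random

# Toy model: direct upside/harm plus a wildly uncertain indirect term.
# All distributions are invented for illustration, not estimates.
rng = random.Random(42)

def net_effect() -> float:
    donated = 50e9 if rng.random() < 0.5 else 0.0   # the stated 50% / $50B upside
    direct_harm = rng.uniform(5e9, 15e9)            # assumed creditor losses
    # reputational effect on future EA funding; could cut either way
    indirect = rng.uniform(-100e9, 20e9)
    return donated - direct_harm + indirect

draws = [net_effect() for _ in range(100_000)]
positive = sum(d > 0 for d in draws) / len(draws)
print(f"fraction of draws with positive net effect: {positive:.2f}")
```

With these made-up ranges, roughly a third of the draws come out positive, which is the point: a confident verdict either way requires defending specific numbers, not just gesturing at the expectation.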
In that comment I was only offering plausible counter-arguments to “the number of people who were hurt by FTX blowing up is a rounding error.”
How to model all the related factors is complicated. Saying that you can easily know whether the effects are negative or positive in expectation, without running any numbers, seems unjustified to me.
I think we basically agree here.
I’m in favour of more complicated models that include more indirect effects, not fewer.
Maybe the difference is: I think in the long run (over decades, including the actions of many EAs as influential as SBF) an EA movement that has strong norms against lying, corruption and fraud actually ends up more likely to save the world, even if it gets less funding in the short term.
The fact that I can’t predict and quantify ahead of time all the possible harms that result from fraud doesn’t convince me that those concerns are unjustified.
We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it.
Apart from anything else I don’t think money is necessarily the most important bottleneck.
We already have an EA movement where the leading organization has no problem editing elements out of a picture it publishes on its website because of possible PR risks. While you can argue that this isn’t literally lying, it comes very close, and it suggests the kind of environment that does not have the strong norms that would be desirable.
I don’t think FTX/Alameda doing this in secret strongly damaged general norms against lying, corruption, and fraud.
Their blowing up like this is actually a chance to move toward those norms: a chance to look at ethics in a different way and make it clearer that being honest and transparent is good.
Saying that “poor messaging on our part” resulted in actions that were “negative in expectation from a purely utilitarian perspective” is a way to avoid the actual conversation about ethical norms, the conversation that might produce change toward stronger norms for truth.