SpaceX can afford to fail at this; the decision makers know it.
Well, to be fair, the post is making the point that perhaps they can afford it less than they thought. They completely ignored the effects their failure would have on the surrounding communities (which reeks of conceit on their part), and now they’re paying the price with the risk of a disproportionate crackdown. It’ll cost them more than they expected, for sure.
The analogy suggests that entities capable of one-shotting problem X (presumably, by putting in a lot of preparatory effort, running analysis, and so on) will do so. I don’t think that’s true.
You’re right, but I think the analogy is also saying that if we were capable enough to one-shot AGI (which, according to EY, we need to be), then we would surely be capable enough to also very cheaply one-shot a Starship launch, because it’s a simpler problem. Failure may be a good teacher, but it’s not a free one. If you’re competent enough to one-shot things with only a tiny bit of additional effort, you do it. Having this failure rate instead shows that you’re already straining at the very limit of what’s possible, and that limit is apparently… launching big rockets. Which, while awesome in a general sense, is really, really child’s play compared to getting superhuman AGI right, and on that estimate I do agree with Yud.
I would add that a huge part of solving alignment requires being keenly aware of, and caring about, human values in general. In that sense, the sort of mindset that leads to not foreseeing (or not giving a damn about) how pissed off people would be by clouds of launchpad dust in their towns really isn’t the culture you want to bring into AGI creation.