The biggest surprise to me was when he said that he thought short timelines were safer than long timelines. The reason for that is not obvious to me. Maybe something to do with contingent geopolitics.
What do you expect him to say? “Yeah, longer timelines and consolidated AGI development efforts are great, I’m shorting your life expectancies as we speak”? The only way you can be a Sam Altman is by convincing yourself that nuclear proliferation makes the world safer.
I know your question was probably just rhetorical, but to answer it regardless—I was confused in part because it would have made sense to me if he had said it would be “better” if AGI timelines were short.
Lots of people want short AGI timelines because they think the alignment problem will be easy, or they otherwise aren’t concerned about it and want the perceived benefits of AGI for themselves, their family and friends, and humanity (e.g. eliminating disease, eliminating involuntary death, abundance). And he could have just said “better” without really changing the rest of his argument.
At least the word “better” would make sense to me, even if, as you imply, it might be wrong and plenty of others would disagree with it.
So I expect I am missing something in his internal model that made him use the word “safer” instead of “better”. I can only guess at possibilities—like thinking that if AGI timelines are too long, the CCP might overtake the USA/the West in AI capabilities, and care even less about AGI safety when it matters the most.
Good point.