Thank you to your dad for offering to answer questions.
Sometimes people make the argument that the U.S. needs to race toward AGI-ASI as rapidly as possible, because if China obtains it first, then the risks to the U.S. are unacceptably high. However, this argument can also be an appealing excuse for people in the U.S. who would wish to go full-speed toward AGI-ASI even if there were no competition from China.
I imagine similar arguments are also made in China, with the roles of the U.S. and China reversed.
Does your dad have thoughts about these kinds of arguments, considering that analogous arguments were made about the nuclear arms race? How does your dad think about the interaction of people who make these arguments genuinely vs. those who use these arguments as an excuse?
Sorry for the delay. I will ask him when I talk to him next.
Thanks & no apology needed : )
I asked my dad your questions, and he got hung up on stating very clearly that these situations are not analogous. The risks from AI are much higher.
Fair enough & I appreciate the follow-up.
Though I will say: it seems we need to find lessons somewhere in history, in part because we aren't smart enough as a species to reason purely from first principles. I'm certainly not smart enough for that, anyway.
When looking for lessons on AI, nuclear development may be the least bad historical analogy.
He meant there is no analogy in the race dynamics. The AI risks are much higher.