Having read the post “Does Trump’s AI Action plan have what it takes to win?” by Peter Wildeford, I realize that I do not understand what the word “winning” means here. I searched the White House document for the word and found it almost exclusively in the introduction. What is that race? What does it mean to win it? What happens next?
The reference to the space race in the introduction does not help (“Just like we won the space race, it is imperative that the United States and its allies win this race.”). According to Wikipedia, the Soviets “achieved the first successful satellite launch, Sputnik 1, on October 4, 1957. It gained momentum when the USSR sent the first human, Yuri Gagarin, into space with the orbital flight of Vostok 1 on April 12, 1961. These were followed by a string of other firsts achieved by the Soviets over the next few years.” Then the US was the first country to land someone on the moon. So it won the moon race, but that did not mean the space race ended decisively. There were other space “firsts”, and being first was mostly symbolic. Maybe there are better comparisons? In the case of nuclear weapons, being first to build them was important, but making that an end point to other countries’ nuclear programmes would have required very unscrupulous behavior; therefore the “race” was conditional on the war against the Axis, or maybe even conditional on the war against the Nazis. The race was mainly ended by winning the war.
So what does it mean to win the AI race? Peter Wildeford writes: “I do expect some geopolitical ‘winner takes all’ or ‘winner takes most’ dynamics to achieving AGI, so in that sense the racing is very accurate. Whoever has a lead in developing AGI will have a significant say in shaping the post-AGI society, and it’s important for that to be shaped with freedom and American values, as opposed to authoritarianism.” What does it mean to “have a significant say in shaping the post-AGI society”? Is it like being the first country to have a nuclear bomb and then ending other countries’ efforts? Or is it like being the first country to have a nuclear bomb and then not doing that? Or is it like being the country that has Apple and Meta and Alphabet and Microsoft? What does this “significant say” mean, concretely?
PW writes that “1. The Plan shows refreshing optimism” because “Historically, scientific progress has brought much wealth and opportunity to all of humanity. If AI becomes capable of automating this scientific progress and innovating across many domains, it is genuinely plausible we could enter into a true Golden Age. If done right, this would create a world where everyone is fully free and empowered to self-determine and self-actuate, without any barriers to living the lives they want to live.” I do not see the plan’s recipe for that, though maybe I am just overlooking it. How does this work if “3. The Plan acknowledges AI’s transformative potential but not its unique challenges” and “The problem is that the Plan focuses solely on the familiar risks from AI and ignores far more pressing future AGI problems”? In the context of the whole post, the section under Heading 8, “8. Retraining might not be enough to handle AGI-driven disemployment”, reads as though PW sees a severe risk of social catastrophe, while at the same time thinking we should consider that risk somewhat more without letting it reduce our optimism. All in all, the post reads like “let’s make sure we can win this race by really speeding up a lot! And then maybe we should also think a bit about whether we are moving in the right direction.”
As a side note, with respect to the renewable-energy part, I don’t understand why pointing out that climate change is an important problem should be called a “crusade for climate change awareness”.