I’m very glad that you’re raising this topic for discussion; I’ve recently been thinking about the same thing myself. If I could delay the advent of AGI/TAI by, say, 100 years, would I? There are at least two relevant considerations here: (1) the commonly held (but not, to me, obviously true) assumption that delaying AGI/TAI increases the probability of its being safely deployed, and (2) the costs of delaying AGI/TAI.
(2) motivates thinking really hard about whether (1) is true. General concern about AI safety also motivates thinking hard about whether (1) is true: if delaying AGI/TAI does not increase the probability of a safe deployment, then we should think about what would.