Why not ask him for his reasoning, then evaluate it?
If a person thinks there’s a 10% x-risk over the next 100 years if we don’t develop superhuman AGI, and only a 1% x-risk if we do, then he’d suggest that anybody in favour of pausing AI progress was taking “unacceptable risks for the whole of humanity”.
The reasoning was given in the prior comment: we want fast progress in order to reach immortality sooner.