there’s an extremely strong selection effect at labs for an extreme degree of positivity and optimism regardless of whether it is warranted.
Absolutely agree with this. That's a large part of why I find it so noteworthy that, despite that bias, there are many very well informed people at the labs, including Boaz, who are deeply concerned that things could go poorly, and many of them don't think it's implausible that AI could destroy humanity.