Another (arguably similar) unintended consequence of underemphasizing the difficulty of AI alignment was that it led some to believe that if we don't rush to build an ASI, we'll be left defenseless against other X-risks. That would be a perfectly rational conclusion if alignment were easier.