Congratulations on winning this one pedantic battle. Indeed that may have been a clickbait headline; I’m sure you can also find perfectly rational explanations for all TWELVE THOUSAND people jailed by the UK for wrongspeech PER YEAR. https://x.com/OlgaBazova/status/1968376382452379753
What I've learned about this website is that people here prefer pedantically quibbling over one specific detail instead of engaging in good faith with the ACTUAL ARGUMENT I was making, which you didn't do, and neither did anyone else, because I'm obviously in the right and it makes you all very uncomfortable, so you all ran away.
I don’t see why your aligned AI researcher is exempt from joining humans in the “we/us” below.
>”We can gather all sorts of information beforehand from less powerful systems that will not kill us if we screw up operating them; but once we are running more powerful systems, we can no longer update on sufficiently catastrophic errors. This is where practically all of the real lethality comes from, that we have to get things right on the first sufficiently-critical try. … That we have to get a bunch of key stuff right on the first try is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is ‘key’ and will kill us if we get it wrong.”
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities