That we have to get a bunch of key stuff right on the first try is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is 'key' and will kill us if we get it wrong. (One remarks that most people are so absolutely and flatly unprepared by their 'scientific' educations to tackle pre-paradigmatic puzzles without scholarly authoritative supervision, that they do not even realize how much harder that is, or how incredibly lethal it is to demand getting it right on the first critical try.)
Is anyone making a concerted effort to derive generalisable principles for getting things right on the first try and/or working in the pre-paradigmatic mode? It seems like if we knew how to do that in general, it would be a great boost to AI safety research.
I have observed that this upvote does not affect the poster's karma. This is a good design, but it is not clear from the text of this post.