It is also worth putting this in the context of people who said "no, obviously, humans would not let it out of the box." Their confident arguments persuaded smart people that this was not a problem.
You also have the camp that says "no, the problem will not be people telling the AI to do bad stuff, but this hard theoretical problem we have to spend years researching in order to save humanity," versus the camp that says "we worry that people will use it for bad things." In hindsight, the second problem is the one that occurred first, while alignment research either comes too late or becomes relevant only once many other problems have already happened.
However, in the long run, alignment research might be like building lighthouses in advance of ship traffic on the ocean. If you have never seen the ocean, a lighthouse factory seems mysterious: it sits on land and has no apparent purpose that is easy to relate to. Yet such infrastructure might be the engine of a civilization that reaches the next Kardashev scale.
The use of Chu spaces is very interesting, and this post also serves as a great introduction to them.
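For readers who have not met Chu spaces before, here is a minimal sketch in Python of the standard definition: a set of points, a set of states, and an evaluation map into a set of truth values K (here K = {0, 1}). The names and the extensionality check are illustrative assumptions on my part, not taken from the post or from the Avalog file below.

```python
# A minimal sketch of a Chu space over K = {0, 1}.
# A Cartesian frame can be read the same way: points as agent
# options, states as environment options, r giving the outcome.

points = ["a0", "a1"]
states = ["x0", "x1", "x2"]

# Evaluation map r : points x states -> K, stored as a matrix.
r = {
    ("a0", "x0"): 0, ("a0", "x1"): 1, ("a0", "x2"): 1,
    ("a1", "x0"): 1, ("a1", "x1"): 0, ("a1", "x2"): 1,
}

def is_extensional(points, states, r):
    """Check that distinct points have distinct rows and distinct
    states have distinct columns (a common side condition)."""
    rows = {p: tuple(r[(p, x)] for x in states) for p in points}
    cols = {x: tuple(r[(p, x)] for p in points) for x in states}
    return (len(set(rows.values())) == len(points)
            and len(set(cols.values())) == len(states))

print(is_extensional(points, states, r))  # True for this example
```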
I was able to formalize the example in Avalog, a research automated theorem prover: https://github.com/advancedresearch/avalog/blob/master/source/chu_space.txt
It is still very basic, but it shows potential. Perhaps Avalog could be used to check some proofs about Cartesian frames.