My theory is that AI safety folk are taught that a rules framework is how to provide oversight over the AI... the idea that you can define constraints, logic gates, or formal objectives and keep the system within bounds, like classic control theory…
I don’t know anyone in AI safety who has missed the fact that NNs are not GOFAI.