Full disclosure: I'm a professional cryptography research assistant. I'm not really interested in AI (yet), but there are obvious similarities when it comes to security.
I have to back Eliezer up on the "Lots of strawmanning" part. No professional cryptographer will ever tell you there's hope of achieving a "perfect level of safety" in anything, and cryptography, unlike AI, is a very well-formalized field. As an example, I'll offer a conversation with a student:
— How secure is this system? (such a question is usually shorthand for: "What's the probability this system won't be broken by methods X, Y, and Z?")

— The theorem says $1 - 2^{-N}$.

— And what's the probability that the proof of the theorem is correct?

— … probably not $1 - 2^{-N}$.
Now, before you go "yeah, right", I'll also say that I've already seen this happen once: a theorem in a major peer-reviewed journal turned out to be wrong (a counter-example was found) after one of the students tried to implement it as part of his thesis. So the probability was indeed not even close to $1 - 2^{-N}$ for any serious N. I'd like to point out that this doesn't even include problems with implementing the theory. It's really difficult to explain how hard this stuff is to people who have never tried to develop anything like it. That's too bad (and a danger), because the people who do get it are rarely in charge of the money. That's one reason for the CFAR/rationality movement… you need a tool to explain it to other people too, am I right?
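The point of the dialogue can be made concrete with a back-of-the-envelope calculation. The 1% and 50% figures below are my own illustrative assumptions, not numbers from anywhere; the sketch just shows that once you condition on the proof possibly being wrong, the tiny claimed break probability stops mattering:

```python
# Illustrative only: a theorem may claim the system resists attack with
# probability 1 - 2**-N, but that guarantee is conditional on the proof
# itself being correct.

N = 128
p_break_if_proof_ok = 2.0 ** -N   # claimed break probability from the theorem
p_proof_wrong = 0.01              # assumed chance the published proof is flawed
p_break_if_proof_wrong = 0.5      # assumed break chance given a flawed proof

# Law of total probability over "proof correct" vs. "proof flawed":
p_break = ((1 - p_proof_wrong) * p_break_if_proof_ok
           + p_proof_wrong * p_break_if_proof_wrong)

print(p_break)      # dominated entirely by the proof-risk term (~0.005)
print(1 - p_break)  # nowhere near 1 - 2**-128
```

With these (made-up) inputs, the overall security is about 0.995, not $1 - 2^{-128}$: the term you can actually estimate (how often proofs are wrong) swamps the term the theorem talks about.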
I quit my job for one that I’m much less comfortable with, but with more room for long-term improvement.
In your face, risk aversion!