I’m concerned about the probability that some technical people will get together and solve some incredibly deep research problems before some perhaps-slightly-less-technical people plough ahead and get practical results without the benefit of that research. I’m skeptical that we’ll see FAI before UFAI for the same reason I’m skeptical that we’ll see a Navier-Stokes existence proof before a macroscale DNS solution; I’m skeptical that we’ll prove P!=NP or even find a provably secure encryption scheme before making the world’s economy dependent on unproven schemes, etc.
Even some of the important subgoals of FAI, being worked on with far more resources than MIRI has yet, are barely showing on the radar. IIRC someone only recently produced a provably correct C compiler (and in the process exposed a bunch of bugs in the industry-standard compilers) - wouldn’t we feel foolish if provably Friendly human-readable code turned UnFriendly simply because a bug was introduced during compilation? Or if a cosmic ray or a slightly-out-of-tolerance manufacturing defect affected one of the processors? Fault-tolerant MPI is still leading-edge research, because although we’ve never needed it before, at exascale and above the predicted mean time between hardware failures on some node drops to hours.
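To make the hardware-fault point concrete, the classic mitigation is to replicate the computation and vote on the results. Here is a minimal sketch of triple modular redundancy in Python, my own illustration rather than anything actual fault-tolerant MPI work uses:

```python
import random

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote: any single corrupted replica is outvoted."""
    return (a & b) | (a & c) | (b & c)

def compute(x: int) -> int:
    # Stand-in for some deterministic computation on a 32-bit word.
    return (x * x + 1) & 0xFFFFFFFF

def compute_with_upset(x: int, flip_bit: int) -> int:
    # The same computation, but with one result bit flipped:
    # a simulated cosmic-ray single-event upset.
    return compute(x) ^ (1 << flip_bit)

x = 12345
r1 = compute(x)
r2 = compute_with_upset(x, random.randrange(32))  # one replica goes bad
r3 = compute(x)
assert majority(r1, r2, r3) == compute(x)  # the vote masks the fault
```

Voting masks any single-replica fault but triples the cost, which is part of why cheap fault tolerance at exascale is still a research problem; this is only meant to show the shape of the issue.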
One of the reasons UFAI could be such an instant danger is the current ubiquitous nature of exploitable bugs on networked computers… yet “how do we write even simple high-performance software without exploitable bugs” seems to be both a much more popular research problem than “how do we write a FAI” and a prerequisite to it, and it’s not yet solved.
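To illustrate why this is unsolved: even a toy parser that trusts a single length byte contains the bug class behind many real remote exploits, and random fuzzing trips it almost immediately. A hypothetical sketch (in Python the over-read raises a clean exception; in C it would be an exploitable out-of-bounds read):

```python
import random

def parse_length_prefixed(buf: bytes) -> bytes:
    """Toy parser with a deliberate bug: it trusts the attacker-controlled
    length byte. In Python the over-read raises IndexError; in C the same
    code would silently read past the end of the buffer."""
    n = buf[0]
    return bytes(buf[1 + i] for i in range(n))  # no bounds check

def fuzz(trials: int = 200, seed: int = 0) -> int:
    """Throw short random inputs at the parser; count the crashes."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        buf = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 6)))
        try:
            parse_length_prefixed(buf)
        except IndexError:
            crashes += 1
    return crashes

assert fuzz() > 0  # the fuzzer finds the bug within a few hundred inputs
```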
I’m skeptical that we’ll prove P!=NP or even find a provably secure encryption scheme before making the world’s economy dependent on unproven schemes, etc.
Nitpick, but finding a provably secure encryption scheme is harder than proving P!=NP, since if P=NP then no secure encryption scheme can exist.
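The connection, for concreteness: recovering a short key consistent with a known plaintext/ciphertext pair is a search problem whose candidate solutions are cheap to check, which is exactly the P-vs-NP shape. A toy sketch using repeating-key XOR (purely illustrative, not a claim about any real scheme):

```python
from itertools import product

def enc(msg: bytes, key: bytes) -> bytes:
    # Toy cipher: repeating-key XOR with a short key (NOT a one-time pad).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(msg))

def recover_key(msg: bytes, ct: bytes, keylen: int) -> bytes:
    """Known-plaintext key recovery by exhaustive search. Checking one
    candidate is cheap; the search space is 256**keylen. If P=NP, search
    problems of this shape collapse to polynomial time, and any scheme
    whose key is shorter than the message falls with them."""
    for cand in product(range(256), repeat=keylen):
        if enc(msg, bytes(cand)) == ct:
            return bytes(cand)
    raise ValueError("no key found")

msg = b"hello world"
key = b"\x2a\x07"
assert recover_key(msg, enc(msg, key), 2) == key
```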
if P=NP then no [provably] secure encryption scheme can exist.
What? Why? Just because RSA would be broken? Shor’s algorithm would also break RSA, even in a proven P!=NP world. There may be other substitutes for RSA, using different complexity classes. There are other approaches altogether. Not to mention one-time pads.
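For reference, textbook RSA fits in a few lines; the toy parameters below are an illustration only, hopelessly insecure at this size. The point is that the private key falls straight out of the factorization of n, so anything that factors quickly breaks it: Shor’s algorithm on a quantum computer, or a fast classical factoring algorithm that nobody has ruled out even assuming P!=NP.

```python
# Textbook RSA with toy parameters; purely illustrative, insecure at this size.
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # 3120; computable only if you can factor n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)
m = 65                     # a message, encoded as a number < n
c = pow(m, e, n)           # encrypt: c = m^e mod n
assert pow(c, d, n) == m   # decrypt: m = c^d mod n
# An attacker who factors n = 61 * 53 computes phi and d exactly as above.
```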
As I understand it, if P=NP in a practical sense, then almost all cryptography is destroyed as P=NP destroys one-way functions & secure hashes in general. So RSA goes down, many quantum-proof systems go down, and so on and so forth, and you’re left with basically just http://en.wikipedia.org/wiki/Information-theoretic_security
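The information-theoretically secure survivor is essentially the one-time pad. A minimal sketch, assuming the key is truly random, as long as the message, and never reused:

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad. Encryption and decryption are the same operation,
    since XOR is its own inverse. Security is unconditional: it rests on
    no hardness assumption (not even P!=NP), provided the key is uniformly
    random, used exactly once, and as long as the message."""
    if len(key) != len(data):
        raise ValueError("key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # fresh random pad
ct = otp(msg, key)
assert otp(ct, key) == msg            # round-trips
```

The catch is key distribution: you must pre-share as much random key material as you ever plan to send, which is exactly what makes life with just one-time pads unhappy.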
http://www.karlin.mff.cuni.cz/~krajicek/ri5svetu.pdf discusses some of this.
And life was so happy with just one-time pads?
Really, if P=NP, then encrypting your messages would be quite low on the priority list… however, we’re not debating the practical impact here, but the claim that “finding a provably secure encryption scheme is harder than proving P!=NP”, which was raised as a nitpick, and is clearly not the case.
Happiness or unhappiness of life with one-time pads notwithstanding.