Actually, I’m kind of more comfortable with MIRI math than with ML math, but the research group here is more interested in machine learning. If I recommended that they look into provability logic, they would get big eyes and say "Whoa!", but no more. If, however, I do ML research in the direction of AI safety, they would get interested. (And they are getting interested, but (1) they can’t switch their research too quickly, and (2) I don’t know enough Japanese and the students don’t know enough English to make any kind of lunchtime or hallway conversation about AI safety possible.)
Thanks for your varied suggestions!
It seems like Toyota has some interest in provably correct software: https://www.infoq.com/news/2015/05/provably-correct-software