Yes. It seems that to get pseudodeterministic AI, we will need to rebuild AI from the ground up, and I am not sure that it will all work. For example, pseudodeterminism is harder to attain with stochastic or mini-batch gradient descent, so one might need to use all the training data whenever one updates the weights. So far I have been able to get pseudodeterministic multi-layered models for solving classification problems, word embeddings for NLP, models that measure the security of block ciphers such as the Advanced Encryption Standard (the models evaluating AES are very easy to train), and other things. I have not been able to make pseudodeterministic versions of convolutional networks, transformers, GANs, etc. We can, however, use pseudodeterminism right now for narrow AI or for the first few layers of a deep neural network. There is also a funding and exposure issue, since not very many people are talking about pseudodeterminism. I have more posts planned about this, though.
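To illustrate the full-batch point with a minimal sketch (my own toy example, not one of the models described above): for a convex loss such as least-squares linear regression, full-batch gradient descent converges to the unique minimizer regardless of the random initialization, so the trained model is pseudodeterministic. Mini-batch ordering, by contrast, introduces an extra source of randomness that a convex full-batch setup avoids.

```python
import numpy as np

# Toy data: a linear regression problem with a unique least-squares solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=100)

def train(seed, steps=5000, lr=0.01):
    """Full-batch gradient descent from a seed-dependent random init."""
    w = np.random.default_rng(seed).normal(size=5)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient over ALL training data
        w -= lr * grad
    return w

# Two different random initializations converge to the same weights.
w_a, w_b = train(seed=1), train(seed=2)
print(np.allclose(w_a, w_b, atol=1e-6))
```

Of course, the interesting (and hard) cases are non-convex; this sketch only shows the baseline behavior that pseudodeterminism asks deeper models to retain.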
A trade of performance in exchange for interpretability is exactly what we want for AI safety.