[Question] What are some good examples of incorrigibility?

The idea of corrigibility is roughly that an AI should be aware that it may have faults, and should therefore allow and facilitate its human operators' efforts to correct those faults. I'm especially interested in scenarios where the AI system controls an input channel that is meant to be used to control it, such as a shutdown button, a switch that alters its mode of operation, or another device used to control its motivation. Since I'd like examples that would be compelling within the machine learning discipline, I favour experimental accounts over hypotheticals.