David Ha (Twitter, blog) is one of the more interesting deep learning researchers I follow. He works loosely on model-based reinforcement learning and evolutionary algorithms, but in practice seems to explore whatever interests him. His most recent paper, Weight Agnostic Neural Networks, looks at what happens when you do architecture search over neural nets initialized with random weights, to better understand how much work structure is doing in neural nets.
I believe David satisfies all of your desiderata.
Often disagrees with the consensus on which AI research directions are promising.
Consistently produces original deep learning research that makes me go “wow, I never would have thought of that.”
Has caused me to update on which aspects of neural nets are important for performance.
Is definitely effective as a researcher (see above).
Writes papers that are much clearer than average, and often uses visual aids and blog posts to explain his own and others’ work (this covers the last two desiderata together).