[Question] Machine Learning vs Differential Privacy

edit: see below for clarifications by domain expert rpglover64 and a good selection of references from the gears to ascenscion. My one-sentence (admittedly long) takeaway: it's clear that training does not automatically lead to DP; it's unclear whether DP helps training always or seldom; it's likely that easy algorithms are not yet available; and it's unlikely that finding one is low-hanging fruit.

From Wikipedia, "an algorithm is differentially private if an observer seeing its output cannot tell if a particular individual's information was used in the computation". In other words, if some training process asymptotically converges toward generalisable knowledge only, then it should tend to become differentially private.
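To make the quoted definition concrete, here is a minimal sketch of the classic Laplace mechanism for ε-DP (this is standard textbook material, not something from the discussion below; the function and variable names are mine). The idea: add noise scaled to the query's sensitivity, so the output distribution barely changes whether or not any one individual's record is in the data.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon.

    For a query whose answer changes by at most `sensitivity` when one
    individual's record is added or removed, this release is epsilon-DP:
    an observer seeing the noisy output cannot confidently tell whether
    that individual's information was used.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverting its CDF from a uniform draw.
    u = rng.random() - 0.5  # uniform in (-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: a counting query has sensitivity 1 (one person changes the
# count by at most 1), so epsilon = 1 means noise with scale 1.
ages = [23, 35, 41, 29, 52]
count_over_30 = sum(1 for a in ages if a > 30)
noisy_answer = laplace_mechanism(count_over_30, sensitivity=1, epsilon=1.0)
```

Note the contrast with the conjecture in the post: here privacy comes from explicitly injected noise, whereas the question is whether generalisation alone could play the same role.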

…or so it seems to me, but I actually have no idea whether that's common knowledge among ML- or crypto-educated folks, or whether it's purely a personal guess with no reason to believe it. What do you see as the best argument for or against this idea? Any guess on how to prove or disprove it?

Extra Good Samaritan points: my English is poor, so any comment rewriting this post in good English, even for minor details, is a great help. Thank you.

This is version 0.1
