“When training ML models, it is not enough to ask if the model has learned its task well. Creators of ML models must ask what else their models have learned. Are they memorizing and leaking their training data? Are they discovering privacy-violating features that have nothing to do with their learning tasks? Are they hiding backdoor functionality? We need least-privilege ML models that learn only what they need for their task – and nothing more.” Vitaly Shmatikov (July 26, 2018)
