

Less plausible than Adversarial Reprogramming

3 stars

This paper covers a highly effective (85%+) hijack attack in which an adversary taints the training data so that the model can later be cajoled into performing other types of tasks. While this work is a step closer to a more general type of attack, its threat model is less plausible than the inference-time attacks popularized in the Adversarial Reprogramming literature.