No cover

Two-in-One: A Model Hijacking Attack Against Text Generation Models (2023, Arxiv)

Published May 12, 2023 by Arxiv.

View on Arxiv

3 stars (1 review)

Machine learning has progressed significantly in various applications ranging from face recognition to text generation. However, its success has been accompanied by different attacks. Recently a new attack has been proposed which raises both accountability and parasitic computing risks, namely the model hijacking attack. Nevertheless, this attack has only focused on image classification tasks. In this work, we broaden the scope of this attack to include text generation and classification models, hence showing its broader applicability. More concretely, we propose a new model hijacking attack, Ditto, that can hijack different text classification tasks into multiple generation ones, e.g., language translation, text summarization, and language modeling. We use a range of text benchmark datasets such as SST-2, TweetEval, AGnews, QNLI, and IMDB to evaluate the performance of our attacks. Our results show that by using Ditto, an adversary can successfully hijack text generation models without jeopardizing their utility.

1 edition

Less plausible than Adversarial Reprogramming

3 stars

This paper covers a highly-effective (85%+) hijack attack where training data is tainted by an adversary, and then the model can be cajoled into performing other types of tasks. While this work is a steep closer to a more general-type of attack, the model is less plausible than inference-time attacks popularized in the Adversarial Reprogramming literature.