Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders
Abstract
Self-supervised representation learning techniques have been developing rapidly to make full use of unlabeled images. They encode images into rich features that are agnostic to downstream tasks. Behind their revolutionary representation power, the requirements for dedicated model designs and a massive amount of computation resources expose image encoders to the risks of potential model stealing attacks: a cheap way to mimic the well-trained encoder's performance while circumventing the demanding requirements. Yet conventional attacks only target supervised classifiers given their predicted labels and/or posteriors, which leaves the vulnerability of unsupervised encoders unexplored.
In this paper, we first instantiate conventional stealing attacks against encoders and demonstrate that encoders are even more vulnerable than downstream classifiers. To better leverage the rich representations of encoders, we further propose Cont-Steal, a contrastive-learning-based attack, and validate its improved stealing effectiveness across various experimental settings. As a takeaway, we call our community's attention to the intellectual property protection of representation learning techniques, and especially to defenses against encoder stealing attacks like ours.
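Illustrative sketch (not from the paper). The contrastive stealing idea summarized above can be sketched in a few lines of PyTorch: query the black-box target encoder for embeddings of one augmented view of each image, then train the surrogate encoder so that its embedding of another view of the same image is pulled toward the target's embedding while being pushed away from the target embeddings of other images in the batch (an InfoNCE-style objective). All names below (cont_steal_loss, steal_step, tau, augment) are illustrative assumptions, not the authors' released implementation.

# Minimal PyTorch sketch of a contrastive stealing objective; assumes the
# target encoder is a frozen black box queried only for embeddings.
import torch
import torch.nn.functional as F

def cont_steal_loss(surrogate_emb, target_emb, tau=0.1):
    """InfoNCE-style loss: the target's embedding of image i is the positive
    for the surrogate's embedding of image i; other images act as negatives."""
    s = F.normalize(surrogate_emb, dim=1)   # (N, d) surrogate embeddings
    t = F.normalize(target_emb, dim=1)      # (N, d) target embeddings
    logits = s @ t.T / tau                  # pairwise cosine similarities
    labels = torch.arange(s.size(0), device=s.device)  # matching indices
    return F.cross_entropy(logits, labels)

def steal_step(target_encoder, surrogate_encoder, optimizer, images, augment):
    """One training step; `augment` yields a random view of each image."""
    view_s, view_t = augment(images), augment(images)
    with torch.no_grad():                   # black-box queries only
        target_emb = target_encoder(view_t)
    loss = cont_steal_loss(surrogate_encoder(view_s), target_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

For contrast, a conventional stealing baseline would simply regress the surrogate's embeddings onto the target's, e.g. F.mse_loss(surrogate_emb, target_emb); the contrastive objective additionally exploits in-batch negatives, which is how Cont-Steal better leverages the encoder's rich representations.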
Project members
Xinlei HE
Assistant Professor
Publications
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders. Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, and Yang Zhang. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
Project Period
2023
Research Area
Data-driven AI & Machine Learning, Security and Privacy
Keywords
Contrastive Learning, Image Encoder, Model Stealing