Jiang, Wan; Diao, Yunfeng; Wang, He; Sun, Jianxin; Wang, Meng; Hong, Richang (2023) Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples. In: Proceedings of the 31st ACM International Conference on Multimedia. Association for Computing Machinery (ACM): Ottawa, ON, Canada.
Text: 2305.09241.pdf - Accepted Version (2MB)
Abstract
Safeguarding data from unauthorized exploitation is vital for privacy and security, especially given recent rampant research on security breaches such as adversarial and membership attacks. To this end, unlearnable examples (UEs) have recently been proposed as a compelling protection: imperceptible perturbations are added to data so that models trained on them cannot accurately classify the original clean distribution. Unfortunately, we find that UEs provide a false sense of security, because they cannot stop unauthorized users from exploiting other unprotected data to remove the protection, turning unlearnable data learnable again. Motivated by this observation, we formally define a new threat by introducing learnable unauthorized examples (LEs), which are UEs with their protection removed. The core of this approach is a novel purification process that projects UEs onto the manifold of LEs. It is realized by a new joint-conditional diffusion model that denoises UEs conditioned on the pixel and perceptual similarity between UEs and LEs. Extensive experiments demonstrate that LE delivers state-of-the-art countering performance against both supervised and unsupervised UEs in various scenarios, making it the first generalizable countermeasure to UEs across supervised and unsupervised learning. Our code is available at https://github.com/jiangw-0/LE_JCDP.
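The abstract describes the purification step only at a high level. The sketch below illustrates what reverse-diffusion purification guided jointly by pixel and perceptual similarity to the protected input can look like. It is a minimal toy example, assuming a DDPM-style noise schedule and classifier-guidance-style conditioning; `ToyDenoiser`, `ToyFeatures`, `purify`, and all hyperparameters are illustrative placeholders, not the authors' released LE_JCDP implementation (see the GitHub link above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ToyDenoiser(nn.Module):
    """Predicts the noise eps in x_t; a stand-in for a trained diffusion U-Net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x_t, t):
        return self.net(x_t)  # a real model would also embed the timestep t

class ToyFeatures(nn.Module):
    """Fixed conv features as a cheap perceptual embedding (stand-in for e.g. VGG features)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 5, stride=2, padding=2)
    def forward(self, x):
        return F.relu(self.conv(x))

def purify(x_ue, denoiser, feats, T=50, w_pix=1.0, w_per=0.5, guide_scale=0.1):
    """Reverse diffusion guided by pixel + perceptual similarity to the UE input."""
    betas = torch.linspace(1e-4, 0.02, T)   # linear DDPM noise schedule (assumed)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)

    # Forward-diffuse the unlearnable example to step T-1, then denoise back.
    t0 = T - 1
    x = abar[t0].sqrt() * x_ue + (1 - abar[t0]).sqrt() * torch.randn_like(x_ue)

    for t in range(t0, -1, -1):
        # Standard DDPM posterior mean from the predicted noise.
        eps = denoiser(x, t)
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()

        # Joint condition: gradient of pixel + perceptual similarity to x_ue,
        # evaluated on the current estimate of the clean image (Tweedie formula).
        x_g = x.detach().requires_grad_(True)
        x0_hat = (x_g - (1 - abar[t]).sqrt() * denoiser(x_g, t)) / abar[t].sqrt()
        loss = w_pix * F.mse_loss(x0_hat, x_ue) + w_per * F.mse_loss(feats(x0_hat), feats(x_ue))
        grad = torch.autograd.grad(loss, x_g)[0]
        mean = mean - guide_scale * grad  # nudge the sample toward the conditioning image

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = (mean + betas[t].sqrt() * noise).detach()
    return x.clamp(0, 1)

x_ue = torch.rand(1, 3, 32, 32)                    # an "unlearnable" CIFAR-sized image
x_le = purify(x_ue, ToyDenoiser(), ToyFeatures())  # its purified, learnable counterpart
```

The intent of the joint condition is that pixel similarity keeps the purified image close to the original content while the perceptual term discourages the semantically meaningless perturbation from surviving denoising; the relative weights and guidance scale are tuning knobs, not values taken from the paper.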
| Type: | Proceedings paper |
|---|---|
| Title: | Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples |
| Event: | MM '23: The 31st ACM International Conference on Multimedia |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1145/3581783.3611833 |
| Publisher version: | https://doi.org/10.1145/3581783.3611833 |
| Language: | English |
| Additional information: | This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions. |
| Keywords: | Unlearnable Examples, Data Protection, Deep Neural Network |
| UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI: | https://discovery-pp.ucl.ac.uk/id/eprint/10180990 |