Networked Augmented Virtual Environment (NAVE) Group
Publication: Kaige Li, Maoxian Wan, Qichuan Geng, Weimin Shi, Xiaochun Cao, Zhong Zhou. MAP: Masked Adversarial Perturbation for Boosting Black-box Attack Transferability[J]. IEEE Transactions on Image Processing (TIP), 2025, 34: 4426-4439. (CCF A Journal, IF: 10.6)
 
The transferability of adversarial examples is vital for black-box attacks, as it enables the adversary to deceive the target model without knowing its internals. Despite numerous methods focusing on transferability, they still struggle to transfer across models with distinct architectural components (e.g., CNNs and ViTs).
In this work, we argue that limited adversarial perturbation diversity leads to overfitting to the surrogate model, which is a key factor in reducing transferability.
To this end, we propose a Masked Adversarial Perturbation (MAP) method that boosts adversarial transferability across various architectures from a novel perspective: diversifying the perturbation itself.
Specifically, MAP randomly masks perturbation patches during the attack iterations and compels the remaining patches to retain the attack effect, which diversifies the perturbation and mitigates its overfitting to the surrogate model.
Naturally, MAP spreads the perturbation over local patches, alleviating their co-adaptation and preventing the perturbation from relying too heavily on specific patterns. Consequently, it can deceive convolution operations and self-attention mechanisms indiscriminately by attacking their shared basic input unit, i.e., a single patch, showing superior transferability over previous methods.
Extensive experiments demonstrate that MAP consistently and significantly boosts diverse black-box attacks, achieving state-of-the-art performance.
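The core idea above (randomly masking perturbation patches each iteration so that the surviving patches must carry the attack on their own) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the surrogate-model gradient function `grad_fn`, the patch size, the keep probability, and the step sizes are all hypothetical placeholders.

```python
import numpy as np

def random_patch_mask(h, w, patch=16, keep_prob=0.7, rng=None):
    """Binary pixel mask built on a patch grid; 1 = keep that perturbation patch."""
    if rng is None:
        rng = np.random.default_rng()
    gh, gw = h // patch, w // patch
    grid = (rng.random((gh, gw)) < keep_prob).astype(np.float32)
    # Upsample the patch-level grid to pixel resolution.
    return np.kron(grid, np.ones((patch, patch), dtype=np.float32))

def map_attack(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10,
               patch=16, keep_prob=0.7, seed=0):
    """Iterative sign-gradient attack with randomly masked perturbation
    (a MAP-style sketch). `grad_fn` is assumed to return the gradient of the
    surrogate model's loss w.r.t. its input image."""
    rng = np.random.default_rng(seed)
    h, w = x.shape[:2]
    delta = np.zeros_like(x)
    for _ in range(steps):
        mask = random_patch_mask(h, w, patch, keep_prob, rng)[..., None]
        # Gradient is taken on the image with only the surviving patches
        # perturbed, so those patches must retain the attack effect alone.
        g = grad_fn(x + delta * mask)
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)
```

In a real attack, `grad_fn` would backpropagate through a surrogate network; here the masking step is the only MAP-specific change to a standard iterative attack loop, which is what makes the method easy to combine with existing transfer attacks.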
Created by admin at 2025-07-16 12:24:02