Preface: this post summarizes the common attacks used to stress-test AIGC detectors; the selected works all come from AIGC-detection papers published in the last five years. (My paper was rejected and needs more experiments, so let me first look at how others do it…)
2019 WIFS Detecting and Simulating Artifacts in GAN Fake Images
We show the robustness of the proposed method with two different post-processing methods: JPEG compression and image resize. For JPEG compression, we randomly select one JPEG quality factor from [100, 90, 70, 50] and apply it to each of the fake images. For image resize, we randomly select one image size from [256, 200, 150, 128].
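The protocol above (random quality factor, random target size) can be sketched with Pillow; the function names are mine, and the demo image is synthetic:

```python
import io
import random

import numpy as np
from PIL import Image

QUALITIES = (100, 90, 70, 50)   # JPEG quality factors from the paper
SIZES = (256, 200, 150, 128)    # resize targets from the paper

def random_jpeg(img):
    """Re-encode the image at a randomly chosen JPEG quality factor."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.choice(QUALITIES))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def random_resize(img):
    """Resize to a randomly chosen square side length."""
    s = random.choice(SIZES)
    return img.resize((s, s), Image.BILINEAR)

# demo on a synthetic 256x256 RGB image
img = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
out = random_resize(random_jpeg(img))
```

Round-tripping through an in-memory `BytesIO` buffer avoids writing temporary files while still exercising the real JPEG encoder.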
2020 CVPR CNN-Generated Images Are Surprisingly Easy to Spot… for Now
To test this, we blurred (simulating re-sampling) and JPEG-compressed the real and fake images following the protocol in [38], and evaluated our ability to detect them (Figure 5).
2022 ECCV Detecting Generated Images by Real Images
Gaussian blurring (sigma: 0.1–1), JPEG quality factors (70–100), image cropping, and resizing (cropping/scaling factor: 0.25–1)
2022 ICIP Fusing Global and Local Features for Generalized AI-Synthesized Image Detection
We also evaluate the robustness of our method and the baselines on the images post-processed with Gaussian Blur and JPEG Compression.
2023 APSIPA ASC AI-Generated Image Detection using a Cross-Attention Enhanced Dual-Stream Network
In this section, we assess the robustness of our proposed model in the face of seven post-processing techniques, encompassing adjustments to chromaticity, brightness, contrast, sharpness, rotation, and the application of Gaussian blur and mean blur… To create a more realistic simulation of complex real-world scenarios, we’ve incorporated randomness into the parameters controlling the image alterations. For instance, the factors governing the degree of image manipulation (chromaticity, brightness, contrast) are randomly selected from a range of 0.5 to 2.5 for each image in the test dataset. Similarly, the factor controlling image sharpness is an arbitrary integer within the range of 0 to 4. Rotation degrees range from 0 to 360, and the kernel size for both Gaussian and mean filters is 5 × 5.
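The seven randomized operations above map fairly directly onto Pillow. A minimal sketch; note the mapping of the paper's 5 × 5 kernels onto Pillow's radius-based filters is my approximation (`BoxBlur(2)` is an exact 5 × 5 mean filter, `GaussianBlur(radius=2)` stands in for a 5 × 5 Gaussian):

```python
import random

import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def random_postprocess(img):
    # Chromaticity (PIL "Color"), brightness, contrast: factor in [0.5, 2.5]
    for enh in (ImageEnhance.Color, ImageEnhance.Brightness, ImageEnhance.Contrast):
        img = enh(img).enhance(random.uniform(0.5, 2.5))
    # Sharpness: integer factor in [0, 4]
    img = ImageEnhance.Sharpness(img).enhance(random.randint(0, 4))
    # Rotation: [0, 360) degrees, keeping the original canvas size
    img = img.rotate(random.uniform(0, 360))
    # 5x5 Gaussian blur (approximated by radius) and 5x5 mean (box) blur
    img = img.filter(ImageFilter.GaussianBlur(radius=2))
    return img.filter(ImageFilter.BoxBlur(2))

img = Image.fromarray(np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8))
out = random_postprocess(img)
```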
2023 ICASSP On the detection of synthetic images generated by diffusion models
For each test image, a crop with random (large) size and position is selected, resized to 200 × 200 pixels, and compressed using a random JPEG quality factor from 65 to 100.
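The crop-resize-compress pipeline above can be sketched as follows; `min_frac` (how "large" the crop must be) is my guess, since the paper does not specify it:

```python
import io
import random

import numpy as np
from PIL import Image

def crop_resize_jpeg(img, out_size=200, min_frac=0.6):
    """Random large crop -> resize to out_size x out_size -> random JPEG quality."""
    w, h = img.size
    cw = random.randint(int(w * min_frac), w)   # crop width
    ch = random.randint(int(h * min_frac), h)   # crop height
    x = random.randint(0, w - cw)               # random position
    y = random.randint(0, h - ch)
    img = img.crop((x, y, x + cw, y + ch)).resize((out_size, out_size), Image.BICUBIC)
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(65, 100))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

img = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
out = crop_resize_jpeg(img)
```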
2023 CVPR Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection
To evaluate the robustness of the proposed framework to image perturbations, we apply common image perturbations on the test images with a probability of 50% following [13]. These perturbations include blurring, cropping, compression, adding random noise, and a combination of all of them. In this subsection, the discriminator of StyleGANbedroom is used as the transformation model.
2023 CCS DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models
Specifically, we evaluate the robustness of detectors and attributors against adversarial example attacks, which are the most common and severe attacks against machine learning models. We leverage three representative adversarial example attacks, namely FGSM [14], BIM [18], and DI-FGSM [41], to conduct the robustness analysis. Furthermore, given that our hybrid detector and attributor consider both the image and its corresponding prompt, we propose HybridFool, which maximizes the distance between the embedding of a given image and the prompt by adding perturbations to the image. In the following, we first present each adversarial example attack we consider in this robustness analysis. Then, we show the evaluation results.
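For reference, FGSM itself is a single signed-gradient step on the input. A minimal numpy sketch against a toy logistic-regression "detector" (the model, weights, and ε here are stand-ins, not DE-FAKE's actual setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step x' = clip(x + eps * sign(grad_x L)) against a logistic
    'detector' p = sigmoid(w @ x + b); for BCE loss, grad_x L = (p - y) * w."""
    grad = (sigmoid(w @ x + b) - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.0        # stand-in detector weights
x = rng.uniform(size=64)               # "image" flattened into [0, 1]
x_adv = fgsm(x, 1.0, w, b, eps=0.03)   # y = 1: image labeled "fake"
```

BIM simply iterates this step with a small per-step ε and re-clipping; DI-FGSM additionally applies random resize-and-pad transformations to the input before each gradient computation.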
2023 ICCV DIRE for Diffusion-Generated Image Detection
Here, we evaluate the robustness of detectors under two classes of degradation, i.e., Gaussian blur and JPEG compression, following [47]. The perturbations are added at three levels for Gaussian blur (σ = 1, 2, 3) and two levels for JPEG compression (quality = 65, 30).
2024 CVPR-W Raising the Bar of AI-generated Image Detection with CLIP
JPEG compression (quality 100–60), WEBP compression (quality 100–60), Resizing (scale 1.25, 1.0, 0.75, 0.5, 0.25)
2024 CVPR AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error
Following previous works [12, 59] we use JPEG compression (with quality q), center cropping (with crop factor f and subsequent resizing to the original size), Gaussian blur, and Gaussian noise (both with standard deviation σ).
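The center-cropping operation here differs from a random crop: only the central f-fraction is kept and then scaled back to the original resolution. A minimal sketch of that protocol (the function name is mine):

```python
import numpy as np
from PIL import Image

def center_crop(img, f):
    """Crop the central f-fraction of the image and resize back to the
    original size ('crop factor f with subsequent resizing')."""
    w, h = img.size
    cw, ch = max(1, int(w * f)), max(1, int(h * f))
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    return crop.resize((w, h), Image.BICUBIC)

img = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
out = center_crop(img, f=0.5)
```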
2024 CVPR Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection
Specifically, we adopt random cropping, Gaussian blurring, JPEG compression, and Gaussian noising, each with a probability of 50%.
2024 ICML DRCT: Diffusion Reconstruction Contrastive Training towards Universal Detection of Diffusion Generated Images
We adopt the experimental setup from (Wu et al., 2023a) to perform resizing (with scales of 0.5, 0.75, 1.0, 1.25, 1.5) and JPEG compression (with quality factors of 60, 70, 80, 90, 100) on the tested images, which include both real and generated images.
20231102 arXiv Detecting Generated Images by Real Images Only
The post-processing operations we used include:
• Blurring: Gaussian filtering with a kernel size of 3 and sigma from 0.1 to 1.
• Brightness adjustment: the adjustment parameter is from 0.3 to 3.
• Contrast adjustment: Gamma transform with γ from 0.3 to 3.
• Random cropping: for a 256×256 image, the cropping size was from 256 to 96, and we up-sampled the amplitude spectrum of the LNP of the final cropped image back to 256.
• JPEG compression: quality factors from 70 to 100.
• Gaussian noise: the sigma was set from 1 to 10, and the PSNR of the original image and the image after adding noise is from 26 to 47.
• Pepper & Salt noise: the ratio of pepper to salt is 1:1. The density of the added noise is 0.001 to 0.01, and the PSNR of the noisy images is from 18 to 31.
• Speckle noise: the sigma ranges from 0.01 to 0.1, and the PSNR of the noisy images is from 22 to 57.
• Poisson noise: the lambda is set from 0.1 to 1, and the PSNR of the noisy images is from 3 to 58.
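The four noise models in the list above can be sketched in numpy together with the PSNR used to report their severity. How exactly the paper's λ enters the Poisson model is my guess; the rest follows the standard definitions:

```python
import numpy as np

rng = np.random.default_rng(0)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8-range images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def gaussian_noise(img, sigma):            # additive, sigma in [1, 10]
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255).astype(np.uint8)

def salt_pepper(img, density):             # density in [0.001, 0.01], 1:1 ratio
    out, u = img.copy(), rng.random(img.shape)
    out[u < density / 2] = 0                               # pepper
    out[(u >= density / 2) & (u < density)] = 255          # salt
    return out

def speckle(img, sigma):                   # multiplicative: x * (1 + n)
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0, 255).astype(np.uint8)

def poisson(img, lam):                     # shot noise scaled by lambda (assumed parameterization)
    return np.clip(rng.poisson(img.astype(np.float64) * lam) / lam, 0, 255).astype(np.uint8)

img = rng.integers(0, 256, (128, 128), dtype=np.uint8)
noisy = gaussian_noise(img, sigma=5)
```

Reporting PSNR alongside the noise parameter, as the paper does, makes severities comparable across noise families with otherwise incommensurable parameters.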
20240325 arXiv Let Real Images be as a Judger, Spotting Fake Images Synthesized with Generative Models
blur (0-3.0), compression (100-30), noise (0-3.0), resizing (1.0-0.2)
No robustness experiments at all (a rare exception):
- 2023 CVPR Towards Universal Fake Image Detectors that Generalize Across Generative Models
- 2024 CVPR LaRE2: Latent Reconstruction Error Based Method for Diffusion-Generated Image Detection
Summary: the most common robustness tests for AIGC detection are JPEG compression, resizing, Gaussian blur, and Gaussian noise. Scattered additional attacks include image cropping, chromaticity / brightness / contrast / sharpness adjustment, rotation, random noise, salt-and-pepper noise, speckle noise, Poisson noise, and adversarial examples. Note that when testing each attack, its strength must be taken into account; for JPEG compression, for example, the quality factor should be swept.
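Sweeping attack strength fits naturally into a small evaluation loop: one perturbation, several severity levels, accuracy re-measured at each level. A minimal sketch with a toy detector and perturbation (everything here is a stand-in for illustration):

```python
import numpy as np

def robustness_sweep(detector, images, labels, perturb, levels):
    """Re-evaluate a detector at each severity level of one perturbation."""
    labels = np.asarray(labels)
    acc = {}
    for lv in levels:
        preds = np.array([detector(perturb(im, lv)) for im in images])
        acc[lv] = float(np.mean(preds == labels))
    return acc

# toy demo: a mean-intensity "detector" vs. additive Gaussian noise
rng = np.random.default_rng(0)
images = [rng.uniform(0.0, 1.0, (16, 16)) for _ in range(32)]
labels = [int(im.mean() > 0.5) for im in images]
detector = lambda im: int(im.mean() > 0.5)
perturb = lambda im, s: np.clip(im + rng.normal(0.0, s, im.shape), 0.0, 1.0)
acc = robustness_sweep(detector, images, labels, perturb, levels=[0.0, 0.1, 0.3])
```

Always including the identity level (here `0.0`) gives the clean-accuracy baseline that the degradation curve is read against.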