Word-level Textual Attacks: Paper Reading Notes


Must-read Papers on Textual Adversarial Attack and Defense (TAAD)

Part 2.2 of the must-read TAAD paper series: Word-level Attack

2.2 Word-level Attack

0. Preface

2024-03-03: work in progress; the Background / Method / Results notes below are still being written.

1-10

1.Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework

a. Background

b. Method

c. Results

d. Paper & Code

Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework. Lifan Yuan, Yichi Zhang, Yangyi Chen, Wei Wei. Findings of ACL 2023. decision [pdf]

Paper: [2110.15317] Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework (arxiv.org)

Code: https://github.com/Phantivia/T-PGD

2.TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack

a. Background

b. Method

c. Results

d. Paper & Code

TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack. Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He. Findings of EMNLP 2022. decision [pdf] [code]

Paper: [2201.08193] TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack (arxiv.org)

Code: GitHub - JHL-HUST/TextHacker: TextHacker: Learning based Hybrid Local Search Algorithm for Hard-label Text Adversarial Attack

3.TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text

a. Background

b. Method

c. Results

d. Paper & Code

TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text. Muchao Ye, Chenglin Miao, Ting Wang, Fenglong Ma. AAAI 2022. decision [pdf] [code]

Paper: TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text | Proceedings of the AAAI Conference on Artificial Intelligence

Code: GitHub - machinelearning4health/TextHoaxer: Implementation Code of TextHoaxer

4.Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization

a. Background

b. Method

c. Results

d. Paper & Code

Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization. Deokjae Lee, Seungyong Moon, Junhyeok Lee, Hyun Oh Song. ICML 2022. score [pdf] [code]

Paper: [2206.08575] Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization (arxiv.org)

Code: GitHub - snu-mllab/DiscreteBlockBayesAttack: Official PyTorch implementation of "Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization" (ICML'22)

5.SemAttack: Natural Textual Attacks on Different Semantic Spaces

a. Background

b. Method

c. Results

d. Paper & Code

SemAttack: Natural Textual Attacks on Different Semantic Spaces. Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li. Findings of NAACL 2022. gradient [pdf] [code]

Paper: [2205.01287] SemAttack: Natural Textual Attacks via Different Semantic Spaces (arxiv.org)

Code: github.com

6.Gradient-based Adversarial Attacks against Text Transformers

a. Background

b. Method

c. Results

d. Paper & Code

Gradient-based Adversarial Attacks against Text Transformers. Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela. EMNLP 2021. gradient [pdf] [code]

Paper: 2021.emnlp-main.464.pdf (aclanthology.org)

Code: https://github.com/facebookresearch/text-adversarial-attack
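Several entries in this list carry the "gradient" tag. A common white-box idea behind such attacks (the general first-order trick, not the Gumbel-softmax relaxation used by the paper above specifically) is to score a candidate substitution by how far it moves the input embedding along the loss gradient. The embedding table and gradient below are invented toy values for illustration:

```python
# Toy sketch of first-order gradient scoring for word substitution:
# the loss change from swapping word w for candidate c is approximated
# by grad . (e_c - e_w), a first-order Taylor expansion of the loss.
# EMB and grad are made-up toy values, not from any real model.

EMB = {"good": [1.0, 0.0], "fine": [0.6, 0.2], "bad": [-1.0, 0.1]}

def best_swap(word, grad):
    """Pick the candidate whose embedding move most increases the loss
    along the gradient direction."""
    e_w = EMB[word]
    scored = []
    for cand, e_c in EMB.items():
        if cand == word:
            continue
        gain = sum(g * (a - b) for g, a, b in zip(grad, e_c, e_w))
        scored.append((gain, cand))
    return max(scored)

# Gradient of the loss w.r.t. the embedding of "good" (made up here;
# in a real attack it comes from backprop through the victim model).
gain, cand = best_swap("good", grad=[-1.0, 0.5])
```

In a real attack the gradient is obtained by backpropagating the victim's loss to the input embeddings, and the candidate set is constrained (e.g. to nearest neighbors) to keep substitutions plausible.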

7.A Strong Baseline for Query Efficient Attacks in a Black Box Setting

a. Background

b. Method

c. Results

d. Paper & Code

A Strong Baseline for Query Efficient Attacks in a Black Box Setting. Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi. EMNLP 2021. score [pdf] [code]

Paper: [2109.04775] A Strong Baseline for Query Efficient Attacks in a Black Box Setting (arxiv.org)

Code: GitHub - RishabhMaheshwary/query-attack: A Query Efficient Natural Language Attack in a Black Box Setting

8.On the Transferability of Adversarial Attacks against Neural Text Classifier

a. Background

b. Method

c. Results

d. Paper & Code

On the Transferability of Adversarial Attacks against Neural Text Classifier. Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang. EMNLP 2021. [pdf]

Paper: 2011.08558.pdf (arxiv.org)

Code:

9.Crafting Adversarial Examples for Neural Machine Translation

a. Background

b. Method

c. Results

d. Paper & Code

Crafting Adversarial Examples for Neural Machine Translation. Xinze Zhang, Junzhe Zhang, Zhenhua Chen, Kun He. ACL-IJCNLP 2021. score [pdf] [code]

Paper: 2021.acl-long.153.pdf (aclanthology.org)

Code: GitHub - JHL-HUST/AdvNMT-WSLS: Crafting Adversarial Examples for Neural Machine Translation

10.An Empirical Study on Adversarial Attack on NMT: Languages and Positions Matter

a. Background

b. Method

c. Results

d. Paper & Code

An Empirical Study on Adversarial Attack on NMT: Languages and Positions Matter. Zhiyuan Zeng, Deyi Xiong. ACL-IJCNLP 2021. score [pdf]

Paper: 2021.acl-short.58.pdf (aclanthology.org)

Code:

11-20

11.A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples

a. Background

b. Method

c. Results

d. Paper & Code

A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples. Yuxuan Wang, Wanxiang Che, Ivan Titov, Shay B. Cohen, Zhilin Lei, Ting Liu. Findings of ACL: ACL-IJCNLP 2021. score [pdf] [code]

Paper: 2021.findings-acl.207.pdf (aclanthology.org)

Code: github.com

12.Contextualized Perturbation for Textual Adversarial Attack

a. Background

b. Method

c. Results

d. Paper & Code

Contextualized Perturbation for Textual Adversarial Attack. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan. NAACL-HLT 2021. score [pdf] [code]

Paper: 2021.naacl-main.400.pdf (aclanthology.org)

Code: github.com

13.Adv-OLM: Generating Textual Adversaries via OLM

a. Background

b. Method

c. Results

d. Paper & Code

Adv-OLM: Generating Textual Adversaries via OLM. Vijit Malik, Ashwani Bhat, Ashutosh Modi. EACL 2021. score [pdf] [code]

Paper: 2021.eacl-main.71.pdf (aclanthology.org)

Code: github.com

14.Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling

a. Background

b. Method

c. Results

d. Paper & Code

Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling. Chris Emmery, Ákos Kádár, Grzegorz Chrupała. EACL 2021. blind [pdf] [code]

Paper: 2021.eacl-main.203.pdf (aclanthology.org)

Code: github.com

15.Generating Natural Language Attacks in a Hard Label Black Box Setting

a. Background

b. Method

c. Results

d. Paper & Code

Generating Natural Language Attacks in a Hard Label Black Box Setting. Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi. AAAI 2021. decision [pdf] [code]

Paper: 2012.14956.pdf (arxiv.org)

Code: https://github.com/RishabhMaheshwary/hard-label-attack
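Papers tagged "decision" attack in the hard-label setting, where the victim returns only a predicted label. A minimal sketch of the generic search scheme used in that setting (initialize with a heavily substituted sentence that already flips the label, then move back toward the original while the label stays flipped) is below; the victim model and substitution table are toy stand-ins, not the paper's genetic-algorithm implementation:

```python
# Toy sketch of hard-label (decision-based) word substitution.
# Step 1: substitute aggressively until the predicted label flips.
# Step 2: greedily restore original words, keeping each restore only
# if the label remains flipped, so the final text stays adversarial
# while staying as close to the input as possible.

def victim_label(words):
    """Toy victim: only the label is observable, never a score."""
    bad = {"poor", "weak", "dull"}
    return "neg" if sum(w in bad for w in words) >= 2 else "pos"

SUBS = {"great": "poor", "fun": "dull", "smart": "weak"}  # toy table

def hard_label_attack(sentence):
    orig = sentence.split()
    # Step 1: substitute every attackable word to force a label flip.
    adv = [SUBS.get(w, w) for w in orig]
    if victim_label(adv) == victim_label(orig):
        return None  # even the fully substituted text does not flip
    # Step 2: revert substitutions one by one where possible.
    for i, w in enumerate(orig):
        if adv[i] != w:
            trial = adv[:i] + [w] + adv[i + 1:]
            if victim_label(trial) != victim_label(orig):
                adv = trial
    return " ".join(adv)

adv = hard_label_attack("a great fun and smart film")
```

The actual paper replaces the greedy revert step with a population-based search that optimizes semantic similarity to the original sentence.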

16.A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples

a. Background

b. Method

c. Results

d. Paper & Code

A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples. Zhao Meng, Roger Wattenhofer. COLING 2020. gradient [pdf] [code]

Paper: 2020.coling-main.585.pdf (aclanthology.org)

Code: https://github.com/zhaopku/nlp_geometry_attack

17.BERT-ATTACK: Adversarial Attack Against BERT Using BERT

a. Background

b. Method

c. Results

d. Paper & Code

BERT-ATTACK: Adversarial Attack Against BERT Using BERT. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, Xipeng Qiu. EMNLP 2020. score [pdf] [code]

Paper: aclanthology.org/2020.emnlp-main.500.pdf

Code: GitHub - LinyangLee/BERT-Attack: Code for EMNLP2020 long paper: BERT-Attack: Adversarial Attack Against BERT Using BERT
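The defining idea here (shared with BAE below) is to draw substitution candidates from a masked language model rather than a synonym dictionary, so replacements fit the context. A toy sketch of that step follows; `mlm_candidates` and `victim_score` are invented stand-ins, where in the real attack the candidates come from BERT's fill-mask head (e.g. a Hugging Face fill-mask pipeline):

```python
# Sketch of masked-LM candidate generation for word substitution:
# mask the target position, ask an MLM for fillers, and keep the first
# filler that drives the victim's score below the decision threshold.
# Both functions below are deliberately trivial stand-ins.

def mlm_candidates(context_with_mask, k=3):
    """Toy stand-in for an MLM fill-mask head."""
    table = {"the plot is [MASK]": ["thin", "fine", "clever"]}
    return table.get(context_with_mask, [])[:k]

def victim_score(words):
    """Toy positive-class probability of the victim classifier."""
    w = {"clever": 1.5, "fine": 0.4, "thin": -1.2}
    s = sum(w.get(t, 0.0) for t in words)
    return 1 / (1 + 2.718281828 ** (-s))  # sigmoid

def mlm_attack_position(words, i, threshold=0.5):
    """Try MLM-proposed fillers at position i; return the first
    rewrite that flips the toy victim, or None."""
    masked = " ".join(words[:i] + ["[MASK]"] + words[i + 1:])
    for cand in mlm_candidates(masked):
        trial = words[:i] + [cand] + words[i + 1:]
        if victim_score(trial) < threshold:
            return " ".join(trial)
    return None

adv = mlm_attack_position("the plot is clever".split(), 3)
```

The full attack additionally ranks positions by importance and handles BERT's sub-word tokenization; this sketch only shows the candidate-generation contrast with dictionary-based methods.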

18.BAE: BERT-based Adversarial Examples for Text Classification

a. Background

b. Method

c. Results

d. Paper & Code

BAE: BERT-based Adversarial Examples for Text Classification. Siddhant Garg, Goutham Ramakrishnan. EMNLP 2020. score [pdf] [code]

Paper: 2020.emnlp-main.498.pdf (aclanthology.org)

Code: TextAttack/textattack/attack_recipes/bae_garg_2019.py at master · QData/TextAttack · GitHub

19.Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks

a. Background

b. Method

c. Results

d. Paper & Code

Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks. Denis Emelin, Ivan Titov, Rico Sennrich. EMNLP 2020. blind [pdf] [code]

Paper: 2020.emnlp-main.616.pdf (aclanthology.org)

Code: GitHub - demelin/detecting_wsd_biases_for_nmt: Repository containing the experimental code for the publication "Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks" (Emelin, Denis, Ivan Titov, and Rico Sennrich, EMNLP 2020).

20.Imitation Attacks and Defenses for Black-box Machine Translation Systems

a. Background

b. Method

c. Results

d. Paper & Code

Imitation Attacks and Defenses for Black-box Machine Translation Systems. Eric Wallace, Mitchell Stern, Dawn Song. EMNLP 2020. decision [pdf] [code]

Paper: aclanthology.org/2020.emnlp-main.446.pdf

Code: GitHub - Eric-Wallace/adversarial-mt: Code for "Imitation Attacks and Defenses for Black-box Machine Translation Systems"

21-30

21.Robustness to Modification with Shared Words in Paraphrase Identification

a. Background

b. Method

c. Results

d. Paper & Code

Robustness to Modification with Shared Words in Paraphrase Identification. Zhouxing Shi, Minlie Huang. Findings of ACL: EMNLP 2020. score [pdf]

Paper: aclanthology.org/2020.findings-emnlp.16.pdf

Code:

22.Word-level Textual Adversarial Attacking as Combinatorial Optimization

a. Background

b. Method

c. Results

d. Paper & Code

Word-level Textual Adversarial Attacking as Combinatorial Optimization. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun. ACL 2020. score [pdf] [code]

Paper: 2020.acl-main.540.pdf (aclanthology.org)

Code: GitHub - thunlp/SememePSO-Attack: Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"

23.It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations

a. Background

b. Method

c. Results

d. Paper & Code

It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations. Samson Tan, Shafiq Joty, Min-Yen Kan, Richard Socher. ACL 2020. score [pdf] [code]

Paper: 2020.acl-main.263.pdf (aclanthology.org)

Code: GitHub - salesforce/morpheus: Code for ACL'20 paper "It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations"

24.On the Robustness of Language Encoders against Grammatical Errors

a. Background

b. Method

c. Results

d. Paper & Code

On the Robustness of Language Encoders against Grammatical Errors. Fan Yin, Quanyu Long, Tao Meng, Kai-Wei Chang. ACL 2020. score [pdf] [code]

Paper: 2020.acl-main.310.pdf (aclanthology.org)

Code: GitHub - uclanlp/ProbeGrammarRobustness: Source code for ACL2020: On the Robustness of Language Encoders against Grammatical Errors

25.Evaluating and Enhancing the Robustness of Neural Network-based Dependency Parsing Models with Adversarial Examples

a. Background

b. Method

c. Results

d. Paper & Code

Evaluating and Enhancing the Robustness of Neural Network-based Dependency Parsing Models with Adversarial Examples. Xiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, Xuanjing Huang. ACL 2020. gradient score [pdf] [code]

Paper: 2020.acl-main.590.pdf (aclanthology.org)

Code: GitHub - zjiehang/DPAttack: Pytorch implementation for ACL 2020 Paper: "Evaluating and Enhancing the Robustness of Neural Network-based Dependency Parsing Models with Adversarial Examples"

26.A Reinforced Generation of Adversarial Examples for Neural Machine Translation

a. Background

b. Method

c. Results

d. Paper & Code

A Reinforced Generation of Adversarial Examples for Neural Machine Translation. Wei Zou, Shujian Huang, Jun Xie, Xinyu Dai, Jiajun Chen. ACL 2020. decision [pdf]

Paper: aclanthology.org/2020.acl-main.319.pdf

Code:

27.Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment

a. Background

b. Method

c. Results

d. Paper & Code

Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits. AAAI 2020. score [pdf] [code]

Paper: 1907.11932v4.pdf (arxiv.org)

Code: GitHub - jind11/TextFooler: A Model for Natural Language Attack on Text Classification and Inference
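This paper's TextFooler is the canonical score-based greedy baseline: rank words by how much deleting them hurts the victim's score, then greedily substitute the important ones with synonyms. A minimal sketch of that two-step loop follows, with an invented keyword classifier and a hand-written synonym table standing in for the real attack's counter-fitted embeddings, POS checks, and sentence-similarity constraints:

```python
# Toy sketch of greedy score-based word substitution (TextFooler-style).
# victim_score and SYNONYMS are deliberately trivial stand-ins.

def victim_score(words):
    """Toy positive-class probability: logistic over keyword weights."""
    pos = {"great": 2.0, "good": 1.0, "fine": 0.3}
    neg = {"bad": 2.0, "mediocre": 0.5}
    s = sum(pos.get(w, 0.0) - neg.get(w, 0.0) for w in words)
    return 1.0 / (1.0 + 2.718281828 ** (-s))  # sigmoid

SYNONYMS = {"great": ["fine", "decent"], "good": ["ok", "mediocre"]}

def attack(sentence, threshold=0.5):
    words = sentence.split()
    base = victim_score(words)
    # Step 1: rank words by the score drop caused by deleting them.
    drops = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        drops.append((base - victim_score(reduced), i))
    drops.sort(reverse=True)
    # Step 2: greedily replace important words with the synonym that
    # lowers the positive-class score most; stop once the label flips.
    for _, i in drops:
        best, best_score = words[i], victim_score(words)
        for cand in SYNONYMS.get(words[i], []):
            trial = words[:i] + [cand] + words[i + 1:]
            if victim_score(trial) < best_score:
                best, best_score = cand, victim_score(trial)
        words[i] = best
        if victim_score(words) < threshold:
            return " ".join(words)  # adversarial example found
    return None  # attack failed within the substitution budget

adv = attack("a great and good movie")  # -> "a decent and mediocre movie"
```

The real attack additionally filters candidates by part of speech and by Universal Sentence Encoder similarity so the rewrite stays semantically close to the input.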

28.Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

a. Background

b. Method

c. Results

d. Paper & Code

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples. Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, Cho-Jui Hsieh. AAAI 2020. score [pdf] [code]

Paper: 1803.01128.pdf (arxiv.org)

Code: GitHub - cmhcbb/Seq2Sick: Adversarial examples for Seq2Seq model in NLP

29.Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data

a. Background

b. Method

c. Results

d. Paper & Code

Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data. Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan. JMLR 2020. score [pdf] [code]

Paper: 19-569.pdf (jmlr.org)

Code: GitHub - Puyudi/Greedy-Attack-and-Gumbel-Attack (Under Construction.)

30.On the Robustness of Self-Attentive Models

a. Background

b. Method

c. Results

d. Paper & Code

On the Robustness of Self-Attentive Models. Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh. ACL 2019. score [pdf]

Paper: P19-1147.pdf (aclanthology.org)

Code:

31-40

31.Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency

a. Background

b. Method

c. Results

d. Paper & Code

Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency. Shuhuai Ren, Yihe Deng, Kun He, Wanxiang Che. ACL 2019. score [pdf] [code]

Paper: P19-1103.pdf (aclanthology.org)

Code: https://github.com/JHL-HUST/PWWS/
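PWWS's contribution is its substitution ordering: each word gets a saliency (the score change when it is masked by an unknown token) and a best-substitution effect, and words are replaced in descending order of softmax(saliency) times that effect. The sketch below illustrates the scoring rule with a toy classifier and synonym table in place of the paper's WordNet-based implementation:

```python
# Toy sketch of the PWWS scoring rule: replacement order is
# H(i) = softmax(saliency)_i * dP*, where saliency_i is the score drop
# from masking word i with <unk> and dP* is the best substitution's
# score drop at position i. prob_pos and SYNS are toy stand-ins.
import math

def prob_pos(words):
    """Toy positive-class probability: logistic over keyword weights."""
    w = {"excellent": 2.5, "solid": 1.0, "passable": -0.2, "dull": -1.5}
    return 1 / (1 + math.exp(-sum(w.get(t, 0.0) for t in words)))

SYNS = {"excellent": ["solid", "passable"], "solid": ["passable"]}

def pwws_order(words):
    base = prob_pos(words)
    # Word saliency: score drop when the word is masked by <unk>.
    sal = [base - prob_pos(words[:i] + ["<unk>"] + words[i + 1:])
           for i in range(len(words))]
    z = sum(math.exp(s) for s in sal)  # softmax normalizer
    scored = []
    for i, w in enumerate(words):
        effects = [(base - prob_pos(words[:i] + [c] + words[i + 1:]), c)
                   for c in SYNS.get(w, [])]
        if effects:
            dp, cand = max(effects)  # best substitution effect dP*
            scored.append((math.exp(sal[i]) / z * dp, i, cand))
    return sorted(scored, reverse=True)  # descending H(i)

def pwws_attack(sentence, threshold=0.5):
    words = sentence.split()
    for _, i, cand in pwws_order(words):
        words[i] = cand
        if prob_pos(words) < threshold:
            return " ".join(words)
    return None

adv = pwws_attack("the excellent and solid film")
```

As in the paper, both the saliencies and the chosen substitutes are computed once on the original sentence, which keeps the query count low compared with re-scoring after every replacement.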

32.Generating Fluent Adversarial Examples for Natural Languages

a. Background

b. Method

c. Results

d. Paper & Code

Generating Fluent Adversarial Examples for Natural Languages. Huangzhao Zhang, Hao Zhou, Ning Miao, Lei Li. ACL 2019. gradient score [pdf] [code]

Paper: Generating Fluent Adversarial Examples for Natural Languages - ACL Anthology

Code: GitHub - LC-John/Metropolis-Hastings-Attacker: "Generating Fluent Adversarial Examples for Natural Languages"

33.Robust Neural Machine Translation with Doubly Adversarial Inputs

a. Background

b. Method

c. Results

d. Paper & Code

Robust Neural Machine Translation with Doubly Adversarial Inputs. Yong Cheng, Lu Jiang, Wolfgang Macherey. ACL 2019. gradient [pdf]

Paper: https://aclanthology.org/P19-1425/

Code:

34.Universal Adversarial Attacks on Text Classifiers

a. Background

b. Method

c. Results

d. Paper & Code

Universal Adversarial Attacks on Text Classifiers. Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, Pascal Frossard. ICASSP 2019. gradient [pdf]

Paper: Universal Adversarial Attacks on Text Classifiers | IEEE Conference Publication | IEEE Xplore

Code:

35.Generating Natural Language Adversarial Examples

a. Background

b. Method

c. Results

d. Paper & Code

Generating Natural Language Adversarial Examples. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang. EMNLP 2018. score [pdf] [code]

Paper: Generating Natural Language Adversarial Examples - ACL Anthology

Code: GitHub - nesl/nlp_adversarial_examples: Implementation code for the paper "Generating Natural Language Adversarial Examples"

36.Breaking NLI Systems with Sentences that Require Simple Lexical Inferences

a. Background

b. Method

c. Results

d. Paper & Code

Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. Max Glockner, Vered Shwartz, Yoav Goldberg. ACL 2018. blind [pdf] [dataset]

Paper: Breaking NLI Systems with Sentences that Require Simple Lexical Inferences - ACL Anthology

Code: GitHub - BIU-NLP/Breaking_NLI: NLI test set with lexical inferences

37.Deep Text Classification Can be Fooled

a. Background

b. Method

c. Results

d. Paper & Code

Deep Text Classification Can be Fooled. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi. IJCAI 2018. gradient score [pdf]

Paper: 1704.08006.pdf (arxiv.org)

Code:

38.Interpretable Adversarial Perturbation in Input Embedding Space for Text

a. Background

b. Method

c. Results

d. Paper & Code

Interpretable Adversarial Perturbation in Input Embedding Space for Text. Motoki Sato, Jun Suzuki, Hiroyuki Shindo, Yuji Matsumoto. IJCAI 2018. gradient [pdf] [code]

Paper: 1805.02917.pdf (arxiv.org)

Code: GitHub - aonotas/interpretable-adv: Code for Interpretable Adversarial Perturbation in Input Embedding Space for Text, IJCAI 2018.

39.Towards Crafting Text Adversarial Samples

a. Background

b. Method

c. Results

d. Paper & Code

Towards Crafting Text Adversarial Samples. Suranjana Samanta, Sameep Mehta. ECIR 2018. gradient [pdf]

Paper: 1707.02812.pdf (arxiv.org)

Code:

40.Crafting Adversarial Input Sequences For Recurrent Neural Networks

a. Background

b. Method

c. Results

d. Paper & Code

Crafting Adversarial Input Sequences For Recurrent Neural Networks. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, Richard Harang. MILCOM 2016. gradient [pdf]

Paper: 1604.08275.pdf (arxiv.org)

Code:
