Researchers from Microsoft and the University of Maryland (UMD) announced Free Large-Batch (FreeLB), a new adversarial training technique for deep-learning natural-language processing (NLP) systems. FreeLB improves model accuracy, raising RoBERTa's scores on the General Language Understanding Evaluation (GLUE) benchmark and achieving the highest score on the AI2 Reasoning Challenge (ARC) benchmark.
The team, drawn from Microsoft's Language and Information Technologies group and Professor Tom Goldstein's lab at UMD, provided a detailed description of FreeLB in a paper published on arXiv. The method works by adding noise to the word embeddings of input sentences when fine-tuning a pre-trained model such as RoBERTa.
FreeLB builds on previous work from Goldstein's lab in which adversarial training is performed "for free" by re-using the gradient information already produced by standard training: the gradient is used to calculate a perturbation that is added to the input samples to create adversarial inputs. By including these samples during fine-tuning, the team improved a BERT-base model's GLUE score from 78.3% to 79.4%, and a RoBERTa-large model's from 88.5% to 88.8%. FreeLB-trained models also took the top spot on the ARC leaderboard, scoring 85.44% on ARC-Easy and 67.75% on ARC-Challenge.
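As a rough illustration of the "for free" idea, the PyTorch-style sketch below re-uses each backward pass to update both the model parameters and the adversarial perturbation. It is a simplification rather than the authors' released code, and `model`, `loss_fn`, `optimizer`, the number of replays, and the perturbation size are illustrative placeholders:

```python
import torch

# Simplified sketch of gradient re-use for "free" adversarial training.
# `model`, `loss_fn`, `optimizer`, `replays`, and `epsilon` are illustrative placeholders.
def free_training_step(model, loss_fn, optimizer, inputs, labels,
                       replays=4, epsilon=0.1):
    # Trainable perturbation added to the inputs.
    delta = torch.zeros_like(inputs, requires_grad=True)
    for _ in range(replays):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs + delta), labels)
        # A single backward pass produces the parameter gradients *and* the
        # gradient with respect to the perturbation.
        loss.backward()
        optimizer.step()  # ordinary parameter update
        with torch.no_grad():
            # Re-use the input gradient to push the perturbation toward higher loss.
            delta += epsilon * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return delta
```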
Adversarial training for image classifiers has been a focus for many researchers, especially those interested in autonomous vehicles. The FreeLB team notes that while such training improves the robustness of image models, it often reduces their accuracy. NLP systems, however, usually see improved accuracy with adversarial training. There are several techniques for generating adversarial inputs to NLP systems by manipulating the input text: for example, by adding distracting sentences, or by changing single words or characters (a technique that can also be used to help explain a model's output).
By contrast, FreeLB does not manipulate the input text directly. Instead, it adds a perturbation to the embedding vectors used to encode the input. Embeddings, frequently used as the first step in an NLP system, convert each word in the vocabulary into a high-dimensional vector, and these vectors often have useful geometric properties of their own: for example, words with similar meanings tend to lie "close" to each other in embedding space.
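As a toy illustration of that closeness property, cosine similarity between embedding vectors is a common way to measure it. The vocabulary and vectors below are invented for the example; real systems use the pre-trained embedding matrix inside models such as BERT or RoBERTa:

```python
import torch
import torch.nn.functional as F

# Toy embedding table; in practice the vectors come from a pre-trained model.
vocab = {"king": 0, "queen": 1, "car": 2}
embedding = torch.nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

def similarity(word_a: str, word_b: str) -> float:
    vec_a = embedding(torch.tensor(vocab[word_a]))
    vec_b = embedding(torch.tensor(vocab[word_b]))
    return F.cosine_similarity(vec_a, vec_b, dim=0).item()

# With trained embeddings, similarity("king", "queen") is typically much
# higher than similarity("king", "car"); here the vectors are random.
print(similarity("king", "queen"), similarity("king", "car"))
```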
FreeLB uses the gradient information from training to move each word's location in embedding space as far as possible without changing the output generated by the model. The authors claim this is even more effective than modifying the text directly, as it can "make manipulations on word embeddings that are not possible in the text domain." The perturbations are applied during fine-tuning, when a pre-trained model is further trained on a task-specific dataset such as a set of questions and answers. Because fine-tuning already calculates gradients, they are available "for free" to compute the perturbations, which effectively creates new training examples.
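The sketch below shows roughly what such a perturbed fine-tuning step could look like, assuming a HuggingFace-style model that accepts `inputs_embeds` (for example, a RoBERTa sequence-classification model). The number of ascent steps, step size, and norm bound are illustrative, and this is a simplification rather than the authors' released FreeLB implementation:

```python
import torch

# Hedged sketch of adversarial perturbation in embedding space during fine-tuning.
# Assumes a HuggingFace-style model with `get_input_embeddings()` and an
# `inputs_embeds` argument; hyperparameters are illustrative.
def perturbed_fine_tuning_step(model, optimizer, input_ids, attention_mask, labels,
                               ascent_steps=3, step_size=0.1, max_norm=1.0):
    optimizer.zero_grad()
    embedding_layer = model.get_input_embeddings()
    # Trainable perturbation with the same shape as the token embeddings.
    delta = torch.zeros_like(embedding_layer(input_ids), requires_grad=True)

    for _ in range(ascent_steps):
        # Re-embed the tokens each step and add the current perturbation.
        embeds = embedding_layer(input_ids)
        outputs = model(inputs_embeds=embeds + delta,
                        attention_mask=attention_mask, labels=labels)
        loss = outputs.loss / ascent_steps
        # The same backward pass accumulates parameter gradients for the optimizer
        # and produces the gradient on `delta` used to enlarge the perturbation.
        loss.backward()

        with torch.no_grad():
            grad = delta.grad
            delta += step_size * grad / (grad.norm() + 1e-12)
            # Keep the perturbation inside a bounded region around the clean embeddings.
            delta.clamp_(-max_norm, max_norm)
        delta.grad.zero_()

    optimizer.step()
```

According to the paper, accumulating the gradients from the ascent steps effectively trains the model on a larger "virtual" batch of adversarially perturbed embeddings, which is the source of the Free Large-Batch name.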
FreeLB's implementation has not been open-sourced, although other projects from Goldstein's group have been, including the previous work on adversarial training for images, which is available on GitHub. The team notes that:
Investigating the reason for the discrepancy between the outcomes of adversarial training for images and text is an interesting future direction.
The Microsoft team has also open-sourced some of its other work, including a recently released system for visual question answering called ReGAT, also available on GitHub.