Transformers and transfer learning
A milestone in NLP came in 2017 with the release of the research paper Attention Is All You Need, by Vaswani et al. (https://arxiv.org/abs/1706.03762), which introduced a brand-new machine learning idea and architecture – transformers. The transformer is a fresh approach to sequence modeling that addresses some of the shortcomings of the Long Short-Term Memory (LSTM) architecture. Here’s how the paper describes the way transformers work:
“The Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence aligned RNNs or convolution.”
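To make the idea of self-attention slightly more concrete, here is a minimal sketch of the scaled dot-product attention computation from the paper, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The matrices and dimensions are toy values chosen purely for illustration, not part of any real model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # attention weights, one row per query
    return weights @ V                  # weighted sum of the value vectors

# Toy example: 4 "token" vectors of dimension 8 attending to each other
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))

# In a real transformer, Q, K, and V are learned linear projections of x;
# random projection matrices are used here only for illustration.
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)  # (4, 8): one contextualized vector per token
```

Each output row is a weighted mixture of all the value vectors, which is how every token's representation comes to depend on every other token without any recurrence or convolution.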
Transduction in this context means transforming the input into the output: input words and sentences are mapped to vectors. Typically, a transformer is trained on a huge corpus. Then, in our downstream tasks, we reuse these vectors, as they carry information about word semantics...
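As a rough illustration of this transfer learning workflow, the following sketch pulls contextual token vectors out of a pretrained transformer using the Hugging Face transformers library (assuming transformers and torch are installed; bert-base-uncased is used only as an example checkpoint):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load a pretrained transformer and its tokenizer from the Hugging Face Hub;
# any pretrained checkpoint would work, bert-base-uncased is just an example.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Transformers produce contextual word vectors."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One vector per (sub)word token; these are the representations we can reuse
# as features in downstream tasks such as text classification or NER.
token_vectors = outputs.last_hidden_state  # shape: (1, num_tokens, hidden_size)
print(token_vectors.shape)
```

Because the model was already trained on a huge corpus, these vectors encode a lot of semantic information for free, and a downstream task only needs to add and train a small task-specific layer on top of them.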