LLMs and prompt engineering basics
As we saw in Chapter 6, language modeling is the task of predicting the next token given the sequence of previous tokens. The example we used was that, given the sequence of words "Yesterday I visited a", a language model can predict the next token to be something such as "church", "hospital", "school", and so on. Conventional language models are usually trained in a supervised manner to perform a specific task. Pre-trained language models (PLMs), by contrast, are trained in a self-supervised manner, with the aim of learning a generic representation of the language. These PLMs are then fine-tuned to perform a specific downstream task. This self-supervised pre-training makes PLMs much more powerful than conventional language models.
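To make next-token prediction concrete, here is a minimal sketch that scores candidate continuations of our running example with a pre-trained causal language model. It assumes the Hugging Face transformers library and PyTorch are installed; the choice of the GPT-2 checkpoint is ours for illustration, not something prescribed by the text:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an illustrative pre-trained language model (GPT-2 is an assumption).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Encode the prompt from the running example.
inputs = tokenizer("Yesterday I visited a", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The distribution over the next token comes from the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12s}  p={prob.item():.3f}")
```

Running this prints the model's five most probable next tokens with their probabilities; depending on the checkpoint, words such as "church" or "hospital" tend to rank highly after this prompt.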
LLMs are an evolution of PLMs, with many more model parameters and much larger training datasets. The GPT-3 model, for example, has 175 billion parameters. Its successor, GPT-3.5, was the base for the ChatGPT model released in November 2022...