This folder contains a subset of the example scripts provided by HuggingFace for the Transformers library. Most of the scripts here have been modified to support training adapters instead of full model fine-tuning.
Before getting started with an example script, make sure to have everything set up.
It is best to install the latest adapter-transformers version from the repository:
```bash
git clone https://github.com/adapter-hub/adapter-transformers
cd adapter-transformers
pip install .
```
Now, switch to an examples folder and run:
```bash
pip install -r requirements.txt
```
Currently, scripts for these tasks support adapters:
| Task | Description |
|---|---|
| `language-modeling` | Causal & masked language modeling |
| `multiple-choice` | SWAG dataset |
| `question-answering` | SQuAD-style QA |
| `summarization` | Summarization, e.g. on CNN/DailyMail or XSum |
| `text-classification` | GLUE benchmark |
| `text-generation` | Text generation, e.g. using GPT-2 |
| `token-classification` | NER, e.g. on CoNLL-2003 |
| `translation` | Machine translation, e.g. on WMT tasks |
| `dependency-parsing` | Dependency parsing on Universal Dependencies |
All scripts listed above that can be used for training provide a new `--train_adapter` option that switches between full fine-tuning and adapter training.
Loading pre-trained adapters can be done via `--load_adapter`.
You can find all additional adapter-specific command-line options here.
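For example, adapter training on a GLUE task might look like this (a minimal sketch; the model name, task, script path and output directory are placeholder values):

```bash
python text-classification/run_glue.py \
  --model_name_or_path bert-base-uncased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --train_adapter \
  --output_dir /tmp/mrpc_adapter_output
```

To work with an existing adapter instead of training a new one, `--train_adapter` could be replaced with `--load_adapter` pointing to a pre-trained adapter.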
For more information and examples on training adapters, please refer to these locations:
- The section on adapter training in the AdapterHub documentation.
- Our collection of Colab notebook tutorials.
NOTE: Below, you will find the original, unmodified documentation by HuggingFace. Check out their examples folder for more example scripts.
Here is the list of all our examples:
- with information on whether they are built on top of `Trainer` (if not, they still work, they might just lack some features),
- whether or not they have a version using the 🤗 Accelerate library,
- whether or not they leverage the 🤗 Datasets library,
- links to Colab notebooks to walk through the scripts and run them easily.
| Task | Example datasets | Trainer support | 🤗 Accelerate | 🤗 Datasets | Colab |
|---|---|---|---|---|---|
| `language-modeling` | WikiText-2 | ✅ | ✅ | ✅ | |
| `multiple-choice` | SWAG | ✅ | ✅ | ✅ | |
| `question-answering` | SQuAD | ✅ | ✅ | ✅ | |
| `summarization` | XSum | ✅ | ✅ | ✅ | |
| `text-classification` | GLUE | ✅ | ✅ | ✅ | |
| `text-generation` | - | n/a | - | - | |
| `token-classification` | CoNLL NER | ✅ | ✅ | ✅ | |
| `translation` | WMT | ✅ | ✅ | ✅ | |
Most examples are equipped with a mechanism to truncate the number of dataset samples to the desired length. This is useful for debugging purposes, for example to quickly check that all stages of the programs can complete, before running the same setup on the full dataset which may take hours to complete.
For example here is how to truncate all three splits to just 50 samples each:
```bash
examples/pytorch/token-classification/run_ner.py \
  --max_train_samples 50 \
  --max_eval_samples 50 \
  --max_predict_samples 50 \
  [...]
```
Most example scripts should have the first two command line arguments and some have the third one. You can quickly check if a given example supports any of these by passing a -h option, e.g.:
```bash
examples/pytorch/token-classification/run_ner.py -h
```
You can resume training from a previous checkpoint like this:
- Pass `--output_dir previous_output_dir` without `--overwrite_output_dir` to resume training from the latest checkpoint in `output_dir` (what you would use if the training was interrupted, for instance).
- Pass `--resume_from_checkpoint path_to_a_specific_checkpoint` to resume training from that checkpoint folder.

Should you want to turn an example into a notebook where you'd no longer have access to the command line, 🤗 Trainer supports resuming from a checkpoint via `trainer.train(resume_from_checkpoint)`.

- If `resume_from_checkpoint` is `True` it will look for the last checkpoint in the value of `output_dir` passed via `TrainingArguments`.
- If `resume_from_checkpoint` is a path to a specific checkpoint it will use that saved checkpoint folder to resume the training from.
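For example, resuming a run from a specific checkpoint on the command line might look like this (a sketch; `checkpoint-1000` is a hypothetical checkpoint folder produced by an earlier run):

```bash
python pytorch/text-classification/run_glue.py \
  --output_dir previous_output_dir \
  --resume_from_checkpoint previous_output_dir/checkpoint-1000 \
  [...]
```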
All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to the Trainer API. To launch one of them on n GPUs, use the following command:

```bash
python -m torch.distributed.launch \
  --nproc_per_node number_of_gpu_you_have path_to_script.py \
  --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the run_glue script, with 8 GPUs:
```bash
python -m torch.distributed.launch \
  --nproc_per_node 8 pytorch/text-classification/run_glue.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 8 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/mnli_output/
```

If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision training with PyTorch 1.6.0 or later, or by installing the Apex library for previous versions. Just add the flag `--fp16` to your command launching one of the scripts mentioned above!

Using mixed precision training usually results in a 2x speedup for training with the same final results (as shown in this table for text classification).
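For instance, the distributed MNLI command above could be run in mixed precision simply by appending the flag, with all other arguments unchanged:

```bash
python -m torch.distributed.launch \
  --nproc_per_node 8 pytorch/text-classification/run_glue.py \
  --fp16 \
  [...]
```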
When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`.
When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment refer to Google's documentation and to the very detailed `pytorch/xla` README.
In this repo, we provide a very simple launcher script named `xla_spawn.py` that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for `torch.distributed`):
```bash
python xla_spawn.py --num_cores num_tpu_you_have \
  path_to_script.py \
  --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the run_glue script, with 8 TPUs (from this folder):
```bash
python xla_spawn.py --num_cores 8 \
  text-classification/run_glue.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 8 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/mnli_output/
```

Most PyTorch example scripts have a version using the 🤗 Accelerate library that exposes the training loop so it's easy for you to customize or tweak them to your needs. They all require you to install `accelerate` with

```bash
pip install accelerate
```

Then you can easily launch any of the scripts by running

```bash
accelerate config
```

and reply to the questions asked. Then run

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch path_to_script.py --args_to_script
```

You can easily log and monitor your runs. The following are currently supported:
To use Weights & Biases, install the `wandb` package with:

```bash
pip install wandb
```

Then log in on the command line:

```bash
wandb login
```

If you are in Jupyter or Colab, you should log in with:

```python
import wandb
wandb.login()
```

To enable logging to W&B, include `"wandb"` in the `report_to` of your `TrainingArguments` or script. Or just pass along `--report_to all` if you have `wandb` installed.
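As a concrete illustration, a script run could report to W&B like this (a sketch; the run name is just a placeholder):

```bash
python pytorch/text-classification/run_glue.py \
  --report_to wandb \
  --run_name my-mnli-run \
  [...]
```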
Whenever you use the `Trainer` or `TFTrainer` classes, your losses, evaluation metrics, model topology and gradients (for `Trainer` only) will automatically be logged.
Advanced configuration is possible by setting environment variables:
| Environment Variable | Value |
|---|---|
| `WANDB_LOG_MODEL` | Log the model as an artifact at the end of training (`false` by default) |
| `WANDB_WATCH` | one of `gradients` (default) to log histograms of gradients, `all` to log histograms of both gradients and parameters, or `false` for no histogram logging |
| `WANDB_PROJECT` | Organize runs by project |
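For example, these variables could be exported in the shell before launching a script (the project name below is just a placeholder):

```bash
export WANDB_PROJECT=my-glue-experiments
export WANDB_WATCH=all
export WANDB_LOG_MODEL=true
python pytorch/text-classification/run_glue.py --report_to wandb [...]
```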
Set run names with the `run_name` argument present in scripts or as part of `TrainingArguments`.
Additional configuration options are available through generic wandb environment variables.
Refer to related documentation & examples.
To use `comet_ml`, install the Python package with:

```bash
pip install comet_ml
```

or if in a Conda environment:

```bash
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
```