Commit 1e386f2

committed
fix section names
1 parent 8d2a686 commit 1e386f2


README.md

Lines changed: 6 additions & 4 deletions
```diff
@@ -119,7 +119,7 @@ We pass `sample_length = n_ctx * downsample_of_level` so that after downsampling
 Here, `n_ctx = 8192` and `downsamples = (32, 256)`, giving `sample_lengths = (8192 * 32, 8192 * 256) = (262144, 2097152)` respectively for the bottom and top level.

 ### Reuse pre-trained VQ-VAE and retrain top level prior on new dataset.
-#### No labels
+#### Train without labels
 Our pre-trained VQ-VAE can produce compressed codes for a wide variety of genres of music, and the pre-trained upsamplers
 can upsample them back to audio that sounds very similar to the original audio.
 To re-use these for a new dataset of your choice, you can retrain just the top-level
```
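The sample-length arithmetic in the context line above is just the prior's context length times each level's total downsampling factor, and can be checked in a couple of lines (note that `8192 * 32` is `262144`):

```python
# Values taken from the README text above: context length of the prior
# and the total downsampling factor of each VQ-VAE level.
n_ctx = 8192
downsamples = (32, 256)  # bottom and top level

# Raw-audio samples needed so that n_ctx codes remain after downsampling.
sample_lengths = tuple(n_ctx * d for d in downsamples)
print(sample_lengths)  # (262144, 2097152)
```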
```diff
@@ -130,14 +130,16 @@ mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_prior,all_fp16,cpu_
 --sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
 --labels=False --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
 ```
+
+#### Sample from new model
 You can then run sample.py with the top-level of our models replaced by your new model. To do so,
 - Add an entry `my_model=("vqvae", "upsampler_level_0", "upsampler_level_1", "small_prior")` in `MODELS` in `make_models.py`.
 - Update the `small_prior` dictionary in `hparams.py` to include `restore_prior='path/to/checkpoint'`. If you
 changed any hps directly in the command line script (eg: `heads`), make sure to update them in the dictionary too so
 that `make_models` restores our checkpoint correctly.
 - Run sample.py as outlined in the sampling section, but now with `--model=my_model`

-#### With labels
+#### Train with labels
 To train with your own metadata for your audio files, implement `get_metadata` in `data/files_dataset.py` to return the
 `artist`, `genre` and `lyrics` for a given audio file. For now, you can pass `''` for lyrics to not use any lyrics.
```
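The `get_metadata` hook described in the hunk above could be sketched as below. The filename-keyed table and the exact signature are illustrative assumptions, not part of Jukebox — check `data/files_dataset.py` for the real interface:

```python
import os

# Hypothetical metadata table keyed by audio filename; in practice you
# would load this from your own dataset's annotation files (assumption).
METADATA = {
    "track_01.wav": {"artist": "some_artist", "genre": "jazz", "lyrics": ""},
}

def get_metadata(filename):
    """Return (artist, genre, lyrics) for a given audio file.

    Passing '' for lyrics trains without any lyric conditioning.
    """
    entry = METADATA.get(os.path.basename(filename), {})
    return (
        entry.get("artist", "unknown"),
        entry.get("genre", "unknown"),
        entry.get("lyrics", ""),
    )
```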

```diff
@@ -164,9 +166,9 @@ mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_labelled_prior,all_
 --labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000 \
 ```

-For sampling, follow the same instructions as [above](#no-labels) but use `small_labelled_prior` instead of `small_prior`.
+For sampling, follow the same instructions as [above](#sample-from-new-model) but use `small_labelled_prior` instead of `small_prior`.

-#### With lyrics
+#### Train with lyrics
 To train in addition with lyrics, update `get_metadata` in `data/files_dataset.py` to return `lyrics` too.
 For training with lyrics, we'll use `small_single_enc_dec_prior` in `hparams.py`.
 - Lyrics:
```
