Introduction To Generative Adversarial Networks (GANs)
Hichem FELOUAT
SAAD DAHLAB BLIDA UNIVERSITY - Algeria - 2020
FACULTY OF SCIENCE - Computer Science Department
LRDSI Laboratory
hichemfel@gmail.com
Introduction
• Generative adversarial networks (GANs) were proposed in a 2014 paper [1].
• A GAN is composed of two neural networks (Generator & Discriminator).
• Generator: Takes a random distribution as input (typically Gaussian) and
outputs some data - typically, an image. You can think of the random inputs
as the latent representations (i.e., codings) of the image to be generated.
• Discriminator: Takes either a fake image from the generator or a real image
from the training set as input, and must guess whether the input image is
fake or real.
[1] Ian Goodfellow et al., “Generative Adversarial Nets,” Proceedings of the 27th International Conference on Neural Information Processing Systems, Volume 2 (2014): 2672–2680.
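For reference, the two networks in [1] play the following minimax game, where G is the generator, D the discriminator, x a real training sample, and z the random latent input:
min_G max_D V(D, G) = E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 - D(G(z))) ]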
Applications of GAN
1) Generate Examples for Image Datasets
2) Generate Photographs of Human Faces
3) Generate Realistic Photographs
4) Generate Cartoon Characters
5) Image-to-Image Translation
6) Text-to-Image Translation
7) Semantic-Image-to-Photo Translation
8) Face Frontal View Generation
9) Generate New Human Poses
10) Photos to Emojis
11) Photograph Editing
12) Face Aging
13) Photo Blending
14) Super Resolution
15) Photo Inpainting
16) Clothing Translation
17) Video Prediction
18) 3D Object Generation
Generative adversarial networks: a survey on applications and challenges
https://link.springer.com/article/10.1007/s13735-020-00196-w
gans-awesome-applications
https://github.com/nashory/gans-awesome-applications
https://machinelearningmastery.com/impressive-applications-of-generative-adversarial-networks/
GAN Training
• The generator and the discriminator have opposite goals: the discriminator tries to tell fake images from real images, while the generator tries to produce images that look real enough to trick the discriminator.
• Because the GAN is composed of two networks with different objectives, it cannot be trained like a regular neural network. Each training iteration is divided into two phases:
GAN Training
First phase:
• We train the discriminator. A batch of real images is sampled from the
training set and is completed with an equal number of fake images produced
by the generator (The labels are: 0 = fake images and 1 = real images).
• The discriminator is trained on this labeled batch for one step, using the
binary cross-entropy loss.
• Backpropagation only optimizes the weights of the discriminator during this
phase.
GAN Training
Second phase:
• We train the generator. We first use it to produce another batch of fake
images, and once again the discriminator is used to tell whether the images
are fake or real.
• This time we do not add real images to the batch (the generator never actually sees any real images).
• The weights of the discriminator are frozen during this step, so
backpropagation only affects the weights of the generator.
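As a compressed sketch of one training iteration, assuming a generator, a compiled discriminator, and a combined gan model in which the discriminator has been frozen (as in the full runnable example later in these slides):

import tensorflow as tf

def train_step(gan, X_batch, batch_size, codings_size):
    generator, discriminator = gan.layers
    # phase 1: train the discriminator on a half-fake, half-real batch
    noise = tf.random.normal(shape=[batch_size, codings_size])
    fake_images = generator(noise)
    X_fake_and_real = tf.concat([fake_images, X_batch], axis=0)
    y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)  # 0 = fake, 1 = real
    discriminator.train_on_batch(X_fake_and_real, y1)
    # phase 2: train the generator through the frozen discriminator,
    # labelling its fake images as "real" so it learns to fool the discriminator
    noise = tf.random.normal(shape=[batch_size, codings_size])
    y2 = tf.constant([[1.]] * batch_size)
    gan.train_on_batch(noise, y2)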
Common Problems
• Vanishing Gradients: when the discriminator doesn't provide enough
information for the generator to make progress (The original GAN paper
proposed a modification to minimax loss to deal with vanishing gradients)[2].
• Mode Collapse: the generator starts producing the same output (or a small set of outputs) over and over again. How can this happen? Suppose the generator becomes better at producing convincing images of one class (say, class 1) than of any other class. It will fool the discriminator a bit more often with class 1, which encourages it to produce even more images of class 1. Gradually, it forgets how to produce anything else.
• GANs are very sensitive to the hyperparameters: you may have to spend
a lot of effort fine-tuning them.
[2] https://developers.google.com/machine-learning/gan/loss
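For reference, the modification mentioned in [2] is the non-saturating generator loss: instead of minimizing log(1 - D(G(z))), the generator minimizes -log(D(G(z))), which keeps gradients large early in training when the discriminator easily rejects fake samples. A minimal sketch, assuming the discriminator outputs probabilities:

import tensorflow as tf

def minimax_g_loss(d_fake):
    # E[log(1 - D(G(z)))]: saturates (near-zero gradients) when D confidently rejects fakes
    return tf.reduce_mean(tf.math.log(1.0 - d_fake + 1e-7))

def non_saturating_g_loss(d_fake):
    # -E[log D(G(z))]: the modification proposed in the original paper
    return -tf.reduce_mean(tf.math.log(d_fake + 1e-7))

Training the combined gan model with binary cross-entropy and target labels of 1 for the fake images (as in the example later in these slides) implements exactly this non-saturating loss.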
Deep Convolutional GANs
Deep Convolutional GANs (DCGANs) - 2015
Alec Radford et al., “Unsupervised Representation Learning with Deep
Convolutional Generative Adversarial Networks,” arXiv preprint
arXiv:1511.06434 (2015).
Deep Convolutional GANs
Here are the main guidelines they proposed for building stable
convolutional GANs:
1) Replace any pooling layers with strided convolutions (in the discriminator) and
transposed convolutions (in the generator).
2) Use Batch Normalization in both the generator and the discriminator, except in the
generator’s output layer and the discriminator’s input layer.
3) Remove fully connected hidden layers for deeper architectures.
4) Use ReLU activation in the generator for all layers except the output layer, which should
use tanh.
5) Use leaky ReLU activation in the discriminator for all layers.
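As an illustration only (not the exact architecture from the DCGAN paper), here is what one upsampling block and one downsampling block following these guidelines might look like in Keras:

from tensorflow import keras

# generator block: strided transposed convolution instead of pooling (1),
# Batch Normalization (2) and ReLU (4)
up_block = keras.Sequential([
    keras.layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding="same", use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
])

# discriminator block: strided convolution instead of pooling (1),
# Batch Normalization (2) and leaky ReLU (5)
down_block = keras.Sequential([
    keras.layers.Conv2D(64, kernel_size=4, strides=2, padding="same", use_bias=False),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(0.2),
])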
Example: Preparing The Dataset cifar10
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
# Using Keras to load the dataset
(X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()
print("X_train shape = ",X_train.shape," X_test shape = ",X_test.shape)
fig = plt.figure()
for i in range(9):
    plt.subplot(3, 3, i+1)
    plt.tight_layout()
    plt.imshow(X_train[i], interpolation='none')
    plt.xticks([])
    plt.yticks([])
# Scale the pixel intensities down to the [0,1] range by dividing them by 255.0
X_train = X_train.astype("float32") / 255.0
# Creating a Dataset to iterate through the images
batch_size = 128
dataset = tf.data.Dataset.from_tensor_slices(X_train).shuffle(1000)
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)
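Note (an assumption, not part of the original slides): the generator defined on the next slide ends with a tanh activation, whose outputs lie in [-1, 1]. If you keep that tanh output, a common alternative to the [0, 1] scaling above is to scale the images to [-1, 1] before building the Dataset, and map them back when plotting:

# alternative to dividing by 255.0: scale raw pixel values to [-1, 1] to match a tanh output
# X_train = X_train.astype("float32") / 127.5 - 1.0
# when plotting: imgs_uint8 = ((imgs + 1.0) * 127.5).astype(np.uint8)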
Example: The Generator
# codings_size : the dimension of the input vector for the generator
codings_size = 100
def build_generator(codings_size=100):
    generator = tf.keras.Sequential()
    # latent variable as input
    generator.add(keras.layers.Dense(1024, activation="relu", input_shape=(codings_size,)))
    generator.add(keras.layers.BatchNormalization())
    generator.add(keras.layers.Dense(1024, activation="relu"))
    generator.add(keras.layers.BatchNormalization())
    generator.add(keras.layers.Dense(128*8*8, activation="relu"))
    generator.add(keras.layers.Reshape((8, 8, 128)))
    assert generator.output_shape == (None, 8, 8, 128)  # Note: None is the batch size
    generator.add(keras.layers.Conv2DTranspose(filters=128, kernel_size=2, strides=2, activation="relu", padding="same"))
    assert generator.output_shape == (None, 16, 16, 128)
    generator.add(keras.layers.BatchNormalization())
    # output layer: tanh, with no Batch Normalization after it (DCGAN guideline 2)
    # note: tanh outputs lie in [-1, 1]; either scale the training images to [-1, 1]
    # or use a "sigmoid" activation here to match the [0, 1] scaling used earlier
    generator.add(keras.layers.Conv2DTranspose(filters=3, kernel_size=2, strides=2, activation="tanh", padding="same"))
    assert generator.output_shape == (None, 32, 32, 3)
    return generator
Example: The Generator - Plot Generated Images
generator = build_generator()
nbr_imgs = 3
def plot_generated_images(nbr_imgs, titleadd=""):
    noise = tf.random.normal([nbr_imgs, 100])
    imgs = generator.predict(noise)
    fig = plt.figure(figsize=(40, 10))
    for i, img in enumerate(imgs):
        ax = fig.add_subplot(1, nbr_imgs, i+1)
        # clip to [0, 1] before converting to uint8 (the tanh output can be negative)
        ax.imshow((np.clip(img, 0, 1) * 255).astype(np.uint8))
    fig.suptitle("Generated images" + titleadd, fontsize=25)
    plt.show()
plot_generated_images(nbr_imgs)
At the beginning of training, the generator produces essentially random images.
Example: The Discriminator
# discriminator
def build_discriminator():
    discriminator = tf.keras.Sequential()
    discriminator.add(keras.layers.Conv2D(filters=64, kernel_size=3, strides=2,
                      activation=keras.layers.LeakyReLU(0.2), padding="same", input_shape=(32, 32, 3)))
    discriminator.add(keras.layers.Conv2D(filters=128, kernel_size=3, strides=2,
                      activation=keras.layers.LeakyReLU(0.2), padding="same"))
    discriminator.add(keras.layers.Conv2D(filters=128, kernel_size=3, strides=2,
                      activation=keras.layers.LeakyReLU(0.2), padding="same"))
    discriminator.add(keras.layers.Conv2D(filters=256, kernel_size=3, strides=2,
                      activation=keras.layers.LeakyReLU(0.2), padding="same"))
    # classifier
    discriminator.add(keras.layers.Flatten())
    discriminator.add(keras.layers.Dropout(0.4))
    # discriminator.add(keras.layers.Dense(1024, activation=keras.layers.LeakyReLU(0.2)))
    discriminator.add(keras.layers.Dense(1, activation="sigmoid"))
    return discriminator

discriminator = build_discriminator()
# compile model
opt = keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
discriminator.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])
# freeze the discriminator so that it is not updated when training the combined gan model below
discriminator.trainable = False
Example: Train the GAN
gan = keras.models.Sequential([generator, discriminator])
# compile gan (only the generator is trained through gan, since the discriminator is frozen)
opt = keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
gan.compile(loss="binary_crossentropy", optimizer=opt)
# ------------------------------------------------------------------------------
# For creating an animated gif
from PIL import Image
import cv2
images = []

def animated_gif():
    noise_1 = tf.random.normal(shape=[4, codings_size])
    imgs = generator.predict(noise_1)
    # clip to [0, 1] and convert to uint8 before concatenating the four images side by side
    imgs = (np.clip(imgs, 0, 1) * 255).astype(np.uint8)
    img = cv2.hconcat([imgs[0], imgs[1], imgs[2], imgs[3]])
    img = Image.fromarray(img).convert("RGB")
    return img
Example: Train the GAN - The Training Loop
print("---------------------------------------------------------------")
def train_gan(gan, dataset, batch_size, codings_size, n_epochs):
    generator, discriminator = gan.layers
    for epoch in range(n_epochs):
        for X_batch in dataset:
            # phase 1 - training the discriminator
            noise = tf.random.normal(shape=[batch_size, codings_size])
            generated_images = generator.predict(noise)
            X_fake_and_real = tf.concat([generated_images, X_batch], axis=0)
            y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
            discriminator.trainable = True
            d_loss_accuracy = discriminator.train_on_batch(X_fake_and_real, y1)
            # phase 2 - training the generator
            noise = tf.random.normal(shape=[batch_size, codings_size])
            y2 = tf.constant([[1.]] * batch_size)
            discriminator.trainable = False
            g_loss = gan.train_on_batch(noise, y2)
        print("epoch : ", epoch, " d_loss_accuracy = ", d_loss_accuracy, " g_loss = ", g_loss)
        plot_generated_images(3, titleadd=" : Epoch {}".format(epoch))
        # For creating an animated gif
        img = animated_gif()
        images.append(img)
print("---------------------------------------------------------------")
Example: Train the GAN
n_epochs = 100
train_gan(gan, dataset, batch_size, codings_size, n_epochs)
# Create a gif of the generated images at every epoch
images[0].save("/content/gif_image.gif",
save_all=True, append_images=images[1:], optimize=False, duration=500, loop=0)
Deep Convolutional GANs:
https://github.com/hichemfelouat/my-codes-of-machine-learning/blob/master/GAN.ipynb
Example: AttGAN
AttGAN - Arbitrary Facial Attribute Editing: Only Change What You Want
https://github.com/elvisyjlin/AttGAN-PyTorch
[ Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Male,
Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Young ]
Recent GANs
https://arxiv.org/abs/1906.01529
GANs in NLP
The paper below explores the use of GANs for NLP tasks and proposes a GAN architecture for this purpose.
https://arxiv.org/abs/1905.01976
Thank you for your attention.
Hichem Felouat