# Chapter 4: Project: Face Generation with GANs

## 4.3 Training the GAN

Training a GAN involves simultaneously training the generator to produce more realistic images and training the discriminator to become better at distinguishing generated images from real ones. This is typically done in alternating steps: on each iteration, the discriminator is first updated on a batch of real and fake images, and then the generator is updated through the combined model.

To train the discriminator, we’ll feed it batches of real images labeled as real (1) and generated images labeled as fake (0), and update its weights based on how well it classified the images.

To train the generator, we’ll use the combined model, which chains the generator to the discriminator. The combined model is trained using random noise labeled as real (1). Because the discriminator is frozen during the generator's training phase (i.e., **discriminator.trainable = False**), only the generator’s weights are updated. The generator’s training objective is to get the discriminator to classify its generated images as real.
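As a reminder, the combined model can be wired up by stacking the generator and the frozen discriminator. The sketch below is one common way to do this in Keras; the function name **define_gan()** and the choice of optimizer are illustrative assumptions, not necessarily the exact setup used earlier in the chapter:

```python
from tensorflow.keras.models import Sequential

def define_gan(generator, discriminator):
    # freeze the discriminator so that training the combined model
    # only updates the generator's weights
    discriminator.trainable = False
    model = Sequential()
    model.add(generator)
    model.add(discriminator)
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
```

Note that the discriminator should be compiled on its own *before* being frozen here: Keras captures the `trainable` flag at compile time, so the standalone discriminator still learns normally while the combined model leaves it untouched.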

Here's the training loop in Python, using Keras:

```python
import numpy as np

def train(GAN, generator, discriminator, dataset, latent_dim, n_epochs=5000, n_batch=128):
    half_batch = int(n_batch / 2)
    # manually enumerate epochs
    for i in range(n_epochs):
        # prepare real samples
        x_real, y_real = generate_real_samples(dataset, half_batch)
        # prepare fake examples
        x_fake, y_fake = generate_fake_samples(generator, latent_dim, half_batch)
        # update discriminator
        d_loss_real = discriminator.train_on_batch(x_real, y_real)
        d_loss_fake = discriminator.train_on_batch(x_fake, y_fake)
        # prepare points in latent space as input for the generator
        x_gan = generate_latent_points(latent_dim, n_batch)
        # create inverted labels for the fake samples
        y_gan = np.ones((n_batch, 1))
        # update the generator via the discriminator's error
        g_loss = GAN.train_on_batch(x_gan, y_gan)
        # summarize loss on this batch
        print('>%d, d_real=%.3f, d_fake=%.3f, g=%.3f' % (i + 1, d_loss_real, d_loss_fake, g_loss))
        # evaluate the model every 500 epochs
        if (i + 1) % 500 == 0:
            summarize_performance(i, generator, discriminator, dataset, latent_dim)
```

In this code, **generate_real_samples()** and **generate_fake_samples()** are functions that generate a batch of real and fake images, respectively, with appropriate labels, and **summarize_performance()** is a function that evaluates the discriminator’s performance and saves the generator’s output at different points during training. The specific implementation of these functions will depend on your dataset and the specific requirements of your project.
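As a starting point, one plausible shape for these helpers is sketched below. The Gaussian latent sampling and the real/fake labels match the training loop above; the version of **summarize_performance()** shown here only reports discriminator accuracy (and assumes the discriminator was compiled with an accuracy metric) rather than saving images, which is left as a project-specific choice:

```python
import numpy as np

def generate_real_samples(dataset, n_samples):
    # select n_samples images at random and label them as real (1)
    ix = np.random.randint(0, dataset.shape[0], n_samples)
    return dataset[ix], np.ones((n_samples, 1))

def generate_latent_points(latent_dim, n_samples):
    # sample latent vectors from a standard Gaussian
    return np.random.randn(n_samples, latent_dim)

def generate_fake_samples(generator, latent_dim, n_samples):
    # run the generator on random latent points and label the output as fake (0)
    x = generator.predict(generate_latent_points(latent_dim, n_samples))
    return x, np.zeros((n_samples, 1))

def summarize_performance(epoch, generator, discriminator, dataset, latent_dim, n_samples=100):
    # report the discriminator's accuracy on real and generated samples;
    # assumes the discriminator was compiled with metrics=['accuracy']
    x_real, y_real = generate_real_samples(dataset, n_samples)
    _, acc_real = discriminator.evaluate(x_real, y_real, verbose=0)
    x_fake, y_fake = generate_fake_samples(generator, latent_dim, n_samples)
    _, acc_fake = discriminator.evaluate(x_fake, y_fake, verbose=0)
    print('>Epoch %d, accuracy real: %.0f%%, fake: %.0f%%' %
          (epoch + 1, acc_real * 100, acc_fake * 100))
```

A healthy sign during training is discriminator accuracy hovering in a middle range on both real and fake batches; accuracy pinned at 100% on fakes usually means the generator has fallen behind.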

This process is repeated for a number of epochs until the generator and discriminator are both trained to a satisfactory level. The generator will hopefully produce convincing images, while the discriminator should be able to accurately distinguish between real and fake images.

In the next section, we'll take a look at how to generate new faces using our trained GAN.
