Chapter 4: Project Face Generation with GANs
4.3 Training the GAN
Training a Generative Adversarial Network (GAN) is a nuanced process that involves iteratively updating the generator and discriminator networks to improve the quality of the generated images. This section will guide you through the steps of training a GAN to generate realistic human faces, including the necessary training loops, loss functions, and monitoring techniques.
4.3.1 Overview of the Training Process
The training process for a GAN involves two main steps in each iteration:
- Training the Discriminator: The discriminator is trained to differentiate between real images from the dataset and fake images generated by the generator.
- Training the Generator: The generator is trained to produce images that can fool the discriminator into classifying them as real.
To achieve this, the training loop consists of:
- Generating a batch of fake images from random noise.
- Obtaining a batch of real images from the dataset.
- Training the discriminator on both real and fake images.
- Training the generator via the combined GAN model, in which the discriminator's weights are frozen (a sketch of this combined model follows below).
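The combined model referred to above, gan, chains the generator and discriminator together and freezes the discriminator's weights, so that gradients from the GAN loss update only the generator. The code in this section assumes it was assembled roughly as in the following minimal Keras sketch; the generator and discriminator models are assumed to come from the previous section, and the Adam settings (0.0002, 0.5) are common DCGAN defaults rather than values fixed by this chapter:
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

# Compile the discriminator on its own so it can be trained directly
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(0.0002, 0.5),
                      metrics=['accuracy'])

# Freeze the discriminator inside the combined model: gradients from the
# GAN loss then flow only into the generator's weights
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))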
4.3.2 Training the Discriminator
The discriminator is trained to maximize the likelihood of correctly classifying real and fake images. The loss function used is binary cross-entropy.
Discriminator Loss:
L_D = -\frac{1}{m} \sum_{i=1}^{m} \Big[ y_i \log D(x_i) + (1 - y_i) \log\big(1 - D(G(z_i))\big) \Big]
where y_i is the label (1 for real, 0 for fake), D(x_i) is the discriminator's prediction for a real image, and D(G(z_i)) is its prediction for a generated (fake) image.
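To make the formula concrete, here is a small NumPy sketch that evaluates the loss by hand on a toy batch; the prediction values are invented purely for illustration:
import numpy as np

# Hypothetical discriminator outputs for two real and two fake images
d_real = np.array([0.9, 0.8])   # D(x_i), where the true labels y_i are 1
d_fake = np.array([0.2, 0.3])   # D(G(z_i)), where the true labels y_i are 0

# Binary cross-entropy averaged over the combined batch of m = 4 samples
m = len(d_real) + len(d_fake)
loss = -(np.sum(np.log(d_real)) + np.sum(np.log(1.0 - d_fake))) / m
print(loss)  # ~0.227: low, because the discriminator is mostly correct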
Example: Discriminator Training Code
import numpy as np

# Training parameters
epochs = 10000
batch_size = 64
sample_interval = 1000

# Adversarial ground truths
real = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))

# Training loop for the discriminator
for epoch in range(epochs):
    # Select a random batch of real images
    idx = np.random.randint(0, train_images.shape[0], batch_size)
    real_images = train_images[idx]

    # Generate a batch of fake images
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise)

    # Train the discriminator on real and fake images
    d_loss_real = discriminator.train_on_batch(real_images, real)
    d_loss_fake = discriminator.train_on_batch(fake_images, fake)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Print progress
    if epoch % sample_interval == 0:
        print(f"{epoch} [D loss: {d_loss[0]:.4f}, acc.: {d_loss[1] * 100:.2f}%]")
The training parameters specify 10,000 training steps with a batch size of 64, and progress is reported every 1,000 steps. Note that because train_on_batch processes a single batch, each "epoch" in this loop is really one iteration, not a full pass over the dataset.
The real and fake arrays hold the target labels (1 for real, 0 for fake) used when training the discriminator.
Inside the loop, each iteration randomly selects a batch of real images from the training data and has the generator produce a batch of fake images from Gaussian noise.
The discriminator is trained on each batch separately, and the two losses are averaged. Whenever the epoch number is a multiple of sample_interval, the epoch, the discriminator's loss, and its accuracy are printed.
4.3.3 Training the Generator
The generator is trained to maximize the likelihood of the discriminator classifying its outputs as real. This is achieved by training the generator through the combined GAN model, where the discriminator's weights are frozen.
Generator Loss:
L_G = -\frac{1}{m} \sum_{i=1}^{m} \log D(G(z_i))
where D(G(z_i)) is the discriminator's prediction for a generated image. Minimizing this loss pushes the generator toward outputs the discriminator classifies as real; this is the widely used non-saturating form of the generator objective.
Example: Generator Training Code
import matplotlib.pyplot as plt  # needed below to display sample images

# Training loop for the generator (the discriminator is updated alongside it)
for epoch in range(epochs):
    # Train the discriminator, exactly as in the previous example
    idx = np.random.randint(0, train_images.shape[0], batch_size)
    real_images = train_images[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise)
    d_loss_real = discriminator.train_on_batch(real_images, real)
    d_loss_fake = discriminator.train_on_batch(fake_images, fake)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Train the generator: fresh noise, labeled "real" so the gradient
    # pushes the generator toward fooling the discriminator
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, real)

    # Print progress and display sample images at regular intervals
    if epoch % sample_interval == 0:
        print(f"{epoch} [D loss: {d_loss[0]:.4f}, acc.: {d_loss[1] * 100:.2f}%] [G loss: {g_loss:.4f}]")

        # Generate and display ten sample images
        noise = np.random.normal(0, 1, (10, latent_dim))
        generated_images = generator.predict(noise)
        fig, axs = plt.subplots(1, 10, figsize=(20, 2))
        for i, img in enumerate(generated_images):
            axs[i].imshow((img * 127.5 + 127.5).astype(np.uint8))
            axs[i].axis('off')
        plt.show()
First, a batch of real images is selected and a batch of fake images is generated; the discriminator is trained on both, and its two losses are averaged, exactly as in the previous example.
Next, a fresh batch of noise is sampled and passed through the combined GAN model with "real" labels. Because the discriminator's weights are frozen in this model, only the generator is updated, and it is pushed toward producing images the discriminator classifies as real.
The script then prints both losses whenever the epoch is a multiple of sample_interval and, at the same intervals, generates ten sample images for visual inspection. Their pixel values are rescaled from the generator's [-1, 1] output range back to [0, 255] before being displayed with matplotlib.
4.3.4 Monitoring the Training Process
To ensure that the GAN is training effectively, it is essential to monitor the training process. This includes:
- Loss Monitoring: Tracking the loss values of both the discriminator and generator over time.
- Generated Samples: Periodically generating and visualizing images to qualitatively assess the generator's performance.
- Saving Models: Saving the model weights at regular intervals to safeguard against potential training interruptions and facilitate future evaluation or further training.
Example: Monitoring Code
import matplotlib.pyplot as plt

# Function to plot and save generated images
def plot_generated_images(epoch, generator, examples=10, dim=(1, 10), figsize=(20, 2)):
    noise = np.random.normal(0, 1, (examples, latent_dim))
    generated_images = generator.predict(noise)
    # Rescale from the generator's [-1, 1] output range to [0, 255] pixels
    generated_images = (generated_images * 127.5 + 127.5).astype(np.uint8)
    plt.figure(figsize=figsize)
    for i in range(examples):
        plt.subplot(dim[0], dim[1], i + 1)
        plt.imshow(generated_images[i])
        plt.axis('off')
    plt.tight_layout()
    plt.savefig(f"gan_generated_image_epoch_{epoch}.png")
    plt.close()

# Training loop with monitoring
for epoch in range(epochs):
    # Train the discriminator
    idx = np.random.randint(0, train_images.shape[0], batch_size)
    real_images = train_images[idx]
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise)
    d_loss_real = discriminator.train_on_batch(real_images, real)
    d_loss_fake = discriminator.train_on_batch(fake_images, fake)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Train the generator through the combined model
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, real)

    # Log progress and save sample images
    if epoch % sample_interval == 0:
        print(f"{epoch} [D loss: {d_loss[0]:.4f}, acc.: {d_loss[1] * 100:.2f}%] [G loss: {g_loss:.4f}]")
        plot_generated_images(epoch, generator)

    # Save model checkpoints
    if epoch % 1000 == 0:
        generator.save(f'generator_epoch_{epoch}.h5')
        discriminator.save(f'discriminator_epoch_{epoch}.h5')
The plot_generated_images function renders a row of sample images from the generator at a given epoch, rescales them to valid pixel values, and saves the figure to disk.
The training loop itself is the same as before: the discriminator is trained to separate real images from generated ones, and the generator is trained through the combined model to fool it, with both losses logged for inspection.
Whenever the epoch is a multiple of sample_interval, the code prints the losses and saves a plot of generated images; whenever it is a multiple of 1,000, the generator and discriminator models are checkpointed to .h5 files.
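Printed losses scroll by quickly, so it is often worth keeping a running history and plotting it after (or during) training. The sketch below assumes two lists, d_losses and g_losses, are appended to inside the training loop; neither appears in the code above, and both are additions for illustration:
import matplotlib.pyplot as plt

d_losses, g_losses = [], []
# Inside the training loop, record each update:
#     d_losses.append(d_loss[0])
#     g_losses.append(g_loss)

def plot_loss_history(d_losses, g_losses):
    # Plot both loss curves on a shared iteration axis
    plt.figure(figsize=(10, 4))
    plt.plot(d_losses, label='Discriminator loss')
    plt.plot(g_losses, label='Generator loss')
    plt.xlabel('Iteration')
    plt.ylabel('Loss')
    plt.legend()
    plt.savefig('gan_loss_history.png')
    plt.close()
Roughly flat, oscillating curves are typical of a healthy adversarial game; a discriminator loss that collapses toward zero usually signals that the generator has stopped learning.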
Summary
Training a GAN to generate realistic human faces involves a delicate balance between training the discriminator and the generator. By iteratively updating both networks, monitoring their performance, and saving models at regular intervals, we can create a powerful GAN capable of producing high-quality images. This process requires careful attention to detail, as training instability can lead to issues such as mode collapse, where the generator produces only a narrow set of nearly identical outputs.
With the training process set up and running, the next steps will focus on evaluating the trained GAN, fine-tuning its performance, and leveraging the generated images for various applications.