Generative Deep Learning with Python

Chapter 6: Project: Handwritten Digit Generation with VAEs

6.2 Model Creation

The next step in our project is the creation of the Variational Autoencoder model. As we learned in Chapter 5, the VAE consists of an encoder, a decoder, and a loss function that incorporates a reconstruction term as well as a Kullback-Leibler divergence term. 

Let's create these components using TensorFlow and Keras.

6.2.1 Encoder

The encoder part of the VAE is responsible for mapping the input data into a latent space representation. This is typically done using a neural network.

Here's an example of how we might define the encoder using Keras:

import tensorflow as tf
from tensorflow.keras import layers

original_dim = 28 * 28
intermediate_dim = 64
latent_dim = 2

# Define encoder model
inputs = tf.keras.Input(shape=(original_dim,))
h = layers.Dense(intermediate_dim, activation='relu')(inputs)
z_mean = layers.Dense(latent_dim)(h)
z_log_sigma = layers.Dense(latent_dim)(h)

# Instantiate the encoder model
encoder = tf.keras.Model(inputs, [z_mean, z_log_sigma], name='encoder')

# Display the summary of the encoder model
encoder.summary()

In the code above, we first define the dimensions of our input data and of the latent space. We then define the structure of the encoder network: a single hidden layer with a ReLU activation, followed by two parallel output layers. Rather than a single latent vector, the encoder outputs the parameters of a Gaussian distribution over the latent space: z_mean and z_log_sigma, the logarithm of the standard deviation.
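
Before moving on, it can help to see the shapes this produces. The following is a quick, purely illustrative check (the dummy_batch array is our own stand-in for real data, not part of the model definition):

import numpy as np

# Illustrative only: pass a random batch through the encoder and inspect shapes
dummy_batch = np.random.rand(16, original_dim).astype('float32')
mu, log_sigma = encoder(dummy_batch)
print(mu.shape, log_sigma.shape)  # both should be (16, 2), since latent_dim = 2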

6.2.2 Latent Space Sampling

To obtain a latent point, we sample from this Gaussian distribution. We do this by adding a custom layer that takes z_mean and z_log_sigma as input and outputs a random sample from the corresponding distribution. This is the reparameterization trick: the sample is written as a deterministic function of z_mean, z_log_sigma, and an independent noise term epsilon, so gradients can flow through the sampling step during training.

from tensorflow.keras import backend as K

# Reparameterization trick: z = z_mean + sigma * epsilon, with sigma = exp(z_log_sigma)
def sampling(args):
    z_mean, z_log_sigma = args
    # Draw standard Gaussian noise with the same batch size as z_mean
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=1.)
    return z_mean + K.exp(z_log_sigma) * epsilon

# Wrap the sampling function in a Lambda layer so it becomes part of the model graph
z = layers.Lambda(sampling)([z_mean, z_log_sigma])
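
As a quick check that this layer really injects noise, we can wrap z in a small model and call it twice on the same input; the two results should differ because fresh epsilon noise is drawn on every call. This is an illustrative sketch using the dummy_batch from above, not part of the model itself:

# Illustrative only: the same input yields different latent samples
sampler = tf.keras.Model(inputs, z, name='sampler')
z1 = sampler(dummy_batch)
z2 = sampler(dummy_batch)
print(np.allclose(z1.numpy(), z2.numpy()))  # expected: False, new noise each call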

6.2.3 Decoder

The decoder part of the VAE takes a point in the latent space and maps it back to the original data space. Like the encoder, the decoder is also typically implemented as a neural network.

Here's how we might define the decoder:

# Define decoder model
decoder_h = layers.Dense(intermediate_dim, activation='relu')
decoder_mean = layers.Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

In this code, we define the decoder layers as standalone objects and then apply them to the sampled latent point z. Like the encoder, the decoder has a single hidden layer with a ReLU activation function. The output layer uses a sigmoid activation so that the output values fall within the range [0, 1], the same range as our normalized input data.
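
Because decoder_h and decoder_mean are standalone layer objects, we can also reuse them with a separate latent input to obtain an independent generator model. This is a sketch of one common way to do it; the names latent_inputs and generator are ours, not fixed by the code above:

# Optional sketch: a standalone generator that maps latent points to digit images
latent_inputs = tf.keras.Input(shape=(latent_dim,))
generator = tf.keras.Model(latent_inputs,
                           decoder_mean(decoder_h(latent_inputs)),
                           name='generator')
generator.summary()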

6.2.4 Assembling the VAE

We now have all the components needed for our VAE: an encoder that maps data to the parameters of a distribution in the latent space, a sampling layer that draws latent points from that distribution, and a decoder that maps latent points back to the data space.

We can now assemble these components into a single model:

from tensorflow.keras import Model

# Assemble encoder, sampler, and decoder into a VAE model
vae = Model(inputs, x_decoded_mean)

# Print model summary
vae.summary()
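
As a final sanity check (our own, not part of the training pipeline), we can pass the dummy batch from earlier through the assembled model and confirm that the reconstructions have the same shape as the inputs:

# Illustrative only: reconstructions should have the same shape as the input
reconstructions = vae(dummy_batch)
print(reconstructions.shape)  # expected: (16, 784)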

Our VAE model is now ready to be compiled and trained, which we will cover in the next section.
