Natural Language Processing with Python

Chapter 10: Machine Translation

10.5 Practical Exercises of Chapter 10: Machine Translation

Exercise 1: Implementing a Simple Seq2Seq Model

Implement a simple sequence-to-sequence model using LSTM layers for both the encoder and decoder. Use this model to translate short English sentences into French. You can use a small dataset such as the English-French sentence pairs dataset available on manylanguagepairs.com for this exercise.

Here is a skeleton code to help you get started:

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

# Hyperparameters
batch_size = 64
epochs = 100
latent_dim = 256
num_samples = 10000

# Prepare the data...
# (this step should define num_encoder_tokens, num_decoder_tokens, and the
#  one-hot arrays encoder_input_data, decoder_input_data, decoder_target_data
#  built from the English-French sentence pairs)

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile & train the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
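
Training alone is not enough to translate new sentences: at inference time the decoder has to generate one token at a time, feeding each prediction back in as the next input. Below is a minimal sketch of the usual inference setup built from the layers defined above. It assumes a character-level setup where the data-preparation step also produced target_token_index, reverse_target_char_index, and max_decoder_seq_length, and where '\t' and '\n' mark the start and end of a target sequence; adapt these details to however you prepared your data.

import numpy as np

# Inference setup (sketch): reuse the trained layers to build
# separate encoder and decoder models for step-by-step decoding.
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)

def decode_sequence(input_seq):
    # Encode the source sentence into the initial decoder states
    states_value = encoder_model.predict(input_seq)
    # Start with the start-of-sequence token (assumed to be '\t')
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    target_seq[0, 0, target_token_index['\t']] = 1.0
    decoded_sentence = ''
    while True:
        output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char
        # Stop at the end-of-sequence marker or a length limit
        if sampled_char == '\n' or len(decoded_sentence) > max_decoder_seq_length:
            break
        # Feed the sampled token back in and carry the states forward
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.0
        states_value = [h, c]
    return decoded_sentence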

Exercise 2: Attention Mechanism

Try to implement an attention mechanism and incorporate it into the seq2seq model you created in the first exercise.
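
There is more than one way to do this; one lightweight option is Keras' built-in dot-product attention layer (tf.keras.layers.Attention). The sketch below reuses the variable names from Exercise 1: the encoder now returns its full output sequence, the decoder outputs act as the attention query over those encoder outputs, and the attended context is concatenated with the decoder outputs before the softmax. Treat it as a starting point rather than a finished implementation.

from tensorflow.keras.layers import Attention, Concatenate

# Encoder: keep the full output sequence so attention can look at every step
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder as before
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)

# Dot-product attention: decoder outputs query the encoder outputs
context = Attention()([decoder_outputs, encoder_outputs])

# Combine the attended context with the decoder outputs before the softmax
decoder_combined = Concatenate(axis=-1)([decoder_outputs, context])
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_combined)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')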

Exercise 3: Experimenting with Transformer Models

Use Hugging Face's Transformers library to load a pre-trained Transformer model and use it for translation tasks. Compare its performance with that of the seq2seq model you created in the first exercise.

from transformers import pipeline

# Use the 'translation_en_to_fr' pipeline
translator = pipeline('translation_en_to_fr', model='Helsinki-NLP/opus-mt-en-fr')

# Translate a sentence
translation = translator("Hello, how are you?", max_length=40)
print(translation[0]['translation_text'])
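
The pipeline also accepts a list of sentences, which is convenient when you want to translate the same test set with both models and compare the outputs later. A small sketch (the test sentences here are just illustrative placeholders):

# Translate a small batch of test sentences for later comparison
test_sentences = [
    "I am hungry.",
    "Where is the train station?",
    "She likes to read books.",
]
results = translator(test_sentences, max_length=40)
for src, res in zip(test_sentences, results):
    print(src, "->", res['translation_text'])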

Exercise 4: Evaluation Metrics

Write a function to calculate the BLEU score for the translations produced by your models. Use this function to compare the performance of different models or different versions of a model.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def calculate_bleu_score(reference, candidate):
    # Tokenize by whitespace; sentence_bleu expects a list of reference token lists
    reference = [reference.split()]
    candidate = candidate.split()
    # Smoothing avoids zero scores when a higher-order n-gram has no match,
    # which happens easily with short sentences
    score = sentence_bleu(reference, candidate,
                          smoothing_function=SmoothingFunction().method1)
    return score

reference = "The cat is on the mat."
candidate = "There is a cat on the mat."
print("BLEU Score:", calculate_bleu_score(reference, candidate))

Remember to adjust the models and parameters to the resources you have, as training these models can be very resource-intensive. These exercises are just starting points; feel free to expand on them and experiment with different settings and architectures.
