Chapter 2: Machine Learning Fundamentals for NLP
Practical Exercises for Chapter 2
This practical exercises section consolidates your understanding of the topics covered in Chapter 2. Each exercise is designed to give you hands-on experience with key concepts such as machine learning fundamentals, neural networks, and transformer-based embeddings. A detailed code solution is included for each task.
Exercise 1: Text Data Preprocessing
Task: Write a Python program that preprocesses text by:
- Tokenizing it into words.
- Removing stopwords.
- Converting the text into a Bag-of-Words (BoW) representation.
Example Input:
"Natural language processing is a fascinating field of artificial intelligence."
Solution:
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
# Download the NLTK resources required by word_tokenize and stopwords (first run only)
nltk.download('punkt')
nltk.download('stopwords')
# Input text
text = "Natural language processing is a fascinating field of artificial intelligence."
# Tokenize
tokens = word_tokenize(text.lower())
# Remove stopwords
stop_words = set(stopwords.words('english'))
filtered_tokens = [word for word in tokens if word.isalnum() and word not in stop_words]
# Convert to Bag-of-Words representation
vectorizer = CountVectorizer()
bow_matrix = vectorizer.fit_transform([" ".join(filtered_tokens)])
print("Filtered Tokens:", filtered_tokens)
print("Vocabulary:", vectorizer.vocabulary_)
print("BoW Matrix:\n", bow_matrix.toarray())
Expected Output:
Filtered Tokens: ['natural', 'language', 'processing', 'fascinating', 'field', 'artificial', 'intelligence']
Vocabulary: {'natural': 5, 'language': 4, 'processing': 6, 'fascinating': 1, 'field': 2, 'artificial': 0, 'intelligence': 3}
BoW Matrix:
[[1 1 1 1 1 1 1]]
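Once fitted, the same vectorizer can encode new documents against the learned vocabulary. The following is a minimal sketch (the example phrase is illustrative) showing that repeated words yield counts greater than one and that out-of-vocabulary words are silently dropped:
# Encode a new document with the already-fitted vocabulary (hypothetical example phrase)
new_doc = "language processing of natural language"
new_bow = vectorizer.transform([new_doc])
print("New BoW vector:", new_bow.toarray())
# 'language' appears twice, so its column holds a 2; 'of' is not in the fitted vocabulary and is ignored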
Exercise 2: Training a Feed-Forward Neural Network for Sentiment Analysis
Goal: Train a simple feed-forward neural network to classify reviews as positive or negative.
Dataset:
Reviews = [
"I love this movie; it's fantastic!",
"This film was terrible and boring.",
"Amazing acting and a great story.",
"The plot was awful, and I hated it."
]
Labels = [1, 0, 1, 0] # 1 = Positive, 0 = Negative
Solution:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Sample dataset
texts = [
"I love this movie; it's fantastic!",
"This film was terrible and boring.",
"Amazing acting and a great story.",
"The plot was awful, and I hated it."
]
labels = np.array([1, 0, 1, 0])  # 1 = Positive, 0 = Negative
# Preprocess text using Bag-of-Words
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts).toarray()
# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=42)
# Define the feedforward neural network
model = Sequential([
Dense(8, input_dim=X_train.shape[1], activation='relu'), # Hidden layer
Dense(1, activation='sigmoid') # Output layer
])
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train, epochs=10, batch_size=2, verbose=1)
# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}")
Expected Output (approximate; with only four examples, the exact numbers vary between runs):
Epoch 10/10
Test Accuracy: 1.00
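With the model trained, the fitted vectorizer can encode a new review for prediction. This is a minimal sketch (the review text is a hypothetical example, and with such a tiny training set the prediction itself is not reliable):
# Classify a new review with the trained model (hypothetical example text)
new_review = ["What a fantastic story and great acting!"]
new_X = vectorizer.transform(new_review).toarray()
prob = model.predict(new_X)[0][0]  # sigmoid output in [0, 1]
print("Predicted sentiment:", "Positive" if prob >= 0.5 else "Negative", f"(p={prob:.2f})")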
Exercise 3: Extracting Word Embeddings with BERT
Task: Extract contextualized embeddings for a word in a sentence using BERT.
Input Sentence:
"The bank is located near the river."
Solution:
from transformers import AutoTokenizer, AutoModel
import torch
# Load BERT model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
# Input sentence
sentence = "The bank is located near the river."
# Tokenize input
inputs = tokenizer(sentence, return_tensors="pt", truncation=True, padding=True)
# Generate embeddings
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state # Shape: [batch_size, seq_length, hidden_dim]
# Display the embedding for the word 'bank'
# Index into the encoded input so the position accounts for the [CLS] special token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
bank_index = tokens.index("bank")
bank_embedding = embeddings[0, bank_index, :]
print(f"Embedding for 'bank': {bank_embedding}")
Exercise 4: Sentence Embeddings with Sentence Transformers
Task: Generate sentence embeddings for semantic similarity.
Sentences:
- "I love natural language processing."
- "NLP is a fascinating field."
Solution:
from sentence_transformers import SentenceTransformer
# Load a pre-trained sentence transformer model
model = SentenceTransformer('all-MiniLM-L6-v2')
# Input sentences
sentences = [
"I love natural language processing.",
"NLP is a fascinating field."
]
# Generate sentence embeddings
embeddings = model.encode(sentences)
# Display embeddings
print("Embedding for sentence 1:", embeddings[0])
print("Embedding for sentence 2:", embeddings[1])
Expected Output:
Two vectors representing the semantic meaning of each sentence.
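Since the stated goal is semantic similarity, it is natural to compare the two vectors directly. A minimal sketch using the cosine-similarity helper from sentence_transformers (the exact value depends on the model version):
from sentence_transformers import util
# Cosine similarity between the two sentence embeddings
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {similarity.item():.4f}")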
Exercise 5: Fine-Tuning BERT for Text Classification
Task: Fine-tune BERT on a small text classification dataset.
Example Dataset:
Texts = ["I love this movie!", "The movie was awful.", "What a great film!", "I disliked the plot."]
Labels = [1, 0, 1, 0] # 1 = Positive, 0 = Negative
Solution:
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import Dataset
# Prepare dataset
texts = ["I love this movie!", "The movie was awful.", "What a great film!", "I disliked the plot."]
labels = [1, 0, 1, 0] # 1 = Positive, 0 = Negative
data = {"text": texts, "label": labels}
dataset = Dataset.from_dict(data)
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
# Tokenize dataset
def tokenize_function(example):
    return tokenizer(example["text"], truncation=True, padding="max_length")
tokenized_dataset = dataset.map(tokenize_function, batched=True)
# Training arguments (evaluation is left disabled because no eval_dataset is passed to the Trainer)
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
# Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
)
# Fine-tune
trainer.train()
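After fine-tuning, the model can be used for inference on a new sentence. This is a minimal sketch (the input text is a hypothetical example, and with only four training examples the prediction is unlikely to be reliable):
import torch
# Run the fine-tuned model on a new piece of text (hypothetical example)
new_text = "An absolutely wonderful film!"
inputs = tokenizer(new_text, return_tensors="pt", truncation=True, padding=True)
inputs = {k: v.to(model.device) for k, v in inputs.items()}  # match the model's device
model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label = int(torch.argmax(logits, dim=-1))
print("Predicted label:", predicted_label)  # 1 = Positive, 0 = Negative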
These exercises walk you through preprocessing, building and training models, and working with embeddings. Completing them will give you hands-on experience with the techniques discussed in this chapter and a solid foundation for tackling real-world NLP tasks.