Chapter 1: Introduction to Neural Networks and Deep Learning
Practical Exercises Chapter 1
Exercise 1: Implementing a Simple Perceptron
Task: Implement a perceptron for the AND logic gate. Train the perceptron using the Perceptron learning algorithm and test it on the same data.
Solution:
import numpy as np

class Perceptron:
    def __init__(self, learning_rate=0.01, n_iters=1000):
        self.learning_rate = learning_rate
        self.n_iters = n_iters
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.weights = np.zeros(n_features)
        self.bias = 0

        for _ in range(self.n_iters):
            for idx, x_i in enumerate(X):
                linear_output = np.dot(x_i, self.weights) + self.bias
                y_predicted = self.activation_function(linear_output)
                # Perceptron update rule: shift weights toward misclassified examples
                update = self.learning_rate * (y[idx] - y_predicted)
                self.weights += update * x_i
                self.bias += update

    def activation_function(self, x):
        # Step activation: output 1 if the input is non-negative, else 0
        return np.where(x >= 0, 1, 0)

    def predict(self, X):
        linear_output = np.dot(X, self.weights) + self.bias
        return self.activation_function(linear_output)
# AND gate dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
# Train Perceptron
perceptron = Perceptron(learning_rate=0.1, n_iters=10)
perceptron.fit(X, y)
# Test Perceptron
predictions = perceptron.predict(X)
print(f"Predictions: {predictions}")
Exercise 2: Training a Multi-Layer Perceptron (MLP)
Task: Train a multi-layer perceptron (MLP) on the XOR logic gate. Use Scikit-learn’s MLPClassifier and report the accuracy.
Solution:
from sklearn.neural_network import MLPClassifier
import numpy as np
from sklearn.metrics import accuracy_score
# XOR gate dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
# Train MLP classifier
mlp = MLPClassifier(hidden_layer_sizes=(2,), max_iter=1000, random_state=42)
mlp.fit(X, y)
# Test the MLP and compute accuracy
predictions = mlp.predict(X)
accuracy = accuracy_score(y, predictions)
print(f"Accuracy: {accuracy:.2f}")
Exercise 3: Gradient Descent on a Quadratic Function
Task: Implement gradient descent to minimize the following quadratic loss function:
L(w) = w^2
Start with an initial weight of w = 10 and a learning rate of 0.1. Perform 20 iterations and plot the loss curve.
Solution:
import numpy as np
import matplotlib.pyplot as plt
# Define loss function (quadratic) and its gradient
def loss_function(w):
    return w**2

def gradient(w):
    return 2 * w

# Gradient descent parameters
learning_rate = 0.1
n_iterations = 20
w = 10  # Initial weight

# Store weights and losses
weights = [w]
losses = [loss_function(w)]

# Perform gradient descent
for i in range(n_iterations):
    grad = gradient(w)
    w = w - learning_rate * grad
    weights.append(w)
    losses.append(loss_function(w))
# Plot the loss curve
plt.plot(range(n_iterations + 1), losses, marker='o')
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.title("Gradient Descent Minimizing Loss Function")
plt.show()
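The shape of this loss curve can also be derived in closed form. Each update is

w_{t+1} = w_t - η · ∇L(w_t) = w_t - η · 2w_t = (1 - 2η) w_t,

so with η = 0.1 every step multiplies the weight by 0.8. Starting from w = 10, this gives w_t = 10 · 0.8^t and L(w_t) = 100 · 0.64^t; after 20 iterations w ≈ 0.12 and the loss is about 0.013, which matches the geometric decay visible in the plot.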
Exercise 4: Backpropagation with Scikit-learn’s MLP
Task: Train a multi-layer perceptron (MLP) on the digits dataset using Scikit-learn’s MLPClassifier and report the test accuracy. The model should use backpropagation to adjust the weights.
Solution:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
# Load digits dataset (multi-class classification)
digits = load_digits()
X = digits.data
y = digits.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train MLP classifier
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000, solver='adam', random_state=42)
mlp.fit(X_train, y_train)
# Test the MLP and compute accuracy
y_pred = mlp.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Test Accuracy: {accuracy:.2f}")
Exercise 5: Applying L2 Regularization (Ridge) to a Neural Network
Task: Train a neural network with L2 regularization (Ridge) on the moons dataset using Scikit-learn’s MLPClassifier. Report the test accuracy and observe how L2 regularization affects overfitting.
Solution:
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
# Generate moons dataset (binary classification)
X, y = make_moons(n_samples=500, noise=0.20, random_state=42)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train MLP classifier with L2 regularization (alpha controls regularization strength)
mlp = MLPClassifier(hidden_layer_sizes=(100,), alpha=0.01, max_iter=1000, solver='adam', random_state=42)
mlp.fit(X_train, y_train)
# Test the MLP and compute accuracy
y_pred = mlp.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Test Accuracy with L2 Regularization: {accuracy:.2f}")
Exercise 6: Implementing Binary Cross-Entropy Loss
Task: Implement binary cross-entropy loss manually and use it to compute the loss for the following data points:
- True label: y = 1, predicted probability: ŷ = 0.9
- True label: y = 0, predicted probability: ŷ = 0.3
Solution:
import numpy as np
# Binary cross-entropy loss function
def binary_crossentropy(y_true, y_pred):
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
# Example data
y_true_1 = 1
y_pred_1 = 0.9
y_true_2 = 0
y_pred_2 = 0.3
# Compute binary cross-entropy loss for each case
loss_1 = binary_crossentropy(y_true_1, y_pred_1)
loss_2 = binary_crossentropy(y_true_2, y_pred_2)
print(f"Binary Cross-Entropy Loss (y=1, y_pred=0.9): {loss_1:.4f}")
print(f"Binary Cross-Entropy Loss (y=0, y_pred=0.3): {loss_2:.4f}")
By completing these exercises, you will gain hands-on experience with building and training neural networks, as well as with applying regularization techniques to improve model generalization.