Project 3: Customer Feedback Analysis Using Sentiment Analysis
6. Step 4: Evaluating the Model
Evaluate the trained model on the held-out evaluation set to measure how well it performs on data it did not see during training.
from sklearn.metrics import classification_report
# Predict on the evaluation set
predictions = trainer.predict(eval_dataset)
# Convert predictions to labels
predicted_labels = predictions.predictions.argmax(-1)
# Print classification report
print(classification_report(eval_dataset['label'], predicted_labels))
Code breakdown:
1. Import and Setup
from sklearn.metrics import classification_report
This imports scikit-learn's classification_report function, which summarizes a classifier's performance class by class.
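To see what the function expects independently of the model, here is a tiny standalone illustration with made-up label lists (hypothetical values, not from the project's dataset; 0/1/2 stand in for negative/neutral/positive):
from sklearn.metrics import classification_report
# Toy illustration with made-up true and predicted labels -- not model output.
true_toy = [0, 1, 2, 2, 0]
pred_toy = [0, 1, 1, 2, 0]
print(classification_report(true_toy, pred_toy))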
2. Making Predictions
predictions = trainer.predict(eval_dataset)
predicted_labels = predictions.predictions.argmax(-1)
This code:
- Uses the trained model (through the Trainer) to run inference on the evaluation dataset; predictions.predictions holds the raw output scores (logits)
- Converts those logits into class labels with argmax, i.e. picks the class with the highest score, which is also the class with the highest probability; a short standalone sketch of this step follows the list
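As a minimal sketch of the argmax step, assuming three sentiment classes and purely illustrative logit values:
import numpy as np
# Hypothetical logits for two reviews across three classes
# (negative=0, neutral=1, positive=2) -- illustrative values only.
logits = np.array([
    [2.1, -0.3, 0.4],   # highest score at index 0 -> negative
    [-1.0, 0.2, 3.5],   # highest score at index 2 -> positive
])
predicted_labels = logits.argmax(-1)  # argmax over the last axis (the classes)
print(predicted_labels)               # [0 2]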
3. Evaluation
print(classification_report(eval_dataset['label'], predicted_labels))
This generates a report comparing the true labels with the predicted labels, showing precision, recall, and F1-score for each sentiment class along with overall accuracy. This evaluation step is crucial for understanding how well the model performs on unseen data before it is deployed for real customer feedback analysis. A slightly more robust variant of the same evaluation is sketched below.
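The sketch below is one possible variation, assuming the standard Hugging Face Trainer API and three classes named negative, neutral, and positive (adjust target_names to your own label mapping). It reads the ground truth from the prediction output itself (predictions.label_ids), which works even if the label column was renamed during preprocessing, and adds a confusion matrix to show which classes get mixed up.
from sklearn.metrics import classification_report, confusion_matrix
# Run inference once and reuse the returned PredictionOutput object.
predictions = trainer.predict(eval_dataset)
# Raw logits -> predicted class indices.
predicted_labels = predictions.predictions.argmax(-1)
# The Trainer also returns the ground-truth labels it evaluated against.
true_labels = predictions.label_ids
# target_names are assumed class names for readability; they must match
# the number and order of classes in your label mapping.
print(classification_report(
    true_labels,
    predicted_labels,
    target_names=["negative", "neutral", "positive"],
))
# Rows are true classes, columns are predicted classes.
print(confusion_matrix(true_labels, predicted_labels))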