Chapter 13: Project: Sentiment Analysis Dashboard
13.5 Evaluating and Deploying the Dashboard
In this section, we will focus on evaluating the performance of the sentiment analysis dashboard and deploying it to a suitable platform. Evaluation helps ensure that the dashboard meets user expectations and performs well under various conditions. Deployment makes the dashboard accessible to users, allowing them to benefit from its functionalities.
13.5.1 Evaluating the Dashboard
Evaluation involves measuring the dashboard's performance in terms of accuracy, responsiveness, and user satisfaction. We will use different metrics and methods to evaluate these aspects.
1. Accuracy Metrics
Accuracy metrics assess how well the sentiment analysis models classify the sentiment of the text data. Common metrics include accuracy, precision, recall, F1-score, and confusion matrix.
Example: Evaluating Sentiment Analysis Models
We can use the sklearn library to calculate precision, recall, F1-score, and confusion matrix for the sentiment analysis models.
evaluate_models.py:
import pandas as pd
import pickle
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, ConfusionMatrixDisplay
from tensorflow.keras.models import load_model

# Load test data and preprocessed features
test_data = pd.read_csv('data/processed_data/test_data_preprocessed.csv')
with open('data/processed_data/X_test.pickle', 'rb') as file:
    X_test = pickle.load(file)
y_test = test_data['sentiment'].apply(lambda x: 1 if x == 'positive' else 0)

# Load the Logistic Regression model
with open('models/best_logistic_regression_model.pickle', 'rb') as file:
    logistic_regression_model = pickle.load(file)

# Evaluate the Logistic Regression model
y_pred_lr = logistic_regression_model.predict(X_test)
accuracy_lr = accuracy_score(y_test, y_pred_lr)
print(f'Logistic Regression Accuracy: {accuracy_lr:.4f}')
print(classification_report(y_test, y_pred_lr))

cm_lr = confusion_matrix(y_test, y_pred_lr, labels=[0, 1])
disp_lr = ConfusionMatrixDisplay(confusion_matrix=cm_lr, display_labels=['Negative', 'Positive'])
disp_lr.plot(cmap=plt.cm.Blues)
plt.title('Logistic Regression Confusion Matrix')
plt.show()

# Load the LSTM model
lstm_model = load_model('models/lstm_model.h5')

# Evaluate the LSTM model (predict returns probabilities; threshold at 0.5
# and flatten the column vector so it matches the shape of y_test)
y_pred_prob_lstm = lstm_model.predict(X_test)
y_pred_lstm = (y_pred_prob_lstm > 0.5).astype(int).ravel()
accuracy_lstm = accuracy_score(y_test, y_pred_lstm)
print(f'LSTM Accuracy: {accuracy_lstm:.4f}')
print(classification_report(y_test, y_pred_lstm))

cm_lstm = confusion_matrix(y_test, y_pred_lstm, labels=[0, 1])
disp_lstm = ConfusionMatrixDisplay(confusion_matrix=cm_lstm, display_labels=['Negative', 'Positive'])
disp_lstm.plot(cmap=plt.cm.Blues)
plt.title('LSTM Confusion Matrix')
plt.show()
In this script, we evaluate the Logistic Regression and LSTM models on the test set, calculate various metrics, and plot the confusion matrices to visualize the performance.
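The quantities in the classification report can all be read directly off the confusion matrix. As a sanity check for the metrics above, here is a small sketch that computes them by hand for the binary case (the counts used in the example are made up for illustration):

```python
def metrics_from_confusion(tn, fp, fn, tp):
    """Compute accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)          # of all predicted positives, how many were right
    recall = tp / (tp + fn)             # of all actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return {'accuracy': accuracy, 'precision': precision, 'recall': recall, 'f1': f1}

# Example: 50 true negatives, 10 false positives, 5 false negatives, 35 true positives
print(metrics_from_confusion(50, 10, 5, 35))
```

Precision and recall often pull in opposite directions (a stricter positive threshold raises precision but lowers recall), which is why the F1-score is a useful single summary.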
2. User Feedback
User feedback is essential for assessing the usability and satisfaction of the dashboard. We can collect feedback through surveys or direct ratings.
Example: Collecting User Feedback
We can modify our Flask application to include a feedback form where users can rate their experience and provide comments.
app.py (continued):
feedback_data = []

@app.route('/feedback', methods=['POST'])
def feedback():
    user_feedback = request.json
    feedback_data.append(user_feedback)
    return jsonify({'message': 'Thank you for your feedback!'})
The HTML template for the feedback form:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Sentiment Analysis Dashboard</title>
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
</head>
<body>
    <div class="container">
        <h1 class="mt-5">Sentiment Analysis Dashboard</h1>
        <form id="feedback-form" class="mt-4">
            <div class="form-group">
                <label for="rating">Rate your experience:</label>
                <select class="form-control" id="rating" name="rating">
                    <option value="1">1 - Poor</option>
                    <option value="2">2 - Fair</option>
                    <option value="3">3 - Good</option>
                    <option value="4">4 - Very Good</option>
                    <option value="5">5 - Excellent</option>
                </select>
            </div>
            <div class="form-group">
                <label for="comments">Comments:</label>
                <textarea class="form-control" id="comments" name="comments" rows="3"></textarea>
            </div>
            <button type="submit" class="btn btn-primary">Submit Feedback</button>
        </form>
        <div id="feedback-result" class="mt-4"></div>
    </div>
    <script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
    <script>
        $(document).ready(function() {
            $('#feedback-form').on('submit', function(event) {
                event.preventDefault();
                const formData = $(this).serializeArray().reduce((obj, item) => {
                    obj[item.name] = item.value;
                    return obj;
                }, {});
                // Send as JSON with an explicit content type so Flask's
                // request.json can parse the body on the server side
                $.ajax({
                    url: '/feedback',
                    type: 'POST',
                    contentType: 'application/json',
                    data: JSON.stringify(formData),
                    success: function(data) {
                        $('#feedback-result').html(`<h4>${data.message}</h4>`);
                    }
                });
            });
        });
    </script>
</body>
</html>
In this example, we create a feedback form where users can rate their experience and provide comments. The feedback is sent to the /feedback endpoint and stored for further analysis.
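Note that the in-memory feedback_data list is lost whenever the server restarts. As a minimal persistence sketch (the filename feedback.csv and the helper name save_feedback are illustrative assumptions, not part of the project so far), each submission could be appended to a CSV file:

```python
import csv
import os
from datetime import datetime, timezone

FEEDBACK_FILE = 'feedback.csv'  # hypothetical path; adjust to your project layout

def save_feedback(rating, comments, path=FEEDBACK_FILE):
    """Append one feedback record to a CSV file, writing a header row on first use."""
    new_file = not os.path.exists(path)
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(['timestamp', 'rating', 'comments'])
        writer.writerow([datetime.now(timezone.utc).isoformat(), rating, comments])

# Inside the /feedback route you would then call, for example:
# save_feedback(user_feedback['rating'], user_feedback.get('comments', ''))
```

For heavier use, the same idea extends naturally to a small SQLite table instead of a CSV file.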
3. Response Time
Response time is critical for user experience. We need to ensure that the dashboard responds promptly to user queries.
Example: Measuring Response Time
We can measure the response time for different queries using Python's time module.
evaluate_response_time.py:
import time
import requests

# Measure the wall-clock time of a single POST request to an endpoint
def measure_response_time(endpoint, data):
    start_time = time.time()
    response = requests.post(endpoint, data=data)
    end_time = time.time()
    return end_time - start_time

# Measure response time for sentiment analysis
data = {'text': 'This is a great product!', 'model_type': 'logistic_regression'}
response_time = measure_response_time('http://localhost:5000/analyze', data)
print(f'Sentiment Analysis Response Time: {response_time:.3f} seconds')
By measuring response time, we can ensure that the dashboard meets the desired performance criteria.
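A single measurement is noisy: network jitter, model warm-up, and garbage collection all affect individual requests. A more reliable picture comes from repeating the request and summarizing the distribution. A small sketch (the helper name latency_profile is an assumption; it would be fed a list of timings collected with measure_response_time above):

```python
import statistics

def latency_profile(measurements):
    """Summarize a list of latencies (in seconds) as median and 95th percentile."""
    ordered = sorted(measurements)
    median = statistics.median(ordered)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # simple nearest-rank percentile
    return {'median': median, 'p95': p95}

# Typical usage (not run here, since it needs the server from the example above):
# timings = [measure_response_time('http://localhost:5000/analyze', data) for _ in range(100)]
# print(latency_profile(timings))
```

Reporting the median and a tail percentile (rather than a single run or the mean) guards against one slow outlier dominating the result while still surfacing worst-case behavior.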
13.5.2 Deploying the Dashboard
Once the dashboard is evaluated and performs satisfactorily, the next step is deployment. We will deploy the dashboard to a suitable platform where users can interact with it.
1. Web Application Deployment
We can deploy the dashboard as a web application using a cloud platform such as Heroku, AWS, or Google Cloud.
Example: Deploying on Heroku
To deploy the dashboard on Heroku, follow these steps:
- Install Heroku CLI: Download and install the Heroku CLI from the Heroku website.
- Log In to Heroku: Open a terminal and log in to your Heroku account.
heroku login
- Create a Heroku App: Create a new Heroku app.
heroku create your-app-name
- Prepare the Project for Deployment: Create a Procfile and a requirements.txt in the project directory.
Procfile:
web: python app.py
requirements.txt:
Flask
requests
pandas
scikit-learn
tensorflow
plotly
- Push the Project to Heroku: Initialize a Git repository, add the project files, and push to Heroku.
git init
git add .
git commit -m "Initial commit"
git push heroku master
- Open the Heroku App: Open the deployed app in your browser.
heroku open
By following these steps, the dashboard will be deployed on Heroku and accessible via a public URL.
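One caveat: the Procfile shown above launches Flask's built-in development server, which is single-threaded and not intended for production traffic. A common alternative (a sketch, assuming the Flask instance is named app inside app.py, and that gunicorn is added to requirements.txt) is to run the app under a WSGI server:

```
web: gunicorn app:app
```

Here the first app is the module name (app.py) and the second is the Flask object inside it.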
2. Integrating with Messaging Apps
We can also integrate the dashboard with messaging apps like Slack or Microsoft Teams for real-time sentiment analysis updates.
Example: Integrating with Slack
To integrate the dashboard with Slack, follow these steps:
- Create a Slack App: Create a new app from the Slack API dashboard (api.slack.com).
- Enable Webhooks: Enable incoming webhooks for your app.
- Set Up Webhook URL: Set up a URL to receive messages from Slack.
slack_integration.py:
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

SLACK_WEBHOOK_URL = 'your_slack_webhook_url'

@app.route('/slack', methods=['POST'])
def slack():
    # Slack slash commands POST form-encoded data, so read from request.form
    text = request.form.get('text', '')
    # Perform sentiment analysis (using the logistic regression model as an example;
    # preprocess_text and the loaded model come from earlier in the project)
    preprocessed_text = preprocess_text(text)
    prediction = logistic_regression_model.predict(preprocessed_text)
    sentiment = 'Positive' if prediction[0] == 1 else 'Negative'
    # Send the result back to Slack, visible to the whole channel
    response = {
        'response_type': 'in_channel',
        'text': f'Sentiment: {sentiment}'
    }
    return jsonify(response)

if __name__ == '__main__':
    app.run(debug=True)
In this script, we create a /slack route that receives messages from Slack, performs sentiment analysis, and sends the result back to Slack.
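The script defines SLACK_WEBHOOK_URL but the route replies inline; the incoming webhook is the mechanism for pushing proactive updates (say, a periodic sentiment summary) into a channel. A minimal sketch, assuming a valid webhook URL (the helper names are illustrative):

```python
import json
import requests

def build_slack_payload(message):
    """Format a plain-text message as an incoming-webhook JSON payload."""
    return json.dumps({'text': message})

def post_to_slack(webhook_url, message):
    """Send a message to a Slack channel via an incoming webhook; True on HTTP 200."""
    resp = requests.post(
        webhook_url,
        data=build_slack_payload(message),
        headers={'Content-Type': 'application/json'},
    )
    return resp.status_code == 200

# Typical usage from a scheduled job:
# post_to_slack(SLACK_WEBHOOK_URL, 'Hourly summary: most mentions were positive')
```

A scheduler (cron, or a library such as APScheduler) could call post_to_slack at fixed intervals with aggregated dashboard statistics.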
We covered the evaluation and deployment of our sentiment analysis dashboard. We discussed various metrics and methods to evaluate its performance, including accuracy metrics, user feedback, and response time. We provided examples of deploying the dashboard as a web application using Heroku and integrating it with Slack for real-time sentiment analysis updates.
By following these steps, you can ensure your dashboard performs well in real-world scenarios and is accessible to users on different platforms. The deployment process makes the dashboard available to users, allowing them to interact with it and benefit from its functionalities.
13.5 Evaluating and Deploying the Dashboard
In this section, we will focus on evaluating the performance of the sentiment analysis dashboard and deploying it to a suitable platform. Evaluation helps ensure that the dashboard meets user expectations and performs well under various conditions. Deployment makes the dashboard accessible to users, allowing them to benefit from its functionalities.
13.5.1 Evaluating the Dashboard
Evaluation involves measuring the dashboard's performance in terms of accuracy, responsiveness, and user satisfaction. We will use different metrics and methods to evaluate these aspects.
1. Accuracy Metrics
Accuracy metrics assess how well the sentiment analysis models classify the sentiment of the text data. Common metrics include accuracy, precision, recall, F1-score, and confusion matrix.
Example: Evaluating Sentiment Analysis Models
We can use the sklearn library to calculate precision, recall, F1-score, and confusion matrix for the sentiment analysis models.
evaluate_models.py:
import pandas as pd
import pickle
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Load test data and preprocessed text
test_data = pd.read_csv('data/processed_data/test_data_preprocessed.csv')
with open('data/processed_data/X_test.pickle', 'rb') as file:
X_test = pickle.load(file)
y_test = test_data['sentiment'].apply(lambda x: 1 if x == 'positive' else 0)
# Load Logistic Regression model
with open('models/best_logistic_regression_model.pickle', 'rb') as file:
logistic_regression_model = pickle.load(file)
# Evaluate Logistic Regression model
y_pred_lr = logistic_regression_model.predict(X_test)
accuracy_lr = accuracy_score(y_test, y_pred_lr)
print(f'Logistic Regression Accuracy: {accuracy_lr}')
print(classification_report(y_test, y_pred_lr))
cm_lr = confusion_matrix(y_test, y_pred_lr, labels=[0, 1])
disp_lr = ConfusionMatrixDisplay(confusion_matrix=cm_lr, display_labels=['Negative', 'Positive'])
disp_lr.plot(cmap=plt.cm.Blues)
plt.title('Logistic Regression Confusion Matrix')
plt.show()
# Load LSTM model
from tensorflow.keras.models import load_model
lstm_model = load_model('models/lstm_model.h5')
# Evaluate LSTM model
y_pred_prob_lstm = lstm_model.predict(X_test)
y_pred_lstm = (y_pred_prob_lstm > 0.5).astype(int)
accuracy_lstm = accuracy_score(y_test, y_pred_lstm)
print(f'LSTM Accuracy: {accuracy_lstm}')
print(classification_report(y_test, y_pred_lstm))
cm_lstm = confusion_matrix(y_test, y_pred_lstm, labels=[0, 1])
disp_lstm = ConfusionMatrixDisplay(confusion_matrix=cm_lstm, display_labels=['Negative', 'Positive'])
disp_lstm.plot(cmap=plt.cm.Blues)
plt.title('LSTM Confusion Matrix')
plt.show()
In this script, we evaluate the Logistic Regression and LSTM models on the test set, calculate various metrics, and plot the confusion matrices to visualize the performance.
2. User Feedback
User feedback is essential for assessing the usability and satisfaction of the dashboard. We can collect feedback through surveys or direct ratings.
Example: Collecting User Feedback
We can modify our Flask application to include a feedback form where users can rate their experience and provide comments.
app.py (continued):
feedback_data = []
@app.route('/feedback', methods=['POST'])
def feedback():
user_feedback = request.json
feedback_data.append(user_feedback)
return jsonify({'message': 'Thank you for your feedback!'})
# HTML template for feedback form
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Sentiment Analysis Dashboard</title>
<link rel="stylesheet" href="<https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css>">
</head>
<body>
<div class="container">
<h1 class="mt-5">Sentiment Analysis Dashboard</h1>
<form id="feedback-form" class="mt-4">
<div class="form-group">
<label for="rating">Rate your experience:</label>
<select class="form-control" id="rating" name="rating">
<option value="1">1 - Poor</option>
<option value="2">2 - Fair</option>
<option value="3">3 - Good</option>
<option value="4">4 - Very Good</option>
<option value="5">5 - Excellent</option>
</select>
</div>
<div class="form-group">
<label for="comments">Comments:</label>
<textarea class="form-control" id="comments" name="comments" rows="3"></textarea>
</div>
<button type="submit" class="btn btn-primary">Submit Feedback</button>
</form>
<div id="feedback-result" class="mt-4"></div>
</div>
<script src="<https://code.jquery.com/jquery-3.5.1.min.js>"></script>
<script>
$(document).ready(function() {
$('#feedback-form').on('submit', function(event) {
event.preventDefault();
const formData = $(this).serializeArray().reduce((obj, item) => {
obj[item.name] = item.value;
return obj;
}, {});
$.post('/feedback', JSON.stringify(formData), function(data) {
$('#feedback-result').html(`<h4>${data.message}</h4>`);
});
});
});
</script>
</body>
</html>
In this example, we create a feedback form where users can rate their experience and provide comments. The feedback is sent to the /feedback
endpoint and stored for further analysis.
3. Response Time
Response time is critical for user experience. We need to ensure that the dashboard responds promptly to user queries.
Example: Measuring Response Time
We can measure the response time for different queries using Python's time module.
evaluate_response_time.py:
import time
import requests
# Function to measure response time
def measure_response_time(endpoint, data):
start_time = time.time()
response = requests.post(endpoint, data=data)
end_time = time.time()
response_time = end_time - start_time
return response_time
# Measure response time for sentiment analysis
data = {'text': 'This is a great product!', 'model_type': 'logistic_regression'}
response_time = measure_response_time('<http://localhost:5000/analyze>', data)
print(f'Sentiment Analysis Response Time: {response_time} seconds')
By measuring response time, we can ensure that the dashboard meets the desired performance criteria.
13.5.2 Deploying the Dashboard
Once the dashboard is evaluated and performs satisfactorily, the next step is deployment. We will deploy the dashboard to a suitable platform where users can interact with it.
1. Web Application Deployment
We can deploy the dashboard as a web application using a cloud platform such as Heroku, AWS, or Google Cloud.
Example: Deploying on Heroku
To deploy the dashboard on Heroku, follow these steps:
- Install Heroku CLI: Download and install the Heroku CLI from Heroku.
- Log In to Heroku: Open a terminal and log in to your Heroku account.
heroku login
- Create a Heroku App: Create a new Heroku app.
heroku create your-app-name
- Prepare the Project for Deployment: Create a
Procfile
andrequirements.txt
in the project directory.
Procfile:requirements.txt:web: python app.py
Flask
requests
pandas
scikit-learn
tensorflow
plotly - Push the Project to Heroku: Initialize a Git repository, add the project files, and push to Heroku.
git init
git add .
git commit -m "Initial commit"
git push heroku master - Open the Heroku App: Open the deployed app in your browser.
heroku open
By following these steps, the dashboard will be deployed on Heroku and accessible via a public URL.
2. Integrating with Messaging Apps
We can also integrate the dashboard with messaging apps like Slack or Microsoft Teams for real-time sentiment analysis updates.
Example: Integrating with Slack
To integrate the dashboard with Slack, follow these steps:
- Create a Slack App: Create a new app on the Slack API.
- Enable Webhooks: Enable incoming webhooks for your app.
- Set Up Webhook URL: Set up a URL to receive messages from Slack.
slack_integration.py:
from flask import Flask, request, jsonify
import requests
app = Flask(__name__)
SLACK_WEBHOOK_URL = 'your_slack_webhook_url'
@app.route('/slack', methods=['POST'])
def slack():
data =
request.json
text = data['text']
# Perform sentiment analysis (using the logistic regression model as an example)
preprocessed_text = preprocess_text(text)
prediction = logistic_regression_model.predict(preprocessed_text)
sentiment = 'Positive' if prediction[0] == 1 else 'Negative'
# Send result back to Slack
response = {
'response_type': 'in_channel',
'text': f'Sentiment: {sentiment}'
}
return jsonify(response)
if __name__ == '__main__':
app.run(debug=True)
In this script, we create a route /slack
to receive messages from Slack, perform sentiment analysis, and send the result back to Slack.
We covered the evaluation and deployment of our sentiment analysis dashboard. We discussed various metrics and methods to evaluate its performance, including accuracy metrics, user feedback, and response time. We provided examples of deploying the dashboard as a web application using Heroku and integrating it with Slack for real-time sentiment analysis updates.
By following these steps, you can ensure your dashboard performs well in real-world scenarios and is accessible to users on different platforms. The deployment process makes the dashboard available to users, allowing them to interact with it and benefit from its functionalities.
13.5 Evaluating and Deploying the Dashboard
In this section, we will focus on evaluating the performance of the sentiment analysis dashboard and deploying it to a suitable platform. Evaluation helps ensure that the dashboard meets user expectations and performs well under various conditions. Deployment makes the dashboard accessible to users, allowing them to benefit from its functionalities.
13.5.1 Evaluating the Dashboard
Evaluation involves measuring the dashboard's performance in terms of accuracy, responsiveness, and user satisfaction. We will use different metrics and methods to evaluate these aspects.
1. Accuracy Metrics
Accuracy metrics assess how well the sentiment analysis models classify the sentiment of the text data. Common metrics include accuracy, precision, recall, F1-score, and confusion matrix.
Example: Evaluating Sentiment Analysis Models
We can use the sklearn library to calculate precision, recall, F1-score, and confusion matrix for the sentiment analysis models.
evaluate_models.py:
import pandas as pd
import pickle
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Load test data and preprocessed text
test_data = pd.read_csv('data/processed_data/test_data_preprocessed.csv')
with open('data/processed_data/X_test.pickle', 'rb') as file:
X_test = pickle.load(file)
y_test = test_data['sentiment'].apply(lambda x: 1 if x == 'positive' else 0)
# Load Logistic Regression model
with open('models/best_logistic_regression_model.pickle', 'rb') as file:
logistic_regression_model = pickle.load(file)
# Evaluate Logistic Regression model
y_pred_lr = logistic_regression_model.predict(X_test)
accuracy_lr = accuracy_score(y_test, y_pred_lr)
print(f'Logistic Regression Accuracy: {accuracy_lr}')
print(classification_report(y_test, y_pred_lr))
cm_lr = confusion_matrix(y_test, y_pred_lr, labels=[0, 1])
disp_lr = ConfusionMatrixDisplay(confusion_matrix=cm_lr, display_labels=['Negative', 'Positive'])
disp_lr.plot(cmap=plt.cm.Blues)
plt.title('Logistic Regression Confusion Matrix')
plt.show()
# Load LSTM model
from tensorflow.keras.models import load_model
lstm_model = load_model('models/lstm_model.h5')
# Evaluate LSTM model
y_pred_prob_lstm = lstm_model.predict(X_test)
y_pred_lstm = (y_pred_prob_lstm > 0.5).astype(int)
accuracy_lstm = accuracy_score(y_test, y_pred_lstm)
print(f'LSTM Accuracy: {accuracy_lstm}')
print(classification_report(y_test, y_pred_lstm))
cm_lstm = confusion_matrix(y_test, y_pred_lstm, labels=[0, 1])
disp_lstm = ConfusionMatrixDisplay(confusion_matrix=cm_lstm, display_labels=['Negative', 'Positive'])
disp_lstm.plot(cmap=plt.cm.Blues)
plt.title('LSTM Confusion Matrix')
plt.show()
In this script, we evaluate the Logistic Regression and LSTM models on the test set, calculate various metrics, and plot the confusion matrices to visualize the performance.
2. User Feedback
User feedback is essential for assessing the usability and satisfaction of the dashboard. We can collect feedback through surveys or direct ratings.
Example: Collecting User Feedback
We can modify our Flask application to include a feedback form where users can rate their experience and provide comments.
app.py (continued):
feedback_data = []
@app.route('/feedback', methods=['POST'])
def feedback():
user_feedback = request.json
feedback_data.append(user_feedback)
return jsonify({'message': 'Thank you for your feedback!'})
# HTML template for feedback form
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Sentiment Analysis Dashboard</title>
<link rel="stylesheet" href="<https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css>">
</head>
<body>
<div class="container">
<h1 class="mt-5">Sentiment Analysis Dashboard</h1>
<form id="feedback-form" class="mt-4">
<div class="form-group">
<label for="rating">Rate your experience:</label>
<select class="form-control" id="rating" name="rating">
<option value="1">1 - Poor</option>
<option value="2">2 - Fair</option>
<option value="3">3 - Good</option>
<option value="4">4 - Very Good</option>
<option value="5">5 - Excellent</option>
</select>
</div>
<div class="form-group">
<label for="comments">Comments:</label>
<textarea class="form-control" id="comments" name="comments" rows="3"></textarea>
</div>
<button type="submit" class="btn btn-primary">Submit Feedback</button>
</form>
<div id="feedback-result" class="mt-4"></div>
</div>
<script src="<https://code.jquery.com/jquery-3.5.1.min.js>"></script>
<script>
$(document).ready(function() {
$('#feedback-form').on('submit', function(event) {
event.preventDefault();
const formData = $(this).serializeArray().reduce((obj, item) => {
obj[item.name] = item.value;
return obj;
}, {});
$.post('/feedback', JSON.stringify(formData), function(data) {
$('#feedback-result').html(`<h4>${data.message}</h4>`);
});
});
});
</script>
</body>
</html>
In this example, we create a feedback form where users can rate their experience and provide comments. The feedback is sent to the /feedback
endpoint and stored for further analysis.
3. Response Time
Response time is critical for user experience. We need to ensure that the dashboard responds promptly to user queries.
Example: Measuring Response Time
We can measure the response time for different queries using Python's time module.
evaluate_response_time.py:
import time
import requests
# Function to measure response time
def measure_response_time(endpoint, data):
start_time = time.time()
response = requests.post(endpoint, data=data)
end_time = time.time()
response_time = end_time - start_time
return response_time
# Measure response time for sentiment analysis
data = {'text': 'This is a great product!', 'model_type': 'logistic_regression'}
response_time = measure_response_time('<http://localhost:5000/analyze>', data)
print(f'Sentiment Analysis Response Time: {response_time} seconds')
By measuring response time, we can ensure that the dashboard meets the desired performance criteria.
13.5.2 Deploying the Dashboard
Once the dashboard is evaluated and performs satisfactorily, the next step is deployment. We will deploy the dashboard to a suitable platform where users can interact with it.
1. Web Application Deployment
We can deploy the dashboard as a web application using a cloud platform such as Heroku, AWS, or Google Cloud.
Example: Deploying on Heroku
To deploy the dashboard on Heroku, follow these steps:
- Install Heroku CLI: Download and install the Heroku CLI from Heroku.
- Log In to Heroku: Open a terminal and log in to your Heroku account.
heroku login
- Create a Heroku App: Create a new Heroku app.
heroku create your-app-name
- Prepare the Project for Deployment: Create a
Procfile
andrequirements.txt
in the project directory.
Procfile:requirements.txt:web: python app.py
Flask
requests
pandas
scikit-learn
tensorflow
plotly - Push the Project to Heroku: Initialize a Git repository, add the project files, and push to Heroku.
git init
git add .
git commit -m "Initial commit"
git push heroku master - Open the Heroku App: Open the deployed app in your browser.
heroku open
By following these steps, the dashboard will be deployed on Heroku and accessible via a public URL.
2. Integrating with Messaging Apps
We can also integrate the dashboard with messaging apps like Slack or Microsoft Teams for real-time sentiment analysis updates.
Example: Integrating with Slack
To integrate the dashboard with Slack, follow these steps:
- Create a Slack App: Create a new app on the Slack API.
- Enable Webhooks: Enable incoming webhooks for your app.
- Set Up Webhook URL: Set up a URL to receive messages from Slack.
slack_integration.py:
from flask import Flask, request, jsonify
import requests
app = Flask(__name__)
SLACK_WEBHOOK_URL = 'your_slack_webhook_url'
@app.route('/slack', methods=['POST'])
def slack():
data =
request.json
text = data['text']
# Perform sentiment analysis (using the logistic regression model as an example)
preprocessed_text = preprocess_text(text)
prediction = logistic_regression_model.predict(preprocessed_text)
sentiment = 'Positive' if prediction[0] == 1 else 'Negative'
# Send result back to Slack
response = {
'response_type': 'in_channel',
'text': f'Sentiment: {sentiment}'
}
return jsonify(response)
if __name__ == '__main__':
app.run(debug=True)
In this script, we create a route /slack
to receive messages from Slack, perform sentiment analysis, and send the result back to Slack.
We covered the evaluation and deployment of our sentiment analysis dashboard. We discussed various metrics and methods to evaluate its performance, including accuracy metrics, user feedback, and response time. We provided examples of deploying the dashboard as a web application using Heroku and integrating it with Slack for real-time sentiment analysis updates.
By following these steps, you can ensure your dashboard performs well in real-world scenarios and is accessible to users on different platforms. The deployment process makes the dashboard available to users, allowing them to interact with it and benefit from its functionalities.
13.5 Evaluating and Deploying the Dashboard
In this section, we will focus on evaluating the performance of the sentiment analysis dashboard and deploying it to a suitable platform. Evaluation helps ensure that the dashboard meets user expectations and performs well under various conditions. Deployment makes the dashboard accessible to users, allowing them to benefit from its functionalities.
13.5.1 Evaluating the Dashboard
Evaluation involves measuring the dashboard's performance in terms of accuracy, responsiveness, and user satisfaction. We will use different metrics and methods to evaluate these aspects.
1. Accuracy Metrics
Accuracy metrics assess how well the sentiment analysis models classify the sentiment of the text data. Common metrics include accuracy, precision, recall, F1-score, and confusion matrix.
Example: Evaluating Sentiment Analysis Models
We can use the sklearn library to calculate precision, recall, F1-score, and confusion matrix for the sentiment analysis models.
evaluate_models.py:
import pandas as pd
import pickle
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Load test data and preprocessed text
test_data = pd.read_csv('data/processed_data/test_data_preprocessed.csv')
with open('data/processed_data/X_test.pickle', 'rb') as file:
X_test = pickle.load(file)
y_test = test_data['sentiment'].apply(lambda x: 1 if x == 'positive' else 0)
# Load Logistic Regression model
with open('models/best_logistic_regression_model.pickle', 'rb') as file:
logistic_regression_model = pickle.load(file)
# Evaluate Logistic Regression model
y_pred_lr = logistic_regression_model.predict(X_test)
accuracy_lr = accuracy_score(y_test, y_pred_lr)
print(f'Logistic Regression Accuracy: {accuracy_lr}')
print(classification_report(y_test, y_pred_lr))
cm_lr = confusion_matrix(y_test, y_pred_lr, labels=[0, 1])
disp_lr = ConfusionMatrixDisplay(confusion_matrix=cm_lr, display_labels=['Negative', 'Positive'])
disp_lr.plot(cmap=plt.cm.Blues)
plt.title('Logistic Regression Confusion Matrix')
plt.show()
# Load LSTM model
from tensorflow.keras.models import load_model
lstm_model = load_model('models/lstm_model.h5')
# Evaluate LSTM model
y_pred_prob_lstm = lstm_model.predict(X_test)
y_pred_lstm = (y_pred_prob_lstm > 0.5).astype(int)
accuracy_lstm = accuracy_score(y_test, y_pred_lstm)
print(f'LSTM Accuracy: {accuracy_lstm}')
print(classification_report(y_test, y_pred_lstm))
cm_lstm = confusion_matrix(y_test, y_pred_lstm, labels=[0, 1])
disp_lstm = ConfusionMatrixDisplay(confusion_matrix=cm_lstm, display_labels=['Negative', 'Positive'])
disp_lstm.plot(cmap=plt.cm.Blues)
plt.title('LSTM Confusion Matrix')
plt.show()
In this script, we evaluate the Logistic Regression and LSTM models on the test set, calculate various metrics, and plot the confusion matrices to visualize the performance.
2. User Feedback
User feedback is essential for assessing the usability and satisfaction of the dashboard. We can collect feedback through surveys or direct ratings.
Example: Collecting User Feedback
We can modify our Flask application to include a feedback form where users can rate their experience and provide comments.
app.py (continued):
feedback_data = []
@app.route('/feedback', methods=['POST'])
def feedback():
user_feedback = request.json
feedback_data.append(user_feedback)
return jsonify({'message': 'Thank you for your feedback!'})
# HTML template for feedback form
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Sentiment Analysis Dashboard</title>
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
</head>
<body>
    <div class="container">
        <h1 class="mt-5">Sentiment Analysis Dashboard</h1>
        <form id="feedback-form" class="mt-4">
            <div class="form-group">
                <label for="rating">Rate your experience:</label>
                <select class="form-control" id="rating" name="rating">
                    <option value="1">1 - Poor</option>
                    <option value="2">2 - Fair</option>
                    <option value="3">3 - Good</option>
                    <option value="4">4 - Very Good</option>
                    <option value="5">5 - Excellent</option>
                </select>
            </div>
            <div class="form-group">
                <label for="comments">Comments:</label>
                <textarea class="form-control" id="comments" name="comments" rows="3"></textarea>
            </div>
            <button type="submit" class="btn btn-primary">Submit Feedback</button>
        </form>
        <div id="feedback-result" class="mt-4"></div>
    </div>
    <script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>
    <script>
        $(document).ready(function() {
            $('#feedback-form').on('submit', function(event) {
                event.preventDefault();
                const formData = $(this).serializeArray().reduce((obj, item) => {
                    obj[item.name] = item.value;
                    return obj;
                }, {});
                // Send the feedback as JSON so Flask's request.json can parse it
                $.ajax({
                    url: '/feedback',
                    type: 'POST',
                    contentType: 'application/json',
                    data: JSON.stringify(formData),
                    success: function(data) {
                        $('#feedback-result').html(`<h4>${data.message}</h4>`);
                    }
                });
            });
        });
    </script>
</body>
</html>
In this example, we create a feedback form where users can rate their experience and provide comments. The feedback is sent to the /feedback endpoint and stored for further analysis.
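Once feedback accumulates, it can be summarized offline. Below is a minimal sketch, assuming each stored entry is a dict with the 'rating' and 'comments' fields submitted by the form; the helper name is our own:

```python
def summarize_feedback(feedback_data):
    """Return (average rating, number of responses) for collected feedback.

    Assumes each entry is a dict with a 'rating' key holding a string
    between '1' and '5', as submitted by the feedback form.
    """
    ratings = [int(entry['rating']) for entry in feedback_data]
    if not ratings:
        return 0.0, 0
    return sum(ratings) / len(ratings), len(ratings)

# Example usage with two sample feedback entries
sample = [
    {'rating': '5', 'comments': 'Great dashboard!'},
    {'rating': '3', 'comments': 'Charts load slowly.'},
]
average, count = summarize_feedback(sample)
print(f'Average rating: {average:.1f} from {count} responses')  # 4.0 from 2
```

In a real deployment, feedback_data would be persisted (for example, to a CSV file or database) rather than kept in an in-memory list, which is lost on restart.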
3. Response Time
Response time is critical for user experience. We need to ensure that the dashboard responds promptly to user queries.
Example: Measuring Response Time
We can measure the response time for different queries using Python's time module.
evaluate_response_time.py:
import time
import requests

# Function to measure response time
def measure_response_time(endpoint, data):
    start_time = time.time()
    response = requests.post(endpoint, data=data)
    end_time = time.time()
    response_time = end_time - start_time
    return response_time

# Measure response time for sentiment analysis
data = {'text': 'This is a great product!', 'model_type': 'logistic_regression'}
response_time = measure_response_time('http://localhost:5000/analyze', data)
print(f'Sentiment Analysis Response Time: {response_time} seconds')
By measuring response time, we can ensure that the dashboard meets the desired performance criteria.
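A single measurement is noisy; repeating the request and reporting the mean together with a high percentile gives a more reliable picture. Below is a sketch under that idea; the callable passed in stands in for an actual request to the dashboard:

```python
import statistics
import time

def benchmark(fn, runs=20):
    """Call fn repeatedly and return (mean, p95) latency in seconds.

    fn stands in for a request to the dashboard, e.g.
    lambda: requests.post('http://localhost:5000/analyze', data=data)
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    timings.sort()
    # Index of the 95th-percentile sample, clamped to the last element
    p95 = timings[min(runs - 1, int(0.95 * runs))]
    return statistics.mean(timings), p95

# Example usage with a simulated 10 ms request
mean, p95 = benchmark(lambda: time.sleep(0.01), runs=5)
print(f'mean: {mean:.3f}s, p95: {p95:.3f}s')
```

time.perf_counter() is preferred over time.time() for interval timing because it is monotonic and has higher resolution.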
13.5.2 Deploying the Dashboard
Once the dashboard is evaluated and performs satisfactorily, the next step is deployment. We will deploy the dashboard to a suitable platform where users can interact with it.
1. Web Application Deployment
We can deploy the dashboard as a web application using a cloud platform such as Heroku, AWS, or Google Cloud.
Example: Deploying on Heroku
To deploy the dashboard on Heroku, follow these steps:
- Install Heroku CLI: Download and install the Heroku CLI from the Heroku website.
- Log In to Heroku: Open a terminal and log in to your Heroku account.
heroku login
- Create a Heroku App: Create a new Heroku app.
heroku create your-app-name
- Prepare the Project for Deployment: Create a Procfile and a requirements.txt in the project directory.
Procfile:
web: python app.py
requirements.txt:
Flask
requests
pandas
scikit-learn
tensorflow
plotly
- Push the Project to Heroku: Initialize a Git repository, add the project files, and push to Heroku.
git init
git add .
git commit -m "Initial commit"
git push heroku master
- Open the Heroku App: Open the deployed app in your browser.
heroku open
By following these steps, the dashboard will be deployed on Heroku and accessible via a public URL.
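One caveat: `web: python app.py` runs Flask's development server, which is not intended for production traffic. A common alternative (an assumption on our part, not required by the steps above) is to serve the app with gunicorn after adding gunicorn to requirements.txt:

```text
web: gunicorn app:app --bind 0.0.0.0:$PORT
```

Heroku injects the port to listen on through the PORT environment variable, which this Procfile line passes to gunicorn.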
2. Integrating with Messaging Apps
We can also integrate the dashboard with messaging apps like Slack or Microsoft Teams for real-time sentiment analysis updates.
Example: Integrating with Slack
To integrate the dashboard with Slack, follow these steps:
- Create a Slack App: Create a new app on the Slack API.
- Enable Webhooks: Enable incoming webhooks for your app.
- Set Up Webhook URL: Set up a URL to receive messages from Slack.
slack_integration.py:
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

SLACK_WEBHOOK_URL = 'your_slack_webhook_url'

@app.route('/slack', methods=['POST'])
def slack():
    data = request.json
    text = data['text']
    # Perform sentiment analysis (using the logistic regression model as an
    # example; preprocess_text and logistic_regression_model are assumed to
    # be loaded as in app.py)
    preprocessed_text = preprocess_text(text)
    prediction = logistic_regression_model.predict(preprocessed_text)
    sentiment = 'Positive' if prediction[0] == 1 else 'Negative'
    # Send result back to Slack
    response = {
        'response_type': 'in_channel',
        'text': f'Sentiment: {sentiment}'
    }
    return jsonify(response)

if __name__ == '__main__':
    app.run(debug=True)
In this script, we create a /slack route to receive messages from Slack, perform sentiment analysis, and send the result back to Slack.
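The route above replies to Slack's own request, but the SLACK_WEBHOOK_URL defined earlier can also be used to push proactive updates into a channel. A minimal sketch follows; the payload shape matches Slack's incoming-webhook convention, and the helper names are our own:

```python
import requests

def build_slack_payload(text, sentiment):
    """Build the incoming-webhook message payload for a sentiment result."""
    return {'text': f'Sentiment for "{text}": {sentiment}'}

def notify_slack(webhook_url, text, sentiment):
    """POST a sentiment update to a Slack incoming webhook."""
    payload = build_slack_payload(text, sentiment)
    response = requests.post(webhook_url, json=payload, timeout=5)
    response.raise_for_status()

# Example payload for a positive result
print(build_slack_payload('This is a great product!', 'Positive'))
```

A call like notify_slack(SLACK_WEBHOOK_URL, text, sentiment) could be added to a batch job that periodically analyzes new data and posts the results.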
We covered the evaluation and deployment of our sentiment analysis dashboard. We discussed various metrics and methods to evaluate its performance, including accuracy metrics, user feedback, and response time. We provided examples of deploying the dashboard as a web application using Heroku and integrating it with Slack for real-time sentiment analysis updates.
By following these steps, you can ensure your dashboard performs well in real-world scenarios and is accessible to users on different platforms. The deployment process makes the dashboard available to users, allowing them to interact with it and benefit from its functionalities.