Data Analysis Foundations with Python

Chapter 17: Case Study 2: Social Media Sentiment Analysis

17.4 Practical Exercises of Chapter 17: Case Study 2: Social Media Sentiment Analysis

Fantastic! You've just navigated through an intricate but fascinating case study on social media sentiment analysis. Now it's time to get your hands dirty with some practical exercises that will solidify your understanding. Grab your keyboard and let's get started!

Exercise 1: Data Collection

Collect 50 tweets containing the hashtag #Python. You can either do this manually or use an API.

Solution:

# Note: You'll need Twitter/X API credentials for this
# (requires: pip install tweepy)
import tweepy

consumer_key = 'your_consumer_key'
consumer_secret = 'your_consumer_secret'
access_token = 'your_access_token'
access_token_secret = 'your_access_token_secret'

# OAuth 1.0a user authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth)

# In Tweepy v4+, API.search was renamed to API.search_tweets
tweets = api.search_tweets(q='#Python', count=50)
for tweet in tweets:
    print(tweet.text)

Exercise 2: Text Preprocessing

Remove stop words and special characters from the tweets collected in Exercise 1.

Solution:

import re
import nltk
from nltk.corpus import stopwords

# Download the stopword list once if you haven't already:
# nltk.download('stopwords')
stop_words = set(stopwords.words('english'))

def clean_text(text):
    text = re.sub(r'[^a-zA-Z]', ' ', text)  # keep letters only
    words = text.lower().split()            # lowercase and tokenize
    words = [word for word in words if word not in stop_words]
    return ' '.join(words)

cleaned_tweets = [clean_text(tweet.text) for tweet in tweets]
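To see what `clean_text` does without needing NLTK installed, here is a self-contained sketch of the same cleaning steps. The hard-coded `STOP_WORDS` set below is only a small illustrative stand-in for NLTK's full English stopword list, so results will differ slightly from the solution above.

```python
import re

# Small stand-in for NLTK's English stopword list (illustrative subset)
STOP_WORDS = {"a", "an", "the", "is", "in", "on", "of", "and", "to",
              "i", "it", "this"}

def clean_text(text):
    text = re.sub(r"[^a-zA-Z]", " ", text)  # keep letters only
    words = text.lower().split()            # lowercase and tokenize
    words = [w for w in words if w not in STOP_WORDS]
    return " ".join(words)

print(clean_text("Loving #Python 3.12 -- it is the best!"))
# → loving python best
```

Note how the hashtag symbol, digits, and punctuation are all replaced by spaces before tokenizing, which is why `#Python` survives as the bare word `python`.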

Exercise 3: Sentiment Analysis with Naive Bayes

Use the Naive Bayes model you've built earlier in the chapter to classify the sentiments of the cleaned tweets.

Solution:

# Use the `extract_features` function and trained `classifier`
# from earlier in the chapter. If your `extract_features` expects
# a list of tokens, pass `tweet.split()` instead of the raw string.
test_data = [extract_features(tweet) for tweet in cleaned_tweets]
predictions = [classifier.classify(features) for features in test_data]

# Display the results
for i, (tweet, sentiment) in enumerate(zip(cleaned_tweets, predictions)):
    print(f"Tweet {i+1}: {tweet} -> Sentiment: {sentiment}")
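If you want to check your intuition for what the classifier is doing under the hood, the sketch below implements a minimal Naive Bayes sentiment classifier from scratch: log priors plus add-one (Laplace) smoothed log likelihoods. The four training sentences are hypothetical toy data, not the chapter's corpus, and this is an illustration of the idea rather than a replacement for the chapter's NLTK-based classifier.

```python
import math
from collections import Counter, defaultdict

# Tiny labeled corpus (hypothetical data, for illustration only)
train = [
    ("i love this library it is great", "positive"),
    ("fantastic tutorial very helpful", "positive"),
    ("i hate bugs this is terrible", "negative"),
    ("awful documentation very confusing", "negative"),
]

# Count word frequencies per class
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the class with the highest posterior log-probability."""
    words = text.split()
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # log prior: P(class)
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in words:
            # log likelihood with add-one smoothing: P(word | class)
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("this tutorial is great"))   # → positive
print(classify("terrible confusing bugs"))  # → negative
```

Working in log space avoids the numeric underflow you would get from multiplying many small probabilities, and the add-one smoothing keeps unseen words from zeroing out a class entirely.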

Feel free to refer back to these exercises whenever you need a refresher or extra practice. They encompass the critical steps in setting up and performing sentiment analysis. Enjoy coding!
