Feature Engineering for Modern Machine Learning with Scikit-Learn

Chapter 3: Automating Feature Engineering with Pipelines

3.3 Practical Exercises for Chapter 3

These exercises will help you practice automating data preprocessing with Scikit-learn’s Pipeline and FeatureUnion classes. Each exercise includes a solution with code for guidance.

Exercise 1: Building a Simple Pipeline with Standard Scaling and Logistic Regression

Create a pipeline that applies Standard Scaling to the numeric features Age and Income, and then uses Logistic Regression to classify a target variable Churn.

  1. Load the dataset and split it into features (X) and target (y).
  2. Create a pipeline with StandardScaler and LogisticRegression.
  3. Train the pipeline and evaluate it on the test set.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd

# Sample dataset
data = {'Age': [25, 32, 47, 51, 62],
        'Income': [50000, 65000, 85000, 90000, 120000],
        'Churn': [0, 0, 1, 1, 1]}
df = pd.DataFrame(data)

# Features and target
X = df[['Age', 'Income']]
y = df['Churn']

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Solution: Create a pipeline with StandardScaler and LogisticRegression
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('log_reg', LogisticRegression())
])

# Fit the pipeline
pipeline.fit(X_train, y_train)

# Make predictions and evaluate
y_pred = pipeline.predict(X_test)
print("Model Accuracy:", accuracy_score(y_test, y_pred))

In this solution, the pipeline automates the scaling and training steps, and accuracy is calculated to evaluate the model’s performance.
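
As a quick extension (a minimal sketch, not part of the original exercise), the same pipeline can be passed straight to cross_val_score, so the scaler is re-fit inside every fold and no scaling statistics leak from validation data into training:

from sklearn.model_selection import cross_val_score

# Passing the whole pipeline re-fits StandardScaler within each fold,
# which prevents data leakage during cross-validation.
# cv=2 is used only because this toy dataset has five rows.
scores = cross_val_score(pipeline, X, y, cv=2)
print("Cross-validated accuracy:", scores.mean())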

Exercise 2: Building a Pipeline with Imputation and One-Hot Encoding

Extend the pipeline to handle missing values in the Age column and apply one-hot encoding to the categorical feature Gender.

  1. Add a missing value imputer for Age and one-hot encoding for Gender in the pipeline.
  2. Train the model and observe the transformed feature set.
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer

# Sample dataset with missing values and a categorical feature
data = {'Age': [25, None, 47, 51, 62],
        'Income': [50000, 65000, 85000, 90000, 120000],
        'Gender': ['Male', 'Female', 'Female', 'Male', 'Female'],
        'Churn': [0, 0, 1, 1, 1]}
df = pd.DataFrame(data)

# Define features and target
X = df[['Age', 'Income', 'Gender']]
y = df['Churn']

# Define transformers for numeric and categorical features
numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='mean')),
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline(steps=[
    ('onehot', OneHotEncoder())
])

# Solution: Create ColumnTransformer for handling numeric and categorical features
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, ['Age', 'Income']),
        ('cat', categorical_transformer, ['Gender'])
    ])

# Create pipeline with preprocessor and logistic regression
pipeline = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression())
])

# Fit the pipeline and view transformed features
pipeline.fit(X, y)
print("\nTransformed Feature Set (Sample):")
print(preprocessor.transform(X)[:5])  # preprocessor is already fitted by pipeline.fit

In this solution, the pipeline handles missing values in Age and encodes Gender with one-hot encoding, resulting in a fully transformed feature set.
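
To check which columns the ColumnTransformer produced, recent scikit-learn versions expose get_feature_names_out (1.0+ for ColumnTransformer, 1.1+ for SimpleImputer inside the numeric pipeline); a short sketch:

# Inspect the generated column names; the 'num'/'cat' prefixes come from
# the transformer names defined above (requires a recent scikit-learn).
print(preprocessor.get_feature_names_out())
# Expected along the lines of:
# ['num__Age' 'num__Income' 'cat__Gender_Female' 'cat__Gender_Male']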

Exercise 3: Using FeatureUnion to Combine Scaling and Polynomial Features

Create a pipeline that uses FeatureUnion to apply both Standard Scaling and Polynomial Features to the Income column, then applies Logistic Regression.

  1. Define a FeatureUnion with scaling and polynomial feature generation for Income.
  2. Integrate FeatureUnion with other transformations in the pipeline.
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import PolynomialFeatures

# Sample dataset with a numeric feature
data = {'Age': [25, 32, 47, 51, 62],
        'Income': [50000, 65000, 85000, 90000, 120000],
        'Churn': [0, 0, 1, 1, 1]}
df = pd.DataFrame(data)

# Define features and target
X = df[['Age', 'Income']]
y = df['Churn']

# FeatureUnion for scaling and polynomial features for Income
numeric_features = ['Income']
numeric_transformers = FeatureUnion([
    ('scaler', StandardScaler()),
    ('poly', PolynomialFeatures(degree=2))
])

# ColumnTransformer to handle both Age and Income transformations
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformers, numeric_features),
        ('age_scaler', StandardScaler(), ['Age'])
    ])

# Solution: Create pipeline with FeatureUnion and Logistic Regression
pipeline = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression())
])

# Fit the pipeline
pipeline.fit(X, y)

# View transformed feature set
print("\nTransformed Feature Set (Sample):")
print(preprocessor.transform(X)[:5])  # preprocessor is already fitted by pipeline.fit

In this solution, FeatureUnion applies both scaling and polynomial features to Income, demonstrating how to manage multiple transformations on the same feature.
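
Because every pipeline step is addressable by name, the polynomial degree can be tuned directly with GridSearchCV using the double-underscore parameter syntax; a minimal sketch (cv=2 only because the toy data has five rows):

from sklearn.model_selection import GridSearchCV

# The nested path follows step__substep__param: 'num' is the FeatureUnion
# inside 'preprocessor', and 'poly' is one of its branches.
param_grid = {'preprocessor__num__poly__degree': [2, 3]}
grid = GridSearchCV(pipeline, param_grid, cv=2)
grid.fit(X, y)
print("Best parameters:", grid.best_params_)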

Exercise 4: Building a Custom Transformer for Frequency Encoding

Create a pipeline that uses a custom transformer to perform frequency encoding on the Occupation column, alongside standard scaling for numerical features.

  1. Define a custom transformer for frequency encoding.
  2. Combine this transformer with other preprocessing steps in a pipeline.
from sklearn.base import BaseEstimator, TransformerMixin

# Sample dataset with a categorical feature for frequency encoding
data = {'Age': [25, 32, 47, 51, 62],
        'Income': [50000, 65000, 85000, 90000, 120000],
        'Occupation': ['Engineer', 'Doctor', 'Artist', 'Engineer', 'Artist'],
        'Churn': [0, 0, 1, 1, 1]}
df = pd.DataFrame(data)

# Custom transformer for frequency encoding
class FrequencyEncoder(BaseEstimator, TransformerMixin):
    def __init__(self, column):
        self.column = column

    def fit(self, X, y=None):
        # Learn each category's relative frequency; the trailing underscore
        # marks an attribute set during fit, per scikit-learn convention.
        self.freq_encoding_ = X[self.column].value_counts(normalize=True).to_dict()
        return self

    def transform(self, X):
        X_copy = X.copy()
        # Categories unseen during fit map to NaN here.
        X_copy[self.column] = X_copy[self.column].map(self.freq_encoding_)
        return X_copy[[self.column]]

# Define features and target
X = df[['Age', 'Income', 'Occupation']]
y = df['Churn']

# Solution: Apply frequency encoding and scaling in a pipeline
preprocessor = ColumnTransformer(
    transformers=[
        ('num_scaler', StandardScaler(), ['Age', 'Income']),
        ('occupation_encoder', FrequencyEncoder(column='Occupation'), ['Occupation'])
    ])

pipeline = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', LogisticRegression())
])

# Fit the pipeline
pipeline.fit(X, y)

# View transformed feature set
print("\nTransformed Feature Set (Sample):")
print(preprocessor.transform(X)[:5])  # preprocessor is already fitted by pipeline.fit

In this solution, a custom transformer is created for frequency encoding Occupation, demonstrating how to incorporate custom transformations into a pipeline.
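
To sanity-check the custom transformer in isolation, it can be fit and applied outside the pipeline; a small sketch:

# Fit the encoder on its own and inspect the learned frequencies.
encoder = FrequencyEncoder(column='Occupation')
encoder.fit(X)
print(encoder.freq_encoding_)  # {'Engineer': 0.4, 'Artist': 0.4, 'Doctor': 0.2}
print(encoder.transform(X))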

These exercises cover a range of automated data preprocessing techniques, from basic scaling to advanced feature engineering with FeatureUnion and custom transformers. By working through these exercises, you’ll gain hands-on experience in using Scikit-learn’s pipeline tools to streamline complex data workflows and improve model accuracy.
