OpenAI API Bible Volume 1

Chapter 3: Understanding and Comparing OpenAI Models

Practical Exercises — Chapter 3

Congratulations on finishing Chapter 3! Now it’s time to practice what you’ve learned. These exercises will help solidify your understanding of different models, their capabilities, limitations, token management, and pricing considerations.

Exercise 1: Compare GPT-3.5-turbo and GPT-4o Responses

Send the same question to both GPT-3.5-turbo and GPT-4o, and observe the differences in response quality and detail.

Task:

Prompt: "Briefly explain the concept of gravity."

💡 Solution (Python Example):

import os

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# passing it explicitly here just makes the dependency visible.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

models = ["gpt-3.5-turbo", "gpt-4o"]

for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": "Briefly explain the concept of gravity."}
        ]
    )
    print(f"\nResponse from {model}:")
    print(response.choices[0].message.content)

Review each model's output and note differences in clarity, depth, and accuracy.
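Raw response text can be hard to compare at a glance. As an optional extension (a sketch reusing the client and models variables above), you can also pull each response's token usage from the usage field the API returns:

for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Briefly explain the concept of gravity."}]
    )
    usage = response.usage  # prompt_tokens, completion_tokens, total_tokens
    print(f"{model}: {usage.completion_tokens} completion tokens, {usage.total_tokens} total")

Longer answers are not automatically better, but the usage numbers make the verbosity difference between the two models concrete.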

Exercise 2: Testing Token Counts

Write a short Python script to check the number of tokens in a given prompt, using the tiktoken library.

Task:

Prompt: "Can you provide three tips for improving coding productivity?"

💡 Solution:

First, ensure tiktoken is installed:

pip install tiktoken

Then run the script:

import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4o")
prompt = "Can you provide three tips for improving coding productivity?"

tokens = encoding.encode(prompt)
print(f"Total tokens in prompt: {len(tokens)}")

Exercise 3: Experiment with o3-mini (High Reasoning Effort)

Use the o3-mini model at high reasoning effort to respond to a straightforward command. Note that "o3-mini-high" is the label used in the ChatGPT interface; in the API, the model ID is o3-mini, and the effort level is set through the reasoning_effort parameter.

Task:

Prompt: "List three popular programming languages."

💡 Solution:

# Reuses the `client` created in Exercise 1.
response = client.chat.completions.create(
    model="o3-mini",            # API model ID; "o3-mini-high" is the ChatGPT label
    reasoning_effort="high",    # accepts "low", "medium", or "high"
    messages=[
        {"role": "user", "content": "List three popular programming languages."}
    ]
)

print(response.choices[0].message.content)

Note the response time and the directness of the answer. Reasoning models spend extra tokens thinking before they reply, so at high effort even a simple prompt can take noticeably longer than it would on a standard chat model.
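To see what the effort setting costs you, the sketch below times the same prompt at each level (it reuses the client from Exercise 1; exact latencies will vary from run to run):

import time

for effort in ["low", "medium", "high"]:
    start = time.time()
    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort=effort,
        messages=[{"role": "user", "content": "List three popular programming languages."}]
    )
    elapsed = time.time() - start
    print(f"{effort}: {elapsed:.2f}s, {response.usage.total_tokens} total tokens")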

Exercise 4: Analyze Cost Efficiency

Estimate the monthly cost if your application uses GPT-4o and handles about 2 million tokens per month.

Task:

GPT-4o pricing: approximately $5 per million input tokens at the time of writing (rates change, so check OpenAI's pricing page for current figures).

💡 Solution (Manual Calculation):

  • Monthly tokens: 2 million
  • Cost per million tokens: $5
  • Total monthly cost:

    2 million tokens × $5 per million tokens = $10 per month

(No code is required for the calculation itself, but this kind of estimate is critical for budgeting.)
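If you prefer to budget in code, a small helper makes the arithmetic reusable. Keep in mind that real pricing bills input and output tokens at different rates, so a flat per-million figure like the one below is a simplifying assumption:

def estimate_monthly_cost(tokens_per_month, price_per_million_usd):
    """Flat-rate estimate: token volume times price per million tokens."""
    return tokens_per_month / 1_000_000 * price_per_million_usd

# The figures from this exercise: 2 million tokens at ~$5 per million.
print(f"${estimate_monthly_cost(2_000_000, 5.0):.2f} per month")  # $10.00 per month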

Exercise 5: Implement Model Selection Logic

Write a Python function that automatically selects between GPT-4o-mini and GPT-4o based on task complexity.

Task:

Use GPT-4o-mini for simple tasks and GPT-4o for complex prompts.

💡 Solution:

# Reuses the `client` created in Exercise 1.

def select_model(prompt_complexity):
    if prompt_complexity == "simple":
        return "gpt-4o-mini"
    else:
        return "gpt-4o"

def get_response(prompt, complexity):
    model = select_model(complexity)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Example usage:
simple_prompt = "What day comes after Monday?"
complex_prompt = "Explain quantum computing in simple terms."

print("Simple:", get_response(simple_prompt, "simple"))
print("\nComplex:", get_response(complex_prompt, "complex"))

This logic helps you efficiently manage costs and performance.
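In practice you rarely receive a ready-made "simple" or "complex" label with each prompt. One crude starting point (a heuristic sketch; the word-count threshold and keyword list are arbitrary assumptions to tune against your own traffic) is to classify by length and analytical trigger words:

COMPLEX_HINTS = ("explain", "analyze", "compare", "summarize", "derive")

def classify_complexity(prompt: str) -> str:
    """Naive heuristic: long prompts or analytical verbs count as complex."""
    lowered = prompt.lower()
    if len(prompt.split()) > 30 or any(hint in lowered for hint in COMPLEX_HINTS):
        return "complex"
    return "simple"

print(classify_complexity("What day comes after Monday?"))                # simple
print(classify_complexity("Explain quantum computing in simple terms."))  # complex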

Exercise 6: Handling Token Limits

Demonstrate handling token limit errors gracefully by catching exceptions in Python.

Task:

Send an intentionally oversized prompt to gpt-3.5-turbo to trigger a token limit error.

💡 Solution:

import openai

# Reuses the `client` created in Exercise 1.
try:
    large_text = "word " * 50000  # roughly 50,000 tokens, far beyond the 16K context window
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": large_text}]
    )
    print(response.choices[0].message.content)

except openai.BadRequestError as e:
    # In openai>=1.0, context-length errors arrive as HTTP 400 BadRequestError.
    print("Token limit exceeded. Please shorten your prompt.")
    print("Error details:", e)

Graceful error handling improves user experience and stability.
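Catching the exception is the safety net; a smoother experience is to trim oversized input before sending it. Here is a minimal sketch with tiktoken (the 15,000-token budget is an assumption chosen to leave headroom inside gpt-3.5-turbo's 16K context window):

import tiktoken

def truncate_to_budget(text, model="gpt-3.5-turbo", max_tokens=15_000):
    """Encode the text, cut it to the token budget, and decode it back."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    if len(tokens) <= max_tokens:
        return text
    return encoding.decode(tokens[:max_tokens])

safe_text = truncate_to_budget("word " * 50000)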

Exercise 7: Performance Measurement

Measure and compare API response latency between GPT-3.5-turbo and GPT-4o.

Task:

Use Python’s time module to measure execution speed.

💡 Solution:

import time

# Reuses the `client` created in Exercise 1.

def measure_latency(model, prompt):
    start_time = time.time()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    latency = time.time() - start_time
    return latency, response.choices[0].message.content

prompt = "What is artificial intelligence?"

models = ["gpt-3.5-turbo", "gpt-4o"]

for model in models:
    latency, content = measure_latency(model, prompt)
    print(f"\nModel: {model}")
    print(f"Latency: {latency:.2f} seconds")
    print(f"Response: {content}")

Understanding latency differences helps you select the right model for your needs.
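A single request is a noisy sample: network conditions and server load vary from call to call. For a steadier comparison, average a few runs (a sketch building on measure_latency above; three runs is an arbitrary choice, and answers of different lengths will still skew the numbers):

def average_latency(model, prompt, runs=3):
    samples = [measure_latency(model, prompt)[0] for _ in range(runs)]
    return sum(samples) / len(samples)

for model in ["gpt-3.5-turbo", "gpt-4o"]:
    print(f"{model}: {average_latency(model, prompt):.2f}s average over 3 runs")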

If you completed these exercises successfully, you've practiced the key skills behind OpenAI model selection, performance analysis, cost estimation, and token management, and you're ready to apply them to real-world AI projects.
