OpenAI API Bible – Volume 1

Chapter 4: The Chat Completions API

Practical Exercises — Chapter 4

Test your understanding of the Chat Completions API by completing these exercises. Work through each task, and then compare your solution with the provided code examples.

Exercise 1: Construct a Multi-Turn Conversation Using Roles

Task:

Create a conversation that includes a system message to set the assistant's behavior, a user query, and a previous assistant message. Then send a new user message to ask a follow-up question.

Solution:

import os

from dotenv import load_dotenv
from openai import OpenAI

# Load the API key from a .env file and create a client
load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Construct a multi-turn conversation
messages = [
    {"role": "system", "content": "You are an expert coding tutor who always provides clear explanations."},
    {"role": "user", "content": "Can you explain what a loop is in programming?"},
    {"role": "assistant", "content": "A loop is a sequence of instructions that is repeated until a certain condition is met. For example, a 'for' loop in Python is often used to iterate over a sequence."},
    {"role": "user", "content": "Great, can you provide a simple example in Python?"}
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    max_tokens=150
)

print("Exercise 1 Output:")
print(response.choices[0].message.content)

This exercise reinforces the structure of multi-turn conversations using the roles: system, user, and assistant.
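To keep a multi-turn conversation going across several API calls, a common pattern is to append each reply to the message list before sending the next request, trimming older turns so the history never outgrows the model's context window. Here is a minimal sketch of that pattern; the `trim_history` helper and its window size are illustrative, not part of the OpenAI SDK:

```python
def trim_history(messages, max_turns=3):
    """Keep the system message plus the last `max_turns` user/assistant pairs.

    Long conversations eventually exceed the model's context window, so
    older turns are dropped while the system instruction is preserved.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-2 * max_turns:]

# Simulated conversation: one system message plus five user/assistant pairs
history = [{"role": "system", "content": "You are a coding tutor."}]
for i in range(5):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=3)
print(len(trimmed))           # 7: the system message plus 3 pairs
print(trimmed[1]["content"])  # "question 2" -- the earliest turn kept
```

The trimmed list can then be passed as the `messages` argument of the next request. Dropping whole user/assistant pairs (rather than individual messages) keeps every remaining exchange intact.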

Exercise 2: Experiment with Sampling Parameters

Task:

Send the same prompt using different temperature and top-p values, and compare the responses. Use a fixed prompt asking for a short explanation of “machine learning.”

Solution:

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

prompt = "Explain machine learning briefly."

# Response with lower temperature and lower top-p (more focused)
response_low = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,
    top_p=0.5,
    max_tokens=100
)

# Response with higher temperature and higher top-p (more varied)
response_high = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,
    top_p=0.9,
    max_tokens=100
)

print("Exercise 2 Output (Low Temp & Low Top-p):")
print(response_low.choices[0].message.content)
print("\nExercise 2 Output (High Temp & High Top-p):")
print(response_high.choices[0].message.content)

Compare the two outputs to see how the sampling parameters change the style and creativity of the responses. (In practice, the API reference recommends adjusting temperature or top_p, not both at once; this exercise varies both only to make the contrast easier to see.)
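One way to keep sampling settings consistent across an application is to group them into named presets and unpack them into each request. The preset names and values below are illustrative choices, not OpenAI defaults:

```python
SAMPLING_PRESETS = {
    # Low randomness: good for factual answers and code
    "precise":  {"temperature": 0.2, "top_p": 0.5},
    # Middle-of-the-road default
    "balanced": {"temperature": 0.7, "top_p": 0.9},
    # High randomness: good for brainstorming and creative writing
    "creative": {"temperature": 1.0, "top_p": 1.0},
}

def request_kwargs(prompt, style="balanced", **overrides):
    """Build the keyword arguments for a Chat Completions call."""
    kwargs = {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING_PRESETS[style],
    }
    kwargs.update(overrides)  # per-call settings win over the preset
    return kwargs

kw = request_kwargs("Explain machine learning briefly.",
                    style="precise", max_tokens=100)
print(kw["temperature"])  # 0.2

# The dictionary can then be passed straight through:
# response = client.chat.completions.create(**kw)
```

Centralizing the presets this way makes it easy to A/B-test sampling settings: change one dictionary entry and every call site picks it up.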

Exercise 3: Control Response Length Using max_tokens and Stop Sequences

Task:

Send a prompt that asks for a list of three benefits of exercise. Configure the request to stop the response as soon as a semicolon appears, and limit the output to 80 tokens.

Solution:

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "List three benefits of regular exercise, separating each with a semicolon."}
    ],
    max_tokens=80,
    stop=";",
    temperature=0.5
)

print("Exercise 3 Output:")
print(response.choices[0].message.content)

This exercise demonstrates how max_tokens and the stop parameter control the length and format of the output. Note that because generation halts at the first semicolon, only the first benefit is returned, and the stop sequence itself is not included in the response text.
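Because a stop sequence is consumed by the API rather than returned, the response text alone cannot tell you why generation ended; for that, check the `finish_reason` field on the choice. A small sketch of that check (the mapping text is ours; the `finish_reason` values are the ones the API documents):

```python
def describe_finish(finish_reason):
    """Map the API's finish_reason to a human-readable explanation."""
    return {
        "stop": "ended naturally or hit a stop sequence",
        "length": "cut off by max_tokens -- consider raising the limit",
        "content_filter": "blocked by the content filter",
    }.get(finish_reason, f"unknown reason: {finish_reason}")

print(describe_finish("stop"))    # ended naturally or hit a stop sequence
print(describe_finish("length"))  # cut off by max_tokens -- consider raising the limit

# In a real request you would read it from the response:
# describe_finish(response.choices[0].finish_reason)
```

Distinguishing "stop" from "length" matters in practice: a "length" finish usually means the reply was truncated mid-sentence and the request should be retried with a larger max_tokens.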

Exercise 4: Implement Streaming for Real-Time Output

Task:

Create a Python script that sends a prompt asking for an inspirational quote and streams the output in real time. Print each streamed chunk as it arrives.

Solution:

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a motivational speaker."},
        {"role": "user", "content": "Share an inspirational quote to start the day."}
    ],
    max_tokens=100,
    stream=True  # Enable streaming
)

print("Exercise 4 Output (Streaming):")
for chunk in stream:
    # Each chunk carries an incremental delta; content may be None
    part = chunk.choices[0].delta.content or ""
    print(part, end="", flush=True)

print("\n\nStreaming complete!")

Streaming lets you see parts of the response as they’re generated, which improves the user experience in interactive applications.
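In a real application you usually want the complete text as well as the live display, so each piece is accumulated while it is printed. The loop below runs over a simulated list of delta strings so the accumulation logic can be shown without a live API call; with a real stream you would feed it `chunk.choices[0].delta.content` values instead:

```python
def collect_stream(parts):
    """Print each streamed piece as it arrives and return the full text."""
    collected = []
    for part in parts:
        if part:  # streamed deltas can be None (e.g. the final chunk)
            print(part, end="", flush=True)
            collected.append(part)
    print()  # final newline once the stream ends
    return "".join(collected)

# Simulated deltas, standing in for chunk.choices[0].delta.content values
simulated = ["The ", "only ", "way ", "to ", "do ", "great ", "work...", None]
full_text = collect_stream(simulated)
print(f"Received {len(full_text)} characters")
```

Keeping the accumulated string is what lets you log the full reply, append it to the conversation history for the next turn, or fall back to non-streamed handling downstream.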

Exercise 5: Combining Parameters in a Real-World Scenario

Task:

Create an application snippet that combines system instructions, sampling parameters, and stop sequences. Your prompt should ask for advice on balancing work and personal life; set appropriate temperature, top_p, and max_tokens values, and use a stop sequence so the response ends once the assistant starts listing advice beyond a set point.

Solution:

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a wise life coach providing actionable advice."},
        {"role": "user", "content": "What are three key tips for balancing work and personal life? Please list them."}
    ],
    temperature=0.6,
    top_p=0.8,
    max_tokens=120,
    stop=["Tip 4:"]  # end once a fourth tip begins; stopping on "\n" would cut the list after its first line
)

print("Exercise 5 Output:")
print(response.choices[0].message.content)

This script combines multiple configuration parameters to generate a clear, concise list of advice while ensuring the response stops when it reaches a predefined point.
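Requests like this are often wrapped in a small helper so the rest of the application never builds raw message lists. Below is a hypothetical `ask()` wrapper; the function name and default values are illustrative choices, not part of the SDK. Passing the client in explicitly also makes the helper easy to unit-test with a stub:

```python
def ask(client, question, system="You are a helpful assistant.",
        temperature=0.6, top_p=0.8, max_tokens=120, stop=None):
    """Send a single question with a system instruction; return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
        temperature=temperature,
        top_p=top_p,
        max_tokens=max_tokens,
        stop=stop,
    )
    return response.choices[0].message.content

# Usage (with a configured OpenAI client):
# answer = ask(client,
#              "What are three key tips for balancing work and personal life?",
#              system="You are a wise life coach providing actionable advice.",
#              stop=["Tip 4:"])
```

Centralizing the call this way means retries, logging, or a model upgrade touch one function instead of every call site.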

If you've completed these exercises, you're well on your way to mastering the use of the Chat Completions API. These tasks help you develop a deep understanding of conversation structuring, parameter tuning, and real-time interaction—skills that are vital for building efficient and engaging AI-powered applications.
