OpenAI API Bible – Volume 1

ES Quiz Part II

Questions

Chapter 4: The Chat Completions API

1. What are the three message roles used in the Chat Completions API?

A) Sender, Receiver, Assistant

B) Author, Responder, Moderator

C) System, User, Assistant

D) Prompt, Response, Instruction

2. Which parameter helps control the randomness of the model’s responses?

A) stop

B) max_tokens

C) temperature

D) role

3. What is the purpose of the stream=True parameter?

A) It makes responses downloadable as audio

B) It sends a message to multiple recipients

C) It allows the model to return output chunk by chunk in real time

D) It enables persistent memory
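The concepts behind questions 1–3 can be illustrated with a minimal request payload. The sketch below builds the payload as a plain dictionary rather than calling the API, so no key or network access is needed; the model name is a placeholder, not a recommendation.

```python
# Sketch of a Chat Completions request payload (not actually sent).
# Illustrates the three message roles plus the temperature and
# stream parameters discussed in questions 1-3.
payload = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize RAG in one sentence."},
        # Replies come back with role "assistant"; prior assistant turns
        # are included like this to continue a conversation.
        {"role": "assistant", "content": "RAG retrieves context before generating."},
    ],
    "temperature": 0.2,  # lower values = less random output
    "stream": True,      # return output chunk by chunk in real time
}

roles = [m["role"] for m in payload["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```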

Chapter 5: Prompt Engineering and System Instructions

4. What is the purpose of using a system message at the start of a conversation?

A) To execute an external function

B) To define the assistant’s behavior or persona

C) To reset the model’s memory

D) To set temperature to zero

5. Which of the following is NOT a recommended technique in prompt engineering?

A) Including examples within the prompt

B) Using vague instructions

C) Iterative refinement

D) Prompt chaining
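As a contrast to the "vague instructions" anti-pattern in question 5, here is a sketch of a prompt that applies the recommended techniques: a system message setting the persona (question 4) and an example included within the prompt (few-shot). The wording is purely illustrative.

```python
# A vague prompt gives the model little to work with.
vague = [{"role": "user", "content": "Tell me about stocks."}]

specific = [
    # System message defines the assistant's behavior/persona (question 4).
    {"role": "system",
     "content": "You are a financial tutor. Answer in two sentences."},
    # An example included within the prompt (question 5, option A).
    {"role": "user", "content": "What is a bond?"},
    {"role": "assistant",
     "content": "A bond is a loan to a company or government. "
                "The issuer pays interest and repays the principal at maturity."},
    # The actual question, phrased specifically rather than vaguely.
    {"role": "user",
     "content": "What is a stock, and how does it differ from a bond?"},
]

print(specific[0]["role"])  # system
```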

Chapter 6: Function Calling and Tool Use

6. What is required when defining a function schema for use in the Chat Completions API?

A) A description and a unique identifier only

B) An arguments list of responses

C) A name, description, and JSON-formatted parameters

D) A Python function decorated with @openai.callable

7. In function calling, what does function_call="auto" do?

A) Automatically executes your backend logic

B) Forces the assistant to call all defined functions

C) Lets the model decide when to call a function

D) Streams the function arguments in real time

8. What is Retrieval-Augmented Generation (RAG)?

A) A technique to improve summarization

B) A method of using embedding-based search to retrieve relevant context before generation

C) A native memory system in GPT-4

D) A streaming assistant for long conversations
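The pieces question 6 asks for (a name, a description, and JSON-formatted parameters) and question 7's function_call="auto" can be sketched together in one payload. get_weather here is a hypothetical function invented for illustration, and nothing is sent to the API.

```python
# Sketch of a function-calling payload. The schema carries a name,
# a description, and JSON-Schema parameters (question 6);
# function_call="auto" lets the model decide when to call it (question 7).
get_weather_schema = {
    "name": "get_weather",  # hypothetical function for illustration
    "description": "Get the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Paris"},
        },
        "required": ["city"],
    },
}

payload = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [{"role": "user", "content": "Is it raining in Paris?"}],
    "functions": [get_weather_schema],
    "function_call": "auto",  # model decides whether to call the function
}
```

Note that the model only *proposes* a call (name plus JSON arguments); executing your backend logic remains your application's job, which is why option A in question 7 is wrong.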

Chapter 7: Memory and Multi-Turn Conversations

9. What is the difference between short-term and long-term memory in OpenAI apps?

A) Short-term memory uses a database; long-term uses the context window

B) Short-term memory is built-in; long-term memory must be manually implemented

C) Long-term memory is only available in GPT-3.5

D) Both are managed automatically by OpenAI in all APIs

10. What is a common workaround to avoid exceeding the model’s context window?

A) Increasing temperature to 1.5

B) Truncating system messages

C) Summarizing earlier messages and replacing them with a condensed version

D) Restarting the assistant every 5 messages
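The workaround in question 10 (summarizing earlier messages into a condensed version) can be sketched as a small helper. The summary here is a stub string; in a real app that step would itself be a model call.

```python
# Sketch of the context-window workaround from question 10: replace
# older messages with a condensed summary, keeping the system message
# and the most recent turns intact.
def condense_history(messages, keep_recent=4):
    """Return a shortened list: system message(s) + summary + recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if len(rest) <= keep_recent:
        return messages  # nothing to condense yet
    older, recent = rest[:-keep_recent], rest[-keep_recent:]
    # Placeholder summary; a real app would generate this with a model call.
    summary = {"role": "system",
               "content": f"Summary of {len(older)} earlier messages: ..."}
    return system + [summary] + recent

history = [{"role": "system", "content": "Be brief."}] + [
    {"role": "user" if i % 2 == 0 else "assistant", "content": f"turn {i}"}
    for i in range(10)
]
condensed = condense_history(history)
print(len(condensed))  # 6: system message + summary + 4 recent turns
```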

Short-Answer Questions

11. Describe one use case where the Assistants API would be more appropriate than the Chat Completions API.

12. Write a sample function schema (in JSON) for a function named get_stock_price that takes a company symbol and returns its current price.
