# Quiz: Foundations of Deep Learning

## Answers - Quiz: Foundations of Deep Learning

- A) Input Layer, Hidden Layers, Output Layer
- B) Sigmoid outputs values between 0 and 1, ReLU outputs the input if it is positive, otherwise zero
- C) It updates the weights by calculating the gradient of the loss function
- B) Cross-Entropy Loss
- C) When the model performs well on training data but poorly on new data; can be mitigated by using regularization techniques and data augmentation
- C) Generative models learn \(P(X, Y)\), discriminative models learn \(P(Y|X)\)
- B) Generator and Discriminator
- B) The encoder maps input data to a latent space, the decoder generates new data from the latent space
- B) By generating one data point at a time, conditioned on the previous points
- B) They provide exact likelihood estimation and efficient sampling
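A few of the answers above can be checked numerically. The sketch below (a minimal illustration, not code from the book) implements the sigmoid and ReLU activations from the second answer and the cross-entropy loss from the fourth; the sample inputs are made up:

```python
import numpy as np

def sigmoid(x):
    """Squash inputs into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Pass positive inputs through unchanged; zero out the rest."""
    return np.maximum(0.0, x)

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean negative log-likelihood of the true class (one-hot labels)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))  # every value strictly between 0 and 1
print(relu(x))     # [0. 0. 3.]
```

Note how cross-entropy punishes confident wrong predictions: `cross_entropy([[1, 0]], [[0.8, 0.2]])` is about 0.22, while the same label scored against `[[0.2, 0.8]]` is about 1.61.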

This quiz covers the basic to intermediate concepts introduced in the first part of the book and will help solidify your understanding of deep learning's core features and generative models.
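The backpropagation answer ("it updates the weights by calculating the gradient of the loss function") can likewise be sketched as a plain gradient-descent loop. This is a minimal single-layer example with synthetic data, assuming a squared-error loss and a hand-picked learning rate, not a recipe from the book:

```python
import numpy as np

# Synthetic regression data: targets generated by a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))           # 8 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                        # noiseless targets for the sketch

w = np.zeros(3)                       # initial weights
lr = 0.1                              # illustrative learning rate
for _ in range(500):
    y_hat = X @ w
    grad = 2 * X.T @ (y_hat - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                         # step against the gradient

print(w)  # converges toward true_w
```

In a deep network the same update runs layer by layer, with backpropagation supplying each layer's gradient via the chain rule.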
