Quiz Part I: Foundations of NLP
Chapter 3: Feature Engineering for NLP
- What does TF-IDF stand for?
a) Term Frequency-Inverse Document Frequency
b) Text Frequency-Inverse Data Frequency
c) Token Frequency-Indexed Data Frequency
d) Term Frequency-Indexed Document Frequency
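For reference, a minimal sketch of computing TF-IDF features with scikit-learn's TfidfVectorizer (the two-document corpus here is purely illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus (illustrative only)
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)       # sparse matrix: documents x terms

print(vectorizer.get_feature_names_out())  # vocabulary learned from the corpus
print(X.toarray())                         # TF-IDF weight of each term per document
```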
- Which model is based on predicting context words given a target word or predicting a target word given context words?
a) TF-IDF
b) Bag of Words
c) Word2Vec
d) BERT
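A minimal gensim sketch of the two training objectives mentioned in the question; the tokenized sentences are toy data:

```python
from gensim.models import Word2Vec

# Tokenized toy sentences (illustrative only)
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# sg=1: skip-gram (predict context words from a target word)
# sg=0: CBOW (predict a target word from its context words)
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["cat"])                       # 50-dimensional static embedding
print(model.wv.most_similar("cat", topn=2))  # nearest neighbours in vector space
```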
- What is a key advantage of BERT over traditional word embeddings like Word2Vec and GloVe?
a) BERT is simpler to implement.
b) BERT generates context-aware embeddings.
c) BERT is based on frequency counts.
d) BERT uses a smaller model size.
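A sketch of what "context-aware" means in practice: unlike a static Word2Vec or GloVe vector, the same surface word receives a different BERT vector in each sentence. This assumes the transformers and torch packages and the bert-base-uncased checkpoint:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def token_vector(sentence, word):
    """Return the BERT hidden state for the first occurrence of `word`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (num_tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return hidden[tokens.index(word)]

# "bank" gets a different contextual vector in each sentence
river = token_vector("He sat by the river bank.", "bank")
money = token_vector("She deposited cash at the bank.", "bank")
print(torch.cosine_similarity(river, money, dim=0))  # noticeably below 1.0
```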
- Which library is commonly used to implement BERT embeddings in Python?
a) scikit-learn
b) nltk
c) transformers
d) gensim
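A minimal sketch of extracting BERT embeddings through the transformers pipeline API (the model choice is illustrative):

```python
from transformers import pipeline

# "feature-extraction" returns the model's per-token hidden states
extractor = pipeline("feature-extraction", model="bert-base-uncased")

features = extractor("Feature engineering for NLP")
print(len(features[0]), len(features[0][0]))  # num_tokens x 768 hidden size
```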