Introduction to Natural Language Processing with Transformers

Chapter 13: Appendices

13.2 References

This section lists the references used throughout this book, which can be explored for a more detailed understanding and further reading. They include published papers, official documentation, books, and reputable online articles:

  1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). "Attention Is All You Need". https://arxiv.org/abs/1706.03762
  2. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". https://arxiv.org/abs/1810.04805
  3. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). "Language Models are Unsupervised Multitask Learners". OpenAI Blog. https://openai.com/research/gpt-2
  4. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). "Language Models are Few-Shot Learners". https://arxiv.org/abs/2005.14165
  5. Hugging Face Transformers Documentation. https://huggingface.co/transformers/
  6. TensorFlow Official Documentation. https://www.tensorflow.org/
  7. PyTorch Official Documentation. https://pytorch.org/
  8. Chollet, F. (2017). "Deep Learning with Python". Manning Publications.
  9. Goodfellow, I., Bengio, Y., & Courville, A. (2016). "Deep Learning". MIT Press.
