Chapter 13: Appendices
13.2 References
This section lists the references used in this book, which readers can explore for deeper understanding and further reading. They include published papers, official documentation, books, and reputable online articles:
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). "Attention Is All You Need". https://arxiv.org/abs/1706.03762
- Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". https://arxiv.org/abs/1810.04805
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). "Language Models are Unsupervised Multitask Learners". OpenAI Blog. https://openai.com/research/gpt-2
- Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). "Language Models are Few-Shot Learners". https://arxiv.org/abs/2005.14165
- Hugging Face Transformers Documentation. https://huggingface.co/transformers/
- TensorFlow Official Documentation. https://www.tensorflow.org/
- PyTorch Official Documentation. https://pytorch.org/
- Chollet, F. (2017). "Deep Learning with Python". Manning Publications.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). "Deep Learning". MIT Press.