In this lecture, we will understand how GPT-3 works. We will start with the history of GPT: from transformers to GPT to GPT-2 to GPT-3 and then to GPT-4. We will then discuss zero-shot and few-shot learning, and examine the autoregressive and unsupervised nature of GPT pre-training in detail. We end the lecture with a note on the emergent behaviour shown by language models.
0:00 Introduction and recap
1:20 Transformers, GPT, GPT-2, GPT-3 and GPT-4
9:50 Zero Shot vs Few Shot learning
18:18 Datasets for GPT pre-training
27:16 Next word prediction
38:11 Emergent behaviour
42:19 Recap of lecture
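The next-word prediction objective covered in the lecture (27:16) can be sketched with a toy bigram model. This is a hypothetical, minimal stand-in: GPT trains the same autoregressive objective on web-scale text with a transformer rather than simple word-pair counts, and the corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus (GPT uses web-scale datasets instead).
corpus = "the cat sat on a mat . the cat sat on a rug .".split()

# Count how often each word follows each word: an order-1 language model.
# This is unsupervised: the "labels" are just the next words in raw text.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

# Autoregressive generation: each predicted word is appended to the
# context and fed back in to predict the word after it.
context = "the"
generated = [context]
for _ in range(4):
    context = predict_next(context)
    generated.append(context)

print(" ".join(generated))  # → the cat sat on a
```

Replacing these frequency counts with a transformer that conditions on the whole preceding context, and scaling up the data and parameters, is essentially the step from this sketch to GPT-style pre-training.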
=================================================
✉️ Join our FREE Newsletter: https://vizuara.ai/our-newsletter/
=================================================
Vizuara philosophy:
As we learn the AI/ML/DL material, we will share thoughts on what is actually useful in industry and what has become irrelevant. We will also point out which subjects contain open areas of research, so interested students can start their research journey there.
If you are a student who is confused or stuck in your ML journey, perhaps courses and offline videos are not inspiring enough. What might inspire you is seeing someone else learn and implement machine learning from scratch.
No cost. No hidden charges. Pure old school teaching and learning.
=================================================
🌟 Meet Our Team: 🌟
🎓 Dr. Raj Dandekar (MIT PhD, IIT Madras department topper)
🔗 LinkedIn: / raj-abhijit-dandekar-67a33118a
🎓 Dr. Rajat Dandekar (Purdue PhD, IIT Madras department gold medalist)
🔗 LinkedIn: / rajat-dandekar-901324b1
🎓 Dr. Sreedath Panat (MIT PhD, IIT Madras department gold medalist)
🔗 LinkedIn: / sreedath-panat-8a03b69a
🎓 Sahil Pocker (Machine Learning Engineer at Vizuara)
🔗 LinkedIn: / sahil-p-a7a30a8b
🎓 Abhijeet Singh (Software Developer at Vizuara, GSOC 24, SOB 23)
🔗 LinkedIn: / abhijeet-singh-9a1881192
🎓 Sourav Jana (Software Developer at Vizuara)
🔗 LinkedIn: / souravjana131