Welcome to the official code and experiment hub for the DeepLearning MegaThread, a 5-part blog series by the GDGC ML Team that dives into the practical side of deep learning with a perfect blend of theory, code, and good vibes. ✨
We're talking hands-on implementation, clear explanations, and models that actually work, all wrapped into weekly drops designed to make deep learning more accessible, exciting, and deployable.
Author: Anaant Raj
Anaant kicked things off with a banger 🔥 by walking us through how to fine-tune MobileNetV2 on CIFAR-10.
From loading pre-trained weights to understanding which layers to freeze, this post is a must-read if you're diving into transfer learning.
📖 Read the blog
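For a taste of what the post covers, here is a minimal sketch of the freeze-and-retrain recipe in PyTorch/torchvision (our illustrative choice for this README, not necessarily the blog's exact stack):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load MobileNetV2 with ImageNet weights.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so only the new head trains at first.
for param in model.parameters():
    param.requires_grad = False

# Swap the 1000-class ImageNet head for a 10-class CIFAR-10 head.
model.classifier[1] = nn.Linear(model.last_channel, 10)

# Optimize only the parameters that still require gradients.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Unfreezing the last few blocks at a lower learning rate is the usual next step once the new head converges; the post covers which layers are worth thawing.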
Author: Nidhi Rohra
Nidhi broke down the GPT-2 architecture and the entire Transformer revolution with crystal-clear visuals and that golden example:
"In 'The cat sat on the mat,' the word 'sat' pays attention to both 'cat' and 'mat.'" 🤯
She also included runnable code and a clean walkthrough of why decoder-only models like GPT-2 are so powerful.
📖 Read the blog
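To poke at those attention weights yourself, here is a short sketch assuming Hugging Face transformers (not the blog's exact code). One detail the decoder-only design adds: GPT-2's causal mask means each token only looks backwards, so below we inspect how "mat" attends to the earlier words.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, shaped
# (batch, heads, seq_len, seq_len); row i shows how token i attends to each token.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
mat = tokens.index("Ġmat")              # GPT-2's BPE marks word starts with "Ġ"
last_layer = outputs.attentions[-1][0]  # (heads, seq_len, seq_len)
print(tokens)
print(last_layer[:, mat, :].mean(dim=0))  # "mat"'s attention, averaged over heads
```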
Author: Arihant Bhandari
CNNs are great at extracting local features. Transformers are amazing at global reasoning. So... why not both?
MobileViT does just that, blending MobileNet's efficiency with ViT's attention mechanism for a hybrid model that punches way above its size.
We experiment with MobileViT-XXS under multiple settings using Tiny ImageNet to test how image resolution, pretraining, and architecture affect performance.
📖 Read the blog
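As a hedged sketch of that experiment grid, assuming timm's `mobilevit_xxs` (the VisionGPT notebooks below hold the actual runs):

```python
import timm
import torch

# The series varies two knobs: ImageNet pretraining on/off, and input resolution.
for pretrained in (False, True):
    for size in (64, 96):
        model = timm.create_model(
            "mobilevit_xxs",
            pretrained=pretrained,
            num_classes=200,  # Tiny ImageNet has 200 classes
        )
        x = torch.randn(1, 3, size, size)
        print(pretrained, size, model(x).shape)  # expect torch.Size([1, 200])
```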
Author: Samyak Waghdare
This one's all about securing GPT-like models against manipulation. Samyak walks you through how adversarial prompts work, how to generate them using libraries like TextAttack, and how adversarial training can improve model robustness without sacrificing performance.
📖 Read the blog
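For a flavor of the attack side, here is a hedged sketch using TextAttack's ready-made recipes; the victim model is an illustrative Hugging Face checkpoint, not necessarily the one from the post:

```python
from textattack.attack_recipes import TextFoolerJin2019  # or DeepWordBugGao2018
from textattack.models.wrappers import HuggingFaceModelWrapper
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative victim: a sentiment classifier from the TextAttack model zoo.
name = "textattack/bert-base-uncased-imdb"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Build the attack recipe, then perturb one labeled example (text, true label).
attack = TextFoolerJin2019.build(HuggingFaceModelWrapper(model, tokenizer))
result = attack.attack("A genuinely wonderful film.", 1)
print(result.__str__(color_method=None))  # original vs. adversarial text
```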
Author: Sahil Garje
In this hilarious and thought-provoking finale, Sahil takes on the challenge of humanizing GPT output, testing dozens of prompting techniques and measuring them using GPTZero, a state-of-the-art AI detector. It's prompt engineering meets performance art.
📖 Read the blog
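If you want to score your own prompts the same way, here is a minimal sketch against GPTZero's public API; the endpoint and request shape follow their docs at the time of writing, so treat both as assumptions:

```python
import requests

resp = requests.post(
    "https://api.gptzero.me/v2/predict/text",
    headers={"x-api-key": "YOUR_API_KEY"},
    json={"document": "Paste the output of your humanized prompt here."},
    timeout=30,
)
print(resp.json())  # inspect the detector's verdict and probabilities
```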
All the code, experiments, and notebooks from the series, neatly organized for you to run, tweak, and learn from.
📁 Folder: VisionGPT/
| Notebook | Description |
|---|---|
| `mobilevit_xxs_scratch_64.ipynb` | Trains MobileViT-XXS from scratch on 64×64 Tiny ImageNet images. |
| `mobilevit_xxs_pretrained_64.ipynb` | Fine-tunes pretrained MobileViT-XXS on 64×64 Tiny ImageNet. |
| `mobilevit_xxs_scratch_96.ipynb` | Trains MobileViT-XXS from scratch on 96×96 resolution input. |
| `mobilevit_xxs_pretrained_96.ipynb` | Fine-tunes pretrained MobileViT-XXS on 96×96 Tiny ImageNet. |
📁 Folder: GPTAdversarial/
| Notebook | Description |
|---|---|
| `gdgc-llm-adversarial.ipynb` | Core training + evaluation pipeline for adversarial robustness. |
| `gdgc-llm-adversarial-training-fgsm.ipynb` | Implements FGSM adversarial training on LLMs. |
| `gdgc-llm-adversarial-training-text-attack-deepbugword.ipynb` | Uses TextAttack with the DeepWordBug attack method. |
| `gdgc-llm-adversarial-training-text-attack-textfooler.ipynb` | Uses TextAttack with the TextFooler method. |
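For orientation before opening the FGSM notebook, here is a minimal sketch of the embedding-space FGSM step that this style of adversarial training builds on (a generic Hugging Face model is assumed; epsilon is illustrative):

```python
import torch

def fgsm_loss(model, input_ids, labels, epsilon=0.01):
    """One FGSM step: perturb token embeddings along the loss gradient's sign."""
    emb = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    loss = model(inputs_embeds=emb, labels=labels).loss
    loss.backward()
    adv_emb = emb + epsilon * emb.grad.sign()  # move where loss grows fastest
    # Return the loss on the perturbed batch; train on it alongside the clean loss.
    return model(inputs_embeds=adv_emb.detach(), labels=labels).loss
```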
Because let's face it: deep learning can feel like a black box sometimes.
This series is our way of turning that box transparent.
- 🤖 From pre-trained models to attention mechanisms
- 🧠 From "how it works" to "why it matters"
- 💻 From paper to practice
Whether you're a beginner or a practitioner brushing up on fundamentals, we've got something here for you.
🧠 Arihant Bhandari
🧠 Sahil Garje
🧠 Anaant Raj
🧠 Nidhi Rohra
🧠 Samyak Waghdare
We're the GDGC ML Team, on a mission to make deep learning more practical, fun, and open to all.
Spotted a bug, want to contribute, or just want to say hi?
Open an issue, fork the repo, or comment on the blog posts; we love hearing from fellow learners and builders.
🧠 Happy Learning from the GDGC ML Team!
🎉 This wraps up our 5-part DeepLearning MegaThread series. Thanks for following along!
We hope it sparked ideas, curiosity, and a love for building. Until next time! 👋