
A curated repository of all code, models, and experiments from the Deep Learning MegaThread blog series.


🧠 DeepLearning_MegaThread

Welcome to the official code and experiment hub for the DeepLearning MegaThread, a 5-part blog series by the GDGC ML Team that dives into the practical side of deep learning with a perfect blend of theory, code, and good vibes. 🚀

We're talking hands-on implementation, clear explanations, and models that actually work, all wrapped into weekly drops designed to make deep learning more accessible, exciting, and deployable.


✨ Blog Highlights

🔒 1. Transfer Learning Fundamentals

Author: Anaant Raj
Anaant kicked things off with a banger 💥 by walking us through how to fine-tune MobileNetV2 on CIFAR-10.
From loading pre-trained weights to understanding which layers to freeze, this post is a must-read if you're diving into transfer learning.

🔗 Read the blog
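For flavor, here is a minimal Keras sketch of that recipe (our illustration, not Anaant's exact notebook code): load a MobileNetV2 backbone, freeze it, and stack a small CIFAR-10 head on top. The post fine-tunes real ImageNet weights (weights="imagenet"); we pass weights=None here only so the sketch runs without a download.

```python
import tensorflow as tf

# Pretrained backbone. The blog loads ImageNet weights (weights="imagenet");
# weights=None here only keeps the sketch runnable offline.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze the whole backbone for the first training phase

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                  # CIFAR-10 images
    tf.keras.layers.Resizing(96, 96),                   # upsample to MobileNetV2's input size
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),    # one unit per CIFAR-10 class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Fine-tuning then typically unfreezes the top few blocks of `base` with a low learning rate once the new head has converged.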


🧠 2. Understanding GPT-2 & the Rise of Transformers

Author: Nidhi Rohra
Nidhi broke down the GPT-2 architecture and the entire Transformer revolution with crystal-clear visuals and that golden example:

“In ‘The cat sat on the mat,’ the word ‘sat’ pays attention to both ‘cat’ and ‘mat.’” 🤯

She also included runnable code and a clean walkthrough of why decoder-only models like GPT-2 are so powerful.

🔗 Read the blog
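The intuition in that quote is exactly scaled dot-product attention. A tiny NumPy sketch (toy random embeddings, purely illustrative) shows the mechanics, including the causal mask that makes GPT-2 a decoder-only model:

```python
import numpy as np

def attention(Q, K, V, causal=True):
    """Scaled dot-product attention, the core op inside every GPT-2 layer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token "looks at" the others
    if causal:                        # decoder-only: no peeking at future tokens
        mask = np.tril(np.ones_like(scores, dtype=bool))
        scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over visible tokens
    return weights @ V, weights

tokens = ["The", "cat", "sat", "on", "the", "mat"]
rng = np.random.default_rng(0)
X = rng.normal(size=(len(tokens), 8))  # toy 8-dim embeddings, random for illustration
out, w = attention(X, X, X)            # self-attention: Q = K = V = X
# w[2] holds how strongly "sat" attends to "The", "cat", and itself.
```

With trained GPT-2 embeddings instead of random ones, row `w[2]` is where the "sat pays attention to cat and mat" pattern shows up.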


πŸ” 3. VisionGPT: Tiny Transformer, Mighty Vision!

Author: Arihant Bhandari
CNNs are great at extracting local features. Transformers are amazing at global reasoning. So... why not both?
MobileViT does just that, blending MobileNet's efficiency with ViT's attention mechanism for a hybrid model that punches way above its size.

We experiment with MobileViT-XXS under multiple settings using Tiny ImageNet to test how image resolution, pretraining, and architecture affect performance.

🔗 Read the blog
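To make the "why not both" idea concrete, here is a toy NumPy sketch of the trick a MobileViT-style block plays (heavily simplified, no learned weights, not the real architecture): treat a conv feature map as local features, unfold it into patch tokens, let the tokens attend to each other globally, then fold the result back:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mobilevit_style_block(fmap, patch=2):
    """Toy sketch: keep the conv feature map, but let spatially distant
    patches exchange information through global self-attention."""
    H, W, C = fmap.shape
    # 1. "Unfold": split the H x W map into (patch x patch) tiles -> token sequence
    tiles = fmap.reshape(H // patch, patch, W // patch, patch, C)
    tokens = tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    # 2. Global self-attention across tiles (single head, no learned projections)
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])
    mixed = softmax(scores) @ tokens
    # 3. "Fold" the tokens back into a feature map, with a residual connection
    out = mixed.reshape(H // patch, W // patch, patch, patch, C)
    out = out.transpose(0, 2, 1, 3, 4).reshape(H, W, C)
    return fmap + out

fmap = np.random.default_rng(0).normal(size=(8, 8, 4))  # pretend conv output
out = mobilevit_style_block(fmap)
```

The real MobileViT block adds learned projections and interleaves this with MobileNet-style convolutions, but the unfold-attend-fold structure is the heart of it.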


πŸ›‘οΈ 4. From Vulnerable to Vigilant: Adversarial Training for LLMs

Author: Samyak Waghdare
This one's all about securing GPT-like models against manipulation. Samyak walks you through how adversarial prompts work, how to generate them using libraries like TextAttack, and how adversarial training can improve model robustness without sacrificing performance.

🔗 Read the blog
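As a taste of the FGSM notebook's core idea (illustrated here on a toy logistic-regression "model" in NumPy rather than an LLM): take the gradient of the loss with respect to the input, step each input dimension by eps in the sign of that gradient, and mix the perturbed examples into training:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.1  # toy "model": logistic regression weights
x, y = rng.normal(size=3), 1.0  # one input and its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss w.r.t. the INPUT (not the weights):
# for logistic regression, dL/dx = (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: nudge every input dimension by eps in the direction that raises the loss.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

# The attack lowers the model's confidence in the true class...
p_adv = sigmoid(w @ x_adv + b)
# ...and adversarial training simply mixes (x_adv, y) back into the batches.
```

For LLMs the same sign-of-gradient step is applied in embedding space, since raw tokens are discrete; that is the gap the TextAttack-based notebooks fill with word-level attacks.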


🤖 5. Gemini-Senpai, Notice Our Humanity: Prompting LLMs to Sound Human

Author: Sahil Garje
In this hilarious and thought-provoking finale, Sahil takes on the challenge of humanizing GPT output, testing dozens of prompting techniques and measuring them using GPTZero, a state-of-the-art AI detector. It's prompt engineering meets performance art.

🔗 Read the blog


πŸ“ What's in This Repo?

All the code, experiments, and notebooks from the series, neatly organized for you to run, tweak, and learn from.

📌 VisionGPT Experiments

📂 Folder: VisionGPT/

  • mobilevit_xxs_scratch_64.ipynb: Trains MobileViT-XXS from scratch on 64×64 Tiny ImageNet images.
  • mobilevit_xxs_pretrained_64.ipynb: Fine-tunes pretrained MobileViT-XXS on 64×64 Tiny ImageNet.
  • mobilevit_xxs_scratch_96.ipynb: Trains MobileViT-XXS from scratch on 96×96 resolution input.
  • mobilevit_xxs_pretrained_96.ipynb: Fine-tunes pretrained MobileViT-XXS on 96×96 Tiny ImageNet.

📌 Adversarial Training for GPT Experiments

📂 Folder: GPTAdversarial/

  • gdgc-llm-adversarial.ipynb: Core training + evaluation pipeline for adversarial robustness.
  • gdgc-llm-adversarial-training-fgsm.ipynb: Implements FGSM adversarial training on LLMs.
  • gdgc-llm-adversarial-training-text-attack-deepbugword.ipynb: Uses TextAttack with the DeepWordBug attack recipe.
  • gdgc-llm-adversarial-training-text-attack-textfooler.ipynb: Uses TextAttack with the TextFooler recipe.

💬 Why This Series?

Because let's face it: deep learning can feel like a black box sometimes.
This series is our way of turning that box transparent.

  • 🤖 From pre-trained models to attention mechanisms
  • 🧠 From “how it works” to “why it matters”
  • 💻 From paper to practice

Whether you're a beginner or a practitioner brushing up on fundamentals, we've got something here for you.


🙌 Meet the Team

🧠 Arihant Bhandari
🧠 Sahil Garje
🧠 Anaant Raj
🧠 Nidhi Rohra
🧠 Samyak Waghdare

We're the GDGC ML Team, on a mission to make deep learning more practical, fun, and open to all.


📬 Contribute / Connect

Spotted a bug, want to contribute, or just want to say hi?
Open an issue, fork the repo, or comment on the blog posts; we love hearing from fellow learners and builders.

🧠 Happy Learning from the GDGC ML Team!

🎉 This wraps up our 5-part DeepLearning MegaThread series; thanks for following along!
We hope it sparked ideas, curiosity, and a love for building. Until next time! 💙
