- 🔭 I’m currently a fourth-year Ph.D. student at SJTU (IWIN Lab).
- 🌱 I’m currently working on LLMs/MLLMs, explainable attention, information flow, and Truthful AI.
- 💬 Feel free to reach out if you’d like to collaborate on something interesting; I’m currently working on information-flow interpretability and diffusion-LLMs. 😊😊
- 📫 Email: framebreak@sjtu.edu.cn. WeChat: SemiZxf
- 📕 “When the tide comes in on the Qiantang River, only today do I know that I am who I am.”
- 🌱 Homepage: zhangbaijin.github.io
- 💬 Google Scholar
Pinned repositories:
- LVLMs-Saliency: [ICLR 2026 Oral] 🎉 Hallucination Begins Where Saliency Drops (Python)
- ErikZ719/CoTA: [ICLR 2026] Context Tokens are Anchors: Understanding the Repeat Curse in dMLLMs from an Information Flow Perspective (Python)
- From-Redundancy-to-Relevance: [NAACL 2025 Oral] From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models
- 0x4e8/Simignore: [AAAI 2025] Code for the paper: Enhancing Multimodal Large Language Models’ Complex Reasoning via Similarity Computation
- itsqyh/Shallow-Focus-Deep-Fixes: [EMNLP 2025 Oral] 🎉 Shallow Focus, Deep Fixes: Enhancing Shallow Layers Vision Attention Sinks to Alleviate Hallucination in LVLMs (Python)
