
Hi there 👋

  • 🔭 I’m currently a fourth-year Ph.D. student at SJTU (IWIN Lab).
  • 🌱 I’m currently learning about LLMs/MLLMs, explainable attention, information flow, and truthful AI.
  • 💬 Everyone is welcome to reach out and work on something fun together; I’m currently working on information-flow interpretability and diffusion LLMs. 😊😊
  • 📫 Email: framebreak@sjtu.edu.cn · WeChat: SemiZxf
  • 📕 A morning tide message arrives on the Qiantang River; only today do I know that I am who I am.
  • 🌱 Homepage: zhangbaijin.github.io
  • 💬 Google Scholar

Pinned repositories

  1. LVLMs-Saliency — [ICLR 2026 Oral] 🎉 Hallucination Begins Where Saliency Drops (Python, 21 stars)

  2. ErikZ719/CoTA — [ICLR 2026] Context Tokens are Anchors: Understanding the Repeat Curse in dMLLMs from an Information Flow Perspective (Python, 5 stars)

  3. From-Redundancy-to-Relevance — [NAACL 2025 Oral] From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models (Python, 128 stars, 10 forks)

  4. 0x4e8/Simignore — [AAAI 2025] Code for the paper: Enhancing Multimodal Large Language Models’ Complex Reasoning via Similarity Computation (Python, 28 stars, 1 fork)

  5. itsqyh/Shallow-Focus-Deep-Fixes — [EMNLP 2025 Oral] 🎉 Shallow Focus, Deep Fixes: Enhancing Shallow-Layer Vision Attention Sinks to Alleviate Hallucination in LVLMs (Python, 9 stars)

  6. zhangbaijin.github.io — personal homepage (HTML)