This is LCM-Lab, an open-source research team within the OpenNLG Group focused on long-context modeling and optimization. Below is a list of our work; feel free to explore!
If you have any questions about the code or paper details, please don’t hesitate to open an issue or contact us at zecheng.tang@foxmail.com.
- LongRM: Pushing the limits of reward modeling beyond 128K tokens
- MemoryRewardBench: Benchmarking reward models for long-term memory management in large language models
- LOOM-Eval: A comprehensive and efficient framework for long-context model evaluation
- L-CiteEval (ACL 2025): A faithfulness-oriented benchmark for long-context citation
- MMLongCite: A benchmark for evaluating the fidelity of long-context vision-language models
- LOGO (ICML 2025): Long cOntext aliGnment via efficient preference Optimization
- Global-Mamba (ACL 2025): An efficient long-context modeling architecture