Awex is a high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from training to inference in RL workflows. It minimizes iteration latency, ensuring rollout phases consistently use the latest model.
- Extreme Sync Speed: Trillion-parameter models fully synchronized within 10 seconds; validated on thousand-GPU clusters with industry-leading performance.
- Unified Weight Adaptation Layer: Automatically handles tensor format/layout differences across parallel strategies and engine frameworks, supporting any model architecture.
- Zero-Redundancy Transfer & In-Place Update: Transfers only necessary shards; supports in-place GPU memory updates on inference, avoiding costly allocation and copying.
- Multi-Mode Transfer Support: Supports NCCL, RDMA, and shared-memory transfer modes to leverage NVLink/NVSwitch/RDMA bandwidth and reduce long-tail latency.
- Heterogeneous Deployment Compatibility: Fully supports co-located and disaggregated deployment modes, so both synchronous and asynchronous RL algorithms run seamlessly.
- Extensibility: Easily extends to support new training and inference engines.
The Awex weight exchange framework consists primarily of three components:
- WeightWriter: Runs within each training process; responsible for collecting and reporting weight-shard metadata for that process, weight conversion, resharding transfer-plan construction, weight transmission, and related functions.
- WeightReader: Runs in the control process of each inference instance and starts a WorkerWeightsReader on each GPU managed by that instance, mirroring the training processes' WeightWriters. Responsible for collecting and reporting weight-shard metadata for each inference process, weight conversion, resharding transfer-plan construction, weight reception, and related functions.
- MetaServer: A job-level global server for service discovery and weight-metadata exchange between training and inference engines, as well as event notification in co-located scenarios.
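To make the metadata flow concrete, here is a minimal sketch of the kind of per-shard record each worker could report to the MetaServer. The field names and shapes are illustrative assumptions, not Awex's actual schema:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical per-shard metadata record, as a WeightWriter/WeightReader
# worker might report it to the MetaServer. Field names are assumptions.
@dataclass(frozen=True)
class ShardMeta:
    param_name: str                 # canonical parameter name
    global_shape: Tuple[int, ...]   # full (unsharded) tensor shape
    offset: Tuple[int, ...]         # start index of this shard per dim
    shard_shape: Tuple[int, ...]    # local shard shape per dim
    rank: int                       # owning worker rank
    dtype: str                      # e.g. "bf16", "fp8"

def covers(meta: ShardMeta) -> int:
    """Number of elements this shard contributes."""
    n = 1
    for d in meta.shard_shape:
        n *= d
    return n

# A TP=2 row-sharded QKV projection: rank 0 owns the top half of the rows.
m = ShardMeta("model.layers.0.attention.query_key_value_proj.weight",
              (6144, 4096), (0, 0), (3072, 4096), rank=0, dtype="bf16")
print(covers(m))  # 12582912
```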
The core modules of weight exchange consist mainly of 5 parts:
- Unified training-inference weight conversion: Responsible for converting weights from training and inference engines with different parallelism strategies and tensor layouts into a unified format for subsequent weight-metadata calculation and weight transmission;
- Global weight-metadata calculation and exchange: After converting training and inference weights into a unified format, collects all weight-shard metadata from each worker and reports it to the MetaServer for subsequent transfer-plan construction;
- P2P weight-transmission execution plan: Training and inference engines obtain the global weight-shard metadata from all workers, then each construct a deterministic peer-to-peer transfer plan for sending and receiving;
- NCCL weight transmission: Uses NCCL's send/recv API for peer-to-peer weight transmission based on the constructed transfer plan;
- RDMA weight transmission: Uses NUMA-affinity-aware RDMA communication with a globally load-balanced transfer plan for weight updates;
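The key property of the P2P plan step is that both sides derive the same send/receive pairs from the shared global metadata, so no runtime negotiation is needed. A minimal sketch for one parameter sharded along a single dimension (names are illustrative, not Awex's real API):

```python
# Deterministic P2P plan construction: intersect each training shard's
# row range with each inference shard's row range. Because both sides
# run the same computation on the same global metadata, the resulting
# send and receive lists pair up exactly.
def build_plan(train_shards, infer_shards):
    """train_shards/infer_shards: lists of (rank, start_row, end_row)."""
    plan = []
    for t_rank, t_start, t_end in train_shards:
        for i_rank, i_start, i_end in infer_shards:
            lo, hi = max(t_start, i_start), min(t_end, i_end)
            if lo < hi:  # overlapping rows must be transferred
                plan.append((t_rank, i_rank, lo, hi))
    return sorted(plan)  # deterministic order on both sides

# Training uses TP=2 over 4096 rows; inference uses TP=4.
train = [(0, 0, 2048), (1, 2048, 4096)]
infer = [(0, 0, 1024), (1, 1024, 2048), (2, 2048, 3072), (3, 3072, 4096)]
print(build_plan(train, infer))
# [(0, 0, 0, 1024), (0, 1, 1024, 2048), (1, 2, 2048, 3072), (1, 3, 3072, 4096)]
```

Each tuple reads as (sender rank, receiver rank, start row, end row); only the overlapping slices move, which is the "zero-redundancy transfer" property above.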
Awex also supports tensor-level validation of weights: weights loaded through the file-system mode are compared at tensor granularity with those received through the transmission mode, ensuring the correctness of the transmission path.
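The idea behind that validation can be sketched as follows: load the same parameter once via the file-system path and once via the transmission path, then compare element-wise within a tolerance. Shown here with plain Python lists for clarity; the real check would operate on torch tensors:

```python
# Illustrative tensor-level consistency check (not Awex's actual code).
def max_abs_diff(a, b):
    assert len(a) == len(b), "shape mismatch"
    return max(abs(x - y) for x, y in zip(a, b))

def tensors_consistent(a, b, atol=1e-6):
    """True if every element agrees within the absolute tolerance."""
    return max_abs_diff(a, b) <= atol

file_loaded = [0.25, -1.5, 3.0]   # loaded from checkpoint files
transmitted = [0.25, -1.5, 3.0]   # received via NCCL/RDMA transfer
print(tensors_consistent(file_loaded, transmitted))  # True
```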
See more details in our documentation.
For a comprehensive introduction to Awex, see the Medium article.
On thousand-GPU clusters, Awex can exchange 10B-scale model weights within one second using NCCL transmission, and 1T-scale model weights within 20 seconds. Using RDMA, the 1T weight exchange time can be further reduced to six seconds.
| Weight Parameter Scale | Weight Data Size | verl Time | Awex NCCL Transmission Time | Awex RDMA Transmission Time |
|---|---|---|---|---|
| 10B | 31 GB | 3.5 s | 0.8 s | 0.5 s |
| 100B | 191 GB | 35 s | 9 s | 3.2 s |
| 1000B | 1000 GB (FP8) | / | 20 s | 6 s |
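As a back-of-envelope check on the table above, dividing data size by exchange time gives the implied aggregate transfer rate (a rough cluster-wide figure, not a measured per-link bandwidth):

```python
# Effective aggregate transfer rate implied by the benchmark table.
def effective_gb_per_s(size_gb, seconds):
    return size_gb / seconds

print(round(effective_gb_per_s(191, 3.2), 1))   # 100B model over RDMA -> 59.7
print(round(effective_gb_per_s(1000, 6), 1))    # 1T model over RDMA -> 166.7
```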
- Python 3.8 or higher
- PyTorch 2.0.0 or higher (for GPU support)
Install awex using pip:

```shell
pip install awex
```

Clone the repository and install in development mode:

```shell
git clone git@github.com:inclusionAI/awex.git
cd awex
pip install -e .
```

For development with additional tools:

```shell
pip install -e ".[dev]"
```

Awex is a pure Python library that can be installed and used with one command, supporting Python 3.8 and above.

```shell
pip install awex
```

Megatron training engine weight sending example:
```python
from awex import NCCLWeightsWriter
from awex.engine.mcore import MegatronEngine

# init
train_engine = MegatronEngine(awex_config, hf_config, mcore_model)
writer = NCCLWeightsWriter(train_engine)
writer.initialize()

# write weights
writer.write_weights(step_id=1)
```

SGLang inference engine weight update example:
```python
import sglang as sgl

from awex import WeightsReader, InferenceConfig
from awex.engine.sglang import SGLangEngine

sgl_engine = sgl.Engine(model_path="xxx", tp_size=2, random_seed=42)
awex_config = InferenceConfig.from_sgl_engine(sgl_engine, comm_backend="nccl")

# for SGLang support, you must ensure https://github.com/sgl-project/sglang/pull/13595
# is included in your sglang version
inference_engine = SGLangEngine(awex_config, sgl_engine)
reader = WeightsReader(inference_engine)
reader.initialize()

# update weights
reader.update_weights(step_id=1)
```

These scripts compare weight formats across Megatron, vLLM, and SGLang by converting all parameters into HF-style names and then diffing tensors.
Intended use (for new model bring-up):
- These scripts primarily validate Awex converter coverage. They help answer: "Does the current converter support this new model, or do we need mapping fixes?"
- If your target stack is Megatron → vLLM, running `verify_weight_conversion.py` + `compare_megatron_vllm_weights.py` is usually sufficient.
- Use `compare_vllm_sglang_weights.py` only if you also care about vLLM → SGLang parity (or you're adding SGLang support for a new model).
GPU/NPU notes:
- All compare/verify scripts accept `--device-backend` (auto/cuda/npu/cpu), but they are CUDA-only today because the vLLM/SGLang backends require CUDA. Use `--device-backend cuda` explicitly if auto-detection picks the wrong device.
- For NPU, run these scripts on CUDA to validate converter coverage, then validate the runtime weight-update path on NPU with the integration tests.
Awex normalizes parameter names from different backends into a single canonical HF-style naming scheme so Megatron, vLLM, and SGLang can be compared directly. There are three "namespaces" involved:
- Megatron (mcore) names, e.g. `decoder.layers.0.self_attention.linear_qkv.weight`
- vLLM/SGLang names, e.g. `model.layers.0.self_attn.qkv_proj.weight`
- Awex canonical HF-style names, e.g. `model.layers.0.attention.query_key_value_proj.weight`
Example for QKV conversion:
- Megatron: `self_attention.linear_qkv.weight` → (mcore converter) `self_attn.qkv_proj.weight` → (normalize) `attention.query_key_value_proj.weight`
- vLLM: `self_attn.qkv_proj.weight` → (normalize) `attention.query_key_value_proj.weight`
So `self_attn.qkv_proj` is not the canonical HF name; it is a vLLM name (and also an intermediate name in the Megatron converter). The canonical name used for comparison is `attention.query_key_value_proj`.
Qwen3 note: HF checkpoints store unfused `q_proj`/`k_proj`/`v_proj` weights. The verifier treats those as valid matches for the canonical `query_key_value_proj`.
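The normalization step above can be sketched as a small rule table applied to parameter names. The specific rules here cover only the QKV example from this section; Awex's real converter handles many more cases:

```python
import re

# Illustrative normalization of vLLM-style names into the canonical
# HF-style namespace described above. The rule list is a toy subset.
RULES = [
    # Fused QKV projection gets the canonical fused name.
    (re.compile(r"\bself_attn\.qkv_proj\b"), "attention.query_key_value_proj"),
    # Any remaining attention-module prefix is renamed.
    (re.compile(r"\bself_attn\b"), "attention"),
]

def normalize(name: str) -> str:
    for pattern, replacement in RULES:
        name = pattern.sub(replacement, name)
    return name

print(normalize("model.layers.0.self_attn.qkv_proj.weight"))
# model.layers.0.attention.query_key_value_proj.weight
```

Because both the Megatron-converted names and the vLLM/SGLang names funnel through the same canonical namespace, tensors can then be diffed by name directly.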
- Compare vLLM vs SGLang HF-loaded weights:
  - Script: `awex/tests/experimental/compare_vllm_sglang_weights.py`
  - Example:

    ```shell
    python awex/tests/experimental/compare_vllm_sglang_weights.py \
      --model-path /path/to/hf/model \
      --out-dir /tmp/vllm_sglang_compare \
      --device-backend cuda \
      --trust-remote-code \
      --max-layers 4 \
      --include-non-layer
    ```
- Compare Megatron vs vLLM (via converters to HF naming):
  - Script: `awex/tests/experimental/compare_megatron_vllm_weights.py`
  - Note: We default to mbridge for all models. Use `--no-mbridge` to force the Megatron `convert.py` path (Qwen3 will still fall back to mbridge).
  - Example:

    ```shell
    python awex/tests/experimental/compare_megatron_vllm_weights.py \
      --model-path /path/to/hf/model \
      --out-dir /tmp/megatron_vllm_compare \
      --device-backend cuda \
      --trust-remote-code \
      --max-layers 4 \
      --include-non-layer
    ```

- Multi-GPU (torchrun) variant:
  - Script: `awex/tests/experimental/compare_megatron_vllm_weights_multi.py`
  - Example:

    ```shell
    torchrun --nproc_per_node=2 awex/tests/experimental/compare_megatron_vllm_weights_multi.py \
      --stage megatron_dump \
      --model-path /path/to/hf/model \
      --out-dir /tmp/megatron_vllm_compare \
      --device-backend cuda \
      --train-tp-size 2 \
      --train-pp-size 1 \
      --train-ep-size 1 \
      --train-cuda-devices 0,1

    python awex/tests/experimental/compare_megatron_vllm_weights_multi.py \
      --stage vllm_compare \
      --model-path /path/to/hf/model \
      --out-dir /tmp/megatron_vllm_compare
    ```
Both scripts produce a JSON report with missing keys, shape/dtype mismatches, and value diffs. You can limit comparison to the first N layers with `--max-layers N`. For large models, expect heavy disk usage because each tensor is saved to disk for comparison.
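Such a report can be post-processed programmatically, for example to fail a CI job when structural issues appear. The field names below ("missing_keys", "shape_mismatches", "value_diffs") are hypothetical, not the scripts' actual schema:

```python
import json

# Hypothetical report shape, for illustration only.
report = json.loads("""
{
  "missing_keys": ["model.layers.3.mlp.up_proj.weight"],
  "shape_mismatches": [],
  "value_diffs": [{"name": "model.embed_tokens.weight", "max_abs": 0.0}]
}
""")

# Missing keys and shape mismatches are hard failures; value diffs may
# merely need a tolerance review.
problems = len(report["missing_keys"]) + len(report["shape_mismatches"])
print("clean" if problems == 0 else f"{problems} structural issue(s)")
```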
- Verify HF weight conversion coverage:
  - Script: `awex/tests/experimental/verify_weight_conversion.py`
  - Note: Qwen3 HF checkpoints store unfused q/k/v (and `o_proj`) weights, so the verifier treats those as valid matches for vLLM qkv/`o_proj` names.
  - Example:

    ```shell
    python awex/tests/experimental/verify_weight_conversion.py \
      --model-path /path/to/hf/model \
      --device-backend cuda
    ```
- Megatron → vLLM weight exchange (requires 2 GPUs and the Awex vLLM plugin):
  - Script: `awex/tests/weights_exchange_vllm_it.py`
  - Example:

    ```shell
    CUDA_VISIBLE_DEVICES=0,1 python awex/tests/weights_exchange_vllm_it.py \
      --comm_backend nccl \
      --model-path /path/to/hf/model \
      --device-backend cuda \
      --validate
    ```

  - Optional: add `--validate` to run a consistency check and print "weights are consistent" logs (supported for the NCCL or file backend).
  - `--model-path` defaults to `vllm_inference_config["model_path"]` inside the script. Set it explicitly for your local model directory.
  - NPU (experimental, requires vllm-ascend + MindSpeed + Megatron):

    ```shell
    ASCEND_RT_VISIBLE_DEVICES=0,1 AWEX_USE_MINDSPEED=1 \
      python awex/tests/weights_exchange_vllm_it.py \
      --comm_backend hccl \
      --device-backend npu
    ```

  - Multi-process (`torchrun`) integration is currently excluded because startup is not stable in our test environment. Use the single-process script above as the baseline validation path.
Awex includes experimental NPU support for the weight-exchange runtime path (training → inference). This path is intended for MindSpeed + Megatron on Ascend and vllm-ascend on the inference side.
- Device backend: set `AWEX_DEVICE_TYPE=npu` to switch the internal device helpers to NPU semantics. For communication, use `comm_backend=hccl` and `weights_exchange_ipc_backend=cpu` (CUDA IPC is not supported on NPU).
- MindSpeed patching: set `AWEX_USE_MINDSPEED=1` before importing `megatron`/`megatron.core` so MindSpeed can patch Megatron internals.
- Inference: requires `vllm-ascend` with the Awex plugin enabled. This integration has been validated in our environment.
- Memory debug logging: set `AWEX_MEM_DEBUG=1` to emit additional memory diagnostics during weight conversion and NCCL send-op construction. This is intended for debugging memory pressure or unexpected tensor retention and should remain disabled in normal runs.
What is NOT NPU-ready yet:
- `compare_megatron_vllm_weights.py`, `verify_weight_conversion.py`, and `compare_vllm_sglang_weights.py` are CUDA-only (they rely on vLLM CUDA kernels and `torch.cuda`).
- If you target NPU, use these scripts on CUDA to validate converter coverage, then validate the runtime weight-update path on NPU.
Awex is an open-source project. We welcome all forms of contributions:
- Report Issues: Found a bug? Open an issue
- Suggest Features: Have an idea? Start a discussion
- Improve Docs: Documentation improvements are always welcome
- Submit Code: See our Contributing Guide
- Agent Workflows: Read the Repository Guidelines for structure, testing, and PR expectations.
```shell
git clone https://github.com/inclusionAI/awex.git
cd awex

# Install in development mode with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest -v -s .

# Run specific test
pytest -v -s awex/tests/test_meta_resolver.py

# Run heavy GPU integration tests (requires Megatron-LM and 2 GPUs)
CUDA_VISIBLE_DEVICES=0,1 pytest -v -s awex/tests/test_weights_writer.py

# Format code
ruff format .
ruff check --fix .
```

See DEVELOPMENT.md for detailed build instructions.
Apache License 2.0. See LICENSE for details.
Awex - high-performance RL training-inference weight synchronization framework with second-level parameter updates
We welcome contributions! Whether it's bug reports, feature requests, documentation improvements, or code contributions, we appreciate your help.
- Star the project on GitHub โญ
