feat: add tensor-rt inference engine support #2000

Merged
varungup90 merged 7 commits into vllm-project:main from varungup90:feat/add-trt-support
Mar 13, 2026

Conversation

@varungup90
Collaborator

Pull Request Description

add tensor-rt inference engine support

Related Issues

Resolves: #[Insert issue number(s)]

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

Signed-off-by: varungupta <varungup90@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the system's inference capabilities by integrating TensorRT-LLM. It introduces the necessary backend logic to manage TensorRT-LLM specific request flows, particularly for disaggregated prefill and decode operations. Additionally, it provides practical Kubernetes deployment examples, allowing users to easily set up and configure TensorRT-LLM for their models.

Highlights

  • TensorRT-LLM Integration: Added support for TensorRT-LLM as an inference engine, enabling its use within the system's routing algorithms.
  • Disaggregated Prefill/Decode Logic: Implemented specific handling for TensorRT-LLM's disaggregated prefill and decode requests, including synchronous waiting for disaggregated parameters and updating the routing context.
  • Payload Preparation Adjustments: Modified the prefill payload preparation to include TensorRT-LLM-specific parameters such as 'disaggregated_params' for context-only requests, and to conditionally omit 'max_completion_tokens', which TensorRT-LLM's strict schema validation would otherwise reject (a sketch follows this list).
  • Kubernetes Deployment Examples: Introduced new Kubernetes YAML configurations for deploying TensorRT-LLM, supporting both a disaggregated prefill/decode setup and a standard single-instance deployment.
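
To make the payload adjustment above concrete, here is a minimal sketch in Go. The function name, the map-based payload, and the "context_only" request type are assumptions modeled on the changelog and on TensorRT-LLM's disaggregated serving API, not the actual aibrix implementation:

```go
package main

import "fmt"

// prepareTRTPrefillPayload sketches the behavior described above: inject
// disaggregated_params so TensorRT-LLM treats the request as context-only
// (prefill), and drop max_completion_tokens, which its strict schema
// validation would reject. All names here are illustrative assumptions.
func prepareTRTPrefillPayload(payload map[string]any) {
	payload["disaggregated_params"] = map[string]any{
		"request_type": "context_only",
	}
	delete(payload, "max_completion_tokens")
}

func main() {
	payload := map[string]any{
		"model":                 "qwen3-8b",
		"max_completion_tokens": 128,
	}
	prepareTRTPrefillPayload(payload)
	// Prints: map[disaggregated_params:map[request_type:context_only] model:qwen3-8b]
	fmt.Println(payload)
}
```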


Changelog
  • pkg/plugins/gateway/algorithms/pd_disaggregation.go
    • Added 'TensorRTEngine' constant to identify the new inference engine.
    • Implemented a new case in 'doPrefillRequest' to handle prefill requests specifically for 'TensorRTEngine', including logic to synchronously retrieve and process disaggregated parameters.
    • Modified 'preparePrefillPayload' to inject 'disaggregated_params' for TensorRT-LLM prefill requests and to conditionally set 'max_completion_tokens'.
    • Introduced an 'updateRoutingContextWithTRTDisaggParams' function to parse disaggregated parameters from TensorRT-LLM responses and store them in the routing context (sketched after this changelog).
  • samples/quickstart/vke/tensor-rt-pd.yaml
    • Added a new Kubernetes StormService definition for a Qwen3-8B model utilizing TensorRT-LLM in a disaggregated prefill/decode architecture.
    • Included associated ConfigMaps for prefill and decode configurations, specifying backend settings and KV cache transfer.
  • samples/quickstart/vke/tensor-rt.yaml
    • Added a new Kubernetes Deployment and Service definition for a single-instance Qwen3-8B model running on TensorRT-LLM.
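
The following Go sketch illustrates the prefill flow the changelog describes: synchronously obtain the context-only prefill response, then parse disaggregated_params out of it and record them for the decode request. The response shape and the map standing in for the routing context are assumptions; the actual types in pd_disaggregation.go may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// trtPrefillResponse models only the field this sketch needs from a
// TensorRT-LLM context-only response; the exact shape is an assumption
// based on the changelog.
type trtPrefillResponse struct {
	DisaggregatedParams json.RawMessage `json:"disaggregated_params"`
}

// updateRoutingContextWithTRTDisaggParams sketches the helper described
// above: parse the prefill response body and record the disaggregated
// parameters (here in a plain map standing in for the routing context)
// so the subsequent decode request can carry them.
func updateRoutingContextWithTRTDisaggParams(ctx map[string]json.RawMessage, body []byte) error {
	var resp trtPrefillResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return fmt.Errorf("parse prefill response: %w", err)
	}
	if len(resp.DisaggregatedParams) == 0 {
		return fmt.Errorf("prefill response missing disaggregated_params")
	}
	ctx["disaggregated_params"] = resp.DisaggregatedParams
	return nil
}

func main() {
	// Stand-in for the body returned by a synchronous prefill request.
	body := []byte(`{"disaggregated_params":{"request_type":"context_only","ctx_request_id":1}}`)
	ctx := map[string]json.RawMessage{}
	if err := updateRoutingContextWithTRTDisaggParams(ctx, body); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(string(ctx["disaggregated_params"]))
}
```

Blocking on the prefill response before routing the decode request is what makes the wait synchronous: the decode side cannot be dispatched until the disaggregated parameters exist.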

@varungup90 varungup90 requested review from DwyaneShi and Jeffwan March 12, 2026 21:46

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the TensorRT inference engine, including integration with the PD disaggregation router and new sample Kubernetes configurations. The implementation looks solid, but I've identified a few areas for improvement. My feedback focuses on refactoring duplicated code in the Go implementation to enhance maintainability, and aligning the sample YAML files with Kubernetes best practices regarding image tags, pull policies, and label consistency.

Signed-off-by: varungupta <varungup90@gmail.com>
Collaborator

@Jeffwan Jeffwan left a comment


/lgtm

@varungup90 varungup90 merged commit 0a8b0c9 into vllm-project:main Mar 13, 2026
14 checks passed
@varungup90 varungup90 deleted the feat/add-trt-support branch March 13, 2026 18:43