
net: report actual MANA adapter link speed to guest #2878

Open
yuqiong6 wants to merge 9 commits into microsoft:main from yuqiong6:user/yuqliu/linkspeed

Conversation


@yuqiong6 yuqiong6 commented Mar 4, 2026

Problem
The MANA virtual network adapter always reports a hardcoded link speed of 200 Gbps to the guest, regardless of the actual physical adapter link speed.

Root Cause
The link_speed() method in net_mana/src/lib.rs returned a constant 200 * 1000 * 1000 * 1000 because the MANA driver's QUERY_DEV_CONFIG request used an older response format that did not include the adapter_link_speed_mbps field.

Fix

  1. Extended ManaQueryDeviceCfgResp with new fields: adapter_mtu, reserved2, and adapter_link_speed_mbps.
  2. Updated query_dev_config() to use request_version() with MANA_QUERY_DEV_CONFIG_RESPONSE_V4, which returns the link speed.
  3. Added link_speed_bps() on ManaQueryDeviceCfgResp that converts Mbps to bps, falling back to 200 Gbps when the hardware reports 0.
  4. Exposed link_speed_bps() on ManaDevice and wired it through Vport::link_speed_bps() and ManaEndpoint::link_speed() so that netvsp returns the real adapter speed instead of a hardcoded 200 Gbps in OID_GEN_LINK_SPEED, OID_GEN_LINK_STATE, and OID_GEN_MAX_LINK_SPEED queries.
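The Mbps-to-bps conversion and zero fallback described in step 3 can be sketched as follows. This is a minimal standalone sketch: the real ManaQueryDeviceCfgResp in vm/devices/net/gdma_defs/src/bnic.rs carries many more fields, and the exact layout shown here is an assumption.

```rust
/// Default used when the device does not report a link speed (200 Gbps).
const DEFAULT_LINK_SPEED_BPS: u64 = 200 * 1000 * 1000 * 1000;

/// Trimmed-down stand-in for the real response struct; only the
/// link-speed field matters for this sketch.
struct ManaQueryDeviceCfgResp {
    adapter_link_speed_mbps: u32,
}

impl ManaQueryDeviceCfgResp {
    /// Converts the reported Mbps value to bps, falling back to 200 Gbps
    /// when the device reports 0 (older response versions).
    fn link_speed_bps(&self) -> u64 {
        match self.adapter_link_speed_mbps {
            0 => DEFAULT_LINK_SPEED_BPS,
            mbps => u64::from(mbps) * 1_000_000,
        }
    }
}

fn main() {
    // A v4 response reporting 400,000 Mbps -> 400 Gbps.
    let v4 = ManaQueryDeviceCfgResp { adapter_link_speed_mbps: 400_000 };
    assert_eq!(v4.link_speed_bps(), 400_000_000_000);
    // An older response leaves the field zeroed -> default 200 Gbps.
    let legacy = ManaQueryDeviceCfgResp { adapter_link_speed_mbps: 0 };
    assert_eq!(legacy.link_speed_bps(), DEFAULT_LINK_SPEED_BPS);
}
```

Note the fallback keys off a reported value of 0, which is what older response formats leave in the (zeroed) field.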

Test
TIP node on OVL3 cluster PHX71PrdApp48
On Windows guest:
PS C:\Users\LocalAdminUsr> get-netadapter

Name InterfaceDescription ifIndex Status MacAddress LinkSpeed
Ethernet Microsoft Hyper-V Network Adapter 2 Up 70-A8-A5-19-77-98 400 Gbps

On Linux guest:
root@ANA-0:/sys/bus/pci/drivers/mana# echo 7870:00:00.0 > unbind
root@ANA-0:/sys/bus/pci/drivers/mana# ethtool eth0
Settings for eth0:
......
Speed: 400000Mb/s
......
Link detected: yes

Query the adapter link speed from the MANA device configuration instead
of hardcoding 200 Gbps. The QUERY_DEV_CONFIG request now uses the v4
response format which includes the adapter_link_speed_mbps field. When
the hardware does not report a link speed (field is 0), fall back to
the previous 200 Gbps default.
@yuqiong6 yuqiong6 requested a review from a team as a code owner March 4, 2026 14:13
Copilot AI review requested due to automatic review settings March 4, 2026 14:13

Copilot AI left a comment


Pull request overview

This PR updates the MANA networking stack to report the actual physical adapter link speed to guests (instead of a hardcoded 200 Gbps), by querying a newer MANA QUERY_DEV_CONFIG response format and plumbing the value through to netvsp’s OID responses.

Changes:

  • Extend ManaQueryDeviceCfgResp to include adapter_link_speed_mbps (and related fields) and add a helper to convert Mbps → bps with a fallback default.
  • Update query_dev_config() to request dev-config response version v4 so the link-speed field is available.
  • Expose/plumb link speed through Vport and ManaEndpoint::link_speed() so netvsp can report it to the guest.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.

Summary per file:

  • vm/devices/net/net_mana/src/lib.rs: Use vport-reported link speed instead of a hardcoded constant.
  • vm/devices/net/mana_driver/src/mana.rs: Add Vport::link_speed_bps() accessor sourced from device config.
  • vm/devices/net/mana_driver/src/bnic_driver.rs: Switch QUERY_DEV_CONFIG to use request_version() with response v4.
  • vm/devices/net/gdma_defs/src/bnic.rs: Add dev-config response v4 constant, new response fields, and Mbps → bps conversion helper.
  • vm/devices/net/gdma/src/bnic.rs: Populate the new response fields in the GDMA/MANA emulator response.
Comments suppressed due to low confidence (3)

vm/devices/net/gdma_defs/src/bnic.rs:113

  • The doc comment mentions "OVL2", which is internal jargon and isn’t defined in this codebase. Consider rewording to something like "older firmware/response versions may report 0" so the behavior is understandable without external context.
    /// Returns the adapter link speed in bits per second.
    /// Falls back to a default of 200 Gbps when the hardware does not report
    /// a link speed (OVL2 or lower responses return 0).
    pub fn link_speed_bps(&self) -> u64 {

vm/devices/net/mana_driver/src/bnic_driver.rs:73

  • Switching from .request() (GDMA message v1) to .request_version(..., v4, ..., v4, ...) changes compatibility: if a device/firmware doesn’t support dev-config message version 4, ManaDevice::new() will now fail early rather than falling back to the previous behavior. If backward compatibility is still required, consider attempting v4 first and retrying with v1 on UnsupportedVersion/size errors (while keeping the 200 Gbps fallback when the field is present but zero).
    pub async fn query_dev_config(&mut self) -> anyhow::Result<ManaQueryDeviceCfgResp> {
        let (resp, _activity_id): (ManaQueryDeviceCfgResp, u32) = self
            .gdma
            .request_version(
                ManaCommandCode::MANA_QUERY_DEV_CONFIG.0,
                MANA_QUERY_DEV_CONFIG_RESPONSE_V4,
                ManaCommandCode::MANA_QUERY_DEV_CONFIG.0,
                MANA_QUERY_DEV_CONFIG_RESPONSE_V4,
                self.dev_id,
                ManaQueryDeviceCfgReq {
                    mn_drv_cap_flags1: 0,
                    mn_drv_cap_flags2: 0,
                    mn_drv_cap_flags3: 0,
                    mn_drv_cap_flags4: 0,
                    proto_major_ver: 1,
                    proto_minor_ver: 0,
                    proto_micro_ver: 0,
                    reserved: 0,
                },
            )
            .await?;
        Ok(resp)
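The backward-compatible variant the reviewer suggests could be sketched roughly as below. GdmaError, StubGdma, and the method signatures here are stand-ins for illustration, not the actual mana_driver API.

```rust
// Hedged sketch of the reviewer's "try v4, retry with v1" idea.

#[derive(Debug)]
enum GdmaError {
    UnsupportedVersion,
}

struct DeviceCfg {
    // v1 responses have no link-speed field; model that as 0 here.
    link_speed_mbps: u32,
}

/// Stand-in for a device that only understands dev-config message
/// versions up to `max_version`.
struct StubGdma {
    max_version: u16,
}

impl StubGdma {
    fn request_version(&self, version: u16) -> Result<DeviceCfg, GdmaError> {
        if version > self.max_version {
            Err(GdmaError::UnsupportedVersion)
        } else if version >= 4 {
            Ok(DeviceCfg { link_speed_mbps: 400_000 })
        } else {
            Ok(DeviceCfg { link_speed_mbps: 0 })
        }
    }
}

/// Try the v4 response format first; on UnsupportedVersion, retry with v1.
/// The caller still applies the 200 Gbps fallback when the field is 0.
fn query_dev_config(gdma: &StubGdma) -> Result<DeviceCfg, GdmaError> {
    match gdma.request_version(4) {
        Err(GdmaError::UnsupportedVersion) => gdma.request_version(1),
        other => other,
    }
}

fn main() {
    // New firmware: v4 accepted, real link speed reported.
    assert_eq!(query_dev_config(&StubGdma { max_version: 4 }).unwrap().link_speed_mbps, 400_000);
    // Old firmware: v4 rejected, v1 retry succeeds with a zeroed field.
    assert_eq!(query_dev_config(&StubGdma { max_version: 1 }).unwrap().link_speed_mbps, 0);
}
```

With this shape, a device that rejects message version 4 still initializes, and the existing 0-value fallback to 200 Gbps covers the missing field.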

vm/devices/net/mana_driver/src/bnic_driver.rs:73

  • New link-speed behavior is now part of the device-reported configuration and affects guest-visible OIDs. There are existing Rust tests for the MANA driver/end-to-end flow; please add a focused test that covers adapter_link_speed_mbps -> bps conversion and the 0-value fallback to 200 Gbps to prevent regressions.
    pub async fn query_dev_config(&mut self) -> anyhow::Result<ManaQueryDeviceCfgResp> {
        let (resp, _activity_id): (ManaQueryDeviceCfgResp, u32) = self
            .gdma
            .request_version(
                ManaCommandCode::MANA_QUERY_DEV_CONFIG.0,
                MANA_QUERY_DEV_CONFIG_RESPONSE_V4,
                ManaCommandCode::MANA_QUERY_DEV_CONFIG.0,
                MANA_QUERY_DEV_CONFIG_RESPONSE_V4,
                self.dev_id,
                ManaQueryDeviceCfgReq {
                    mn_drv_cap_flags1: 0,
                    mn_drv_cap_flags2: 0,
                    mn_drv_cap_flags3: 0,
                    mn_drv_cap_flags4: 0,
                    proto_major_ver: 1,
                    proto_minor_ver: 0,
                    proto_micro_ver: 0,
                    reserved: 0,
                },
            )
            .await?;
        Ok(resp)

erfrimod previously approved these changes Mar 4, 2026

@erfrimod erfrimod left a comment


LGTM

max_num_eqs: 64,
adapter_mtu: 0,
reserved2: 0,
adapter_link_speed_mbps: 0,


Can we have a variation of the test where this is getting populated with a non-zero value and then later it checks that the query returns the right link speed on the adapter?
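A minimal sketch of such a check, reduced to the conversion itself (all names here are hypothetical; the real test would populate the GDMA emulator's response and query through ManaDevice):

```rust
// Sketch: a non-zero adapter_link_speed_mbps set in the emulated response
// should surface as the matching bps value (and "400 Gbps" in the guest).
fn mbps_to_bps(mbps: u32) -> u64 {
    u64::from(mbps) * 1_000_000
}

/// Hypothetical helper mirroring how the guest renders the speed.
fn display_gbps(bps: u64) -> String {
    format!("{} Gbps", bps / 1_000_000_000)
}

fn main() {
    // 400,000 Mbps, as populated in the manual test above.
    let bps = mbps_to_bps(400_000);
    assert_eq!(bps, 400_000_000_000);
    assert_eq!(display_gbps(bps), "400 Gbps");
}
```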

Zero the response page buffer for the expected response size prior to
sending the HWC request. This ensures that fields added in newer
response versions (e.g. adapter_link_speed_mbps in v4) reliably read
as zero when an older socmana does not populate them, removing any
dependency on whether the SoC memzeros the response page.
