
Conversation

@gangwgr gangwgr commented Feb 8, 2026

Fixing oc CLI failures.

coderabbitai bot commented Feb 8, 2026

Walkthrough

Adds a large Ginkgo-based oc CLI e2e test (test/e2e/cli.go), a comprehensive e2e compatibility and CLI utility layer (test/e2e/util.go), an embedded fixture manager (test/testdata/fixtures.go), and multiple testdata YAML manifests for CRDs, custom resources, pods, quotas, deploymentconfigs, and image-mirroring. Removes a small placeholder test (test/e2e/e2e.go).

Changes

Cohort / File(s) Summary
E2E Tests & Utilities
test/e2e/cli.go, test/e2e/util.go, test/e2e/e2e.go
Adds a large Ginkgo end-to-end test file covering many oc CLI scenarios; introduces public structs ClientVersion, ServerVersion, VersionInfo; adds a comprehensive CLI abstraction (CLI, CLICommand) and many cluster/test helper functions. Removes a small placeholder e2e test file.
Fixture Management
test/testdata/fixtures.go
Adds embedded-fixtures support and runtime extraction helpers: FixturePath(), GetFixtureDir(), CleanupFixtures(), ListFixtures().
CRD Manifests
test/testdata/oc_cli/case72217/crd-cattoy-72217.yaml, .../crd-crontab-72217.yaml, .../crd-customtask-72217.yaml
Adds three CustomResourceDefinition manifests (scope: Namespaced) with OpenAPI v3 schemas and shortNames.
Custom Resource Instances
test/testdata/oc_cli/case72217/cr-cat-72217.yaml, .../cr-cron-72217.yaml, .../cr-custom-72217.yaml
Adds three corresponding custom resource YAML instances for the CRDs.
Pod / Init Container Templates
test/testdata/oc_cli/debugpod_48681.yaml, test/testdata/oc_cli/initContainer.yaml, test/testdata/oc_cli/initContainer66989.yaml
Adds pod and template manifests for debug and init-container test scenarios, including securityContext and resource requests/limits.
Configuration & Quota Manifests
test/testdata/oc_cli/clusterresource_for_user.yaml, test/testdata/oc_cli/deploymentconfig_with_quota.yaml, test/testdata/oc_cli/idmsFile64921.yaml
Adds ClusterResourceQuota, DeploymentConfig with resource specs, and an ImageDigestMirrorSet manifest for quota, deployment, and image mirroring tests.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes



@openshift-ci openshift-ci bot requested review from ingvagabund and tchap February 8, 2026 06:31

gangwgr commented Feb 8, 2026

/pj-rehearse e2e-aws-ovn-microshift


gangwgr commented Feb 8, 2026

/pj-rehearse pull-ci-openshift-origin-main-e2e-aws-ovn-microshift


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🤖 Fix all issues with AI agents
In `@test/e2e/cli.go`:
- Around lines 528-554: The test treats the event command's output as a filename and shells out to bash/cat on it. Instead, take the string returned by oc.AsAdmin().WithoutNamespace().Run("get").Args("events", "-n", projectN).Output(), split it into lines, skip the header and any empty lines, extract the first whitespace-separated column from each line (the same value awk '{print $1}' would produce), and run the "unknown" regex check against those tokens. Update the loop in the g.It test (where output is used and result is computed) to parse the output in Go (strings.Split, strings.Fields) rather than via exec.Command("bash", "-c", "cat ... | awk ...").
- Around lines 436-517: The branch conditions use o.Expect(...), which triggers Gomega failures immediately. Change the if/else-if conditions to plain string checks (strings.Contains(architectureStr, "...")) and keep o.Expect assertions only inside the matching branch when checking imageInfo for the expected architecture, so the amd64/arm64/ppc64le/s390x checks run only after the correct branch is chosen.
- Around lines 225-275: The test uses hard-coded /tmp paths (e.g. extractTmpDirName = "/tmp/case51018" and the string constants macocheckcmd/windowocheckcmd used in exec commands), which will collide in parallel runs. Create a unique temp dir with os.MkdirTemp (e.g. tmpDir, err := os.MkdirTemp("", "case51018-*")), defer os.RemoveAll(tmpDir), and build all child paths with filepath.Join(tmpDir, "mac", "oc") and filepath.Join(tmpDir, ".dockerconfigjson") instead of string concatenation. Update the usages in the Ginkgo It block (extractTmpDirName, macocheckcmd, windowocheckcmd, and the exec.Command("bash", "-c", ...) calls) to reference the constructed paths so the tests are isolated and safe for parallel execution.
- Around lines 278-321: The defer of cmd.Process.Kill() is registered before the error from Background() is checked, which can panic if cmd is nil. For each Background() call (e.g. the oc.Run("debug").Args(...).Background() that assigns cmd, _, _, err), move the defer behind the err check and register it only when err == nil (and, optionally, when cmd and cmd.Process are non-nil) so a nil Process is never dereferenced on failure.
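The events fix above amounts to replacing a shell pipeline with a few lines of Go. A minimal stand-alone sketch (the helper name firstColumns and the sample input are hypothetical, not from the PR):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// firstColumns extracts the first whitespace-separated token from each
// non-empty line, skipping the header row -- the same values that
// `awk '{print $1}'` would emit for `oc get events` output.
func firstColumns(output string) []string {
	var cols []string
	for i, line := range strings.Split(output, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 0 || i == 0 { // skip empty lines and the header
			continue
		}
		cols = append(cols, fields[0])
	}
	return cols
}

func main() {
	// Simulated `oc get events` output; the real string would come from
	// oc.AsAdmin().WithoutNamespace().Run("get").Args("events", ...).Output().
	sample := "LAST SEEN   TYPE     REASON    OBJECT\n5m          Normal   Created   pod/hello\n3m          Warning  Failed    pod/bad"
	unknown := regexp.MustCompile("unknown")
	for _, tok := range firstColumns(sample) {
		fmt.Println(tok, unknown.MatchString(tok))
	}
}
```

The regex check then runs against in-process tokens, so the test no longer depends on bash, cat, or awk being available.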

In `@test/e2e/util.go`:
- Around lines 498-504: The constants X86, ARM64, PPC64LE, and S390X are untyped string constants rather than ArchitectureType values, which breaks the typed-constant pattern. Declare them with the explicit type (e.g. X86 ArchitectureType = "amd64") so they carry the type information, and give each constant in the PlatformType group an explicit PlatformType type as well, keeping both constant groups consistently typed.
- Around lines 622-630: Remove the duplicate unexported getRandomString(): keep the exported GetRandomString() implementation, delete the getRandomString() definition, and update any callers to use GetRandomString() so there is a single canonical random-string helper.
- Around lines 913-930: getLatestPayload calls http.Get with no timeout, which can hang tests. Use an explicit http.Client with a reasonable Timeout and call client.Get(url), defer closing res.Body immediately after checking for a non-nil response, and report a non-200 HTTP status as a clear failure via e2e.Failf.

In `@test/testdata/fixtures.go`:
- Around line 3-101: FixturePath has a race when multiple goroutines call it and
both pass the os.Stat check then extract the same embedded files; add
synchronization by introducing a package-level mutex (e.g., fixtureMu
sync.Mutex) and lock around the check-and-extract critical section in
FixturePath (from the os.Stat check through the end of extraction/write steps)
to serialize extraction, ensuring you import "sync" and use defer
fixtureMu.Unlock() after locking.
🧹 Nitpick comments (3)
test/e2e/util.go (3)

178-322: Significant code duplication across Output(), Outputs(), and Background().

These three methods share identical logic for building args, handling namespace flags, and constructing log messages. Consider extracting the common logic into a private helper.

♻️ Suggested refactor to reduce duplication
+// buildArgs constructs the final argument list and log message
+func (cmd *CLICommand) buildArgs() ([]string, string) {
+	args := []string{}
+	if cmd.verb != "" {
+		args = append(args, cmd.verb)
+	}
+	args = append(args, cmd.args...)
+
+	if cmd.cli.withNamespace && cmd.cli.namespace != "" {
+		args = append(args, "-n", cmd.cli.namespace)
+	}
+
+	var logParts []string
+	logParts = append(logParts, "oc")
+	if cmd.cli.withNamespace && cmd.cli.namespace != "" {
+		logParts = append(logParts, fmt.Sprintf("--namespace=%s", cmd.cli.namespace))
+	}
+	if cmd.cli.kubeconfig != "" {
+		logParts = append(logParts, fmt.Sprintf("--kubeconfig=%s", cmd.cli.kubeconfig))
+	}
+	logParts = append(logParts, strings.Join(args, " "))
+
+	return args, strings.Join(logParts, " ")
+}

Then use it in Output(), Outputs(), and Background().


477-488: wait.Poll is deprecated; prefer wait.PollUntilContextTimeout.

The codebase already uses the newer API at line 933. Consider updating other poll calls for consistency.


1094-1096: Unnecessary shell command; use os.ReadFile directly.

Using bash -c cat adds complexity and potential shell interpretation issues when os.ReadFile works directly.

♻️ Simplify file reading
 	oauthFilePath := dirName + "/.dockerconfigjson"
-	secretContent, err := exec.Command("bash", "-c", fmt.Sprintf("cat %v", oauthFilePath)).Output()
-	o.Expect(err).NotTo(o.HaveOccurred())
+	secretContent, err := os.ReadFile(oauthFilePath)
+	o.Expect(err).NotTo(o.HaveOccurred())

test/e2e/cli.go Outdated
Comment on lines 225 to 275
	g.It("ROSA-OSD_CCS-ARO-ConnectedOnly-Author:yinzhou-Medium-51018-oc adm release extract support manifest list", func() {
		skipIfMicroShift(oc)
		skipIfDisconnected(oc)

		if !assertPullSecret(oc) {
			skipMsg := "The cluster does not have pull secret for public registry hence skipping..."
			e2e.Warningf("SKIPPING TEST: %s", skipMsg)
			g.Skip(skipMsg)
		}
		extractTmpDirName := "/tmp/case51018"
		err := os.MkdirAll(extractTmpDirName, 0700)
		o.Expect(err).NotTo(o.HaveOccurred())
		defer os.RemoveAll(extractTmpDirName)

		_, err = oc.AsAdmin().WithoutNamespace().Run("extract").Args("secret/pull-secret", "-n", "openshift-config", fmt.Sprintf("--to=%s", extractTmpDirName), "--confirm").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		pullSpec := getLatestPayload("https://multi.ocp.releases.ci.openshift.org/api/v1/releasestream/4-stable-multi/latest")
		e2e.Logf("The pullSpec is %s \n", pullSpec)
		if len(pullSpec) == 0 || strings.TrimSpace(pullSpec) == "" {
			skipMsg := "pullSpec is empty, so skipping the test"
			e2e.Warningf("SKIPPING TEST: %s", skipMsg)
			g.Skip(skipMsg)
		}
		cmdOutput, err := oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc.rhel8", "--to="+extractTmpDirName, pullSpec).Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		e2e.Logf("Check oc executable to make sure match the platform")
		_, err = exec.Command("bash", "-c", "/tmp/case51018/oc version").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/mac", pullSpec, "--command-os=mac/amd64").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		macocheckcmd := "file /tmp/case51018/mac/oc"
		output, err := exec.Command("bash", "-c", macocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Mach-O"))
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/macarm", pullSpec, "--command-os=mac/arm64").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		macocheckcmd = "file /tmp/case51018/macarm/oc"
		output, err = exec.Command("bash", "-c", macocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Mach-O 64-bit arm64 executable"))
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/windows", pullSpec, "--command-os=windows").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		windowocheckcmd := "file /tmp/case51018/windows/oc"
		output, err = exec.Command("bash", "-c", windowocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Windows"))
	})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, check if the file exists and view the problematic lines
git ls-files test/e2e/cli.go

Repository: openshift/oc

Length of output: 71


🏁 Script executed:

# Get line count to understand file size
wc -l test/e2e/cli.go

Repository: openshift/oc

Length of output: 76


🏁 Script executed:

# View the specific lines mentioned (225-275)
sed -n '225,275p' test/e2e/cli.go

Repository: openshift/oc

Length of output: 3268


🏁 Script executed:

# Search for all occurrences of hard-coded /tmp paths
rg '/tmp/' test/e2e/cli.go -n

Repository: openshift/oc

Length of output: 1087


🏁 Script executed:

# View the other occurrences mentioned (lines 338-433)
sed -n '338,433p' test/e2e/cli.go | grep -E '^\s*(extractTmpDirName|dockerTmpDirName|.*="/tmp/)' -n

Repository: openshift/oc

Length of output: 158


🏁 Script executed:

# View lines 792-831
sed -n '792,831p' test/e2e/cli.go | grep -E '^\s*(extractTmpDirName|.*="/tmp/|os\.RemoveAll)' -n

Repository: openshift/oc

Length of output: 128


🏁 Script executed:

# View lines 834-904 to see the third occurrence
sed -n '834,904p' test/e2e/cli.go | head -30

Repository: openshift/oc

Length of output: 1608


🏁 Script executed:

# Count total occurrences of hard-coded /tmp paths to assess scope
rg '/tmp/case[0-9]+' test/e2e/cli.go -c

Repository: openshift/oc

Length of output: 57


🏁 Script executed:

# Verify os.MkdirTemp is imported and available in the file
grep -E 'import|os\.' test/e2e/cli.go | head -20

Repository: openshift/oc

Length of output: 1741


🏁 Script executed:

# Check if filepath package is imported (needed for filepath.Join)
rg 'import \(' test/e2e/cli.go -A 20

Repository: openshift/oc

Length of output: 395


Replace hard-coded /tmp paths with os.MkdirTemp to prevent parallel-test collisions.

Lines 234, 380, 802, and 843 hard-code shared /tmp paths that will collide when tests run in parallel (Ginkgo nodes) or retry. Use os.MkdirTemp to generate unique temporary directories and build child paths with filepath.Join.

🛠️ Example fix for the first occurrence
-		extractTmpDirName := "/tmp/case51018"
-		err := os.MkdirAll(extractTmpDirName, 0700)
+		extractTmpDirName, err := os.MkdirTemp("", "case51018-")
 		o.Expect(err).NotTo(o.HaveOccurred())
 		defer os.RemoveAll(extractTmpDirName)
@@
-		_, err = exec.Command("bash", "-c", "/tmp/case51018/oc version").Output()
+		_, err = exec.Command("bash", "-c", filepath.Join(extractTmpDirName, "oc")+" version").Output()
@@
-		macocheckcmd := "file /tmp/case51018/mac/oc"
+		macocheckcmd := "file " + filepath.Join(extractTmpDirName, "mac", "oc")
@@
-		macocheckcmd = "file /tmp/case51018/macarm/oc"
+		macocheckcmd = "file " + filepath.Join(extractTmpDirName, "macarm", "oc")
@@
-		windowocheckcmd := "file /tmp/case51018/windows/oc"
+		windowocheckcmd := "file " + filepath.Join(extractTmpDirName, "windows", "oc")

Also applies to lines 380 (/tmp/case64921), 802 (/tmp/d71178), and 843 (/tmp/case71273).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
	g.It("ROSA-OSD_CCS-ARO-ConnectedOnly-Author:yinzhou-Medium-51018-oc adm release extract support manifest list", func() {
		skipIfMicroShift(oc)
		skipIfDisconnected(oc)
		if !assertPullSecret(oc) {
			skipMsg := "The cluster does not have pull secret for public registry hence skipping..."
			e2e.Warningf("SKIPPING TEST: %s", skipMsg)
			g.Skip(skipMsg)
		}
		extractTmpDirName := "/tmp/case51018"
		err := os.MkdirAll(extractTmpDirName, 0700)
		o.Expect(err).NotTo(o.HaveOccurred())
		defer os.RemoveAll(extractTmpDirName)
		_, err = oc.AsAdmin().WithoutNamespace().Run("extract").Args("secret/pull-secret", "-n", "openshift-config", fmt.Sprintf("--to=%s", extractTmpDirName), "--confirm").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		pullSpec := getLatestPayload("https://multi.ocp.releases.ci.openshift.org/api/v1/releasestream/4-stable-multi/latest")
		e2e.Logf("The pullSpec is %s \n", pullSpec)
		if len(pullSpec) == 0 || strings.TrimSpace(pullSpec) == "" {
			skipMsg := "pullSpec is empty, so skipping the test"
			e2e.Warningf("SKIPPING TEST: %s", skipMsg)
			g.Skip(skipMsg)
		}
		cmdOutput, err := oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc.rhel8", "--to="+extractTmpDirName, pullSpec).Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		e2e.Logf("Check oc executable to make sure match the platform")
		_, err = exec.Command("bash", "-c", "/tmp/case51018/oc version").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/mac", pullSpec, "--command-os=mac/amd64").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		macocheckcmd := "file /tmp/case51018/mac/oc"
		output, err := exec.Command("bash", "-c", macocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Mach-O"))
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/macarm", pullSpec, "--command-os=mac/arm64").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		macocheckcmd = "file /tmp/case51018/macarm/oc"
		output, err = exec.Command("bash", "-c", macocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Mach-O 64-bit arm64 executable"))
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/windows", pullSpec, "--command-os=windows").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		windowocheckcmd := "file /tmp/case51018/windows/oc"
		output, err = exec.Command("bash", "-c", windowocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Windows"))
	})
	g.It("ROSA-OSD_CCS-ARO-ConnectedOnly-Author:yinzhou-Medium-51018-oc adm release extract support manifest list", func() {
		skipIfMicroShift(oc)
		skipIfDisconnected(oc)
		if !assertPullSecret(oc) {
			skipMsg := "The cluster does not have pull secret for public registry hence skipping..."
			e2e.Warningf("SKIPPING TEST: %s", skipMsg)
			g.Skip(skipMsg)
		}
		extractTmpDirName, err := os.MkdirTemp("", "case51018-")
		o.Expect(err).NotTo(o.HaveOccurred())
		defer os.RemoveAll(extractTmpDirName)
		_, err = oc.AsAdmin().WithoutNamespace().Run("extract").Args("secret/pull-secret", "-n", "openshift-config", fmt.Sprintf("--to=%s", extractTmpDirName), "--confirm").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		pullSpec := getLatestPayload("https://multi.ocp.releases.ci.openshift.org/api/v1/releasestream/4-stable-multi/latest")
		e2e.Logf("The pullSpec is %s \n", pullSpec)
		if len(pullSpec) == 0 || strings.TrimSpace(pullSpec) == "" {
			skipMsg := "pullSpec is empty, so skipping the test"
			e2e.Warningf("SKIPPING TEST: %s", skipMsg)
			g.Skip(skipMsg)
		}
		cmdOutput, err := oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc.rhel8", "--to="+extractTmpDirName, pullSpec).Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		e2e.Logf("Check oc executable to make sure match the platform")
		_, err = exec.Command("bash", "-c", filepath.Join(extractTmpDirName, "oc")+" version").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/mac", pullSpec, "--command-os=mac/amd64").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		macocheckcmd := "file " + filepath.Join(extractTmpDirName, "mac", "oc")
		output, err := exec.Command("bash", "-c", macocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Mach-O"))
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/macarm", pullSpec, "--command-os=mac/arm64").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		macocheckcmd = "file " + filepath.Join(extractTmpDirName, "macarm", "oc")
		output, err = exec.Command("bash", "-c", macocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Mach-O 64-bit arm64 executable"))
		cmdOutput, err = oc.WithoutNamespace().WithoutKubeconf().Run("adm").Args("release", "extract", "-a", extractTmpDirName+"/.dockerconfigjson", "--command=oc", "--to="+extractTmpDirName+"/windows", pullSpec, "--command-os=windows").Output()
		skipOnTransientNetworkError(cmdOutput, err)
		o.Expect(err).NotTo(o.HaveOccurred())
		windowocheckcmd := "file " + filepath.Join(extractTmpDirName, "windows", "oc")
		output, err = exec.Command("bash", "-c", windowocheckcmd).Output()
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(output).To(o.ContainSubstring("Windows"))
	})
🤖 Prompt for AI Agents
In `@test/e2e/cli.go` around lines 225-275: the test uses hard-coded /tmp paths (e.g. extractTmpDirName = "/tmp/case51018" and the string constants macocheckcmd/windowocheckcmd used in exec commands), which will collide in parallel runs. Create a unique temp dir with os.MkdirTemp (e.g. tmpDir, err := os.MkdirTemp("", "case51018-*")), defer os.RemoveAll(tmpDir), and build all child paths with filepath.Join(tmpDir, "mac", "oc") and filepath.Join(tmpDir, ".dockerconfigjson") instead of string concatenation. Update the usages in the Ginkgo It block (extractTmpDirName, macocheckcmd, windowocheckcmd, and the exec.Command("bash", "-c", ...) calls) to reference the constructed paths so the tests are isolated and safe for parallel execution.

test/e2e/cli.go Outdated
Comment on lines 278 to 321
	g.It("ROSA-OSD_CCS-ARO-ConnectedOnly-Author:knarra-Medium-66989-Workloads oc debug with or without init container for pod", func() {
		skipIfMicroShift(oc)
		skipIfDisconnected(oc)

		oc.SetupProject()
		testBaseDir := FixturePath("testdata", "oc_cli")
		initContainerFile := filepath.Join(testBaseDir, "initContainer66989.yaml")
		SetNamespacePrivileged(oc, oc.Namespace())
		g.By("Create pod with InitContainer")
		err := oc.Run("create").Args("-f", initContainerFile).Execute()
		o.Expect(err).NotTo(o.HaveOccurred())
		g.By("Make sure pod with init container running well")
		checkPodStatus(oc, "name=hello-pod", oc.Namespace(), "Running")
		g.By("Run debug command with init container")
		cmd, _, _, err := oc.Run("debug").Args("pod/hello-pod", "--keep-init-containers=true").Background()
		defer cmd.Process.Kill()
		o.Expect(err).NotTo(o.HaveOccurred())
		err = wait.Poll(5*time.Second, 100*time.Second, func() (bool, error) {
			debugPodName, err := oc.WithoutNamespace().Run("get").Args("pods", "-n", oc.Namespace()).Output()
			if err != nil {
				e2e.Logf("debug failed with error: %s. Trying again", err)
				return false, nil
			}
			if matched, _ := regexp.MatchString("hello-pod-debug", debugPodName); matched {
				e2e.Logf("Check the debug pod command succeeded\n")
				return true, nil
			}
			return false, nil
		})
		AssertWaitPollNoErr(err, fmt.Sprintf("Cannot get debug with init container"))

		g.By("Check if Init Containers present in debug pod output")
		debugPodName, err := oc.WithoutNamespace().Run("get").Args("pods", "-n", oc.Namespace(), "-o=jsonpath={.items[1].metadata.name}").Output()
		o.Expect(err).NotTo(o.HaveOccurred())

		Output, err := oc.WithoutNamespace().Run("describe").Args("pods", debugPodName, "-n", oc.Namespace()).Output()
		o.Expect(err).NotTo(o.HaveOccurred())

		if matched, _ := regexp.MatchString("Init Containers", Output); !matched {
			e2e.Failf("Init Containers are not seen in the output when run with keep init containers true")
		}
		_, err = oc.WithoutNamespace().Run("delete").Args("pods", debugPodName, "-n", oc.Namespace(), "--wait=false").Output()
		o.Expect(err).NotTo(o.HaveOccurred())
	})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find the file and check its size
find . -name "cli.go" -path "*/e2e/*" -type f | head -5

Repository: openshift/oc

Length of output: 73


🏁 Script executed:

# Find the Background method definition
fd -e go . | xargs rg "func.*\(.*\).*Background\(\)" -A 5 | head -60

Repository: openshift/oc

Length of output: 1574


🏁 Script executed:

# Let's get the structure - first check if file exists and get basic info
wc -l test/e2e/cli.go

Repository: openshift/oc

Length of output: 76


🏁 Script executed:

# Read the code around line 292
cat -n test/e2e/cli.go | sed -n '285,310p'

Repository: openshift/oc

Length of output: 1527


🏁 Script executed:

# Let's check both sections at once
echo "=== Lines 285-310 ===" && sed -n '285,310p' test/e2e/cli.go && echo "" && echo "=== Lines 750-770 ===" && sed -n '750,770p' test/e2e/cli.go

Repository: openshift/oc

Length of output: 2699


🏁 Script executed:

# Let's find the Background method in the codebase
find . -name "*.go" -type f | xargs grep -l "func.*Background" | head -10

Repository: openshift/oc

Length of output: 306


🏁 Script executed:

# Direct read without line numbers first to see the code
sed -n '290,298p' test/e2e/cli.go

Repository: openshift/oc

Length of output: 594


🏁 Script executed:

# Let's explore the repo structure first
ls -la | head -20

Repository: openshift/oc

Length of output: 1179


🏁 Script executed:

# Check if test/e2e exists and list cli.go
ls -la test/e2e/ | grep -i cli

Repository: openshift/oc

Length of output: 112


🏁 Script executed:

# Let's find all cli.go files recursively
find . -name "cli.go" -type f

Repository: openshift/oc

Length of output: 90


🏁 Script executed:

# List all go files in test directories
find test -name "*.go" 2>/dev/null | head -20

Repository: openshift/oc

Length of output: 114


🏁 Script executed:

# Check git to see what files are in the repo
git ls-files | grep -i cli.go

Repository: openshift/oc

Length of output: 224


🏁 Script executed:

# Check the current git root and list test files
git ls-files | grep "test.*\.go$" | head -20

Repository: openshift/oc

Length of output: 869


🏁 Script executed:

# Let's check for e2e directory files
git ls-files | grep e2e

Repository: openshift/oc

Length of output: 88


🏁 Script executed:

# Verify git status and check what's available
pwd && git log -1 --oneline && echo "---" && git ls-files | head -30

Repository: openshift/oc

Length of output: 788


🏁 Script executed:

# Search across the entire codebase for Background() method definition
rg "Background\s*\(\)" -A 5 -B 2 | head -100

Repository: openshift/oc

Length of output: 7135


🏁 Script executed:

# Look for any file that contains "cmd, _, _, err := " pattern with Background
rg "Background\(\)" -B 5 --context=10 | head -150

Repository: openshift/oc

Length of output: 12747


🏁 Script executed:

# Search for method definitions with defer and Background
rg "defer.*Process\.Kill" -B 3 -A 1

Repository: openshift/oc

Length of output: 1414


🏁 Script executed:

# Check which files contain this pattern
rg "defer.*Kill" --files-with-matches

Repository: openshift/oc

Length of output: 71


🏁 Script executed:

# Get the full context of defer Kill patterns
rg "defer.*Kill" -B 5 -A 2 | head -200

Repository: openshift/oc

Length of output: 2152


🏁 Script executed:

# Search for the exact pattern: Background() followed by defer and error check
rg "\.Background\(\)" -A 3 | head -150

Repository: openshift/oc

Length of output: 11511


🏁 Script executed:

# Let's try a different approach - search for .Run("debug") pattern with Background
rg 'Run\("debug"\).*Background' -A 5 -B 2

Repository: openshift/oc

Length of output: 659


🏁 Script executed:

# Let's find oc.Run definitions or interface
rg "type.*Run.*func" -A 3 | head -50

Repository: openshift/oc

Length of output: 798


🏁 Script executed:

# Search for the exact snippet from the review comment
rg "ROSA-OSD_CCS-ARO-ConnectedOnly.*66989" -A 20

Repository: openshift/oc

Length of output: 1509


🏁 Script executed:

# Let's search more broadly for the pattern with any test mentioning "debug"
rg 'Run\("debug"\).*Background' -B 3 -A 8 | head -80

Repository: openshift/oc

Length of output: 870


🏁 Script executed:

# Count how many times this unsafe pattern appears
rg 'defer.*Process\.Kill' | wc -l

Repository: openshift/oc

Length of output: 57


🏁 Script executed:

# Check all instances and their context
rg -n 'defer.*Kill' -B 2

Repository: openshift/oc

Length of output: 1021


Guard Background() results before deferring Process.Kill().

Lines 293, 753, and 766 register defer cmd.Process.Kill() before the error check. If Background() returns an error and cmd.Process isn't initialized, dereferencing it will panic during defer execution and mask the actual failure.

🛠️ Suggested fix (apply to each Background() call)
-		cmd, _, _, err := oc.Run("debug").Args("pod/hello-pod", "--keep-init-containers=true").Background()
-		defer cmd.Process.Kill()
-		o.Expect(err).NotTo(o.HaveOccurred())
+		cmd, _, _, err := oc.Run("debug").Args("pod/hello-pod", "--keep-init-containers=true").Background()
+		o.Expect(err).NotTo(o.HaveOccurred())
+		if cmd != nil && cmd.Process != nil {
+			defer cmd.Process.Kill()
+		}
🤖 Prompt for AI Agents
In `@test/e2e/cli.go` around lines 278 - 321, The defer of cmd.Process.Kill() is
registered before checking the error from Background(), which can cause a panic
if cmd is nil; for each Background() call (e.g., the
oc.Run("debug").Args(...).Background() that assigns cmd, _, _, err), move the
defer behind the err check and only register it when err == nil (and optionally
when cmd and cmd.Process are non-nil) so you don't dereference a nil Process on
failure.

test/e2e/cli.go Outdated
Comment on lines 436 to 517
g.It("ROSA-OSD_CCS-ARO-ConnectedOnly-Author:yinzhou-High-67013-oc image mirror with multi-arch images and --filter-by-os", func() {
skipIfMicroShift(oc)
skipIfDisconnected(oc)

if !assertPullSecret(oc) {
skipMsg := "The cluster does not have pull secret for public registry hence skipping..."
e2e.Warningf("SKIPPING TEST: %s", skipMsg)
g.Skip(skipMsg)
}
g.By("Skip if the cluster is AzureStackCloud")
azureStackCloud, azureErr := oc.AsAdmin().WithoutNamespace().Run("get").Args("infrastructure", "cluster", "-o=jsonpath={.status.platformStatus.azure.cloudName}").Output()
o.Expect(azureErr).NotTo(o.HaveOccurred())
if azureStackCloud == "AzureStackCloud" {
skipMsg := "Skip for cluster with AzureStackCloud!"
e2e.Warningf("SKIPPING TEST: %s", skipMsg)
g.Skip(skipMsg)
}
g.By("create new namespace")
oc.SetupProject()
registry := registry{
dockerImage: "quay.io/openshifttest/registry@sha256:1106aedc1b2e386520bc2fb797d9a7af47d651db31d8e7ab472f2352da37d1b3",
namespace: oc.Namespace(),
}

g.By("Trying to launch a registry app")
defer registry.deleteregistry(oc)
serInfo := registry.createregistry(oc)

err := wait.Poll(30*time.Second, 180*time.Second, func() (bool, error) {
err := oc.WithoutNamespace().Run("image").Args("mirror", "quay.io/openshifttest/base-alpine@sha256:3126e4eed4a3ebd8bf972b2453fa838200988ee07c01b2251e3ea47e4b1f245c"+"="+serInfo.serviceName+"/testimage:ppc64", "--filter-by-os=linux/ppc64le", "--insecure").Execute()
if err != nil {
e2e.Logf("mirror failed, retrying...")
return false, nil
}
return true, nil
})
AssertWaitPollNoErr(err, fmt.Sprintf("max time reached but mirror still falied"))
out, err := oc.WithoutNamespace().Run("image").Args("info", serInfo.serviceName+"/testimage:ppc64", "--insecure").Output()
o.Expect(err).NotTo(o.HaveOccurred())
o.Expect(strings.Contains(out, "ppc64le")).To(o.BeTrue())
err = wait.Poll(30*time.Second, 180*time.Second, func() (bool, error) {
err := oc.WithoutNamespace().Run("image").Args("mirror", "quay.io/openshifttest/base-alpine@sha256:3126e4eed4a3ebd8bf972b2453fa838200988ee07c01b2251e3ea47e4b1f245c"+"="+serInfo.serviceName+"/testimage:default", "--insecure").Execute()
if err != nil {
e2e.Logf("mirror failed, retrying...")
return false, nil
}
return true, nil
})
AssertWaitPollNoErr(err, fmt.Sprintf("max time reached but mirror still falied"))
o.Expect(err).NotTo(o.HaveOccurred())
imageInfo, err := oc.WithoutNamespace().Run("image").Args("info", serInfo.serviceName+"/testimage:default", "--insecure").Output()
o.Expect(err).NotTo(o.HaveOccurred())
architecture, err := exec.Command("bash", "-c", "uname -a").Output()
o.Expect(err).NotTo(o.HaveOccurred())
architectureStr := string(architecture)
if o.Expect(strings.Contains(architectureStr, "x86_64")).To(o.BeTrue()) {
if o.Expect(strings.Contains(imageInfo, "amd64")).To(o.BeTrue()) {
e2e.Logf("Found the expected Arch amd64")
} else {
e2e.Failf("Failed to find the expected Arch for mirrored image")
}
} else if o.Expect(strings.Contains(architectureStr, "aarch64")).To(o.BeTrue()) {
if o.Expect(strings.Contains(imageInfo, "arm64")).To(o.BeTrue()) {
e2e.Logf("Found the expected Arch aarch64")
} else {
e2e.Failf("Failed to find the expected Arch for mirrored image")
}
} else if o.Expect(strings.Contains(architectureStr, "ppc64le")).To(o.BeTrue()) {
if o.Expect(strings.Contains(imageInfo, "ppc64le")).To(o.BeTrue()) {
e2e.Logf("Found the expected Arch ppc64le")
} else {
e2e.Failf("Failed to find the expected Arch for mirrored image")
}
} else {
if o.Expect(strings.Contains(imageInfo, "s390x")).To(o.BeTrue()) {
e2e.Logf("Found the expected Arch s390x")
} else {
e2e.Failf("Failed to find the expected Arch for mirrored image")
}
}

})
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, examine the actual code at the specified lines
cat -n test/e2e/cli.go | sed -n '485,510p'

Repository: openshift/oc

Length of output: 1540


🏁 Script executed:

# Also check the broader context to understand the code structure
wc -l test/e2e/cli.go

Repository: openshift/oc

Length of output: 76


🌐 Web query:

Gomega Expect().To(BeTrue()) in if condition behavior test assertion

💡 Result:

Expect(x).To(BeTrue()) does return a bool (true if the assertion passes, false if it fails). However, on failure Gomega still triggers the configured fail handler (i.e., it “fails the test”)—so using it as an if condition is usually misleading because execution may not proceed the way a normal boolean check would. In Ginkgo, a failed assertion calls the registered GomegaFailHandler (typically ginkgo.Fail). (pkg.go.dev)

So this pattern is generally a bad idea:

if Expect(cond).To(BeTrue()) {   // not a normal predicate
    // ...
}

What to do instead

  • If you just need branching logic: use the boolean directly:
    if cond {
        // ...
    }
  • If you want to check a matcher without failing the test, call the matcher directly (no Expect):
    ok, err := BeTrue().Match(cond) // ok==true/false, err if type mismatch


Architecture branching asserts too early, causing test failure on non-x86_64 nodes.

Using Expect(...).To(BeTrue()) in if conditions (lines 491, 497, 503) causes the Gomega fail handler to immediately fail the test when the condition is false, rather than proceeding to the next branch. On systems with architectures other than x86_64, the test fails at line 491 before checking the correct branch.

Switch to plain string checks and assert only within the matching branch:

🛠️ Suggested fix
-		if o.Expect(strings.Contains(architectureStr, "x86_64")).To(o.BeTrue()) {
-			if o.Expect(strings.Contains(imageInfo, "amd64")).To(o.BeTrue()) {
-				e2e.Logf("Found the expected Arch amd64")
-			} else {
-				e2e.Failf("Failed to find the expected Arch for mirrored image")
-			}
-		} else if o.Expect(strings.Contains(architectureStr, "aarch64")).To(o.BeTrue()) {
-			if o.Expect(strings.Contains(imageInfo, "arm64")).To(o.BeTrue()) {
-				e2e.Logf("Found the expected Arch aarch64")
-			} else {
-				e2e.Failf("Failed to find the expected Arch for mirrored image")
-			}
-		} else if o.Expect(strings.Contains(architectureStr, "ppc64le")).To(o.BeTrue()) {
-			if o.Expect(strings.Contains(imageInfo, "ppc64le")).To(o.BeTrue()) {
-				e2e.Logf("Found the expected Arch ppc64le")
-			} else {
-				e2e.Failf("Failed to find the expected Arch for mirrored image")
-			}
-		} else {
-			if o.Expect(strings.Contains(imageInfo, "s390x")).To(o.BeTrue()) {
-				e2e.Logf("Found the expected Arch s390x")
-			} else {
-				e2e.Failf("Failed to find the expected Arch for mirrored image")
-			}
-		}
+		switch {
+		case strings.Contains(architectureStr, "x86_64"):
+			o.Expect(strings.Contains(imageInfo, "amd64")).To(o.BeTrue())
+			e2e.Logf("Found the expected Arch amd64")
+		case strings.Contains(architectureStr, "aarch64"):
+			o.Expect(strings.Contains(imageInfo, "arm64")).To(o.BeTrue())
+			e2e.Logf("Found the expected Arch aarch64")
+		case strings.Contains(architectureStr, "ppc64le"):
+			o.Expect(strings.Contains(imageInfo, "ppc64le")).To(o.BeTrue())
+			e2e.Logf("Found the expected Arch ppc64le")
+		default:
+			o.Expect(strings.Contains(imageInfo, "s390x")).To(o.BeTrue())
+			e2e.Logf("Found the expected Arch s390x")
+		}
🤖 Prompt for AI Agents
In `@test/e2e/cli.go` around lines 436 - 517, The branch conditions use
o.Expect(...) which triggers Gomega failures immediately; change the if/else-if
conditions to plain string checks (use strings.Contains(architectureStr, "...")
on architectureStr) and keep o.Expect/assertions only inside the matching branch
when checking imageInfo for the expected architecture; update the branches that
reference architectureStr and imageInfo and remove o.Expect from the if/else-if
conditions so the code checks amd64/arm64/ppc64le/s390x only after the correct
branch is chosen.

test/e2e/cli.go Outdated
Comment on lines 528 to 554
g.It("MicroShiftBoth-ROSA-OSD_CCS-ARO-Author:yinzhou-High-43030-oc get events always show the timestamp as LAST SEEN", func() {
By("Get all the namespaces")
var output string
var err error
if isMicroShiftCluster(oc) {
output, err = oc.AsAdmin().Run("get").Args("ns", "-o=custom-columns=NAME:.metadata.name", "--no-headers").Output()
o.Expect(err).NotTo(o.HaveOccurred())
} else {
output, err = oc.AsAdmin().Run("get").Args("projects", "-o=custom-columns=NAME:.metadata.name", "--no-headers").Output()
o.Expect(err).NotTo(o.HaveOccurred())
}
projectList := strings.Fields(output)

g.By("check the events per project")
for _, projectN := range projectList {
output, _ = oc.AsAdmin().WithoutNamespace().Run("get").Args("events", "-n", projectN).Output()
if match, _ := regexp.MatchString("No resources found", string(output)); match {
e2e.Logf("No events in project: %v", projectN)
} else {
result, _ := exec.Command("bash", "-c", "cat "+output+" | awk '{print $1}'").Output()
if match, _ := regexp.MatchString("unknown", string(result)); match {
e2e.Failf("Does not show timestamp as expected: %v", result)
}
}
}

})
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd test/e2e && wc -l cli.go

Repository: openshift/oc

Length of output: 67


🏁 Script executed:

cd test/e2e && sed -n '528,554p' cli.go

Repository: openshift/oc

Length of output: 1237


Parse event output directly instead of trying to cat it as a filename.

Line 547 constructs a bash command that treats the event output string as a filename argument to cat (e.g., cat LAST SEEN 5m ...), causing the command to fail or return unexpected results. The subsequent regex check for "unknown" becomes ineffective. Parse the output directly in Go.

🛠️ Suggested fix (parse output lines)
-			result, _ := exec.Command("bash", "-c", "cat "+output+" | awk '{print $1}'").Output()
-			if match, _ := regexp.MatchString("unknown", string(result)); match {
-				e2e.Failf("Does not show timestamp as expected: %v", result)
-			}
+			for _, line := range strings.Split(strings.TrimSpace(output), "\n") {
+				fields := strings.Fields(line)
+				if len(fields) == 0 {
+					continue
+				}
+				if fields[0] == "unknown" {
+					e2e.Failf("Does not show timestamp as expected: %v", line)
+				}
+			}
🤖 Prompt for AI Agents
In `@test/e2e/cli.go` around lines 528 - 554, The test is treating the event
command output as a filename and calling bash/cat on it; instead, take the
string returned by oc.AsAdmin().WithoutNamespace().Run("get").Args("events",
"-n", projectN).Output(), split it into lines, skip any header/empty lines,
extract the first whitespace-separated column from each line (the same value awk
'{print $1}' would produce) and then run the regex check for "unknown" against
those extracted tokens; update the loop in the g.It test (where output is used
and result is computed) to parse output in Go (strings.Split, strings.Fields)
rather than using exec.Command("bash", "-c", "cat ... | awk ...").

Comment on lines +498 to +504
const (
MULTI ArchitectureType = "Multi"
X86 = "amd64"
ARM64 = "arm64"
PPC64LE = "ppc64le"
S390X = "s390x"
)
⚠️ Potential issue | 🟡 Minor

Constants lack explicit type annotation, losing type safety.

X86, ARM64, etc. are untyped string constants, not ArchitectureType. This defeats the purpose of the typed constant pattern.

🔧 Add explicit types
 const (
 	MULTI   ArchitectureType = "Multi"
-	X86                      = "amd64"
-	ARM64                    = "arm64"
-	PPC64LE                  = "ppc64le"
-	S390X                    = "s390x"
+	X86     ArchitectureType = "amd64"
+	ARM64   ArchitectureType = "arm64"
+	PPC64LE ArchitectureType = "ppc64le"
+	S390X   ArchitectureType = "s390x"
 )

Apply the same fix to PlatformType constants at lines 549-557.

🤖 Prompt for AI Agents
In `@test/e2e/util.go` around lines 498 - 504, The constants X86, ARM64, PPC64LE,
and S390X are currently untyped string constants instead of being declared as
ArchitectureType, breaking the typed-constant pattern; change their declarations
to give them the explicit type ArchitectureType (e.g., X86 ArchitectureType =
"amd64") so they carry the type information and preserve type-safety, and apply
the same change to the PlatformType group of constants (give each PlatformType
constant an explicit PlatformType type) to keep both constant groups
consistently typed.

Comment on lines +622 to +630
func getRandomString() string {
chars := "abcdefghijklmnopqrstuvwxyz0123456789"
seed := rand.New(rand.NewSource(time.Now().UnixNano()))
buffer := make([]byte, 8)
for index := range buffer {
buffer[index] = chars[seed.Intn(len(chars))]
}
return string(buffer)
}
🛠️ Refactor suggestion | 🟠 Major

Duplicate function: getRandomString() duplicates GetRandomString().

GetRandomString() at line 363 provides identical functionality. Remove this duplicate to avoid confusion.

♻️ Remove duplicate function
-func getRandomString() string {
-	chars := "abcdefghijklmnopqrstuvwxyz0123456789"
-	seed := rand.New(rand.NewSource(time.Now().UnixNano()))
-	buffer := make([]byte, 8)
-	for index := range buffer {
-		buffer[index] = chars[seed.Intn(len(chars))]
-	}
-	return string(buffer)
-}

Use GetRandomString() instead wherever getRandomString() is called.

🤖 Prompt for AI Agents
In `@test/e2e/util.go` around lines 622 - 630, Remove the duplicate unexported
function getRandomString(): keep the existing exported GetRandomString()
implementation and delete the getRandomString() definition, then update any
callers that reference getRandomString() to call GetRandomString() instead so
there is a single canonical function for generating the random string.

Comment on lines +913 to +930
func getLatestPayload(url string) string {
res, err := http.Get(url)
if err != nil {
e2e.Failf("unable to get http with error: %v", err)
}
body, err := ioutil.ReadAll(res.Body)
defer res.Body.Close()
if err != nil {
e2e.Failf("unable to parse the http result with error: %v", err)
}

var data map[string]interface{}
if err := json.Unmarshal(body, &data); err != nil {
e2e.Failf("unable to parse JSON with error: %v", err)
}
pullSpec, _ := data["pullSpec"].(string)
return pullSpec
}
⚠️ Potential issue | 🟠 Major

HTTP request without timeout can hang tests indefinitely.

http.Get uses the default client with no timeout. If the server is unresponsive, this will block forever.

🛡️ Proposed fix: add timeout
 func getLatestPayload(url string) string {
-	res, err := http.Get(url)
+	client := &http.Client{Timeout: 30 * time.Second}
+	res, err := client.Get(url)
 	if err != nil {
 		e2e.Failf("unable to get http with error: %v", err)
 	}
+	defer res.Body.Close()
-	body, err := ioutil.ReadAll(res.Body)
-	defer res.Body.Close()
+	body, err := io.ReadAll(res.Body)
 	if err != nil {
 		e2e.Failf("unable to parse the http result with error: %v", err)
 	}
🤖 Prompt for AI Agents
In `@test/e2e/util.go` around lines 913 - 930, The getLatestPayload function calls
http.Get with no timeout, which can hang tests; replace the use of http.Get with
an explicit http.Client that sets a reasonable Timeout, use client.Get(url)
instead of http.Get, ensure res.Body is closed via defer immediately after
checking for a non-nil response, and add a check for non-200 HTTP status to
return a clear failure via e2e.Failf; update references in this function
(getLatestPayload) accordingly.

Comment on lines +3 to +101
import (
	"embed"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

//go:embed oc_cli
var embeddedFixtures embed.FS

var fixtureDir string

func init() {
	var err error
	// Create a temporary directory for extracted fixtures
	fixtureDir, err = os.MkdirTemp("", "oc-testdata-fixtures-")
	if err != nil {
		panic(fmt.Sprintf("failed to create fixture directory: %v", err))
	}
}

// FixturePath returns the filesystem path to a fixture file or directory, extracting it from
// embedded files if necessary. The relativePath should be like "testdata/oc_cli/file.yaml" or "oc_cli/file.yaml"
func FixturePath(elem ...string) string {
	relativePath := filepath.Join(elem...)

	// Normalize the path for embed.FS (always use forward slashes, remove testdata/ prefix)
	embedPath := strings.ReplaceAll(relativePath, string(filepath.Separator), "/")
	embedPath = strings.TrimPrefix(embedPath, "testdata/")

	// Target path in temp directory
	targetPath := filepath.Join(fixtureDir, relativePath)

	// Check if already extracted
	if _, err := os.Stat(targetPath); err == nil {
		return targetPath
	}

	// Check if this is a directory or file in embed.FS
	info, err := fs.Stat(embeddedFixtures, embedPath)
	if err != nil {
		panic(fmt.Sprintf("failed to stat embedded path %s: %v", embedPath, err))
	}

	if info.IsDir() {
		// It's a directory - extract all files recursively
		err := fs.WalkDir(embeddedFixtures, embedPath, func(path string, d fs.DirEntry, err error) error {
			if err != nil {
				return err
			}

			// Calculate target path
			relPath := strings.TrimPrefix(path, embedPath)
			relPath = strings.TrimPrefix(relPath, "/")
			target := filepath.Join(targetPath, relPath)

			if d.IsDir() {
				// Create directory
				return os.MkdirAll(target, 0700)
			}

			// Create parent directory
			if err := os.MkdirAll(filepath.Dir(target), 0700); err != nil {
				return err
			}

			// Read and write file
			data, err := embeddedFixtures.ReadFile(path)
			if err != nil {
				return err
			}
			return os.WriteFile(target, data, 0644)
		})
		if err != nil {
			panic(fmt.Sprintf("failed to extract directory %s: %v", embedPath, err))
		}
	} else {
		// It's a file
		// Create parent directory
		if err := os.MkdirAll(filepath.Dir(targetPath), 0700); err != nil {
			panic(fmt.Sprintf("failed to create directory for %s: %v", relativePath, err))
		}

		// Read from embedded FS
		data, err := embeddedFixtures.ReadFile(embedPath)
		if err != nil {
			panic(fmt.Sprintf("failed to read embedded file %s: %v", embedPath, err))
		}

		// Write to temp directory
		if err := os.WriteFile(targetPath, data, 0644); err != nil {
			panic(fmt.Sprintf("failed to write fixture file %s: %v", targetPath, err))
		}
	}

	return targetPath
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find all usages of FixturePath to understand calling patterns
rg "FixturePath" --type go -B 2 -A 2

Repository: openshift/oc

Length of output: 5081


🏁 Script executed:

# Check if there are any test files using t.Parallel or similar concurrency patterns
rg "t\.Parallel|t\.Run" test/ --type go | head -20

Repository: openshift/oc

Length of output: 38


🏁 Script executed:

# Look at the complete fixtures.go to see if there are other synchronization mechanisms
cat -n test/testdata/fixtures.go

Repository: openshift/oc

Length of output: 4328


🏁 Script executed:

# Search for any existing sync.Once or mutex usage in the codebase
rg "sync\.(Mutex|Once)" --type go

Repository: openshift/oc

Length of output: 42997


Add synchronization to FixturePath to prevent concurrent extraction races.

The function lacks synchronization when extracting fixtures to a shared temp directory. If parallel tests call FixturePath with the same path, both can pass the existence check at line 39 and proceed to extract concurrently, causing filesystem write races. The suggested mutex fix is appropriate.

🛠️ Suggested fix (serialize extraction)
 import (
 	"embed"
 	"fmt"
 	"io/fs"
 	"os"
 	"path/filepath"
 	"strings"
+	"sync"
 )
 
 var fixtureDir string
+var fixtureMu sync.Mutex
 
 // FixturePath returns the filesystem path to a fixture file or directory, extracting it from
 // embedded files if necessary. The relativePath should be like "testdata/oc_cli/file.yaml" or "oc_cli/file.yaml"
 func FixturePath(elem ...string) string {
 	relativePath := filepath.Join(elem...)
@@
 	// Target path in temp directory
 	targetPath := filepath.Join(fixtureDir, relativePath)
 
+	fixtureMu.Lock()
+	defer fixtureMu.Unlock()
 	// Check if already extracted
 	if _, err := os.Stat(targetPath); err == nil {
 		return targetPath
 	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import (
	"embed"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

//go:embed oc_cli
var embeddedFixtures embed.FS

var fixtureDir string
var fixtureMu sync.Mutex

func init() {
	var err error
	// Create a temporary directory for extracted fixtures
	fixtureDir, err = os.MkdirTemp("", "oc-testdata-fixtures-")
	if err != nil {
		panic(fmt.Sprintf("failed to create fixture directory: %v", err))
	}
}

// FixturePath returns the filesystem path to a fixture file or directory, extracting it from
// embedded files if necessary. The relativePath should be like "testdata/oc_cli/file.yaml" or "oc_cli/file.yaml"
func FixturePath(elem ...string) string {
	relativePath := filepath.Join(elem...)

	// Normalize the path for embed.FS (always use forward slashes, remove testdata/ prefix)
	embedPath := strings.ReplaceAll(relativePath, string(filepath.Separator), "/")
	embedPath = strings.TrimPrefix(embedPath, "testdata/")

	// Target path in temp directory
	targetPath := filepath.Join(fixtureDir, relativePath)

	fixtureMu.Lock()
	defer fixtureMu.Unlock()
	// Check if already extracted
	if _, err := os.Stat(targetPath); err == nil {
		return targetPath
	}

	// Check if this is a directory or file in embed.FS
	info, err := fs.Stat(embeddedFixtures, embedPath)
	if err != nil {
		panic(fmt.Sprintf("failed to stat embedded path %s: %v", embedPath, err))
	}

	if info.IsDir() {
		// It's a directory - extract all files recursively
		err := fs.WalkDir(embeddedFixtures, embedPath, func(path string, d fs.DirEntry, err error) error {
			if err != nil {
				return err
			}

			// Calculate target path
			relPath := strings.TrimPrefix(path, embedPath)
			relPath = strings.TrimPrefix(relPath, "/")
			target := filepath.Join(targetPath, relPath)

			if d.IsDir() {
				// Create directory
				return os.MkdirAll(target, 0700)
			}

			// Create parent directory
			if err := os.MkdirAll(filepath.Dir(target), 0700); err != nil {
				return err
			}

			// Read and write file
			data, err := embeddedFixtures.ReadFile(path)
			if err != nil {
				return err
			}
			return os.WriteFile(target, data, 0644)
		})
		if err != nil {
			panic(fmt.Sprintf("failed to extract directory %s: %v", embedPath, err))
		}
	} else {
		// It's a file
		// Create parent directory
		if err := os.MkdirAll(filepath.Dir(targetPath), 0700); err != nil {
			panic(fmt.Sprintf("failed to create directory for %s: %v", relativePath, err))
		}

		// Read from embedded FS
		data, err := embeddedFixtures.ReadFile(embedPath)
		if err != nil {
			panic(fmt.Sprintf("failed to read embedded file %s: %v", embedPath, err))
		}

		// Write to temp directory
		if err := os.WriteFile(targetPath, data, 0644); err != nil {
			panic(fmt.Sprintf("failed to write fixture file %s: %v", targetPath, err))
		}
	}

	return targetPath
}
🤖 Prompt for AI Agents
In `@test/testdata/fixtures.go` around lines 3 - 101, FixturePath has a race when
multiple goroutines call it and both pass the os.Stat check then extract the
same embedded files; add synchronization by introducing a package-level mutex
(e.g., fixtureMu sync.Mutex) and lock around the check-and-extract critical
section in FixturePath (from the os.Stat check through the end of
extraction/write steps) to serialize extraction, ensuring you import "sync" and
use defer fixtureMu.Unlock() after locking.

@openshift-ci
Contributor

openshift-ci bot commented Feb 8, 2026

@gangwgr: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@gangwgr
Contributor Author

gangwgr commented Feb 9, 2026

/payload 4.22 nightly blocking

@openshift-ci
Contributor

openshift-ci bot commented Feb 9, 2026

@gangwgr: trigger 14 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/7f043b40-05d6-11f1-94f8-c9bc7b1e2c8c-0

@gangwgr
Contributor Author

gangwgr commented Feb 9, 2026

/payload 4.22 nightly informing

@openshift-ci
Contributor

openshift-ci bot commented Feb 9, 2026

@gangwgr: trigger 68 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-master-nightly-4.22-e2e-agent-compact-fips
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-agent-ha-dualstack-conformance
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-agent-single-node-ipv6-conformance
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-master-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-master-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-master-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-master-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-master-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/8b70c600-05d6-11f1-9cc8-dbf4a2f2889d-0

@gangwgr
Contributor Author

gangwgr commented Feb 11, 2026

/payload 4.22 nightly informing

@openshift-ci
Contributor

openshift-ci bot commented Feb 11, 2026

@gangwgr: trigger 68 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-compact-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-ha-dualstack-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-single-node-ipv6-conformance
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/b30bb6b0-0730-11f1-812d-7daee8161eb1-0

@gangwgr
Contributor Author

gangwgr commented Feb 11, 2026

/payload 4.22 nightly blocking

@openshift-ci
Contributor

openshift-ci bot commented Feb 11, 2026

@gangwgr: trigger 14 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/b86a60c0-0730-11f1-8c79-35167b6a7105-0

@gangwgr
Contributor Author

gangwgr commented Feb 12, 2026

/payload 4.22 nightly blocking

@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

@gangwgr: trigger 14 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/de9bdbd0-07d9-11f1-9738-ed94a6a0677d-0

@gangwgr
Contributor Author

gangwgr commented Feb 12, 2026

/payload 4.22 nightly informing

@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

@gangwgr: trigger 68 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-compact-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-ha-dualstack-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-single-node-ipv6-conformance
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/3efd7920-07da-11f1-85ea-370162cc0850-0

@gangwgr
Contributor Author

gangwgr commented Feb 12, 2026

/payload 4.22 nightly blocking

@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

@gangwgr: trigger 14 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/97a4b730-07dc-11f1-8b90-4aefb38feacf-0

@gangwgr
Contributor Author

gangwgr commented Feb 12, 2026

/payload 4.22 nightly informing

@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

@gangwgr: trigger 68 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-compact-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-ha-dualstack-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-agent-single-node-ipv6-conformance
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9d8ab9b0-07dc-11f1-9d7b-d44530b0aec2-0

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@test/e2e/util.go`:
- Around line 973-979: getClusterRegion currently dereferences the first worker
without checking if getWorkersList(oc) returned any nodes, which can panic;
modify getClusterRegion to call getWorkersList(oc), check the slice length
before indexing, and handle the empty case safely (e.g., log a message and
return an empty string or a sentinel value so callers like skipIfDisconnected
can handle "no workers" without a panic). Update the function to only call
oc.AsAdmin().WithoutNamespace().Run(...).Output() when a node is present and
ensure any callers handle the empty-region return appropriately.

Comment on lines +973 to +979
func getClusterRegion(oc *CLI) string {
	e2e.Logf("Getting cluster region")
	node := getWorkersList(oc)[0]
	region, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("node", node, "-o=jsonpath={.metadata.labels.failure-domain\\.beta\\.kubernetes\\.io\\/region}").Output()
	o.Expect(err).NotTo(o.HaveOccurred())
	e2e.Logf("Cluster region: %s", region)
	return region
}

⚠️ Potential issue | 🟠 Major

Avoid panic when no worker nodes are available.

getWorkersList can return an empty slice; indexing [0] panics and breaks skipIfDisconnected.

🛠️ Suggested fix
 func getClusterRegion(oc *CLI) string {
 	e2e.Logf("Getting cluster region")
-	node := getWorkersList(oc)[0]
+	workers := getWorkersList(oc)
+	if len(workers) == 0 {
+		e2e.Warningf("No worker nodes found; cannot determine cluster region")
+		return "unknown"
+	}
+	node := workers[0]
 	region, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("node", node, "-o=jsonpath={.metadata.labels.failure-domain\\.beta\\.kubernetes\\.io\\/region}").Output()
 	o.Expect(err).NotTo(o.HaveOccurred())
 	e2e.Logf("Cluster region: %s", region)
 	return region
 }
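The guard the review asks for is a general Go pattern: never index a slice returned by a helper without checking its length first. The minimal sketch below illustrates it outside the test framework; `fakeWorkersList` and `clusterRegion` are hypothetical stand-ins for the real helpers, which query the cluster through the `oc` CLI wrapper.

```go
package main

import "fmt"

// fakeWorkersList stands in for the real getWorkersList helper, which
// queries the cluster; on some clusters it can legitimately return an
// empty slice.
func fakeWorkersList() []string { return nil }

// clusterRegion guards against the empty slice before indexing, returning
// a sentinel value so callers can skip gracefully instead of panicking.
func clusterRegion(workers []string) string {
	if len(workers) == 0 {
		return "unknown"
	}
	// In the real helper this would look up the region label on workers[0].
	return "region-of-" + workers[0]
}

func main() {
	fmt.Println(clusterRegion(fakeWorkersList()))  // empty list hits the sentinel path
	fmt.Println(clusterRegion([]string{"node-a"})) // normal path indexes safely
}
```

With this shape, a caller such as skipIfDisconnected can treat the sentinel as "region unavailable" rather than crashing the whole suite.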

@ardaguclu
Member

We have already detected problematic tests during our previous two attempts and fixed them. Besides, all the tests are marked as Informing, so they won't block the release payloads.

In my opinion, we can merge this as is. @gangwgr you can remove the Informing tag in a followup PR with some set of sample test runs that prove that tests pass in all expected variants (especially on MicroShift).
/lgtm
/approve

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 12, 2026
@openshift-ci
Contributor

openshift-ci bot commented Feb 12, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ardaguclu, gangwgr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 12, 2026
project47555 := "project47555"
if isMicroShiftCluster(oc) {
	defer oc.AsAdmin().WithoutNamespace().Run("delete").Args("ns", project47555).Execute()
	createNSErr := oc.AsAdmin().WithoutNamespace().Run("create").Args("ns", project47555).Execute()
Member

Nice patch instead of disabling the test on MicroShift. I think we can use this method for all variants rather than creating a project resource.

Contributor Author

The other cases are not suitable for this method; we have already applied it to the cases where it fits.
