
feat(sdk/agent): Parallel Tool Call Execution #2390

Open
VascoSch92 wants to merge 7 commits into main from parallel-tool-execution

Conversation


@VascoSch92 VascoSch92 commented Mar 11, 2026

Summary

(fix #2350)

Add ParallelToolExecutor to enable concurrent tool execution within agent steps, controlled by the TOOL_CONCURRENCY_LIMIT environment variable (default: 1, fully backward-compatible).
Motivation

When an LLM returns multiple tool calls in a single response (e.g., "read these 3 files" or "run these 4 independent searches"), the current agent executes them sequentially. For I/O-bound tools — file reads, HTTP requests, MCP server calls, database queries — this leaves significant performance on the table. Parallel execution turns N × latency into ~1 × latency for independent operations.

Concrete scenarios where this helps:

  • Multi-file reads: Agent asks to read 5 source files to understand a codebase → 5 sequential disk/network reads become 1 parallel batch
  • Web search + fetch: Agent issues parallel web searches or API calls → wall-clock time drops from sum to max of individual latencies
  • MCP tool calls: Multiple independent MCP server requests (e.g., querying different data sources) execute concurrently
  • Subagent orchestration: Parent agent dispatches multiple independent tool calls while subagents can safely use their own parallel executors without deadlocking

What this does NOT help: CPU-bound tools limited by the GIL, or tools with shared mutable state that aren't thread-safe.
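
The sum-to-max claim is easy to demonstrate for I/O-bound work. This is a toy benchmark, not SDK code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_tool(delay: float) -> float:
    # Stand-in for an I/O-bound tool call (file read, HTTP request, ...)
    time.sleep(delay)
    return delay

delays = [0.05] * 4

t0 = time.monotonic()
sequential = [fake_io_tool(d) for d in delays]
seq_elapsed = time.monotonic() - t0          # ~ sum(delays) = 0.2s

t0 = time.monotonic()
with ThreadPoolExecutor(max_workers=len(delays)) as pool:
    parallel = list(pool.map(fake_io_tool, delays))
par_elapsed = time.monotonic() - t0          # ~ max(delays) = 0.05s

assert parallel == sequential                # same results, less wall-clock time
```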

Design

  • Per-agent ThreadPoolExecutor: Each ParallelToolExecutor instance owns its own thread pool, so subagents never compete with their parent for pool slots (no deadlocks).
  • Opt-in via TOOL_CONCURRENCY_LIMIT: Default is 1 (sequential), preserving existing behavior. Set to N > 1 to enable parallelism.
  • Side-effect-free execution: _execute_action_event returns list[Event] instead of calling on_event directly, making it safe for concurrent threads. All state mutations (pop_blocked_action, execution_status = FINISHED, event
    emission) happen on the main thread after parallel work completes.
  • FinishTool truncation: If a batch contains finish, all tool calls after it are discarded and logged — they never execute.
  • Error isolation: All exceptions are caught and wrapped in AgentErrorEvent, so one failing tool never crashes the agent or prevents sibling tools from completing. ValueError (expected tool errors) is logged at INFO; unexpected
    exceptions (RuntimeError, AssertionError, etc.) are logged at ERROR with full traceback to aid debugging.
  • Result ordering: Regardless of completion order, events are emitted in the original tool call order.
  • The responsibility split is now:
    • _ActionBatch — owns everything about preparing a batch (truncation, blocked partitioning, execution)
    • Agent — owns what to do with the results (emitting events, handling finish/refinement)
    • ParallelToolExecutor — stays a pure concurrency primitive, passed in as a dependency
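
The concurrency primitive described above can be sketched roughly as follows. This is an illustrative reconstruction, not the SDK's actual implementation; method names and signatures beyond `ParallelToolExecutor` are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

class ParallelToolExecutor:
    """Sketch of the pure concurrency primitive (illustrative, not SDK code)."""

    def __init__(self, max_workers: int = 1):
        # Per-agent pool: every instance owns its own workers, so a
        # subagent never competes with its parent for pool slots.
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def execute(self, tool_calls):
        # Submit all calls, then collect in submission order so results
        # come back in the original tool-call order regardless of which
        # call finishes first.
        futures = [self._pool.submit(self._run_one, call) for call in tool_calls]
        return [f.result() for f in futures]

    @staticmethod
    def _run_one(call):
        fn, args = call
        try:
            return ("ok", fn(*args))
        except Exception as exc:
            # Error isolation: a failing tool is wrapped, never raised,
            # so sibling tools still complete.
            return ("error", exc)
```

A failing call then shows up as an error result in its original position rather than aborting the batch.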

Thread safety warning

When TOOL_CONCURRENCY_LIMIT > 1, tools run in parallel threads sharing the same conversation object. Tools are not thread-safe by default. Callers opting into parallelism must ensure their tools are safe for concurrent execution
(no shared mutable filesystem state, no concurrent conversation mutations).

Evaluation

I ran an evaluation with SWE-bench to confirm that the default behavior matches what we already have in the repo [ref]

Report from trace investigation of OpenHands CLI:

No parallel tool calls detected -- the feature is cleanly disabled. Here's the full breakdown:

Trace Format

  - Events alternate between ActionEvent (tool call) and ObservationEvent (tool result)
  - Tools used: terminal (1150), file_editor (588), think (58), finish (25)
  - 1,821 action events matched exactly 1,821 observation events across all 25 traces

Parallel Tool Call Check: CLEAN

  - Zero shared llm_response_id across events (each LLM turn produced exactly 1 tool call)
  - Perfect action-observation interleaving -- no consecutive actions or observations
  - No tool_calls arrays, no parallel batching of any kind
  - All 25 conversations completed normally with a finish action

Agent Server images for this PR

GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

Variant Architectures Base Image Docs / Tags
java amd64, arm64 eclipse-temurin:17-jdk Link
python amd64, arm64 nikolaik/python-nodejs:python3.13-nodejs22 Link
golang amd64, arm64 golang:1.21-bookworm Link

Pull (multi-arch manifest)

# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:cd09704-python

Run

docker run -it --rm \
  -p 8000:8000 \
  --name agent-server-cd09704-python \
  ghcr.io/openhands/agent-server:cd09704-python

All tags pushed for this build

ghcr.io/openhands/agent-server:cd09704-golang-amd64
ghcr.io/openhands/agent-server:cd09704-golang_tag_1.21-bookworm-amd64
ghcr.io/openhands/agent-server:cd09704-golang-arm64
ghcr.io/openhands/agent-server:cd09704-golang_tag_1.21-bookworm-arm64
ghcr.io/openhands/agent-server:cd09704-java-amd64
ghcr.io/openhands/agent-server:cd09704-eclipse-temurin_tag_17-jdk-amd64
ghcr.io/openhands/agent-server:cd09704-java-arm64
ghcr.io/openhands/agent-server:cd09704-eclipse-temurin_tag_17-jdk-arm64
ghcr.io/openhands/agent-server:cd09704-python-amd64
ghcr.io/openhands/agent-server:cd09704-nikolaik_s_python-nodejs_tag_python3.13-nodejs22-amd64
ghcr.io/openhands/agent-server:cd09704-python-arm64
ghcr.io/openhands/agent-server:cd09704-nikolaik_s_python-nodejs_tag_python3.13-nodejs22-arm64
ghcr.io/openhands/agent-server:cd09704-golang
ghcr.io/openhands/agent-server:cd09704-java
ghcr.io/openhands/agent-server:cd09704-python

About Multi-Architecture Support

  • Each variant tag (e.g., cd09704-python) is a multi-arch manifest supporting both amd64 and arm64
  • Docker automatically pulls the correct architecture for your platform
  • Individual architecture tags (e.g., cd09704-python-amd64) are also available if needed

… tool execution

Add infrastructure for executing multiple tool calls concurrently with a
configurable global concurrency limit.

Classes:
- ToolExecutorSemaphore: Process-global singleton that limits concurrent
  tool executions across all agents and sub-agents. Configured via
  OPENHANDS_TOOL_CONCURRENCY_LIMIT environment variable (default: 8).
- ParallelToolExecutor: Executes batches of tool calls concurrently using
  ThreadPoolExecutor, with concurrency controlled by the semaphore.

Key design decisions:
- Single layer of concurrency control via environment variable
- Singleton pattern using __new__ for ToolExecutorSemaphore
- ThreadPoolExecutor for I/O-bound tool execution
- Results returned in original order regardless of completion order

Related to #2350

Co-authored-by: openhands <openhands@all-hands.dev>
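
The singleton described in this commit message can be sketched as follows. Note this is an illustrative reconstruction of the `__new__`-based pattern the message describes (this earlier design was later revised to per-agent pools), not the repo's actual code:

```python
import os
import threading

class ToolExecutorSemaphore:
    """Sketch of a process-global singleton semaphore (illustrative)."""

    _instance = None
    _init_lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: concurrent first calls still produce
        # exactly one instance holding one shared semaphore.
        if cls._instance is None:
            with cls._init_lock:
                if cls._instance is None:
                    limit = int(os.environ.get("OPENHANDS_TOOL_CONCURRENCY_LIMIT", "8"))
                    inst = super().__new__(cls)
                    inst.semaphore = threading.Semaphore(limit)
                    cls._instance = inst
        return cls._instance
```

Each tool execution would then be wrapped in `with ToolExecutorSemaphore().semaphore:` so the limit applies across all agents and sub-agents in the process.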

github-actions bot commented Mar 11, 2026

API breakage checks (Griffe)

Result: Passed

Action log


github-actions bot commented Mar 11, 2026

Agent server REST API breakage checks (OpenAPI)

Result: Passed

Action log


github-actions bot commented Mar 11, 2026

Coverage

Coverage Report

File                                   Stmts  Miss  Cover  Missing
openhands-sdk/openhands/sdk/agent
  agent.py                               264    37    85%  101, 105, 278, 297, 300, 307–308, 322, 328, 356–358, 360, 390–391, 398–399, 431, 484–485, 487, 527, 675–676, 681, 693–694, 699–700, 719–720, 722, 751, 759–760, 794, 801
TOTAL                                  19988  5800    70%


@VascoSch92 VascoSch92 changed the title DRAFT: feat: add ParallelToolExecutor and ToolExecutorSemaphore for parallel tool execution feat(sdk/agent): Parallel Tool Call Execution Mar 12, 2026

@VascoSch92 VascoSch92 marked this pull request as ready for review March 13, 2026 14:08

@all-hands-bot all-hands-bot left a comment


🟡 Taste Rating: Acceptable - Requires Eval Verification

Core architecture is excellent. Making _execute_action_event side-effect-free (returns events instead of emitting directly) is exactly the right design — this eliminates the need for locks and makes the special case (parallel execution) become a normal case. Per-agent thread pools elegantly prevent deadlocks without complex detection logic.

The code is clean, tests are comprehensive and test real behavior (not mocks), and default concurrency=1 preserves backward compatibility.

However, this PR changes core agent execution flow (tool calling, event emission, state management). Even with the backward-compatible default, the execution path has been refactored significantly. Per repository policy, PRs that change agent behavior require lightweight eval verification before merge.

KEY INSIGHT

The refactoring turns concurrency from a special case requiring complex coordination into a normal case with side-effect-free functions. This is "good taste" — the right abstraction eliminates the complexity rather than managing it with locks and conditionals.

VERDICT

Code quality is solid — approve from a technical perspective
⚠️ Eval verification required — per repo policy before merge

Move _emit_batch and _handle_finish logic from Agent into _ActionBatch
as emit() and finalize() methods. Agent-specific logic (iterative
refinement check, mark-finished callback) is injected via callables,
keeping _ActionBatch decoupled from the Agent class.

This simplifies Agent._execute_actions to a clean prepare → emit →
finalize pipeline and gives _ActionBatch full ownership of the batch
lifecycle.

Co-authored-by: openhands <openhands@all-hands.dev>

enyst commented Mar 13, 2026

@OpenHands Do a /codereview-roasted on this PR.


openhands-ai bot commented Mar 13, 2026

I'm on it! enyst can track my progress at all-hands.dev


@enyst enyst left a comment


🔴 Needs improvement

[CRITICAL ISSUES]

  • [openhands-sdk/openhands/sdk/agent/parallel_executor.py, Lines 97-103] Breaking change disguised as the fallback path: TOOL_CONCURRENCY_LIMIT=1 still routes any multi-tool batch through ThreadPoolExecutor(max_workers=1). That is not the old behavior. It changes thread affinity, and because results are buffered until the batch finishes, it also changes when observations hit the conversation. I reproduced this locally with a tiny tool: both calls ran on ThreadPoolExecutor-*, not MainThread, and the second call saw zero prior ObservationEvents. So the PR description's “fully backward-compatible” claim is false. Fix: keep the old for action in action_events: execute + emit path when the limit is 1, and only use the batch executor when the limit is actually >1.
  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396] Sequential semantics were silently changed: _ActionBatch.prepare() executes the whole batch before batch.emit(), so later tools in the same batch no longer see earlier observations in conversation.state.events. Even with concurrency effectively “off”, you've changed execution from run tool -> emit observation -> run next tool into run everything -> emit later. That's a real semantic regression for tools/hooks that inspect conversation state mid-batch. Fix: preserve incremental emission in the sequential path; don't reuse the buffered parallel path as the fallback.
  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-393] Unsafe by construction for the stock tool set: once TOOL_CONCURRENCY_LIMIT > 1, this code blindly parallelizes whatever tool calls the model produced. But the default tool set includes shared mutable executors like terminal sessions, browser sessions, and file-editor history. A warning in a docstring is not a concurrency model; it just means the env var can turn into garbled terminal I/O or corrupted edits. Fix: parallelize only tools that explicitly declare themselves safe, or serialize known shared-resource tools.
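
The fix proposed in the first point can be sketched as follows (hypothetical function names; the real code paths live in agent.py and parallel_executor.py):

```python
from concurrent.futures import ThreadPoolExecutor

def execute_actions(action_events, limit, run_tool, emit):
    # Illustrative sketch of the reviewer's fix: only use the batched
    # thread-pool path when the limit is actually > 1.
    if limit <= 1:
        # Old semantics: run -> emit -> next, so each tool call sees
        # the observations its predecessors already emitted.
        for action in action_events:
            emit(run_tool(action))
        return
    # Parallel path: results are buffered and emitted afterwards, in
    # the original tool-call order.
    with ThreadPoolExecutor(max_workers=limit) as pool:
        for result in pool.map(run_tool, action_events):
            emit(result)
```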

[TESTING GAPS]

  • [tests/sdk/agent/test_parallel_execution_integration.py, Lines 188-213] The default-limit regression isn't actually tested: this test only checks ordering. It doesn't assert same-thread execution or incremental state visibility, so it passes even though the fallback already changed behavior.
  • [tests/sdk/agent/test_parallel_execution_integration.py] Only synthetic tools are exercised: the tests never hit the real risky executors (terminal, file_editor, browser), so they won't catch the races this PR introduces.
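
A minimal shape for that missing regression test might look like this (an illustrative harness, not the repo's actual fixtures):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def run_batch(tools, limit):
    # Toy stand-in for the agent's batch execution path.
    if limit <= 1:
        return [tool() for tool in tools]
    with ThreadPoolExecutor(max_workers=limit) as pool:
        return [f.result() for f in [pool.submit(t) for t in tools]]

def test_limit_one_preserves_old_semantics():
    caller = threading.current_thread().name
    threads, observations = [], []

    def make_tool(name):
        def tool():
            threads.append(threading.current_thread().name)
            visible = list(observations)  # state this tool can see
            observations.append(name)     # "emitted" immediately
            return visible
        return tool

    first, second = run_batch([make_tool("a"), make_tool("b")], limit=1)
    # Same-thread execution: no ThreadPoolExecutor-* worker threads.
    assert threads == [caller, caller]
    # Incremental visibility: the second tool sees the first observation.
    assert first == [] and second == ["a"]
```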

VERDICT:
Needs rework: the concurrency primitive is reasonable, but the fallback path is not backward-compatible, and the parallel path has no safety gate for the tools people actually run.

KEY INSIGHT:
The real bug here is that the code collapsed “sequential semantics” and “parallel scheduling” into one path. Those are not the same thing, and when limit=1 they must stay identical to the old behavior.

Because this PR changes agent tool-execution behavior, I'd still want a human maintainer to make the final merge call after lightweight evals even after the above is addressed.



enyst commented Mar 13, 2026

OpenHands-GPT-5.4 here. I re-read my earlier review with fresh eyes, section by section, and I think I overstated parts of it. Here's my updated take.

1) TOOL_CONCURRENCY_LIMIT=1 and backward compatibility

  • [openhands-sdk/openhands/sdk/agent/parallel_executor.py, Lines 97-103]
  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396]

Is it actually the case? Yes.

Multi-tool batches still go through ThreadPoolExecutor(max_workers=1), and observations are emitted after the batch instead of between tool calls. So the fallback is not identical to the old for action: run -> emit -> next behavior.

How major is it? Medium, not catastrophic.

This is a real semantic change, but it is narrow: it only matters when the model emits multiple tool calls in one response. The PR's own trace evidence suggests that is rare in current real usage.

Is it real usage? Rare today, but real enough that the phrase "fully backward-compatible" feels too strong unless this is fixed or the claim is narrowed.

2) Later tools not seeing earlier observations from the same batch

  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396]

Is it actually the case? Yes.

Because _ActionBatch.prepare() runs the executable subset before batch.emit(), tools that inspect conversation.state.events during execution will not see earlier observations from that same batch.

How major is it? Medium-to-low.

This is the strongest technical concern from my earlier review, but it's still niche rather than broad. Most built-in tools probably do not depend on seeing sibling observations mid-batch. Some custom or state-aware tools could.

Is it real usage? Mostly edge/custom-tool territory today.

So I still think this is worth calling out, but more as a compatibility nuance than as a major architectural failure.

3) Unsafe parallelism with terminal / file editor / browser

  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-393]

Is it actually the case? Yes.

If someone sets TOOL_CONCURRENCY_LIMIT > 1 and allows parallel batches that include shared-state tools, they can absolutely get races.

How major is it? Lower than I first said.

This is an opt-in feature, and the PR is explicit about the thread-safety caveat. That makes it much more of a deliberate first-cut tradeoff than a hidden regression.

Is it real usage? Yes, if somebody flips the env var globally and expects terminal/file-editor/browser calls to be safe. But because they had to opt into it, I would treat this as a scope/documentation clarity issue, not a hard blocker.

If anything, I'd want the docs/PR text to be very clear that the immediate sweet spot is read-only / RPC-like tools, not arbitrary shared-state tools.

4) Testing

  • [tests/sdk/agent/test_parallel_execution_integration.py]

I still think one regression test for the limit=1 fallback semantics would be valuable if backward compatibility is a goal.

That said, I want to explicitly soften one part of my earlier review: I do not think "the tests only use synthetic tools" is a serious problem by itself. Synthetic tools are perfectly reasonable for this kind of concurrency test. I would not ask for flaky terminal/browser integration tests as a merge gate.


Updated bottom line

After re-review, I would downgrade my earlier assessment.

I do not think this PR is fundamentally broken.

I think there is one real but narrow compatibility concern around the TOOL_CONCURRENCY_LIMIT=1 path versus the PR's "fully backward-compatible" framing. If the fallback keeps the old sequential emit semantics, or if the compatibility claim is softened, the rest looks like an acceptable first-cut tradeoff for an opt-in feature.

Because this still changes agent behavior, I'd keep this in COMMENT territory and let a human maintainer make the final call after lightweight evals.

If readers are skimming: my earlier review was too harsh on points (3) and part of (4); point (1) remains real, but narrower than I first framed it.


enyst commented Mar 13, 2026

OpenHands-GPT-5.4 follow-up: you're right, and I need to correct my re-review.

I made a reasoning mistake there.

I wrote that the TOOL_CONCURRENCY_LIMIT=1 regression "only matters when the model emits multiple tool calls in one response" and then treated that as narrow because some current traces don't show it often. But that framing is wrong for this PR.

This PR exists precisely to handle the multi-tool-call case better. If multi-tool responses are common in real usage, then the changed fallback semantics are not some corner case — they hit the exact workload this feature is supposed to serve.

So I want to sharpen the conclusion:

  • The limit=1 path is a real compatibility issue if the goal is to preserve existing behavior by default.
  • It matters specifically on the feature's main path: responses containing multiple tool calls.
  • In other words, saying "it's only when there are multiple tool calls" does not make it small here; that's the whole point of the feature.

I would restate the concern more cleanly as:

The PR changes default semantics for multi-tool responses even when concurrency is effectively disabled (TOOL_CONCURRENCY_LIMIT=1), because execution is still batched and observations are emitted after batch completion rather than after each tool call.

I still think my earlier re-review was right to soften one thing: synthetic tools are fine for testing the concurrency primitive itself.

But I over-corrected on severity. If multi-tool responses are common in production, then the fallback-semantics issue is materially more important than I gave it credit for.

I also think this strengthens — not weakens — the concern about parallelizing shared-state tools. If the intended real-world usage is "LLM emits several tool calls and we want them parallel," then correctness for the default tool ecosystem is part of the core feature story, not just a docs footnote.


enyst commented Mar 14, 2026

HUMAN: Just to note, I do think multiple tool calls are a normal occurrence. I don’t have numbers, but the agent here is confused; they’re often enough that, in practice, it’s like, see codex-cli fire up 12, think “woot was that a parallel tool call?”. Barely finish thought, see a bucket of 3, see another bucket of 5, conclude “yeah obviously”.

We have the eval trajectories we could compute to see the number of batches with the same llm_response?


VascoSch92 commented Mar 14, 2026

HUMAN: Just to note, I do think multiple tool calls are a normal occurrence. I don’t have numbers, but the agent here is confused; they’re often enough that, in practice, it’s like, see codex-cli fire up 12, think “woot was that a parallel tool call?”. Barely finish thought, see a bucket of 3, see another bucket of 5, conclude “yeah obviously”.

We have the eval trajectories we could compute to see the number of batches with the same llm_response?

@enyst yes this could be interesting.
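
One quick way to compute that from the trajectories, assuming each trace event carries a kind and an llm_response_id (the exact field names in the trace format may differ):

```python
from collections import Counter

def batch_sizes(events):
    # Count how many ActionEvents share each llm_response_id; a count
    # > 1 means the LLM emitted multiple tool calls in one response.
    counts = Counter(
        e["llm_response_id"] for e in events if e["kind"] == "ActionEvent"
    )
    # Distribution of batch sizes: {1: n_single, 2: n_pairs, ...}
    return Counter(counts.values())

events = [
    {"kind": "ActionEvent", "llm_response_id": "r1"},
    {"kind": "ObservationEvent", "llm_response_id": "r1"},
    {"kind": "ActionEvent", "llm_response_id": "r2"},
    {"kind": "ActionEvent", "llm_response_id": "r2"},
]
# One single-call batch (r1) and one 2-call batch (r2)
distribution = batch_sizes(events)
```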



Development

Successfully merging this pull request may close these issues.

Proposal: Parallel Tool Execution

5 participants