
Conversation


@AztecBot AztecBot commented Feb 10, 2026

BEGIN_COMMIT_OVERRIDE
chore(ci3): add optional local cache for bootstrap artifacts (#20305)
fix: Fix p2p integration test (#20331)
chore: reduce fee log severity (#20336)
feat: restrict response sizes to expected sizes (#20287)
feat: retry web3signer connection (#20342)
feat(p2p): Integrate TxPoolV2 across codebase (#20172)
feat: review and optimize Claude configuration, agents, and skills (#20270)
fix(prover): handle cross-chain messages when proving mbps (#20354)
chore: retry flaky tests; a pass on retry marks a known flake, failing both runs is a hard failure (#19322)
chore(p2p): add mock reqresp layer for tests (#20370)
fix: (A-370) don't propagate on tx mempool add failure (#20374)
chore: Skip the HA test (#20376)
feat: Retain pruned transactions until pruned block is finalised (#20237)
END_COMMIT_OVERRIDE

spypsy and others added 4 commits February 10, 2026 12:50
This PR simply fixes the p2p message propagation test
## Summary

- Adds an optional local filesystem cache layer for build artifacts that
sits in front of the S3 remote cache, controlled via the
`CACHE_LOCAL_DIR` env var
- On **download**: checks local cache first (instant), on miss downloads
from S3 and saves to local cache for next time
- On **upload**: saves the artifact to the local cache alongside the S3
upload
- When `CACHE_LOCAL_DIR` is unset, behavior is identical to before (zero
impact on CI or other devs)
- Gracefully falls through to S3 if the local cache directory cannot be
created (e.g. permission issues)

This reduces full bootstrap time from **165s to 77s** by avoiding
redundant S3 downloads for artifacts that haven't changed.
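
The real implementation lives in the ci3 bash scripts; as a rough TypeScript sketch of the read-through behavior (`fetchFromS3` is a hypothetical placeholder for the existing S3 download, not an actual function in the repo):

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Hypothetical sketch only: the real logic lives in the ci3 bash scripts.
// `fetchFromS3` stands in for the existing remote-cache download.
async function downloadArtifact(
  name: string,
  fetchFromS3: (name: string) => Promise<Buffer>,
): Promise<Buffer> {
  const localDir = process.env.CACHE_LOCAL_DIR;
  if (localDir) {
    const localPath = path.join(localDir, name);
    if (fs.existsSync(localPath)) {
      return fs.readFileSync(localPath); // local hit: no S3 round trip
    }
  }
  const data = await fetchFromS3(name); // miss (or cache disabled): go to S3
  if (localDir) {
    try {
      fs.mkdirSync(localDir, { recursive: true });
      fs.writeFileSync(path.join(localDir, name), data); // populate for next run
    } catch {
      // e.g. permission issues: fall through silently to S3-only behavior
    }
  }
  return data;
}
```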

### Usage

```bash
export CACHE_LOCAL_DIR="$HOME/.aztec-cache"
./bootstrap.sh
```

## Test plan

- Added `ci3/cache_local.test.sh` with 8 tests (12 assertions) covering:
local cache hit/miss, upload save, roundtrip, disabled-cache bypass,
inaccessible directory fallthrough
- Verified no `rm -rf` on any path derived from `CACHE_LOCAL_DIR`
- Run `bash ci3/cache_local.test.sh` to execute tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)
PhilWindle and others added 21 commits February 10, 2026 13:13
# P2P ReqResp: Restrict response sizes to expected sizes

## Summary

P2P request responses were previously limited to a default max size of
10MB for all ReqResp protocols. This change makes the limits dynamic
based on what was actually requested. For example, if requesting 8
transactions, the limit is now `8 × MAX_TX_SIZE_KB + 1 KB` instead of a
blanket 10MB.

This reduces the attack surface for oversized response DoS and ensures
predictable memory usage.

## Changes

### Size calculation per protocol

| Protocol | Request Contains | Max Response Size |
|----------|------------------|-------------------|
| TX | `TxHashArray` | `count × 512 KB + 1 KB` |
| BLOCK_TXS | `BitVector` | `requestedCount × 512 KB + 1 KB` |
| BLOCK | `Fr` (block number) | Fixed 3 MB (TxEffects only, no proofs) |
| STATUS | `StatusMessage` | 1 KB |
| PING | minimal | 1 KB |
| AUTH | `AuthRequest` | 1 KB |
| GOODBYE | minimal | 1 KB |
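
As a rough sketch of how the per-request limit is derived (exact signatures in the PR may differ; `requestedTxCount` stands in for the parsed `TxHashArray` length):

```typescript
const MAX_TX_SIZE_KB = 512; // existing per-tx constant, reused for TX and BLOCK_TXS
const MAX_L2_BLOCK_SIZE_KB = 3 * 1024; // new constant: fixed 3 MB for BLOCK responses

// Sketch of the TX-protocol size calculator. The real calculateTxResponseSize
// parses the request buffer into a TxHashArray first; unparseable buffers
// fall back to a single-transaction limit.
function calculateTxResponseSizeKb(requestedTxCount: number): number {
  if (!Number.isInteger(requestedTxCount) || requestedTxCount <= 0) {
    return MAX_TX_SIZE_KB + 1; // fallback: allow one transaction
  }
  return requestedTxCount * MAX_TX_SIZE_KB + 1; // count × 512 KB + 1 KB
}
```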

### Files changed

- **`stdlib/src/p2p/constants.ts`** — Added `MAX_L2_BLOCK_SIZE_KB`
constant (3 MB)
- **`p2p/src/services/encoding.ts`** — Added `maxSizeKbOverride`
parameter to `inboundTransformData()` so callers can override
topic-based limits
- **`p2p/src/services/reqresp/protocols/tx.ts`** — Added
`calculateTxResponseSize()` that computes expected size from
`TxHashArray` length
- **`p2p/src/services/reqresp/protocols/block_txs/block_txs_reqresp.ts`** — Added `calculateBlockTxsResponseSize()` that computes expected size from `BitVector` indices
- **`p2p/src/services/reqresp/interface.ts`** — Added
`subProtocolSizeCalculators` map linking each protocol to its size
calculator
- **`p2p/src/services/reqresp/reqresp.ts`** — `sendRequestToPeer()` now
computes expected response size from request payload and passes it
through to decompression validation

### Tests added

- **`protocols/tx.test.ts`** (new) — Unit tests for
`calculateTxResponseSize` covering single hash, multiple hashes, batch
size, raw hash fallback, garbage input, and empty array
- **`protocols/block_txs/block_txs.test.ts`** — Unit tests for
`calculateBlockTxsResponseSize` covering various BitVector
configurations and error cases
- **`encoding.test.ts`** — Tests for `maxSizeKbOverride` parameter
precedence over topic and default limits

## Notes

- Gossip sub topic limits (block_proposal, checkpoint_proposal, etc.)
are **not** changed in this PR — only ReqResp protocols
- The existing `MAX_TX_SIZE_KB` (512 KB) constant is reused for all
transaction size calculations
- Size calculators gracefully handle unparseable request buffers by
falling back to a single transaction size limit

Resolves A-469
## Summary

- Add retry logic with backoff to `KeystoreManager.validateSigners()` so
that transient web3signer unavailability at boot time doesn't crash the
node
- Validate all web3signer URLs in parallel via `Promise.all` (previously
serial), each wrapped in `retry()` with backoff intervals of [1, 2, 4,
8, 16] seconds (~31s total)
- Add two new tests: one verifying errors propagate after retries are
exhausted, one verifying transient failures are retried and eventually
succeed

This is especially useful for test networks where web3signer and
validators start simultaneously — the node no longer crashes if the
signer takes a few seconds to become reachable.
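
A minimal sketch of the shape of this change, assuming a generic retry helper (the codebase's actual `retry()` helper and signatures may differ):

```typescript
const BACKOFF_SECONDS = [1, 2, 4, 8, 16]; // ~31s total before giving up

// Illustrative retry-with-backoff helper; not the codebase's actual `retry`.
async function retryWithBackoff<T>(fn: () => Promise<T>, backoff: number[]): Promise<T> {
  for (const delaySec of backoff) {
    try {
      return await fn();
    } catch {
      await new Promise(resolve => setTimeout(resolve, delaySec * 1000));
    }
  }
  return fn(); // final attempt: let the error propagate once retries are exhausted
}

// Validate all web3signer URLs in parallel (previously serial); a signer that
// stays unreachable past the backoff window still fails node startup.
async function validateSigners(
  urls: string[],
  validate: (url: string) => Promise<void>,
): Promise<void> {
  await Promise.all(urls.map(url => retryWithBackoff(() => validate(url), BACKOFF_SECONDS)));
}
```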

Fixes A-444
## Summary

Migrates all consumers from `TxPool` to `TxPoolV2`, the new event-driven
transaction pool implementation.

## Key API Changes

| Old Method | New Method | Notes |
|------------|------------|-------|
| `addTxs` | `addPendingTxs` | Returns `AddTxsResult` with accepted/ignored/rejected |
| `markAsMined` | `handleMinedBlock` | Takes full `L2Block` |
| `markMinedAsPending` | `handlePrunedBlocks` | Takes `L2BlockId` |
| `markTxsAsNonEvictable` | `protectTxs` | Requires `BlockHeader` for slot-based protection |
| `clearNonEvictableTxs` | `prepareForSlot` | Slot-based protection expiry |
| `deleteTxs` | `handleFailedExecution` / `handleFinalizedBlock` | Context-specific deletion |
| - | `start()` | New lifecycle method, must be called before use |
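
As an illustrative consumer-side sketch of the new API (types are simplified stand-ins, not the real interfaces):

```typescript
// Simplified stand-ins for the real Tx/L2Block/AddTxsResult types.
interface AddTxsResult {
  accepted: string[];
  ignored: string[];
  rejected: string[];
}

interface TxPoolV2Like {
  start(): Promise<void>; // new lifecycle method, must run before use
  addPendingTxs(txs: unknown[]): Promise<AddTxsResult>; // replaces addTxs
  handleMinedBlock(block: unknown): Promise<void>; // replaces markAsMined, takes the full block
}

async function submitTxs(pool: TxPoolV2Like, txs: unknown[]): Promise<void> {
  await pool.start();
  const { accepted, ignored, rejected } = await pool.addPendingTxs(txs);
  console.log(`accepted=${accepted.length} ignored=${ignored.length} rejected=${rejected.length}`);
}
```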

## Integration Points

### P2P Client (`p2p_client.ts`)
- Block stream handlers now use pool event methods:
  - `handleLatestL2Blocks` → `handleMinedBlock` per block
  - `handleFinalizedL2Blocks` → `handleFinalizedBlock` per block
  - `handlePruneL2Blocks` → `handlePrunedBlocks` with `L2BlockId`
- `markTxsAsNonEvictable` now requires `BlockHeader` for slot-based
protection
- `getTxStatus` maps `'protected'` → `'pending'` for external API
compatibility
- `getTxs('all')` combines pending + mined hashes (no `getAllTxs` in V2)
- Pool started/stopped with client lifecycle

### Factory (`factory.ts`)
- Creates `AggregateTxValidator` for pending tx validation (without
proof verification)
- Instantiates `AztecKVTxPoolV2` with dependencies:
  - `l2BlockSource` (archiver)
  - `worldStateSynchronizer`
  - `pendingTxValidator`

### Libp2p Service (`libp2p_service.ts`)
- Block proposal handler: `protectTxs(txHashes, block.blockHeader)`
- Checkpoint proposal handler: `protectTxs(txHashes,
checkpoint.lastBlock.blockHeader)`

### Services
- **TxProvider**: `addPendingTxs` for proposal txs
- **TxCollectionSink**: `addPendingTxs` for gossip txs
- **BlockTxsHandler**: Type change only (query methods unchanged)

### Sequencer (`sequencer.ts`)
- TODO added for `prepareForSlot` at slot boundaries

## TODOs for Follow-up
- `TODO(pw/tx-pool)`: Refactor validator creation into
`TxValidatorFactory`
- `TODO(pw/tx-pool)`: Wire `prepareForSlot` calls at slot boundaries
- `TODO(pw/tx-pool)`: Add context on expected tx state when adding txs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Phil Windle <philip.windle@gmail.com>
We were re-inserting cross-chain messages into world state for every block in the checkpoint, when we only need to insert them on the first block.

This fixes that, and also extends the epochs-mbps e2e test suite to assert that multi-block checkpoints get properly proven.

Builds on #20351
Similar to the MockGossipSub network we use in tests for nodes to talk to each other, this commit adds a MockReqResp layer to enable reqresp in tests. It also adds an integration_reqresp test in p2p to exercise it (we should be able to migrate other p2p integration tests to it as well).

This allows us to re-enable the mbps assertion for proving blocks with txs anchored to uncheckpointed blocks. It was failing because the prover node did not follow the uncheckpointed chain, so it rejected the txs anchored to those blocks, and then could not fetch them via reqresp since reqresp was not enabled.

The proper fix is to have the prover node follow the uncheckpointed chain, but that will come in a later PR.

Builds on #20354
- On gossip tx handling, first checks whether the tx can be added to the pool before propagating it
- Tests to enforce this behavior
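
A minimal sketch of the intended control flow (handler and pool names are illustrative):

```typescript
// Illustrative sketch: only propagate a gossiped tx once the pool accepts it.
type PoolLike = { addPendingTxs(txs: unknown[]): Promise<{ accepted: unknown[] }> };

async function onGossipedTx(
  tx: unknown,
  pool: PoolLike,
  propagate: (tx: unknown) => void,
): Promise<void> {
  const { accepted } = await pool.addPendingTxs([tx]);
  if (accepted.length > 0) {
    propagate(tx); // add succeeded: safe to forward to peers
  }
  // On add failure the tx is dropped without propagation (A-370).
}
```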
AztecBot and others added 4 commits February 10, 2026 22:03
This PR skips the HA full test until it can be fixed.
## Summary

Implements soft deletion for transactions from pruned blocks in TxPoolV2:

- **Transactions from pruned blocks are soft-deleted** - kept in DB for later re-execution
- **Transactions NOT from pruned blocks are hard-deleted** - removed from DB immediately as before
- **Soft-deleted txs are retrievable** via `getTxByHash` and `hasTxs`, with status `'deleted'` from `getTxStatus`
- **Hard deletion on finalization** - soft-deleted txs are permanently removed when their original mined block is finalized

### Key Design Decisions

1. **Track mined block, not prune point**: When a tx is un-mined due to a reorg, we track the block it was *mined* in, not the block we pruned to. This ensures the tx is kept until that block is finalized on the new chain.

2. **Handle re-mining**: If a tx is mined at block 4, pruned, re-mined at block 5, then pruned again, we track block 5 (the higher value). The tx is only hard-deleted when block 5 is finalized.

3. **Single source of truth**: `DeletedPool` is responsible for ALL deletion decisions. It determines whether to soft-delete or hard-delete based on whether the tx is from a pruned block.

### Example Scenario

```
1. Tx mined at block 10
2. Chain prunes to block 5 (tx un-mined, tracked as minedAtBlock=10)
3. Tx fails validation and is soft-deleted
4. Block 9 finalized → tx still in DB
5. Block 10 finalized → tx hard-deleted
```
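
A sketch of the deletion decision described above (field and method names are illustrative, not the actual `DeletedPool` API):

```typescript
// Illustrative sketch of DeletedPool's soft/hard-delete decision.
interface TxRecord {
  hash: string;
  minedAtBlock?: number; // set when the tx was un-mined by a prune
}

class DeletedPoolSketch {
  private softDeleted = new Map<string, TxRecord>();

  delete(tx: TxRecord): void {
    if (tx.minedAtBlock === undefined) {
      return; // not from a pruned block: hard-delete immediately, retain nothing
    }
    const prev = this.softDeleted.get(tx.hash);
    // Re-mining keeps the higher mined block (design decision 2).
    const minedAtBlock = Math.max(tx.minedAtBlock, prev?.minedAtBlock ?? 0);
    this.softDeleted.set(tx.hash, { ...tx, minedAtBlock });
  }

  handleFinalizedBlock(finalizedBlockNumber: number): void {
    for (const [hash, tx] of this.softDeleted) {
      if ((tx.minedAtBlock ?? 0) <= finalizedBlockNumber) {
        this.softDeleted.delete(hash); // original mined block finalized: hard-delete
      }
    }
  }
}
```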

## Test plan

- 177 tx_pool_v2 tests pass
- 17 deleted_pool tests pass
- Test: tx mined, pruned, soft-deleted, finalized at correct block
- Test: tx re-mined at higher block, tracked correctly
- Test: multiple txs with different mined blocks finalize at correct times
- Test: persistence across restarts

🤖 Generated with [Claude Code](https://claude.com/claude-code)
@ludamad ludamad enabled auto-merge February 10, 2026 22:43
@ludamad ludamad added this pull request to the merge queue Feb 10, 2026
@AztecBot

Flakey Tests

🤖 says: This CI run detected 1 test that failed but was tolerated due to a .test_patterns.yml entry.

FLAKED (http://ci.aztec-labs.com/5276fcf7d118659d): yarn-project/end-to-end/scripts/run_test.sh simple src/e2e_p2p/preferred_gossip_network.test.ts (120s) (code: 0) group:e2e-p2p-epoch-flakes

Merged via the queue into next with commit 02172e1 Feb 11, 2026
23 checks passed