Conversation
force-pushed from 786450f to 29db332
travis left a comment
looks great! I have a couple suggestions but generally very excited about this change!!
> **Proposed solution**
>
> * Add a **GSI with a timestamp-based sort key**
rfc/refactor-space-diff-table.md (Outdated)
> * Correct PK design
> * `cause` as SK
> * GSI for timestamp-based queries
>
> 2. Export data from the existing table
I think we might even be able to just skip importing the old data: we can spin this system up alongside the existing one, use it in parallel until February, and then cut over to the new table for usage reporting and billing, leaving the old diffs in the old table.
rfc/refactor-space-diff-table.md (Outdated)
> **Considerations**
>
> - The accumulator MUST process diffs for a space in **ascending** `receiptAt` order. If the write path can deliver out-of-order events and strict ordering cannot be guaranteed, this solution SHOULD be revisited. Pragmatic mitigations include:
is this strictly necessary? I think we could just choose the later of "current accumulator date" or "new diff receiptAt" - this will ensure we always have the "latest" date for a particular accumulator, even in the case where we process diffs out of order
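To make that suggestion concrete, here is a minimal sketch of an order-independent accumulator update (the `Accumulator` shape and `applyDiff` name are illustrative assumptions, not from the codebase): the byte total is a plain sum, so it is correct in any order, and the timestamp is always the later of the two values.

```typescript
// Illustrative sketch only: an order-independent accumulator update.
// `Accumulator` and `applyDiff` are hypothetical names, not the real code.
interface Accumulator {
  totalBytes: number      // running Σdelta for the space
  lastReceiptAt: string   // ISO-8601; latest receiptAt seen so far
}

function applyDiff (acc: Accumulator, deltaBytes: number, receiptAt: string): Accumulator {
  return {
    // Addition is commutative, so the total is correct in any delivery order.
    totalBytes: acc.totalBytes + deltaBytes,
    // Take the later of "current accumulator date" and "new diff receiptAt".
    // ISO-8601 strings compare correctly lexicographically.
    lastReceiptAt: receiptAt > acc.lastReceiptAt ? receiptAt : acc.lastReceiptAt
  }
}
```

Applying a newer diff and then an older one leaves the accumulator with the full total and the newest timestamp, which is the property being claimed above.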
I reviewed this again later and don’t think it’s a problem, especially since `receiptAt` is generated server-side when we add the diff entry to the table.
rfc/refactor-space-diff-table.md (Outdated)
> * Keep `space-diff` entries for N months using TTL
> * Archive older diffs to S3 (TBD)
>
> **Considerations**
My only major concern here is the potential race condition if two diffs for the same space read the current total at the same time. We could mitigate this by having a single queue reader process the diffs. It should go pretty fast, and uploads aren't THAT high frequency, so a single queue reader processing diffs and updating the accumulator should be plenty. This would solve race conditions pretty conclusively, I think.
> my only major concern here is the potential race condition if two diffs for the same space read the current total at the same time
For incrementing? You'd use an update command with ADD which would consistently increment.
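For reference, the kind of update being described might look like the following sketch, which builds the parameters for a DynamoDB `UpdateItem` call. The table name, key layout, and attribute names are assumptions for illustration (the RFC discusses a `provider#space#YYYY-MM` key; a real key would likely include the provider).

```typescript
// Sketch of an atomic DynamoDB increment via an ADD update expression.
// Table, key, and attribute names are illustrative assumptions.
// The returned object is the input for an AWS SDK UpdateItem call.
function usageIncrementParams (space: string, month: string, deltaBytes: number) {
  return {
    TableName: 'space-usage-month',              // hypothetical table name
    Key: { pk: { S: `${space}#${month}` } },     // e.g. "did:key:…#2025-02"
    // ADD is applied server-side, so concurrent writers never lose updates;
    // a missing numeric attribute is treated as 0, so no read is needed first.
    UpdateExpression: 'ADD usageBytes :delta',
    ExpressionAttributeValues: { ':delta': { N: String(deltaBytes) } }
  }
}
```

Because the increment happens inside DynamoDB rather than as a read-modify-write in the lambda, two concurrent diffs for the same space both land correctly.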
I am unsure how this works for the transition from one month to the next though...
hannahhoward left a comment
Left one comment with some thoughts and things to consider
rfc/refactor-space-diff-table.md (Outdated)
> - Alternative when strict ordering is infeasible:
>   - Use time-bucketed diffs (hour/day): persist per-bucket, order-independent aggregates (e.g. Σdelta and Σ(delta × (bucketEnd − receiptAt))). At billing time, iterate buckets in chronological order to compute exact monthly usage; no event sorting is required.
>   - Maintain a size-only monthly state (track `lastSize` and `lastChangeAt`) to accelerate the space usage report. Note: this does NOT remove the need to iterate diffs for the billing run.
>
> To avoid potential race conditions when two diffs for the same space read the current total at the same time, one option is to process diffs through a queue. This would also help preserve the correct ordering of diffs.
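To make the bucketed alternative concrete, here is a small sketch (all names invented for illustration) of the two per-bucket aggregates. Both are plain sums, so diffs can be folded in any order; at billing time, usage within a bucket is `sizeAtBucketStart × bucketLength + sumDeltaMs`, and the next bucket's starting size is `sizeAtBucketStart + sumDelta`.

```typescript
// Illustrative only: per-bucket, order-independent aggregates for usage billing.
// For each bucket keep Σdelta and Σ(delta × (bucketEnd − receiptAt)).
interface Bucket {
  sumDelta: number   // net bytes added during the bucket
  sumDeltaMs: number // Σ delta × (bucketEnd − receiptAt), in byte·ms
}

function foldDiff (b: Bucket, deltaBytes: number, receiptAtMs: number, bucketEndMs: number): Bucket {
  return {
    sumDelta: b.sumDelta + deltaBytes,
    // Each delta contributes for the remainder of the bucket after it lands.
    sumDeltaMs: b.sumDeltaMs + deltaBytes * (bucketEndMs - receiptAtMs)
  }
}
```

Since both fields are commutative sums, folding the same diffs in any order yields identical aggregates, which is exactly why this variant tolerates out-of-order delivery.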
@alanshaw can confirm, and you should chat briefly with him about this. My understanding is:
- The existing diffs are written in batches of 25 from a Kinesis stream with enhanced fan-out. Enhanced fan-out means up to 20 invocations of 25 can run in parallel against the lambda that writes space diffs.
- While you could apply a FIFO SQS queue as a post-processing step to maintain this running total, note that part of the point of the heavy batching and parallelism on the Kinesis stream, at least to my understanding, is to make sure the lambda doesn't fall behind.

I think it may make sense to pursue a kind of daily auto-compaction instead, so that you process in aggregate once a day. Lambda invocations have a non-zero cost per invocation.
Second, please make sure usage/report and account-usage/report retain the ability to pass a time range; this is necessary for Forge.
My understanding is that the scenario you’re describing only applies to store/add and store/remove receipts. Blob protocol receipts are written through a different path, via the Blob Registry register and deregister flows.
Since we’re planning to deprecate the Store Protocol, this shouldn’t be an issue. Can you confirm, @alanshaw ?
Yes in the blob protocol the blob registry writes to the space diff table - the ucan stream is not used for this any more. My original idea was to remove it post store protocol removal.
The space snapshot table consolidates space diffs, could you use the existing code/infra (with small modification) to calculate a snapshot on a daily/weekly basis for spaces so that when you want to know current usage you don't have to look so far back?
rfc/refactor-space-diff-table.md (Outdated)
> ### Fix for problem 2: Usage calculation timeouts
>
> Introduce a new table (e.g. `space-usage-month`) keyed by `provider#space#YYYY-MM` that is updated atomically on each diff write, making billing reads **O(1)**.
This is basically the space metrics table - could you use that instead?
So yes, we could combine the space-metrics and snapshots with more frequent runs. However, we would still need to query the full list of customers and spaces every time to iterate through the diffs. This would reduce the time spent calculating usage and billing, but the proposed new table would consolidate currently dispersed data in a single place (snapshots, usage, and size) while also providing near real-time visibility into each space. This feels like a better approach for scalability to me, but I'd like to know if you have a different thought, since you have a better view of the system. @travis might also have good input here.

@alanshaw I have a question about space-metrics: is it still actively used? I only see writes to it and no reads; does it still make sense to keep it around?
Can you speak to "updated atomically on each diff write" - how is this applied? Is it lambda triggered on an insert event to the space diff table?
How do we transition from one month to the next, retaining the current total, so that concurrent updates succeed and are all applied as expected?
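One possible answer (purely a sketch; names, schema, and the approach itself are assumptions, not a decided design): seed the new month's item exactly once with a conditional put carrying over the prior month's closing size, then let all writers use atomic ADDs as usual. Concurrent seeders fail the condition harmlessly and just retry their increment.

```typescript
// Hypothetical month-rollover sketch: create the new month's item once,
// carrying over last month's closing size, guarded by a condition so that
// only the first writer seeds it. Names/schema are illustrative assumptions.
// The returned object is the input for an AWS SDK PutItem call.
function seedMonthParams (space: string, month: string, carriedOverBytes: number) {
  return {
    TableName: 'space-usage-month',
    Item: {
      pk: { S: `${space}#${month}` },
      usageBytes: { N: String(carriedOverBytes) } // closing size of prior month
    },
    // Only the first writer succeeds; others get ConditionalCheckFailedException
    // and can proceed straight to their atomic ADD on the now-existing item.
    ConditionExpression: 'attribute_not_exists(pk)'
  }
}
```

With this pattern, concurrent updates at the month boundary all land: either on the freshly seeded item or, after a failed seed, via the normal ADD path.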
> but the proposed new table would consolidate currently dispersed data into a single place (snapshots, usage, and size)
Yes, but at the expense of another dynamo table (storage, writes) and lambda invocation per space diff insert. It's just more infra to run, maintain and (potentially) go wrong, whereas upping the cron frequency for calculating snapshots basically gets you everything you need...unless I'm missing something?
> while also providing near real-time visibility into each space
We already have this though...right? Latest snapshot + diffs since is the real time.
> Latest snapshot + diffs since is the real time.
yea I think the issue here is that not all spaces can calculate this in a reasonable amount of time - if there are 200k diffs to add to a space snapshot it generally just doesn't work
…solutions for space-diff duplication
Per storacha/RFC#78 (related issue storacha/project-tracking#307), this PR implements the short-term solution: adding a **GSI on `cause`** to the existing `space-diff` table. It then uses that GSI to check whether a diff with the same `cause` already exists before writing a new one; if it does, we just skip the write. This approach:

* Helps prevent new duplicates without changing the table schema
* Is quick to roll out with minimal risk
* Doesn’t require a migration or any dual-write setup

**Limitation:** this is a best-effort guard. It adds a read-before-write step and doesn’t fully guarantee uniqueness (there’s still a small chance of a race condition).
Per [RFC#78](storacha/RFC#78), this PR updates the billing process to run daily, enabling more frequent snapshot generation. This should help reduce usage report timeouts and allow more regular reporting to Stripe.
This RFC proposes a refactor of the `space-diff` table to eliminate space usage calculation timeouts, prevent duplicate diffs, and improve the efficiency of billing usage calculations.