
Conversation

@vanessavmac (Collaborator) commented Jan 20, 2026

Summary

If a pipeline successfully processes an image but returns no detections, the image is given a "null" detection (a detection with no bounding box), so processed images can be distinguished from unprocessed ones.
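
For context, a minimal sketch of what an optional bbox could look like at the schema level, assuming the pipeline schemas are Pydantic models; the real DetectionResponse in ami/ml/schemas.py has more fields (e.g., algorithm), and BoundingBox here is a hypothetical stand-in:

```python
import datetime
from pydantic import BaseModel


class BoundingBox(BaseModel):
    # Hypothetical stand-in for the real bbox type
    x1: float
    y1: float
    x2: float
    y2: float


class DetectionResponse(BaseModel):
    source_image_id: str
    bbox: BoundingBox | None = None  # None means image processed, nothing detected
    timestamp: datetime.datetime


# A "null" detection: the image was processed but produced no detections
null_detection = DetectionResponse(
    source_image_id="capture-123",
    bbox=None,
    timestamp=datetime.datetime.now(datetime.timezone.utc),
)
```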

List of Changes

  • Add null detections test
  • Update save_results to create null detections for images that were processed but returned no detections
  • Display timeline of processed images
  • Display updated collection stats

Related Issues

Closes #484

How to Test the Changes

docker compose run --rm django python manage.py test -k test_image_with_null_detection --failfast --pdb

Screenshots

[Screenshot 2026-02-11 at 11:19:35 AM]
[Screenshot 2026-02-11 at 11:24:10 AM]

Deployment Notes

Include instructions if this PR requires specific steps for its deployment (database migrations, config changes, etc.)

Checklist

  • I have tested these changes appropriately.
  • I have added and/or modified relevant tests.
  • I have updated relevant documentation or comments.
  • I have verified that this PR follows the project's coding standards.
  • Any dependent changes have already been merged to main.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added image processing status tracking and metrics for collections
    • New "Total Processed Captures" column in the collections table
    • Processing status indicator displayed in activity timeline visualizations
  • Bug Fixes

    • Enhanced detection data validation by excluding invalid detections from counts

netlify bot commented Jan 20, 2026

Deploy Preview for antenna-preview ready!

🔨 Latest commit: 3f06833
🔍 Latest deploy log: https://app.netlify.com/projects/antenna-preview/deploys/698cadaa6c42db0007d31988
😎 Deploy Preview: https://deploy-preview-1093--antenna-preview.netlify.app
Lighthouse
1 path audited
Performance: 73 (🟢 up 7 from production)
Accessibility: 89 (🟢 up 9 from production)
Best Practices: 92 (🔴 down 8 from production)
SEO: 100 (🟢 up 8 from production)
PWA: 80 (no change from production)


netlify bot commented Jan 20, 2026

Deploy Preview for antenna-ssec ready!

🔨 Latest commit: 3f06833
🔍 Latest deploy log: https://app.netlify.com/projects/antenna-ssec/deploys/698cadaa94adb00008f4fc67
😎 Deploy Preview: https://deploy-preview-1093--antenna-ssec.netlify.app



coderabbitai bot commented Jan 20, 2026

📝 Walkthrough

This PR adds comprehensive tracking and display of which source images have been processed by ML pipelines. Changes span backend annotations, API serialization, ML pipeline logic enhancements, and frontend UI additions to expose processing status across collections, timelines, and activity visualizations.

Changes

  • Models and Admin (ami/main/models.py, ami/main/admin.py): Added queryset method with_was_processed() to annotate images with detection existence; added a get_was_processed() method to SourceImage; extended SourceImageCollectionQuerySet with with_source_images_processed_count() and improved with_source_images_with_detections_count() to exclude invalid bboxes; updated the admin list display to show processed status (a sketch of these annotations follows this list).
  • API Layer (ami/main/api/serializers.py, ami/main/api/views.py): Exposed source_images_processed_count and was_processed fields in serializers; updated the SourceImage timeline queryset to use with_was_processed() and propagate processing status to interval data; extended the SourceImageCollectionViewSet queryset with the processed count; narrowed DetectionViewSet to exclude detections with invalid/null bounding boxes.
  • ML Pipeline (ami/ml/models/pipeline.py, ami/ml/schemas.py, ami/ml/tests.py): Made the bbox field optional in the DetectionRequest/DetectionResponse schemas; enhanced save_results() to create null detections for images missing detections; added test coverage for null detection handling and algorithm filtering on processed state.
  • Frontend Data Models (ui/src/data-services/models/collection.ts, ui/src/data-services/models/timeline-tick.ts): Added numImagesProcessed and numImagesProccessed getters to the collection model; added a was_processed field and wasProcessed getter to the TimelineTick model.
  • Frontend UI (ui/src/pages/project/collections/collection-columns.tsx, ui/src/pages/project/collections/collections.tsx, ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx, ui/src/utils/language.ts): Added a table column for total processed captures with percentage display; added a column visibility toggle; introduced a "Was processed" data series to the activity plot visualization; added a translation key for the processed captures label.
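
To make the backend annotations above concrete, here is a minimal Django sketch, assuming a Detection model with a source_image foreign key and an images relation on collections; method names follow the walkthrough, and the bodies are illustrative rather than the actual implementation:

```python
from django.db import models
from django.db.models import Count, Exists, OuterRef, Q


class SourceImageQuerySet(models.QuerySet):
    def with_was_processed(self):
        # An image counts as processed if any Detection row exists for it,
        # including null detections (bbox=None)
        return self.annotate(
            was_processed=Exists(
                Detection.objects.filter(source_image=OuterRef("pk"))  # assumed FK name
            )
        )


class SourceImageCollectionQuerySet(models.QuerySet):
    def with_source_images_processed_count(self):
        # Count images with at least one detection row of any kind
        return self.annotate(
            source_images_processed_count=Count(
                "images",
                filter=Q(images__detections__isnull=False),
                distinct=True,
            )
        )
```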

Sequence Diagram

sequenceDiagram
    participant Pipeline as ML Pipeline
    participant Backend as Backend Model<br/>(save_results)
    participant DB as Database<br/>(Annotations)
    participant API as API Layer<br/>(Serializers)
    participant Frontend as Frontend<br/>(UI Display)

    Pipeline->>Backend: return results (images + detections)
    note over Backend: For images without detections,<br/>create null Detection<br/>(bbox=None)
    Backend->>DB: save detections and<br/>annotate with_was_processed()
    DB->>DB: query with Exists(Detection)<br/>to mark was_processed=True
    
    Frontend->>API: request collection/timeline data
    API->>DB: execute queryset with<br/>with_was_processed() &<br/>with_source_images_processed_count()
    DB->>API: return annotated results
    API->>API: serialize was_processed &<br/>processed count fields
    API->>Frontend: return JSON (processed status)
    
    Frontend->>Frontend: render table columns<br/>& activity plot with<br/>processing status visualization
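The "create null Detection" step in the diagram could look roughly like the following, a sketch that reuses the DetectionResponse shape quoted in the review comments below; the function boundary and variable names are assumptions:

```python
def add_null_detections(source_images, detections, detector_algorithm_reference, now):
    """Append a null detection (bbox=None) for every source image the
    pipeline returned no detections for, so it still counts as processed."""
    detected_ids = {d.source_image_id for d in detections}
    null_detections_to_add = []
    for source_img in source_images:
        if source_img.id not in detected_ids:
            null_detections_to_add.append(
                DetectionResponse(
                    source_image_id=source_img.id,
                    bbox=None,  # processed, but nothing found
                    algorithm=detector_algorithm_reference,
                    timestamp=now(),  # timezone-aware now(), per the review note below
                )
            )
    return detections + null_detections_to_add
```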

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested Reviewers

  • mihow
  • annavik

Poem

🐰 A rabbit hops through fields of data flow,
Marking which images the pipelines know,
From null detections to counts so bright,
Processing status now shines in sight! ✨

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 55.56%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)
  • Title check: The title "Implement null detections" is concise and directly related to the main functionality added in the changeset, which handles null detections when pipelines process images without returning detections.
  • Description check: The description includes most required sections: Summary, List of Changes, Related Issues, How to Test, Screenshots, and Checklist. However, the Deployment Notes section is incomplete and only two checklist items are addressed.
  • Linked Issues check: The PR implements tracking of processed images (with null detection support), adds test coverage, displays processing status in timelines, and shows collection statistics. UI updates include collection columns, timeline visualization, and processing status indicators, addressing the core objectives of #484.
  • Out of Scope Changes check: All changes align with issue #484 objectives: null detection handling in the pipeline, model methods for tracking processed state, serializer/view updates for the API, and UI components for displaying processing status. No unrelated changes detected.




@vanessavmac vanessavmac changed the title Add null detections test; update save_results Implement null detections Feb 11, 2026
@vanessavmac vanessavmac marked this pull request as ready for review February 11, 2026 16:27

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
ami/main/api/views.py (1)

378-439: ⚠️ Potential issue | 🟠 Major

Fix was_processed for empty intervals (and avoid stale values).

Line 439 uses image after the loop; for intervals with no captures (or events with zero captures) this either reuses the prior interval’s image or can crash. Accumulate during the loop and keep False when no captures.

🐛 Suggested fix
@@
             while image_index < len(source_images) and source_images[image_index]["timestamp"] <= interval_end:
                 image = source_images[image_index]
@@
                 if image["detections_count"] >= max(interval_data["detection_counts"]):
                     interval_data["top_capture"] = SourceImage(pk=image["id"])
+                interval_data["was_processed"] = interval_data["was_processed"] or image["was_processed"]
                 image_index += 1
@@
-            interval_data["was_processed"] = image["was_processed"]
+            # was_processed already accumulated; remain False when no captures
🧹 Nitpick comments (3)
ui/src/data-services/models/timeline-tick.ts (1)

35-37: Consider adding a nullish fallback for wasProcessed, consistent with other getters.

Other getters in this class (e.g., numDetections, numCaptures) use ?? 0 / ?? false to guard against missing server data. If was_processed is absent from the API response (e.g., during rollout or with an older API version), this will return undefined instead of false.

Suggested fix
   get wasProcessed(): boolean {
-    return this._timelineTick.was_processed
+    return this._timelineTick.was_processed ?? false
   }
ami/main/admin.py (1)

256-269: Reduce admin list N+1 by annotating was_processed.

get_was_processed in list_display can trigger a query per row. Consider annotating in get_queryset and reading the annotation in an admin method.

♻️ Suggested adjustment
@@
     def get_queryset(self, request: HttpRequest) -> QuerySet[Any]:
-        return super().get_queryset(request).select_related("event", "deployment", "deployment__data_source")
+        return (
+            super()
+            .get_queryset(request)
+            .select_related("event", "deployment", "deployment__data_source")
+            .with_was_processed()
+        )
+
+    @admin.display(boolean=True, ordering="was_processed", description="Was processed")
+    def get_was_processed(self, obj: SourceImage) -> bool:
+        return getattr(obj, "was_processed", obj.get_was_processed())
ami/main/models.py (1)

3725-3734: Same bbox=None / bbox__isnull=True redundancy as in get_detections_count.

~models.Q(images__detections__bbox__isnull=True) and ~models.Q(images__detections__bbox=None) produce identical SQL. One can be removed for clarity.

Simplify the filter
             source_images_with_detections_count=models.Count(
                 "images",
                 filter=(
                     models.Q(images__detections__isnull=False)
                     & ~models.Q(images__detections__bbox__isnull=True)
-                    & ~models.Q(images__detections__bbox=None)
                     & ~models.Q(images__detections__bbox=[])
                 ),
                 distinct=True,
             )

Comment on lines +1839 to +1841
# Detections count excludes detections without bounding boxes
# Detections with null bounding boxes are valid and indicates the image was successfully processed
return self.detections.exclude(Q(bbox__isnull=True) | Q(bbox=None) | Q(bbox=[])).count()

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n 'update_detection_counts' --type=py -B2 -A10

Repository: RolnickLab/antenna

Length of output: 2227


🏁 Script executed:

cat -n ami/main/models.py | sed -n '1835,1850p'

Repository: RolnickLab/antenna

Length of output: 856


🏁 Script executed:

cat -n ami/main/models.py | sed -n '2000,2035p'

Repository: RolnickLab/antenna

Length of output: 1634


Remove redundant Q(bbox=None) — it's equivalent to Q(bbox__isnull=True) in Django.

In Django ORM, Q(bbox=None) and Q(bbox__isnull=True) both compile to IS NULL, making one redundant.

More importantly, update_detection_counts() (line 2015) counts all detections without any bbox filtering, while get_detections_count() explicitly excludes null and empty bboxes. This creates an inconsistency: the cached detections_count field will diverge from get_detections_count() for images with null-bbox detections. Since update_detection_counts() is actively used (views.py:381), apply the same bbox exclusion filter to keep counts consistent.

Proposed fix
     def get_detections_count(self) -> int:
         # Detections count excludes detections without bounding boxes
         # Detections with null bounding boxes are valid and indicates the image was successfully processed
-        return self.detections.exclude(Q(bbox__isnull=True) | Q(bbox=None) | Q(bbox=[])).count()
+        return self.detections.exclude(Q(bbox__isnull=True) | Q(bbox=[])).count()

And update update_detection_counts() at line 2016 to filter detections similarly before counting.
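
A sketch of what that consistency fix could look like, assuming update_detection_counts() caches onto the detections_count field mentioned above (the actual method signature may differ):

```python
from django.db.models import Q


def update_detection_counts(self) -> int:
    # Use the same exclusion as get_detections_count() so the cached value
    # never diverges for images whose only detections are null (bbox=None)
    count = self.detections.exclude(Q(bbox__isnull=True) | Q(bbox=[])).count()
    self.detections_count = count
    self.save(update_fields=["detections_count"])
    return count
```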

🤖 Prompt for AI Agents
In `@ami/main/models.py` around lines 1839 - 1841, Remove the redundant
Q(bbox=None) in get_detections_count() (keep Q(bbox__isnull=True) and the
empty-list check Q(bbox=[])) and then make update_detection_counts() use the
same exclusion filter as get_detections_count() so the cached detections_count
is computed consistently; specifically, update the counting logic in
update_detection_counts() to apply .exclude(Q(bbox__isnull=True) | Q(bbox=[]))
on self.detections before calling .count() so both methods return the same
value.

Comment on lines +882 to +888
detector_algorithm_reference = None
for known_algorithm in algorithms_known.values():
    if known_algorithm.task_type == AlgorithmTaskType.DETECTION:
        detector_algorithm_reference = AlgorithmReference(
            name=known_algorithm.name, key=known_algorithm.key
        )


⚠️ Potential issue | 🟡 Minor

Make detector selection deterministic (or create one null detection per detector).

If a pipeline has multiple detection algorithms, the loop currently overwrites the reference, which can make algorithm-specific processed checks unreliable. Consider selecting deterministically and warning, or emitting one null detection per detector.

🛠️ Example deterministic selection
-            detector_algorithm_reference = None
-            for known_algorithm in algorithms_known.values():
-                if known_algorithm.task_type == AlgorithmTaskType.DETECTION:
-                    detector_algorithm_reference = AlgorithmReference(
-                        name=known_algorithm.name, key=known_algorithm.key
-                    )
+            detector_algorithms = [
+                algo for algo in algorithms_known.values() if algo.task_type == AlgorithmTaskType.DETECTION
+            ]
+            if len(detector_algorithms) > 1:
+                job_logger.warning(
+                    "Multiple detection algorithms found; using the first for null detections."
+                )
+            detector_algorithm_reference = (
+                AlgorithmReference(name=detector_algorithms[0].name, key=detector_algorithms[0].key)
+                if detector_algorithms
+                else None
+            )
🤖 Prompt for AI Agents
In `@ami/ml/models/pipeline.py` around lines 882 - 888, The loop that assigns
detector_algorithm_reference over algorithms_known currently overwrites when
multiple detection algorithms exist; update the logic in the section that
iterates algorithms_known to make selection deterministic or to emit a null
detection per detector: either pick one deterministic algorithm (e.g., lowest
key or sorted by name) and log/warn when multiple AlgorithmTaskType.DETECTION
entries are present, or instead create an AlgorithmReference for each
known_algorithm with task_type DETECTION (rather than a single
detector_algorithm_reference) so downstream checks use per-detector null
detections; modify the code around detector_algorithm_reference,
algorithms_known and AlgorithmReference accordingly and add a warning using the
existing logger when multiple detectors are found.

Comment on lines +896 to +902
null_detections_to_add.append(
    DetectionResponse(
        source_image_id=source_img.id,
        bbox=None,
        algorithm=detector_algorithm_reference,
        timestamp=datetime.datetime.now(),
    )

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's check the file exists and read the relevant section
wc -l ami/ml/models/pipeline.py

Repository: RolnickLab/antenna

Length of output: 92


🏁 Script executed:

# Read the section around lines 896-902
sed -n '890,910p' ami/ml/models/pipeline.py

Repository: RolnickLab/antenna

Length of output: 818


🏁 Script executed:

# Check the imports at the top of the file to see what's available
head -50 ami/ml/models/pipeline.py

Repository: RolnickLab/antenna

Length of output: 1336


🏁 Script executed:

# Search for all timestamp usages in the pipeline file
rg 'timestamp' ami/ml/models/pipeline.py -n -A 2 -B 2

Repository: RolnickLab/antenna

Length of output: 1108


Use a timezone-aware timestamp for null detections.

datetime.datetime.now() is naïve; the Django timezone-aware now() function is already imported and used elsewhere in this pipeline (e.g., line 679). Replace with now() for consistency.
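
For illustration, the difference between the two calls (assumes a Django project with USE_TZ=True):

```python
import datetime
from django.utils.timezone import now

naive = datetime.datetime.now()  # tzinfo is None; saving it to a DateTimeField triggers a naive-datetime warning
aware = now()  # timezone-aware (UTC by default), consistent with timestamps elsewhere in the pipeline
```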

🛠️ Suggested fix
-                    timestamp=datetime.datetime.now(),
+                    timestamp=now(),
🤖 Prompt for AI Agents
In `@ami/ml/models/pipeline.py` around lines 896 - 902, The appended null
detection currently uses a naive timestamp via datetime.datetime.now(); update
the DetectionResponse creation inside the null_detections_to_add.append call to
use the timezone-aware now() function (already imported and used elsewhere)
instead of datetime.datetime.now(), ensuring the timestamp field for
DetectionResponse is timezone-aware and consistent with other parts of the
pipeline.

Comment on lines +93 to +101
get numImagesProccessed(): string {
  const numProcessed = this.numImagesProcessed ?? 0
  const pct =
    this.numImages && this.numImages > 0
      ? (numProcessed / this.numImages) * 100
      : 0

  return `${numProcessed.toLocaleString()} (${pct.toFixed(0)}%)`
}

⚠️ Potential issue | 🟡 Minor

Typo in getter name: numImagesProccessed → numImagesProcessedLabel.

The getter has a misspelling ("Proccessed" with double 'c'). Also, for consistency with the adjacent numImagesWithDetectionsLabel getter (line 82), this should follow the same *Label naming convention since it returns a formatted display string, not a raw number.

Suggested fix
-  get numImagesProccessed(): string {
+  get numImagesProcessedLabel(): string {
     const numProcessed = this.numImagesProcessed ?? 0
     const pct =
       this.numImages && this.numImages > 0
         ? (numProcessed / this.numImages) * 100
         : 0
 
     return `${numProcessed.toLocaleString()} (${pct.toFixed(0)}%)`
   }

Note: This rename must also be applied in collection-columns.tsx (line 114) where item.numImagesProccessed is referenced.

🤖 Prompt for AI Agents
In `@ui/src/data-services/models/collection.ts` around lines 93 - 101, Rename the
getter numImagesProccessed to numImagesProcessedLabel in the Collection model so
the name is spelled correctly and follows the "*Label" convention; update the
getter implementation (currently computing numProcessed, pct and returning the
formatted string) to the new name, then find and replace all usages (e.g., any
occurrences of item.numImagesProccessed such as in collection-columns.tsx) to
item.numImagesProcessedLabel, and update any type declarations/exports or tests
that reference the old symbol to the new one to ensure no unresolved references
remain.

Comment on lines +71 to +96
{
  x: timeline.map(
    (timelineTick) => new Date(timelineTick.startDate)
  ),
  y: timeline.map((timelineTick) =>
    timelineTick.numCaptures > 0
      ? timelineTick.wasProcessed
        ? 0
        : 1
      : 0
  ),
  customdata: timeline.map((timelineTick) =>
    timelineTick.numCaptures > 0
      ? timelineTick.wasProcessed
        ? 'Yes'
        : 'No'
      : 'N/A'
  ),
  hovertemplate: 'Was processed: %{customdata}<extra></extra>',
  fill: 'tozeroy',
  type: 'scatter',
  mode: 'lines',
  line: { color: lineColorProcessed, width: 1 },
  name: 'Was processed',
  yaxis: 'y2',
},

⚠️ Potential issue | 🟠 Major

Inverted y-values will be nearly invisible when detection counts are high.

The "Was processed" trace uses y values of 0 or 1, but shares yaxis2 whose range is [0, max(detectionsMaxCount, 1)]. When detectionsMaxCount is large (e.g., 50), the red fill will be scaled to 1/50th of the axis height — effectively invisible.

If the intent is to highlight unprocessed intervals, consider either:

  1. Giving this trace its own y-axis (e.g., y3) with a fixed [0, 1] range, or
  2. Scaling the y-value to detectionsMaxCount so it fills the axis.

Additionally, the y-mapping is inverted (wasProcessed → 0, not processed → 1), which combined with the series name "Was processed" may confuse maintainers. A name like "Not processed" or a comment explaining the inversion would improve clarity.
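
A conceptual sketch of option 1, written in Python Plotly rather than the app's react-plotly (the layout model is the same): the status trace gets its own hidden axis fixed to [0, 1] and is renamed "Not processed" to match the inverted mapping. All data below is made up for illustration:

```python
import plotly.graph_objects as go

dates = ["2026-01-20 21:00", "2026-01-20 22:00", "2026-01-20 23:00"]
detections = [48, 50, 3]
was_processed = [True, False, True]

fig = go.Figure()
fig.add_trace(go.Scatter(x=dates, y=detections, name="Detections", yaxis="y2"))
fig.add_trace(
    go.Scatter(
        x=dates,
        y=[0 if p else 1 for p in was_processed],  # 1 highlights unprocessed intervals
        name="Not processed",
        fill="tozeroy",
        yaxis="y3",
    )
)
fig.update_layout(
    yaxis2=dict(title="Detections", overlaying="y", side="right"),
    # Fixed [0, 1] range keeps the status band visible regardless of detection counts
    yaxis3=dict(range=[0, 1], overlaying="y", visible=False, fixedrange=True),
)
fig.show()
```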

🤖 Prompt for AI Agents
In `@ui/src/pages/session-details/playback/activity-plot/activity-plot.tsx` around
lines 71 - 96, The "Was processed" trace currently maps wasProcessed→0/1 onto
yaxis 'y2' (shared with detectionsMaxCount) so it becomes invisible when
detectionsMaxCount is large and the mapping is inverted; fix by either (A)
giving this trace its own y-axis (e.g., set yaxis: 'y3' and add a yaxis config
with fixed range [0,1]) and keep y as wasProcessed ? 0 : 1 but rename the trace
to "Not processed" (or add a clarifying comment about the inversion), or (B)
scale the y values to the detections axis by mapping y: timeline.map(t =>
t.numCaptures>0 ? (t.wasProcessed ? 0 : detectionsMaxCount) : 0) and keep the
name "Not processed" (or invert the boolean mapping and rename to "Was
processed" accordingly); update hover/customdata text to match the chosen
semantics.


Development: successfully merging this pull request may close issue #484, "Make it clear what images have not been processed".
