
Tighten screws by making all checks harder + fix any issues#27

Merged
spoorcc merged 17 commits into main from tighten-screws
Mar 6, 2026

Conversation

@spoorcc

@spoorcc spoorcc commented Mar 2, 2026

Summary by CodeRabbit

  • New Features

    • Publish now validates outputs, minifies catalog JSON, and copies site assets for a cleaner build.
    • Serve uses packaged site resources and improved logging.
    • Catalog write returns added/updated counts per run.
  • Improvements

    • Richer metadata extraction and safer path resolution produce higher-quality, more robust catalog entries.
    • README and recipe parsing simplified and more reliable.
    • Public API surface clarified and consolidated for easier imports.
  • Documentation

    • Expanded developer workflow, tooling configs, and streamlined CI/lint commands.
  • Tests

    • Tests updated to reflect API renames and stylistic changes.

@coderabbitai

coderabbitai bot commented Mar 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Promotes several private helpers to public APIs, centralizes parsing/enrichment logic, modernizes I/O to Path.open and importlib.resources, adds publish validation/asset-minification, tightens recipe/path parsing safety, expands tooling/CI configs, and updates tests to use the new public names.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **API exposure & source entrypoints**<br>`dfetch_hub/catalog/sources/__init__.py`, `dfetch_hub/catalog/sources/*` | Renames private symbols to public (`_RAW_BRANCHES` → `RAW_BRANCHES`, `_raw_url` → `raw_url`, `_fetch_raw` → `fetch_raw`) and updates imports, call sites, and tests. |
| **CLib parsing & enrichment**<br>`dfetch_hub/catalog/sources/clib.py` | Adds package.json enrichment helpers, keyword flattening, canonical URL resolution, and a wiki-line parsing abstraction; refactors package parsing to use these helpers. |
| **Conan parsing safety & URL helpers**<br>`dfetch_hub/catalog/sources/conan.py` | Adds an in-string scanner, safe conanfile resolution (path-traversal protection), recipe-dir scanning, and `_build_conan_urls`; refactors recipe discovery and URL construction. |
| **Readme description extraction**<br>`dfetch_hub/catalog/sources/readme.py` | Consolidates skip regexes, centralizes content-line detection in `_is_content_line`, and refactors `_extract_description`. |
| **vcpkg parser cleanup**<br>`dfetch_hub/catalog/sources/vcpkg.py` | Adds an `_extract_str_field` helper, uses `Path.open`, and standardizes homepage/name/license extraction with fallbacks. |
| **Catalog writer refactor & API change**<br>`dfetch_hub/catalog/writer.py` | Introduces `_process_manifest` and `_write_detail_json` plus modular merge helpers, standardizes I/O on `Path.open`, and changes `write_catalog(...)` to return `(added, updated)`. |
| **Publish command: validation, assets, minification**<br>`dfetch_hub/commands/publish.py` | Switches to `importlib.resources` for assets; adds `_validate_output_dir`, `_copy_assets`, and `_minify_catalog`; refactors the publish flow to validate, copy assets (rewriting index paths), and minify JSON output. |
| **Serve & update: resources, logging, parsing helper**<br>`dfetch_hub/commands/serve.py`, `dfetch_hub/commands/update.py` | Uses `importlib.resources` for default paths, adds a module logger, passes the resource dir to the HTTP handler, and adds `_parse_entry_dirs` to consolidate manifest parsing and skipped-count reporting. |
| **Config & I/O modernization**<br>`dfetch_hub/config.py`, `dfetch_hub/catalog/cloner.py`, `dfetch_hub/cli.py` | Replaces built-in `open()` with `Path.open()`, adjusts some raised error types, and performs small formatting-only refactors. |
| **Tooling / CI / linters & docs**<br>`.pre-commit-config.yaml`, `pyproject.toml`, `.github/workflows/test.yml`, `CLAUDE.md` | Adjusts xenon thresholds, expands pyproject tool configs (black/isort/ruff/mypy/pyright/pylint/pytest/doc8), simplifies CI commands, and updates developer guidance and pre-commit settings. |
| **Tests & formatting**<br>`test/*` | Updates tests to patch/import the new public names (`fetch_raw`, `raw_url`) and applies widespread formatting/line-wrapping changes; most edits are cosmetic. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant CLI
  participant Config as ConfigLoader
  participant Publish as PublishCmd
  participant Resources as importlib.resources
  participant FS as Filesystem

  CLI->>Config: load_config_with_data_dir()
  Config-->>CLI: data_dir, config
  CLI->>Publish: _cmd_publish(parsed)
  Publish->>Publish: _validate_output_dir(data_dir, output)
  Publish->>Resources: files("dfetch_hub") -> site assets
  Resources-->>Publish: site_dir
  Publish->>FS: remove/create output dir
  Publish->>FS: _copy_assets(site_dir, output) (rewrite index.html paths)
  Publish->>FS: _minify_catalog(data_dir, output/data) (read/write via Path.open)
  Publish-->>CLI: log completed (count)
```
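The resource lookup and minification steps in that flow can be sketched as follows. This is illustrative only: `minify_json` is not the PR's actual `_minify_catalog`, and the stdlib `email` package stands in for the packaged `dfetch_hub` assets so the sketch runs anywhere:

```python
import importlib.resources
import json
import tempfile
from pathlib import Path


def minify_json(src: Path, dst: Path) -> None:
    """Rewrite *src* as compact JSON at *dst* (illustrative sketch)."""
    with src.open(encoding="utf-8") as fh:
        data = json.load(fh)
    # separators=(",", ":") drops the whitespace json.dump inserts by default.
    with dst.open("w", encoding="utf-8") as fh:
        json.dump(data, fh, separators=(",", ":"))


# Locating packaged assets without hard-coded paths; the PR would use
# files("dfetch_hub") -- the stdlib "email" package just makes this runnable.
site_dir = importlib.resources.files("email")

# Demo: minify a pretty-printed catalog file.
src = Path(tempfile.mkdtemp()) / "catalog.json"
src.write_text(json.dumps({"name": "zlib", "versions": [1, 2]}, indent=2), encoding="utf-8")
dst = src.with_suffix(".min.json")
minify_json(src, dst)
```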

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Update-gui #10 — Introduced the helper symbols that are being renamed/exposed here (raw/_raw_url/_fetch_raw overlap).
  • Add dev project to main #11 — Direct overlap: earlier PR exposing/renaming _RAW_BRANCHES/_raw_url/_fetch_raw.
  • Devtools #20 — Related tooling/CI configuration changes (pre-commit, xenon thresholds, pyproject adjustments).

Poem

🐰 I hopped from private shade to public sun,

helpers shining, tests now run.
Paths and JSON snug in my paw,
assets copied — index fixed with awe.
A tidy burrow: catalog complete.

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main objective: increasing code-quality check strictness (xenon complexity limits, linter/formatter configurations) and fixing issues across the codebase.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 90.98%, above the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (1)
dfetch_hub/catalog/sources/vcpkg.py (1)

99-99: Use idiomatic Path instance method call.

Path.open(manifest_path, ...) calls the unbound method, which works but is unconventional. The standard pattern is to call the method on the instance directly.

♻️ Proposed fix

```diff
-        with Path.open(manifest_path, encoding="utf-8") as fh:
+        with manifest_path.open(encoding="utf-8") as fh:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dfetch_hub/catalog/sources/vcpkg.py` at line 99, The call uses the unbound
method Path.open(manifest_path, encoding="utf-8") which is unconventional;
change it to call the instance method on manifest_path (i.e.,
manifest_path.open(encoding="utf-8")), ensuring manifest_path is a pathlib.Path
instance before use (in the same scope where Path.open is currently invoked).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CLAUDE.md`:
- Around line 92-96: The fenced code block showing the shell commands (the
triple-backtick block containing "source .venv/bin/activate", "pre-commit run
--all-files", and "pytest") needs a language tag to satisfy markdownlint MD040;
update the opening fence from ``` to ```bash so the block is recognized as
Bash/sh syntax (i.e., change the existing ``` to ```bash for that code block).

In `@dfetch_hub/catalog/sources/readme.py`:
- Around lines 24-25: _SKIP_RE is tested against the raw line, so indented headings and badges (e.g. "  # Title" or "  [![...") slip through. Where README lines are checked for description content, normalize the line first (e.g. lstrip() the leading whitespace) before testing with _SKIP_RE, so that indented headings, badges, and blank lines are skipped correctly; keep every other behavior the same.
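The fix described above amounts to something like this sketch (the pattern shown is illustrative; the module's real `_SKIP_RE` may differ):

```python
import re

# Illustrative skip pattern: headings, badge images, and blank lines.
_SKIP_RE = re.compile(r"^(#|\[!\[|!\[|$)")


def is_content_line(line: str) -> bool:
    """Return True when *line* looks like prose rather than a heading, badge, or blank."""
    # lstrip() first so indented headings and badges are skipped too.
    return not _SKIP_RE.match(line.lstrip())
```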

In `@dfetch_hub/catalog/writer.py`:
- Line 86: In _save_json, replace the class-level call Path.open(path, "w", encoding="utf-8") with the instance method path.open("w", encoding="utf-8") so the file is opened on the provided Path object idiomatically.
- Line 79: The call Path.open(path, encoding="utf-8") invokes the method on the class with the Path instance bound as self; it works, but it is unconventional. Change it to the instance call path.open(encoding="utf-8"), keeping the surrounding with-statement in the writer unchanged (ensure path is a pathlib.Path).
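For precision: calling `Path.open` on the class with a `Path` instance as the first argument simply binds that instance as `self`, so both forms open the same file. The change is stylistic, not a crash fix. A runnable demonstration:

```python
import tempfile
from pathlib import Path

path = Path(tempfile.mkdtemp()) / "example.txt"
path.write_text("hello", encoding="utf-8")

# Class-level call: `path` is bound as `self`, mode defaults to "r".
with Path.open(path, encoding="utf-8") as fh:
    via_class = fh.read()

# Idiomatic instance call, as the review suggests.
with path.open(encoding="utf-8") as fh:
    via_instance = fh.read()

assert via_class == via_instance == "hello"
```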

In `@dfetch_hub/config.py`:
- Around line 93-99: Update the docstring for the public function load_config in
dfetch_hub/config.py to document the new TypeError cases: add entries under the
Raises: section stating that TypeError is raised when the `[settings]` section
is not a TOML table (i.e., when data.get("settings") is not a dict) and when
`[[source]]` is not an array of TOML tables (i.e., when raw_sources_obj from
data.get("source", []) is not a list or any element is not a dict); reference
load_config and mention the specific conditions (non-dict settings and
non-list/invalid-list elements for raw_sources_obj) so callers know when
TypeError will be thrown.

---

Nitpick comments:
In `@dfetch_hub/catalog/sources/vcpkg.py`:
- Line 99: The call uses the unbound method Path.open(manifest_path,
encoding="utf-8") which is unconventional; change it to call the instance method
on manifest_path (i.e., manifest_path.open(encoding="utf-8")), ensuring
manifest_path is a pathlib.Path instance before use (in the same scope where
Path.open is currently invoked).

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a7949da and afe01b0.

📒 Files selected for processing (16)
  • .pre-commit-config.yaml
  • CLAUDE.md
  • dfetch_hub/catalog/sources/__init__.py
  • dfetch_hub/catalog/sources/clib.py
  • dfetch_hub/catalog/sources/conan.py
  • dfetch_hub/catalog/sources/readme.py
  • dfetch_hub/catalog/sources/vcpkg.py
  • dfetch_hub/catalog/writer.py
  • dfetch_hub/commands/__init__.py
  • dfetch_hub/commands/publish.py
  • dfetch_hub/commands/serve.py
  • dfetch_hub/commands/update.py
  • dfetch_hub/config.py
  • pyproject.toml
  • test/test_catalog_clib.py
  • test/test_catalog_sources.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dfetch_hub/catalog/sources/__init__.py`:
- Around line 59-65: Add Google-style docstrings for the two public functions
raw_url and fetch_raw: replace the one-line docstrings with multi-line
docstrings that include Args: (document owner, repo, branch, filename for
raw_url; url for fetch_raw), Returns: (str for raw_url; str | None for
fetch_raw) and Raises: where relevant (e.g., document that fetch_raw may raise
network/requests exceptions or return None on failure). Ensure the docstrings
follow Google style and are placed immediately above the function definitions
for raw_url and fetch_raw.

In `@dfetch_hub/catalog/sources/clib.py`:
- Around line 123-125: The description assignment in _enrich_from_pkg_json
currently prefers the wiki tagline over package.json description; change it so
package.json description takes precedence: use pkg_json.get("description")
first, then fallback to tagline, then empty string, and ensure you still coerce
to str or None consistently with how package_name and license_val are handled
(refer to variables description, tagline, pkg_json and function
_enrich_from_pkg_json).

In `@dfetch_hub/commands/__init__.py`:
- Around line 45-46: Replace the noisy traceback log call
_logger.exception("Config file '%s' not found", config_path) with a
non-exception-level log (e.g., _logger.warning("Config file '%s' not found",
config_path) or _logger.debug(...)) in the same branch that exits via
sys.exit(1); this keeps the exit behavior (sys.exit(1)) but uses
_logger.warning/_logger.debug instead of _logger.exception to avoid emitting an
unnecessary traceback for an expected CLI/config condition.

In `@dfetch_hub/commands/publish.py`:
- Around line 31-34: The calls use the Path.open class method incorrectly;
change the two usages of Path.open(src, ...) and Path.open(dst, ...) to instance
calls on the Path objects (use src.open(... ) and dst.open(...)) so the instance
method on the Path variables src and dst is invoked; update the two with-blocks
in publish.py that currently call Path.open to use src.open and dst.open
respectively.
- Around lines 107-110: The code calls shutil.rmtree(output) whenever output.exists(), which raises if output is a file or symlink. Check output.is_dir() and only call shutil.rmtree(output) for directories; otherwise call output.unlink(), which handles files and symlinks. This keeps logger.print_info_line("publish", ...) accurate and lets the subsequent output.mkdir(parents=True) run under the controlled error flow.
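That conditional removal could look roughly like the sketch below; `remove_output` is a hypothetical name, not the PR's code:

```python
import shutil
import tempfile
from pathlib import Path


def remove_output(output: Path) -> None:
    """Remove *output* whether it is a directory, file, or symlink (sketch only)."""
    if output.is_dir() and not output.is_symlink():
        # shutil.rmtree only accepts real directories.
        shutil.rmtree(output)
    elif output.exists() or output.is_symlink():
        # unlink covers regular files and (even dangling) symlinks.
        output.unlink()


base = Path(tempfile.mkdtemp())
stale = base / "public"
stale.write_text("a stray file where the output directory should go", encoding="utf-8")
remove_output(stale)
stale.mkdir(parents=True)  # now safe to recreate as a directory
```

The `is_symlink()` guard matters because `is_dir()` follows symlinks, and `rmtree` refuses a symlink to a directory.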

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between afe01b0 and 1c5e05a.

📒 Files selected for processing (24)
  • .github/workflows/test.yml
  • .pre-commit-config.yaml
  • CLAUDE.md
  • dfetch_hub/catalog/cloner.py
  • dfetch_hub/catalog/sources/__init__.py
  • dfetch_hub/catalog/sources/clib.py
  • dfetch_hub/catalog/sources/conan.py
  • dfetch_hub/catalog/sources/readme.py
  • dfetch_hub/catalog/sources/vcpkg.py
  • dfetch_hub/catalog/writer.py
  • dfetch_hub/cli.py
  • dfetch_hub/commands/__init__.py
  • dfetch_hub/commands/publish.py
  • dfetch_hub/commands/serve.py
  • dfetch_hub/commands/update.py
  • dfetch_hub/config.py
  • pyproject.toml
  • test/test_catalog_clib.py
  • test/test_catalog_cloner.py
  • test/test_catalog_conan.py
  • test/test_catalog_readme.py
  • test/test_catalog_sources.py
  • test/test_catalog_vcpkg.py
  • test/test_catalog_writer.py
✅ Files skipped from review due to trivial changes (6)
  • test/test_catalog_conan.py
  • test/test_catalog_cloner.py
  • test/test_catalog_vcpkg.py
  • test/test_catalog_writer.py
  • test/test_catalog_readme.py
  • dfetch_hub/catalog/cloner.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • test/test_catalog_sources.py
  • dfetch_hub/config.py
  • .pre-commit-config.yaml
  • pyproject.toml
  • test/test_catalog_clib.py
  • dfetch_hub/catalog/sources/readme.py

Comment on lines +59 to 65
```diff
 def raw_url(owner: str, repo: str, branch: str, filename: str) -> str:
     """Build a raw.githubusercontent.com URL for a specific file."""
     return f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{filename}"


-def _fetch_raw(url: str) -> str | None:
+def fetch_raw(url: str) -> str | None:
     """GET *url* and return the response body as a string, or ``None`` on failure."""
```


⚠️ Potential issue | 🟡 Minor

Add Google-style docstrings for the newly public helpers.

raw_url and fetch_raw are now public APIs but still have private-helper style one-line docstrings. Please add Args: and Returns: sections.

Suggested docstring update

```diff
 def raw_url(owner: str, repo: str, branch: str, filename: str) -> str:
-    """Build a raw.githubusercontent.com URL for a specific file."""
+    """Build a raw.githubusercontent.com URL for a specific file.
+
+    Args:
+        owner: Repository owner or organization.
+        repo: Repository name.
+        branch: Branch name to fetch from.
+        filename: Filename within the repository root.
+
+    Returns:
+        Raw GitHub content URL for the requested file.
+    """
     return f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{filename}"


 def fetch_raw(url: str) -> str | None:
-    """GET *url* and return the response body as a string, or ``None`` on failure."""
+    """Fetch a URL and return the response body.
+
+    Args:
+        url: Raw content URL to fetch.
+
+    Returns:
+        Response body as text, or ``None`` on network/read failure.
+    """
     try:
```

As per coding guidelines: Every public module, class, and function must have a Google-style docstring with Args:, Returns:, and Raises: sections where relevant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dfetch_hub/catalog/sources/__init__.py` around lines 59 - 65, Add
Google-style docstrings for the two public functions raw_url and fetch_raw:
replace the one-line docstrings with multi-line docstrings that include Args:
(document owner, repo, branch, filename for raw_url; url for fetch_raw),
Returns: (str for raw_url; str | None for fetch_raw) and Raises: where relevant
(e.g., document that fetch_raw may raise network/requests exceptions or return
None on failure). Ensure the docstrings follow Google style and are placed
immediately above the function definitions for raw_url and fetch_raw.

Comment on lines +45 to 46
```python
_logger.exception("Config file '%s' not found", config_path)
sys.exit(1)
```


⚠️ Potential issue | 🟡 Minor

Use warning/debug logging for the missing-config path.

This branch handles an expected CLI/config condition; logging a traceback via .exception(...) is overly noisy for this case.

Suggested change

```diff
     try:
         config = load_config(config_path)
     except FileNotFoundError:
-        _logger.exception("Config file '%s' not found", config_path)
+        _logger.warning("Config file '%s' not found", config_path)
         sys.exit(1)
```

As per coding guidelines: Log unexpected conditions using logger.warning(...) or logger.debug(...) from dfetch.log.get_logger(__name__).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-_logger.exception("Config file '%s' not found", config_path)
+_logger.warning("Config file '%s' not found", config_path)
 sys.exit(1)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dfetch_hub/commands/__init__.py` around lines 45 - 46, Replace the noisy
traceback log call _logger.exception("Config file '%s' not found", config_path)
with a non-exception-level log (e.g., _logger.warning("Config file '%s' not
found", config_path) or _logger.debug(...)) in the same branch that exits via
sys.exit(1); this keeps the exit behavior (sys.exit(1)) but uses
_logger.warning/_logger.debug instead of _logger.exception to avoid emitting an
unnecessary traceback for an expected CLI/config condition.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyproject.toml`:
- Around line 140-143: The exclude glob for doc8 is incorrect: replace the
current "*.dfetch_hub.egg-info/*" entry in the exclude list with a pattern that
matches the actual egg-info directory (e.g., "dfetch_hub.egg-info/*" or
"dfetch_hub.egg-info/**" to also exclude nested files). Update the exclude array
entry so the 'exclude' setting contains the corrected glob string to prevent
egg-info metadata from being linted.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c5e05a and 11cef99.

📒 Files selected for processing (1)
  • pyproject.toml


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
pyproject.toml (1)

123-128: Identify which file categories actually need exceptions before moving to per-file ignores.

Per-file-ignores is indeed the recommended pattern in Ruff (v0.15.x) for rules that legitimately conflict with specific files or file categories (e.g., tests, migrations, __init__.py). However, the suggestion to move TRY003, TRY300, and RET505 is incomplete without identifying which files or categories actually violate these rules.

Before refactoring, determine:

  • Do tests legitimately need TRY003 / TRY300 / RET505 exceptions?
  • Are there legacy files or specific modules where these patterns are unavoidable?
  • Or do these rules conflict with your team's conventions throughout the codebase?

If specific file categories are identified, use explicit glob patterns:

Example per-file-ignores patterns

```toml
[tool.ruff.lint.per-file-ignores]
"tests/**/*.py" = ["TRY003", "TRY300", "RET505"]
"**/migrations/**/*.py" = ["TRY003", "TRY300"]
```

If no specific categories are the culprit, these rules may legitimately remain global ignores (indicating they're not a fit for your project's style).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` around lines 123 - 128, Audit the codebase for occurrences of
the Ruff rules TRY003, TRY300, and RET505 (e.g., by running ruff selects or a
repo-wide search for patterns that trigger those codes) to determine which file
categories legitimately need exceptions; if they are concentrated in tests,
migrations, generated/legacy modules, or __init__.py files, add per-file ignores
under [tool.ruff.lint.per-file-ignores] using explicit globs (for example
mapping "tests/**/*.py" or "**/migrations/**/*.py" to the offending codes)
instead of keeping those rules globally ignored, otherwise leave the codes in
the global ignore list; ensure you reference the exact rule codes (TRY003,
TRY300, RET505) and the config section name [tool.ruff.lint.per-file-ignores]
when making changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyproject.toml`:
- Around line 152-160: The pyright config currently sets typeCheckingMode =
"strict" but disables key diagnostics by setting reportUnknownVariableType,
reportUnknownArgumentType, and reportUnknownMemberType to "none"; change those
three keys to "warning" so strict mode still raises alerts for
unknown/partially-unknown types while preserving strict enforcement (edit the
entries for reportUnknownVariableType, reportUnknownArgumentType, and
reportUnknownMemberType in the pyproject.toml where typeCheckingMode is
defined).

---

Nitpick comments:
In `@pyproject.toml`:
- Around line 123-128: Audit the codebase for occurrences of the Ruff rules
TRY003, TRY300, and RET505 (e.g., by running ruff selects or a repo-wide search
for patterns that trigger those codes) to determine which file categories
legitimately need exceptions; if they are concentrated in tests, migrations,
generated/legacy modules, or __init__.py files, add per-file ignores under
[tool.ruff.lint.per-file-ignores] using explicit globs (for example mapping
"tests/**/*.py" or "**/migrations/**/*.py" to the offending codes) instead of
keeping those rules globally ignored, otherwise leave the codes in the global
ignore list; ensure you reference the exact rule codes (TRY003, TRY300, RET505)
and the config section name [tool.ruff.lint.per-file-ignores] when making
changes.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 11cef99 and ed9b50b.

📒 Files selected for processing (1)
  • pyproject.toml

spoorcc and others added 3 commits March 4, 2026 17:32
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@spoorcc spoorcc force-pushed the tighten-screws branch 2 times, most recently from 96d5e4d to 1fed494 on March 4, 2026 at 20:52
@spoorcc spoorcc merged commit 70a10b5 into main Mar 6, 2026
2 checks passed