
Conversation

@srnnkls (Contributor) commented Oct 26, 2025

coherent cache

  • Recursively freeze collections via _freeze_collections (dict → MappingProxyType, list → tuple)
  • Treat MappingProxyType as a collection; apply freezing in build/from_obj/refresh
  • Add a version-stamped LRU attr/item cache; bump the version on refresh and evict stale entries
  • Make refresh copy-on-write and thread-safe; clear the cache on swap
  • Introduce __slots__ to reduce the memory footprint
  • Use obj metadata in __repr__ to avoid __getattr__ recursion
  • Minor formatting and doc improvements

@gemini-code-assist

Summary of Changes

Hello @srnnkls, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the SerializationProxy by enforcing true immutability of serialized data snapshots and introducing a robust, thread-safe caching mechanism. These changes prevent unintended mutations, improve cache coherence across refresh operations, and optimize memory usage, leading to more predictable and efficient proxy behavior.

Highlights

  • Immutable Snapshots: Introduced _freeze_collections to recursively convert mutable collections (like dicts to MappingProxyType and lists to tuple) into immutable forms, ensuring that serialized data snapshots cannot be accidentally modified.
  • Coherent Version-Stamped Cache: Implemented a version-stamped LRU cache for attributes and items within SerializationProxy. Cache entries are now stored with a version, and stale entries are automatically evicted if the version changes after a refresh operation.
  • Thread-Safe Refresh Mechanism: The refresh() method now employs a copy-on-write strategy with version stamping, making it thread-safe. It computes new serialized data outside a lock, then atomically swaps the data, increments a version counter, and clears the cache.
  • Memory Optimization: Added __slots__ to the SerializationProxy class to reduce its memory footprint, improving efficiency when many proxy instances exist.
  • Improved __repr__: Modified the __repr__ implementation for Dataclass and BaseModel instances to read metadata directly from the underlying object (self.obj), preventing potential __getattr__ recursion.
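The freezing and copy-on-write refresh described in the highlights above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the class name and attribute names other than _freeze_collections are hypothetical.

```python
import threading
from types import MappingProxyType

def _freeze_collections(value):
    """Recursively freeze mutable collections:
    dict -> MappingProxyType, list -> tuple; other values pass through."""
    if isinstance(value, dict):
        return MappingProxyType({k: _freeze_collections(v) for k, v in value.items()})
    if isinstance(value, list):
        return tuple(_freeze_collections(v) for v in value)
    return value

class SerializationProxySketch:
    """Illustrative stand-in for SerializationProxy (hypothetical name)."""

    __slots__ = ("_serialized", "_version", "_lock", "_cache")

    def __init__(self, data):
        self._serialized = _freeze_collections(data)
        self._version = 0
        self._lock = threading.Lock()
        self._cache = {}  # cache_key -> (version, cached_value)

    def refresh(self, data):
        # Copy-on-write: build the new frozen snapshot outside the lock,
        # then atomically swap it in, bump the version, and clear the cache
        # so stale entries can never be served.
        new_snapshot = _freeze_collections(data)
        with self._lock:
            self._serialized = new_snapshot
            self._version += 1
            self._cache.clear()
```

Any attempt to mutate a frozen snapshot (e.g. assigning into the MappingProxyType) raises TypeError, which is what makes the snapshots safe to hand out.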
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request significantly improves the SerializationProxy by enforcing snapshot immutability and introducing a coherent, thread-safe caching mechanism. The recursive freezing of collections and the version-stamped LRU cache are excellent additions for robustness in concurrent scenarios. The code is well-structured, and the inclusion of __slots__ and the fix for __repr__ recursion are thoughtful optimizations. My review includes a couple of suggestions to enhance maintainability by removing a now-redundant check and refactoring duplicated cache-handling logic.

Comment on lines 426 to 427
if isinstance(ser, dict) and not isinstance(ser, MappingProxyType):
ser = MappingProxyType(ser)


Severity: medium

This check appears to be redundant. The self.serialized attribute is consistently processed by _freeze_collections during proxy creation and refresh, which wraps any dict in a MappingProxyType. Therefore, isinstance(ser, dict) should always be False here for a mapping type. Removing this defensive code would make the implementation cleaner and more reliant on the immutability guarantee provided elsewhere.
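For context on why the branch is dead: MappingProxyType is not a dict subclass, so once _freeze_collections has wrapped the snapshot, isinstance(ser, dict) is False for any frozen mapping.

```python
from types import MappingProxyType

proxy = MappingProxyType({"k": 1})

# A MappingProxyType still behaves like a read-only mapping...
print(proxy["k"])                 # → 1
# ...but it is not a dict subclass, so the isinstance check never matches.
print(isinstance(proxy, dict))    # → False
```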

Comment on lines 508 to +517

  with self._attr_cache_lock:
      if cache_key in self._attr_cache:
-         self._attr_cache.move_to_end(cache_key)
-         return self._attr_cache[cache_key]
+         # Hoist to local for micro-optimization
+         cached_ver, cached_proxy = self._attr_cache[cache_key]
+         if cached_ver == ver:
+             # Cache hit with matching version
+             self._attr_cache.move_to_end(cache_key)
+             return cached_proxy
+         else:
+             # Stale entry from pre-refresh, evict it
+             self._attr_cache.pop(cache_key, None)


Severity: medium

The logic for checking and retrieving from the versioned cache is duplicated here and in __getattr__ (lines 460-470). Similarly, the logic for adding a new entry to the cache is duplicated (here at lines 550-555 and in __getattr__ at lines 491-496). To improve maintainability and reduce code duplication, consider extracting this cache management logic into private helper methods (e.g., _get_cached_proxy and _cache_proxy).
