
Python: Add Foundry Memory Context Provider #3943

Draft
Copilot wants to merge 4 commits into main from copilot/add-foundry-memory-provider

Conversation

Contributor

Copilot AI commented Feb 15, 2026

Motivation and Context

Agents need persistent semantic memory capabilities using Azure AI Foundry Memory Store. Existing context providers (Mem0, Redis) don't integrate with Foundry's memory APIs.

Description

Implements FoundryMemoryProvider as a BaseContextProvider that wraps Azure AI Projects SDK memory operations.

Architecture:

  • before_run: Retrieves static memories (user profiles) once per session, then searches contextual memories per turn using search_memories API
  • after_run: Fires begin_update_memories with configurable delay (default 300s), chains operations via previous_update_id
  • Error handling: Non-critical failures logged, agent continues without memory
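The after_run chaining described above can be sketched with a small stand-in client (FakeMemoryClient, FakeUpdatePoller, and SketchProvider below are illustrative stubs, not the real Azure AI Projects SDK surface): each update passes the previous operation's id so the service can sequence them, and failures are swallowed so the agent keeps running.

```python
import asyncio


class FakeUpdatePoller:
    """Stand-in for the SDK's update poller; only carries an update id."""

    def __init__(self, update_id: str):
        self.update_id = update_id


class FakeMemoryClient:
    """Illustrative stub: records the previous_update_id of each call."""

    def __init__(self):
        self.calls = []

    async def begin_update_memories(self, messages, previous_update_id=None, update_delay=300):
        self.calls.append(previous_update_id)
        return FakeUpdatePoller(update_id=f"upd-{len(self.calls)}")


class SketchProvider:
    """Minimal after_run sketch: each update chains off the previous one."""

    def __init__(self, client, update_delay=300):
        self._client = client
        self._update_delay = update_delay
        self._previous_update_id = None

    async def after_run(self, messages):
        try:
            poller = await self._client.begin_update_memories(
                messages,
                previous_update_id=self._previous_update_id,
                update_delay=self._update_delay,
            )
            self._previous_update_id = poller.update_id
        except Exception:
            # Non-critical: log and continue; the agent runs without memory.
            pass


async def main():
    client = FakeMemoryClient()
    provider = SketchProvider(client, update_delay=60)
    await provider.after_run([{"role": "user", "content": "hi"}])
    await provider.after_run([{"role": "user", "content": "again"}])
    return client.calls


calls = asyncio.run(main())
print(calls)  # first call has no predecessor; the second chains off upd-1
```

The stub shows only the chaining contract; the real provider delegates to the project client's memory-store operations.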

Key implementation details:

  • Uses ItemParam for message formatting, compatible with Foundry memory extraction pipeline
  • Session state flag prevents repeated static memory retrieval on failures (via finally block)
  • Memory updates debounced server-side by update_delay parameter
  • Async context manager delegates to AIProjectClient lifecycle
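The session-state flag from the list above can be illustrated in isolation (FlagSketch and failing_fetch are hypothetical names; the real provider's flag and retrieval call may differ): setting the flag in a finally block means a failed static-memory fetch is attempted only once per session rather than retried on every turn.

```python
import asyncio


class FlagSketch:
    """Sketch: static memories are fetched at most once per session,
    even when the fetch itself raises."""

    def __init__(self, fetch):
        self._fetch = fetch  # coroutine that retrieves static memories
        self._static_memories = []
        self._static_retrieved = False

    async def before_run(self):
        if not self._static_retrieved:
            try:
                self._static_memories = await self._fetch()
            except Exception:
                # Non-critical: continue without static memories.
                pass
            finally:
                # Set in finally so a failing fetch is not retried each turn.
                self._static_retrieved = True


async def main():
    attempts = 0

    async def failing_fetch():
        nonlocal attempts
        attempts += 1
        raise RuntimeError("service unavailable")

    provider = FlagSketch(failing_fetch)
    await provider.before_run()
    await provider.before_run()  # flag prevents a second attempt
    return attempts


attempts = asyncio.run(main())
print(attempts)  # 1
```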

Usage:

```python
from agent_framework.azure import FoundryMemoryProvider
from azure.ai.projects.aio import AIProjectClient

memory_provider = FoundryMemoryProvider(
    source_id="foundry_memory",
    project_client=project_client,
    memory_store_name="my_store",
    scope="user_123",
    update_delay=60,
)

agent = Agent(
    client=chat_client,
    context_providers=[memory_provider],
)
```

Files:

  • Core: _foundry_memory_provider.py (234 lines)
  • Tests: 21 test methods covering init, hooks, edge cases
  • Sample: azure_ai_foundry_memory.py demonstrating memory store creation/cleanup

Contribution Checklist

  • The code builds clean without any errors or warnings
  • The PR follows the Contribution Guidelines
  • All unit tests pass, and I have added new tests where possible
  • Is this a breaking change? No
Original prompt

This section details the original issue you should resolve

<issue_description>We should add a Context Provider based on Foundry Memory. This can be done using the foundry project client sdk: https://learn.microsoft.com/en-us/python/api/overview/azure/ai-projects-readme?view=azure-python-preview (.memory_store operations), a snippet that can be used to get started (incomplete, not validated and uses verbs from the SK version of context providers):

```python
class FoundryMemoryProvider(ContextProvider):
    def __init__(
        self,
        client: AzureChatClient,
        memory_store_id: str,
        scope: str,
    ) -> None:
        super().__init__()
        self.client = client
        self.memory_store_id = memory_store_id
        self.scope = scope
        self._previous_update_id: str | None = None
        self._previous_search_id: str | None = None
        self._static_memories: list = []
        self._new_search_messages: list[dict[str, str]] = []
        self._new_update_messages: list[dict[str, str]] = []

    async def thread_created(self, thread_id: str | None = None) -> None:
        # Retrieve static user profile memories once per thread
        self._static_memories = self.client.memory_stores.search_memories(
            self.memory_store_id, self.scope
        )

    async def messages_adding(
        self, thread_id: str | None, new_messages: ChatMessage | Sequence[ChatMessage]
    ) -> None:
        messages_list = [new_messages] if isinstance(new_messages, ChatMessage) else list(new_messages)
        messages: list[dict[str, str]] = [
            {"role": message.role.value, "content": message.text}
            for message in messages_list
            if message.role.value in {"user", "assistant", "system"}
            and message.text
            and message.text.strip()
        ]
        self._new_search_messages.extend(messages)
        self._new_update_messages.extend(messages)

        # Update memories with delay (fire and forget), chaining off the previous update
        update_poller = self.client.memory_stores.update_memories(
            self.memory_store_id,
            self.scope,
            self._new_update_messages,
            self._previous_update_id,
            update_delay="5m",
        )
        self._previous_update_id = update_poller.update_id
        self._new_update_messages = []

    async def model_invoking(
        self, messages: ChatMessage | MutableSequence[ChatMessage]
    ) -> Context:
        # Awkward that thread_id is part of messages_adding, but not model_invoking
        search_response = self.client.memory_stores.search_memories(
            self.memory_store_id,
            self.scope,
            self._new_search_messages,
            self._previous_search_id,
        )
        self._previous_search_id = search_response.search_id
        self._new_search_messages = []

        context_memories = list(self._static_memories) + list(search_response.memories)
        line_separated_memories = "\n".join(memory.content for memory in context_memories)
        context_prompt = "## Memories\nConsider the following memories when answering user questions:"
        content = TextContent(f"{context_prompt}\n{line_separated_memories}")
        return Context(contents=[content])
```
</issue_description>



Copilot AI and others added 3 commits February 15, 2026 20:13
Co-authored-by: eavanvalkenburg <13749212+eavanvalkenburg@users.noreply.github.com>
Co-authored-by: eavanvalkenburg <13749212+eavanvalkenburg@users.noreply.github.com>
Co-authored-by: eavanvalkenburg <13749212+eavanvalkenburg@users.noreply.github.com>
Copilot AI changed the title [WIP] Add Foundry Memory Context Provider Python: Add Foundry Memory Context Provider Feb 15, 2026
