
Added support for custom LLM provider URLs for OpenAI and Anthropic, …#9731

Open
dpage wants to merge 1 commit into pgadmin-org:master from dpage:configurable_llm_url

Conversation

@dpage (Contributor) commented Mar 11, 2026

…allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM. #9703

  • Add configurable API URL fields for OpenAI and Anthropic providers
  • Make API keys optional when using custom URLs (for local providers)
  • Auto-clear model dropdown when provider settings change
  • Refresh button uses current unsaved form values
  • Update documentation and release notes
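The "API keys optional when using custom URLs" behaviour from the list above can be sketched as follows. This is an illustrative Python snippet, not pgAdmin's actual code; build_request_headers is a hypothetical helper name:

```python
# Illustrative sketch only -- not pgAdmin's actual implementation.
def build_request_headers(api_key=None, api_url=None):
    """Build HTTP headers for an LLM provider request.

    A key is required for the official endpoints, but local
    OpenAI-compatible providers (LM Studio, vLLM, ...) often accept
    anonymous requests, so a custom api_url alone is sufficient.
    """
    if not api_key and not api_url:
        raise ValueError('API key or custom API URL must be configured')
    headers = {'Content-Type': 'application/json'}
    if api_key:
        # Only send the Authorization header when a key is configured.
        headers['Authorization'] = f'Bearer {api_key}'
    return headers
```

With a local provider URL configured, the header set simply omits Authorization rather than failing validation.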

Summary by CodeRabbit

  • New Features

    • Added support for custom API URLs for OpenAI and Anthropic providers
    • Enabled compatibility with OpenAI-compatible providers (LM Studio, EXO, LiteLLM, local inference servers)
    • Enabled compatibility with Anthropic-compatible API providers
    • Improved error handling for URL connectivity and configuration issues
  • Documentation

    • Updated documentation for custom API URL configuration and compatible provider usage

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai bot commented Mar 11, 2026

Walkthrough

Adds support for custom API URLs for the Anthropic and OpenAI LLM providers, enabling use of compatible endpoints. The implementation includes new configuration constants, provider client constructors updated to accept an api_url parameter, enhanced model-fetching flows, and UI changes for preference dependency tracking.

Changes

  • Documentation Updates: docs/en_US/ai_tools.rst, docs/en_US/preferences.rst, docs/en_US/release_notes_9_14.rst
    Expanded documentation for the Anthropic and OpenAI sections to describe custom API URL support, provider compatibility (LM Studio, LiteLLM, local inference servers), and optional API key behavior with custom endpoints.
  • LLM Configuration: web/config.py
    Added new configuration constants ANTHROPIC_API_URL, OPENAI_API_URL, and MAX_LLM_TOOL_ITERATIONS for controlling custom LLM provider endpoints and tool iteration limits.
  • LLM Module & Utilities: web/pgadmin/llm/__init__.py, web/pgadmin/llm/utils.py
    Extended the LLM module with anthropic_api_url and openai_api_url settings; added helper functions get_anthropic_api_url() and get_openai_api_url() with preference/config fallback; updated model-fetch endpoints to accept and propagate an api_url parameter, with enhanced error handling for connectivity issues.
  • Provider Client Updates: web/pgadmin/llm/client.py, web/pgadmin/llm/providers/anthropic.py, web/pgadmin/llm/providers/openai.py
    Updated provider constructors to accept an optional api_url parameter; made the API key optional when using custom endpoints; added conditional header inclusion for API keys; implemented DEFAULT_API_BASE_URL constants and endpoint-specific availability validation logic.
  • UI Preference & Form Components: web/pgadmin/preferences/static/js/components/PreferencesHelper.jsx, web/pgadmin/static/js/SchemaView/MappedControl.jsx, web/pgadmin/static/js/components/FormComponents.jsx, web/pgadmin/static/js/components/SelectRefresh.jsx
    Introduced dependency-change event tracking via depChangeEmitter for preference field relationships; extended the SelectRefresh component with new public props (options/fieldOptions, optionsReloadBasis/fieldReloadBasis, onChange); added focus/blur event forwarding in InputText; propagated live unsaved dependency values through controlProps.
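The preference/config fallback described for get_openai_api_url() and get_anthropic_api_url() might look roughly like this. This is a minimal sketch with hypothetical argument names, not the actual pgAdmin implementation (which reads from its preferences store and config.py):

```python
# Illustrative sketch of the lookup order described above: per-user
# preference first, then the server-wide config.py constant, then the
# provider's default endpoint. Argument names are hypothetical.
DEFAULT_OPENAI_API_URL = 'https://api.openai.com/v1'

def get_openai_api_url(preference_value=None, config_value=None):
    """Return the effective OpenAI API base URL."""
    url = preference_value or config_value or DEFAULT_OPENAI_API_URL
    # Normalise so callers can append paths such as '/models'.
    return url.rstrip('/')
```

Stripping the trailing slash here keeps later endpoint construction (for example, base_url + '/models') from producing double slashes regardless of how the user typed the URL.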

Sequence Diagram

sequenceDiagram
    participant User as User
    participant Pref as Preferences UI
    participant Schema as SchemaView
    participant LLMModule as LLM Module
    participant Provider as Provider Client
    participant API as Custom API Endpoint

    User->>Pref: Set custom API URL
    Pref->>Pref: Emit depchange event
    Schema->>Schema: Detect dependency change
    Schema->>LLMModule: Trigger model refresh with api_url
    LLMModule->>LLMModule: Get api_url from preferences/config
    LLMModule->>Provider: Initialize with api_url
    Provider->>Provider: Build endpoint from api_url
    Provider->>API: Fetch models from custom endpoint
    API-->>Provider: Return models
    Provider-->>LLMModule: Return models
    LLMModule-->>Schema: Update model list
    Schema-->>Pref: Render updated models

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • Issue #9703: Directly addressed by these changes; implements the requested feature for configurable OpenAI and Anthropic API endpoints with support for compatible providers.

Possibly related PRs

  • PR #9472: Related through direct usage of api_url plumbing and config getters added in this PR to support custom provider URLs across LLM modules and model-fetch flows.

Suggested reviewers

  • akshay-joshi
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately reflects the main change: adding support for custom LLM provider URLs for OpenAI and Anthropic providers to enable use of compatible alternatives.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 86.21%, which is sufficient; the required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (5)
web/pgadmin/static/js/components/SelectRefresh.jsx (1)

182-188: Add missing PropTypes for new props.

The component now receives options, optionsReloadBasis, and onChange as props (destructured at line 64), but these are not declared in PropTypes.

🔧 Proposed fix
 SelectRefresh.propTypes = {
   required: PropTypes.bool,
   label: PropTypes.string,
   className: CustomPropTypes.className,
   helpMessage: PropTypes.string,
   testcid: PropTypes.string,
   controlProps: PropTypes.object,
+  options: PropTypes.oneOfType([PropTypes.array, PropTypes.func]),
+  optionsReloadBasis: PropTypes.any,
+  onChange: PropTypes.func,
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/pgadmin/static/js/components/SelectRefresh.jsx` around lines 182 - 188,
SelectRefresh's propTypes are missing declarations for the newly used props: add
PropTypes entries for options (array or arrayOf/object as appropriate),
optionsReloadBasis (string/number/oneOfType depending on usage) and onChange
(func) to the SelectRefresh.propTypes object so the destructured props at the
top (options, optionsReloadBasis, onChange) are validated; update the
SelectRefresh.propTypes block to include these three keys matching the types
used by the component.
web/pgadmin/llm/client.py (2)

149-153: Consider updating error message to reflect that custom URL is now an alternative.

The error message still says "Anthropic API key not configured" but now a custom API URL is also a valid configuration path. Consider updating to clarify both options:

💡 Suggested improvement
         if not api_key and not api_url:
             raise LLMClientError(LLMError(
-                message="Anthropic API key not configured",
+                message="Anthropic API key or custom API URL not configured",
                 provider="anthropic"
             ))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/pgadmin/llm/client.py` around lines 149 - 153, The error raised when both
api_key and api_url are missing uses a message "Anthropic API key not
configured" which is no longer accurate; update the LLMError message in the
block that raises LLMClientError (the check using api_key and api_url) to state
that either an Anthropic API key or a custom API URL must be provided (e.g.,
"Anthropic API key or custom API URL not configured") and keep
provider="anthropic" and the same exception types (LLMError, LLMClientError)
unchanged.

163-167: Same suggestion for OpenAI error message.

For consistency, update the OpenAI error message as well:

💡 Suggested improvement
         if not api_key and not api_url:
             raise LLMClientError(LLMError(
-                message="OpenAI API key not configured",
+                message="OpenAI API key or custom API URL not configured",
                 provider="openai"
             ))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/pgadmin/llm/client.py` around lines 163 - 167, The OpenAI error raised
when both api_key and api_url are missing should include more descriptive
details; modify the LLMClientError instantiation that wraps LLMError so the
message clearly states which configuration is missing (e.g., "OpenAI API key or
API URL not configured") and/or include the values of api_key and api_url
presence (without exposing secrets) for clarity; update the code that constructs
LLMError (the branch checking api_key and api_url) to produce the improved
message while keeping provider="openai" and raising
LLMClientError(LLMError(...)).
web/pgadmin/llm/__init__.py (2)

635-658: Consider validating URL scheme for security hardening.

The urllib.request.urlopen call accepts arbitrary URL schemes including file://. While this is user-configured and represents low risk, you could add scheme validation to restrict to http:// and https:// only.

Additionally, the exception at lines 656-658 should use raise ... from e to preserve the exception chain.

💡 Suggested improvement
 def _fetch_anthropic_models(api_key, api_url=''):
     ...
     base_url = (api_url or 'https://api.anthropic.com/v1').rstrip('/')
+    
+    # Validate URL scheme for security
+    if not base_url.startswith(('http://', 'https://')):
+        raise ValueError('API URL must use http:// or https:// scheme')
+    
     url = f'{base_url}/models'
     ...
     except urllib.error.URLError as e:
-        raise ConnectionError(
+        raise ConnectionError(
             f'Cannot connect to Anthropic API: {e.reason}'
-        )
+        ) from e
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/pgadmin/llm/__init__.py` around lines 635 - 658, Validate the constructed
base_url/url scheme before creating the Request and calling
urllib.request.urlopen: parse base_url (or url) and ensure the scheme is either
"http" or "https", and raise a ValueError if not, then proceed to build
urllib.request.Request and call urllib.request.urlopen with SSL_CONTEXT; also
update the exception handling in the except blocks that currently raise
ConnectionError or ValueError to use "raise ... from e" so the original
urllib.error.HTTPError/URLError (variable e) is preserved in the exception chain
(refer to base_url, url, urllib.request.Request, urllib.request.urlopen,
SSL_CONTEXT).
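Taken together, the reviewer's two suggestions (scheme validation via urllib.parse and exception chaining with raise ... from) could be sketched like this. validate_api_base_url and fetch_models are illustrative names, not pgAdmin's actual helpers, and SSL context handling is omitted for brevity:

```python
# Illustrative sketch combining the two suggestions: validate the URL
# scheme with urllib.parse before opening it, and preserve the original
# exception chain with 'raise ... from e'.
import json
import urllib.error
import urllib.parse
import urllib.request


def validate_api_base_url(base_url):
    """Reject non-HTTP(S) schemes such as file:// before any request."""
    scheme = urllib.parse.urlparse(base_url).scheme
    if scheme not in ('http', 'https'):
        raise ValueError(
            f'API URL must use http:// or https://, got {base_url!r}')
    return base_url.rstrip('/')


def fetch_models(base_url, timeout=10):
    """Fetch the model list from an OpenAI/Anthropic-style /models endpoint."""
    url = validate_api_base_url(base_url) + '/models'
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except urllib.error.URLError as e:
        # 'from e' keeps the original URLError in the traceback chain.
        raise ConnectionError(f'Cannot connect to API: {e.reason}') from e
```

Because urllib.error.HTTPError subclasses URLError, a single except clause covers both connection failures and HTTP error responses in this sketch; the real code may want to distinguish them.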

702-725: Same suggestions apply to OpenAI model fetching.

Apply the same URL scheme validation and exception chaining improvements to _fetch_openai_models.

💡 Suggested improvement
 def _fetch_openai_models(api_key, api_url=''):
     ...
     base_url = (api_url or 'https://api.openai.com/v1').rstrip('/')
+    
+    # Validate URL scheme for security
+    if not base_url.startswith(('http://', 'https://')):
+        raise ValueError('API URL must use http:// or https:// scheme')
+    
     url = f'{base_url}/models'
     ...
     except urllib.error.URLError as e:
-        raise ConnectionError(
+        raise ConnectionError(
             f'Cannot connect to OpenAI API: {e.reason}'
-        )
+        ) from e
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web/pgadmin/llm/__init__.py` around lines 702 - 725, The _fetch_openai_models
function currently builds base_url from api_url without validating the URL
scheme and re-raises new exceptions without chaining; update it to parse and
validate api_url's scheme (using urllib.parse.urlparse) and only allow 'https'
(or 'http' if you accept it) otherwise raise a ValueError referencing api_url,
and modify the exception handlers for urllib.error.HTTPError and
urllib.error.URLError to re-raise ConnectionError/ValueError using exception
chaining (raise ... from e) so the original error is preserved; reference
symbols: _fetch_openai_models, base_url, api_url, urllib.parse.urlparse,
urllib.error.HTTPError, urllib.error.URLError, SSL_CONTEXT.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c51fd4b0-8590-46be-9af7-2330e380a96e

📥 Commits

Reviewing files that changed from the base of the PR and between c8bd75c and 3124b49.

📒 Files selected for processing (13)
  • docs/en_US/ai_tools.rst
  • docs/en_US/preferences.rst
  • docs/en_US/release_notes_9_14.rst
  • web/config.py
  • web/pgadmin/llm/__init__.py
  • web/pgadmin/llm/client.py
  • web/pgadmin/llm/providers/anthropic.py
  • web/pgadmin/llm/providers/openai.py
  • web/pgadmin/llm/utils.py
  • web/pgadmin/preferences/static/js/components/PreferencesHelper.jsx
  • web/pgadmin/static/js/SchemaView/MappedControl.jsx
  • web/pgadmin/static/js/components/FormComponents.jsx
  • web/pgadmin/static/js/components/SelectRefresh.jsx

@ecerichter

Just to add my 2c: note that OpenAI-compatible API servers (for example, vLLM) may not require an API key.
