feat: make MaxRetries configurable in OpenAI provider (#271)

Merged
mtojek merged 3 commits into main from 115-configurable-max-retries on Apr 17, 2026

Conversation

@mtojek
Member

@mtojek mtojek commented Apr 17, 2026

Fixes: #115

Add MaxRetries *int to config.OpenAI. When set, option.WithMaxRetries is passed to the SDK client in both responses and chat completions interceptors. Nil preserves the SDK default (2 retries); 0 disables retries entirely.

Update TestClientAndConnectionError and TestUpstreamError to set MaxRetries=0, eliminating retry delays and speeding up these tests.
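The `*int` tri-state described above (nil keeps the SDK default of 2, 0 disables retries, any other value overrides) can be sketched as follows. This is illustrative only; `effectiveRetries` and `sdkDefaultRetries` are hypothetical names, not part of aibridge or the openai-go SDK:

```go
package main

import "fmt"

// sdkDefaultRetries mirrors the openai-go SDK's default of 2 retries.
const sdkDefaultRetries = 2

// effectiveRetries illustrates the tri-state semantics of MaxRetries *int:
// nil preserves the SDK default, 0 disables retries, any other value overrides.
func effectiveRetries(maxRetries *int) int {
	if maxRetries == nil {
		return sdkDefaultRetries
	}
	return *maxRetries
}

func main() {
	zero, five := 0, 5
	fmt.Println(effectiveRetries(nil))   // SDK default: 2
	fmt.Println(effectiveRetries(&zero)) // retries disabled: 0
	fmt.Println(effectiveRetries(&five)) // explicit override: 5
}
```

In the real interceptors, a non-nil value would translate into passing `option.WithMaxRetries` to the SDK client, while nil passes no option at all.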

@mtojek mtojek changed the title from "feat: make MaxRetries configurable in OpenAI provider (#115)" to "feat: make MaxRetries configurable in OpenAI provider" Apr 17, 2026
@mtojek mtojek marked this pull request as ready for review April 17, 2026 11:30
@mtojek mtojek requested a review from pawbana April 17, 2026 11:30
@pawbana
Contributor

pawbana commented Apr 17, 2026

It would be nice to have some setting for it (an env var or a flag), but that can be a follow-up.
I think either a new issue should be created or #115 left open.

When MaxRetries is not set in config, fall back to the OPENAI_MAX_RETRIES
environment variable. Invalid values are silently ignored, preserving the
SDK default of 2 retries.
@mtojek
Member Author

mtojek commented Apr 17, 2026

Let me know your thoughts @pawbana 👍

@mtojek mtojek requested a review from pawbana April 17, 2026 13:15
@pawbana
Contributor

pawbana commented Apr 17, 2026

LGTM, although I missed before that the messages interceptor is not covered.
I think it should be configurable for both. I think I only mentioned the OpenAI provider because I was tunneled on the responses interceptor at the time.

Add MaxRetries *int to config.Anthropic and config.Copilot, readable via
ANTHROPIC_MAX_RETRIES and COPILOT_MAX_RETRIES env vars respectively.
Pass option.WithMaxRetries in messages interceptor when set.
Copilot propagates MaxRetries to the config.OpenAI it builds for interceptors.
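The propagation described in the last line can be sketched as below. The struct and function names here are illustrative stand-ins for aibridge's actual config types; only the `MaxRetries` field and the pass-through behavior come from the commit message:

```go
package main

import "fmt"

// Hypothetical stand-ins for aibridge's config types. The MaxRetries field
// follows the commit message; everything else is illustrative.
type OpenAIConfig struct{ MaxRetries *int }
type CopilotConfig struct{ MaxRetries *int }

// openAIConfigFor sketches how the Copilot provider could propagate its
// MaxRetries setting into the config.OpenAI it builds for its interceptors.
func openAIConfigFor(c CopilotConfig) OpenAIConfig {
	return OpenAIConfig{MaxRetries: c.MaxRetries}
}

func main() {
	n := 3
	oc := openAIConfigFor(CopilotConfig{MaxRetries: &n})
	fmt.Println(*oc.MaxRetries) // 3: Copilot's setting reaches the OpenAI config
}
```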
@mtojek
Member Author

mtojek commented Apr 17, 2026

I think I only mentioned OpenAI provider since I tunneled on responses interceptor then.

Thanks for the suggestion! I extended support to Anthropic and Copilot 👍

@mtojek mtojek merged commit bb7fced into main Apr 17, 2026
4 checks passed
Comment thread on provider/copilot.go:

	cfg.APIDumpDir = os.Getenv("BRIDGE_DUMP_DIR")
}
if cfg.MaxRetries == nil {
	if v := os.Getenv("COPILOT_MAX_RETRIES"); v != "" {
		if n, err := strconv.Atoi(v); err == nil { // invalid values silently ignored
			cfg.MaxRetries = &n
		}
	}
}
Contributor

This is a bit weird in the Copilot case, as we've recently added support for multiple providers and multiple Copilot variants (normal / business / enterprise) in coder/coder, but looking at what configuration options are possible in aibridge, I think it is OK.


Development

Successfully merging this pull request may close these issues.

Make MaxRetries option configurable

2 participants