
fix: pass **kwargs through to ChatOpenAI in init_chat_model#673

Open
dashitongzhi wants to merge 1 commit into OpenPipe:main from dashitongzhi:fix/init-chat-model-kwargs

Conversation

@dashitongzhi

Problem

init_chat_model silently ignores the model parameter and all **kwargs (including temperature, timeout, etc.), as reported in #474. Users calling:

chat_model = init_chat_model(model.name, temperature=0.5)

find that the model name and temperature are completely ignored — everything is driven solely by the CURRENT_CONFIG context variable.

Changes

  • init_chat_model: Accept model as str | None (was Literal[None]). When a model name is provided, use it; otherwise fall back to CURRENT_CONFIG. Build ChatOpenAI kwargs with config defaults, then overlay user-provided **kwargs so callers can override temperature, max_tokens, etc.
  • LoggingLLM: Add configurable timeout parameter (default: 10 minutes, matching previous hardcoded value). Propagate timeout through with_structured_output() and bind_tools() so it survives method chains.

Testing

# Before: temperature=0.5 was silently ignored
init_chat_model("gpt-4", temperature=0.5)

# After: temperature=0.5 is passed to ChatOpenAI
# model name "gpt-4" overrides CURRENT_CONFIG model

Fixes #474

Previously, init_chat_model silently ignored the model parameter and
all **kwargs (including temperature, timeout, etc.). This caused
confusion when users called:
    init_chat_model(model.name, temperature=0.5)
expecting those arguments to take effect.

Changes:
- Accept model as str | None; use it when provided, else fall back to
  CURRENT_CONFIG model
- Build ChatOpenAI kwargs dict with config defaults, then overlay any
  user-provided **kwargs so callers can override temperature, etc.
- Extract timeout from kwargs and pass it to LoggingLLM (default 10min)
- Add configurable timeout parameter to LoggingLLM, propagated through
  with_structured_output() and bind_tools() so it survives method chains

Fixes OpenPipe#474
Copilot AI review requested due to automatic review settings May 8, 2026 15:21

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d4be8047af


    **kwargs: Any,
):
    config = CURRENT_CONFIG.get()
    timeout = kwargs.pop("timeout", 10 * 60)

P2: Pass timeout through to ChatOpenAI

init_chat_model currently strips timeout out of **kwargs before constructing ChatOpenAI, so that value is never applied to the underlying model client. This means callers using init_chat_model(..., timeout=...) still cannot configure request timeout behavior at the provider layer (and non-float timeout objects expected by ChatOpenAI are instead routed into asyncio.wait_for). In practice, this reintroduces silent misconfiguration for a common kwarg even though the function now claims to forward kwargs.
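One way to address the point Codex raises, sketched below under assumptions (the helper name and default constant are hypothetical, not code from the PR): read the timeout for the wrapper's own use but leave the key in the kwargs dict, so the provider client (e.g. ChatOpenAI) receives it too.

```python
from typing import Any

DEFAULT_TIMEOUT = 10 * 60  # seconds; matches the PR's previous hardcoded value

def prepare_kwargs(**kwargs: Any) -> tuple[float, dict[str, Any]]:
    # Instead of kwargs.pop("timeout", ...), use setdefault: the value is
    # read for the wrapper's asyncio.wait_for bound, but the key stays in
    # the dict so the underlying client constructor also receives it.
    timeout = kwargs.setdefault("timeout", DEFAULT_TIMEOUT)
    return timeout, kwargs
```

With pop, the timeout only ever reaches the wrapper; with setdefault, both layers see a consistent value.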



Copilot AI left a comment


Pull request overview

This PR updates the LangGraph integration’s init_chat_model wrapper so that a caller-provided model name and additional keyword arguments are actually applied when constructing the underlying ChatOpenAI, and makes the LoggingLLM timeout configurable and preserved across common method chains.

Changes:

  • Update init_chat_model to accept model: str | None, build ChatOpenAI kwargs from CURRENT_CONFIG, and overlay caller-provided **kwargs.
  • Add a timeout parameter to LoggingLLM (default 10 minutes) and use it for ainvoke()’s asyncio.wait_for.
  • Propagate the configured timeout through with_structured_output() and bind_tools() so it survives chaining.
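The timeout-propagation idea in the last two bullets can be sketched as below. This is a minimal stand-in, not the repo's actual LoggingLLM: the class shape is assumed, and the real class wraps a LangChain chat model with more surface area.

```python
import asyncio
from typing import Any

class LoggingLLM:
    """Sketch: a wrapper whose timeout survives method chaining."""

    def __init__(self, inner: Any, timeout: float = 10 * 60) -> None:
        self.inner = inner
        self.timeout = timeout

    async def ainvoke(self, *args: Any, **kwargs: Any) -> Any:
        # Bound the underlying call with the configured timeout.
        return await asyncio.wait_for(self.inner.ainvoke(*args, **kwargs), self.timeout)

    def with_structured_output(self, *args: Any, **kwargs: Any) -> "LoggingLLM":
        # Re-wrap the result so the timeout is carried into the new instance.
        return LoggingLLM(self.inner.with_structured_output(*args, **kwargs), timeout=self.timeout)

    def bind_tools(self, *args: Any, **kwargs: Any) -> "LoggingLLM":
        return LoggingLLM(self.inner.bind_tools(*args, **kwargs), timeout=self.timeout)
```

Without the re-wrapping, `llm.bind_tools(...)` would return the inner model's type and silently drop the configured timeout, which is exactly the chaining bug the PR guards against.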


Comment on lines 118 to +126
    config = CURRENT_CONFIG.get()
    timeout = kwargs.pop("timeout", 10 * 60)
    chat_model_kwargs: dict[str, Any] = {
        "base_url": config["base_url"],
        "api_key": config["api_key"],
        "model": model or config["model"],
        "temperature": 1.0,
    }
    chat_model_kwargs.update(kwargs)


Development

Successfully merging this pull request may close these issues.

init_chat_model always uses ChatOpenAI, ignores args, and still calls OpenAI / hits 10-minute timeout with Ollama

2 participants