fix: pass **kwargs through to ChatOpenAI in init_chat_model #673
dashitongzhi wants to merge 1 commit into OpenPipe:main
Conversation
Previously, `init_chat_model` silently ignored the `model` parameter and
all `**kwargs` (including `temperature`, `timeout`, etc.). This caused
confusion when users called:

```python
init_chat_model(model.name, temperature=0.5)
```

expecting those arguments to take effect.
Changes:
- Accept `model` as `str | None`; use it when provided, else fall back to
  the `CURRENT_CONFIG` model
- Build the `ChatOpenAI` kwargs dict with config defaults, then overlay any
  user-provided `**kwargs` so callers can override `temperature`, etc.
- Extract `timeout` from kwargs and pass it to `LoggingLLM` (default: 10 min)
- Add a configurable `timeout` parameter to `LoggingLLM`, propagated through
  `with_structured_output()` and `bind_tools()` so it survives method chains
Fixes OpenPipe#474
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d4be8047af
```python
    **kwargs: Any,
):
    config = CURRENT_CONFIG.get()
    timeout = kwargs.pop("timeout", 10 * 60)
```
**Pass `timeout` through to `ChatOpenAI`**
init_chat_model currently strips timeout out of **kwargs before constructing ChatOpenAI, so that value is never applied to the underlying model client. This means callers using init_chat_model(..., timeout=...) still cannot configure request timeout behavior at the provider layer (and non-float timeout objects expected by ChatOpenAI are instead routed into asyncio.wait_for). In practice, this reintroduces silent misconfiguration for a common kwarg even though the function now claims to forward kwargs.
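The suggested fix can be sketched as follows. This is a hypothetical helper (`split_timeout` is not in the PR) that illustrates the difference the reviewer is pointing at: reading `timeout` with `dict.get` keeps it available for both `asyncio.wait_for` and the provider client, whereas `dict.pop` strips it before `ChatOpenAI` sees it.

```python
from typing import Any

DEFAULT_TIMEOUT = 10 * 60  # matches LoggingLLM's 10-minute default

def split_timeout(kwargs: dict[str, Any]) -> tuple[dict[str, Any], float]:
    """Read timeout for asyncio.wait_for, but leave it in kwargs so the
    provider client (e.g. ChatOpenAI) also receives it."""
    timeout = kwargs.get("timeout", DEFAULT_TIMEOUT)  # get, not pop
    return kwargs, timeout
```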
Pull request overview
This PR updates the LangGraph integration’s init_chat_model wrapper so that a caller-provided model name and additional keyword arguments are actually applied when constructing the underlying ChatOpenAI, and makes the LoggingLLM timeout configurable and preserved across common method chains.
Changes:
- Update
init_chat_modelto acceptmodel: str | None, buildChatOpenAIkwargs fromCURRENT_CONFIG, and overlay caller-provided**kwargs. - Add a
timeoutparameter toLoggingLLM(default 10 minutes) and use it forainvoke()’sasyncio.wait_for. - Propagate the configured timeout through
with_structured_output()andbind_tools()so it survives chaining.
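The timeout-propagation pattern in the last two bullets can be sketched with a minimal stand-in. The real `LoggingLLM` wraps a LangChain model and does request logging; this skeleton (an assumption, not the PR's code) only shows the re-wrapping that keeps the timeout alive across chained calls.

```python
import asyncio
from typing import Any

class LoggingLLM:
    """Skeleton wrapper: stores a configurable timeout and re-wraps
    derived models so the timeout survives method chaining."""

    def __init__(self, inner: Any, timeout: float = 10 * 60) -> None:
        self.inner = inner
        self.timeout = timeout

    async def ainvoke(self, *args: Any, **kwargs: Any) -> Any:
        # Enforce the configured timeout around the wrapped async call.
        return await asyncio.wait_for(
            self.inner.ainvoke(*args, **kwargs), timeout=self.timeout
        )

    def bind_tools(self, *args: Any, **kwargs: Any) -> "LoggingLLM":
        # Re-wrap with the same timeout so chaining does not reset it.
        return LoggingLLM(self.inner.bind_tools(*args, **kwargs), timeout=self.timeout)

    def with_structured_output(self, *args: Any, **kwargs: Any) -> "LoggingLLM":
        return LoggingLLM(
            self.inner.with_structured_output(*args, **kwargs), timeout=self.timeout
        )
```

Without the re-wrapping, `llm.bind_tools(...)` would return the bare inner model and any non-default timeout would silently revert.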
```python
    config = CURRENT_CONFIG.get()
    timeout = kwargs.pop("timeout", 10 * 60)
    chat_model_kwargs: dict[str, Any] = {
        "base_url": config["base_url"],
        "api_key": config["api_key"],
        "model": model or config["model"],
        "temperature": 1.0,
    }
    chat_model_kwargs.update(kwargs)
```
Problem

`init_chat_model` silently ignores the `model` parameter and all `**kwargs` (including `temperature`, `timeout`, etc.), as reported in #474. Users calling `init_chat_model(model.name, temperature=0.5)` find that the model name and temperature are completely ignored; everything is driven solely by the `CURRENT_CONFIG` context variable.

Changes

- `init_chat_model`: Accept `model` as `str | None` (was `Literal[None]`). When a model name is provided, use it; otherwise fall back to `CURRENT_CONFIG`. Build `ChatOpenAI` kwargs with config defaults, then overlay user-provided `**kwargs` so callers can override `temperature`, `max_tokens`, etc.
- `LoggingLLM`: Add a configurable `timeout` parameter (default: 10 minutes, matching the previous hardcoded value). Propagate the timeout through `with_structured_output()` and `bind_tools()` so it survives method chains.

Testing
Fixes #474