Adding NVIDIA NIM API key as one of the providers #960
skyhighbg22-jpg wants to merge 3 commits into Gitlawb:main from skyhighbg22-jpg:main
Conversation
- Add NIMClient class with simple API key-only authentication
- Support both string prompts and chat format conversations
- Include example usage script with basic and advanced examples
- Update README with comprehensive documentation
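The "string prompts and chat format" support described in the commit message usually comes down to a small normalization step before the request is built. A hedged TypeScript sketch of that idea (the `ChatMessage` type and `normalizePrompt` helper are illustrative, not taken from this PR's Python code):

```typescript
// Illustrative sketch only: names are hypothetical, not from the PR.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function normalizePrompt(input: string | ChatMessage[]): ChatMessage[] {
  // A bare string becomes a single user turn; a message array passes through.
  if (typeof input === "string") {
    return [{ role: "user", content: input }];
  }
  return input;
}

// Example: both input shapes end up as a messages array.
console.log(JSON.stringify(normalizePrompt("Hello")));
// → [{"role":"user","content":"Hello"}]
```

Folding both shapes into one messages array keeps the rest of the client (request building, retries, streaming) working with a single input type.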
Removed the plan
I would suggest putting all of your files in a dedicated subfolder
gnanam1990 left a comment
Thanks for taking the time to put this together, but this PR has structural issues that mean it can't be merged as-is. I want to be specific about what's blocking and about how a usable NVIDIA NIM provider would actually look in this codebase, in case you (or someone else) wants to take another pass at it.
Blockers
- **README.md is replaced, not extended.** The diff shows -356 / +28 on `README.md`: the entire OpenClaude README (badges, sponsors, quick-start, provider list, source-build guide, VS Code extension docs, etc.) is wiped and replaced with a short Python tutorial. That alone makes this unmergeable.
- **Python files in a TypeScript/Bun repo.** OpenClaude is TypeScript shipped via Bun (`bun run build`, `bun test`). Adding `src/nim_client.py` and `example_usage.py` introduces a Python dependency the project doesn't have and won't load at runtime. None of these files are reachable from the existing CLI.
- **No integration into the provider system.** A real provider PR needs to plug into:
  - `src/utils/model/providers.ts` (provider registration and routing)
  - `src/utils/providerProfile.ts` / `providerProfiles.ts` (profile system, env var wiring)
  - `src/services/api/openaiShim.ts` (since NIM exposes an OpenAI-compatible endpoint, this is almost certainly where it should hook in)
  - tests under `src/utils/*.test.ts` and `src/services/api/*.test.ts`
- **Branch is `main`.** PRing from your fork's `main` makes follow-up commits messy. A feature branch like `feat/nvidia-nim-provider` is much easier to iterate on.
- **No validation evidence.** No `bun test` output, no `bun run build` confirmation, no manual smoke test against NIM.
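For a sense of what "plugging into the provider system" means in practice, a provider entry generally reduces to a base URL plus the name of the env var holding the key. A hypothetical TypeScript sketch (the `ProviderEntry` shape and `resolveApiKey` helper are illustrative; OpenClaude's actual types in `providers.ts` may differ):

```typescript
// Illustrative only: the real registry shape in src/utils/model/providers.ts
// may differ. The base URL is from NVIDIA's public NIM documentation.
interface ProviderEntry {
  id: string;
  baseUrl: string;      // OpenAI-compatible /v1 root
  apiKeyEnvVar: string; // where the key is read from
}

const nvidiaNim: ProviderEntry = {
  id: "nvidia-nim",
  baseUrl: "https://integrate.api.nvidia.com/v1",
  apiKeyEnvVar: "NVIDIA_API_KEY",
};

// Key resolution would mirror how XAI_API_KEY / MINIMAX_API_KEY are handled.
function resolveApiKey(
  entry: ProviderEntry,
  env: Record<string, string | undefined>
): string {
  const key = env[entry.apiKeyEnvVar];
  if (!key) throw new Error(`${entry.apiKeyEnvVar} is not set`);
  return key;
}
```

Failing fast on a missing env var gives the setup wizard a clear error to surface instead of a confusing 401 from the remote API.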
Suggested approach if you want to retry
NVIDIA NIM exposes an OpenAI-compatible API at https://integrate.api.nvidia.com/v1. That means you almost certainly do not need a new client class — you can wire it up by:
- Adding `NVIDIA_API_KEY` env var handling (similar to `XAI_API_KEY` or `MINIMAX_API_KEY`).
- Adding NVIDIA as a recognized base URL in the provider registry / profile system.
- Adding it to the `/provider` setup wizard.
- A regression test that exercises the OpenAI-compatible shim against a mocked NIM endpoint.
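Because the endpoint is OpenAI-compatible, the shim's job is mostly assembling a standard chat-completions request against NIM's base URL. A hedged sketch of that request (the `buildNimChatRequest` helper and the model name are illustrative; only the base URL comes from NVIDIA's docs):

```typescript
// Hypothetical sketch of the request an OpenAI-compatible shim would send
// to NIM. Helper name and model id are illustrative, not from OpenClaude.
type Message = { role: string; content: string };

function buildNimChatRequest(apiKey: string, model: string, messages: Message[]) {
  return {
    url: "https://integrate.api.nvidia.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // Standard OpenAI-style chat-completions body.
      body: JSON.stringify({ model, messages }),
    },
  };
}

// In a regression test, fetch would be pointed at a mocked endpoint
// rather than the real URL, so no network access is needed.
const req = buildNimChatRequest("nvapi-example", "meta/llama-3.1-8b-instruct", [
  { role: "user", content: "ping" },
]);
console.log(req.url);
```

Keeping request construction as a pure function like this makes the suggested mocked-endpoint regression test straightforward: assert on the returned URL, headers, and body without hitting NVIDIA at all.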
Take a look at how MiniMax (#492) or Mistral were added — they're a much closer template than building a separate client.
Closing this isn't strictly necessary if you'd rather rework it in place, but I'd recommend opening a new PR from a feature branch with the integration approach above. Happy to help review whichever route you pick.
Thanks for the contribution, @skyhighbg22-jpg, and apologies for the back-and-forth on this one. I owe you a correction on my earlier review and a clearer picture of what would unblock it.

**Correction**

In my previous review I called out "Python files in a TypeScript/Bun repo" as a blocker. That was wrong: we already have a tracked

**What's still blocking**

**Smaller suggestion**

The hardcoded

Happy to re-review as soon as the README is restored and the file moves into
Adds the NVIDIA NIM API key as one of the providers and makes it very beginner-friendly.