fix(session): server always generates message ID, store client-provided ID as clientMessageID#26079
Open

klly14 wants to merge 2 commits into anomalyco:dev
Conversation
…ed ID as clientMessageID

When a client passes a custom messageID via prompt_async, OpenCode was using it directly as the user message ID. This triggered a different internal setup path that ran asynchronously, creating a race condition where streaming chunks from LiteLLM (openai-compatible provider) arrived before the text part container was created, causing "text part chatcmpl-... not found" errors and dropped chunks.

Fix: always generate the message ID server-side using MessageID.ascending(). Store the client-provided messageID as clientMessageID on the UserMessage for external correlation only. This matches the behaviour when no messageID is provided, ensuring the eager setup path is always used regardless of whether the client sends an ID.
Contributor
Thanks for your contribution! This PR doesn't have a linked issue. All PRs must reference an existing issue. See CONTRIBUTING.md for details.
Contributor
The following comment was made by an LLM; it may be inaccurate:

Based on my search, I found one related PR. The current PR (#26079) explicitly states it "Closes the same underlying issue as #11869," so this is an intentional follow-up rather than a true duplicate. However, if #11869 is still open, there may be overlapping changes. No other significant duplicate PRs were found addressing the same messageID handling problem.
…r-generated message id

Add three tests to prompt.test.ts that cover the fix in prompt.ts where input.messageID is stored as clientMessageID instead of being used as the canonical message id:

- When a caller provides messageID, the stored user message uses a server-generated id (not the caller-provided one)
- The caller-provided messageID is preserved in clientMessageID
- The assistant's parentID points to the server-generated user message id, not the client-provided one
- When no messageID is provided, clientMessageID is undefined
Contributor
Thanks for updating your PR! It now meets our contributing guidelines. 👍
Issue for this PR
Closes #11869
Type of change
What does this PR do?
When a caller passes a custom `messageID` to `prompt`, OpenCode used it directly:
```ts
id: input.messageID ?? MessageID.ascending(),
```

Passing an ID takes a different async code path than the no-ID case. With openai-compatible providers like LiteLLM, streaming starts fast enough that by the time the stream processor attaches, the first `text-start` chunk has already come and gone. The Vercel AI SDK tracks text parts by ID in `activeTextContent`: `text-start` registers the entry, `text-delta` looks it up. If `text-start` is dropped because the processor wasn't ready yet, `text-delta` finds nothing and throws:
```
text part chatcmpl-xxx not found
```

The no-`messageID` path doesn't hit this because it's synchronous: the processor is always attached before LiteLLM sends anything.
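A toy model of the lookup mechanism described above makes the failure mode concrete. This is a simplified stand-in for the AI SDK's internal bookkeeping, not the actual SDK code:

```typescript
// Toy model of the AI SDK's text-part tracking (illustrative, not the real SDK).
// `text-start` registers a part; `text-delta` looks it up by ID.
const activeTextContent = new Map<string, { text: string }>();

type Chunk = { type: "text-start" | "text-delta"; id: string; delta?: string };

function onChunk(chunk: Chunk): void {
  if (chunk.type === "text-start") {
    activeTextContent.set(chunk.id, { text: "" });
  } else {
    const part = activeTextContent.get(chunk.id);
    // If text-start was dropped (processor attached late), this throws —
    // the error reported in this PR.
    if (!part) throw new Error(`text part ${chunk.id} not found`);
    part.text += chunk.delta ?? "";
  }
}

// Normal order works:
onChunk({ type: "text-start", id: "p1" });
onChunk({ type: "text-delta", id: "p1", delta: "Hello" });

// A delta whose start was never seen throws "text part ... not found":
// onChunk({ type: "text-delta", id: "chatcmpl-xxx", delta: "Hi" });
```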
I looked at whether an `await` could fix it. `createUserMessage` is already awaited before the loop. The race is inside `streamText`'s `eventProcessor` TransformStream, which is internal to the AI SDK — there's nowhere in OpenCode's code to insert an await that reliably closes the window.
The fix: always generate the ID server-side, store whatever the caller sent as `clientMessageID`:
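A minimal sketch of that change, with `MessageID` and the message shape as stand-ins for OpenCode internals rather than the real implementation:

```typescript
// Sketch of the fix; MessageID and UserMessage are illustrative stand-ins.
let counter = 0;
const MessageID = {
  // stand-in for an ascending, server-generated ID
  ascending: (): string => `msg_${String(++counter).padStart(4, "0")}`,
};

interface UserMessage {
  id: string;               // always server-generated
  clientMessageID?: string; // caller-provided ID, kept for correlation only
}

function createUserMessage(input: { messageID?: string }): UserMessage {
  return {
    // Before: id: input.messageID ?? MessageID.ascending()
    // After: never use input.messageID as the canonical id.
    id: MessageID.ascending(),
    clientMessageID: input.messageID,
  };
}

const m = createUserMessage({ messageID: "chatcmpl-abc" });
console.log(m.id);              // server-generated, e.g. "msg_0001"
console.log(m.clientMessageID); // "chatcmpl-abc"
```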
One code path now. Processor always attaches before streaming starts. Callers can still correlate their ID via `clientMessageID` on the `message.updated` SSE event. This also fixes the clock skew stall from #24476 / #25024 as a side effect — client-generated IDs can arrive out of order when clocks differ.
How did you verify your code works?
Added three tests to `test/session/prompt.test.ts`:
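The checks could look roughly like this; the `prompt` stub and message shapes below are illustrative stand-ins for the real session harness in `prompt.test.ts`:

```typescript
// Illustrative stand-in for the session prompt flow (not the real harness).
let seq = 0;
const serverID = (): string => `msg_${++seq}`;

function prompt(input: { messageID?: string }) {
  const user = { id: serverID(), clientMessageID: input.messageID };
  const assistant = { id: serverID(), parentID: user.id };
  return { user, assistant };
}

// 1. Server-generated id wins over the caller-provided one
const r = prompt({ messageID: "client-123" });
console.assert(r.user.id !== "client-123");

// 2. Caller-provided id is preserved for correlation
console.assert(r.user.clientMessageID === "client-123");

// 3. Assistant's parentID points at the server-generated user id
console.assert(r.assistant.parentID === r.user.id);

// 4. No messageID provided → clientMessageID is undefined
console.assert(prompt({}).user.clientMessageID === undefined);
```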
All pass. There's one pre-existing timeout failure in the suite unrelated to this change.
Screenshots / recordings
N/A — not a UI change.
Checklist