Summary
Conversation compaction wipes all message content from the JSONL transcript before the compaction API call succeeds. If the API call then fails (e.g., due to rate limiting), the original content is permanently lost — the transcript is left with 4,300+ messages that are all empty strings, and no compaction summary is generated to replace them.
This destroyed a ~1M-token conversation with hours of accumulated context.
Steps to reproduce
- Run a long Claude Code session until it approaches the context limit (~1M tokens, ~4,300 messages).
- Have the session hit a rate limit at the moment compaction triggers.
- Observe that the conversation cannot be continued or recovered.
What happened
- Compaction triggered automatically at message index 4253.
- The compaction process cleared all message content fields in the JSONL transcript (set to empty strings) as a first step — presumably in preparation for replacing them with a summary.
- The API call to generate the compaction summary then failed due to rate limiting (`/rate-limit-options` `queue-operations` entries appear 5 times immediately after compaction).
- No summary was ever written back.
- Result: 4,252 out of 4,319 messages have empty content. The remaining 67 are metadata entries (queue-operations, system markers). Zero user or assistant messages retain their content.
- The conversation was continued into a new session via the `--continue` mechanism, but the continued session received no summary — just the empty transcript.
Expected behavior
Compaction should be atomic: either the summary is successfully generated and the old content is replaced, or the old content is preserved intact. The current implementation clears content before confirming the summary exists, which is a destructive race condition.
At minimum:
- Don't clear the original content until the summary API call succeeds. Keep the original messages in memory or on disk as a rollback.
- If the summary call fails, restore the original content and retry compaction later.
- Consider writing a backup of the pre-compaction transcript before clearing anything.
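The three mitigations above amount to "make the summary exist before anything is destroyed." A minimal sketch of that ordering, assuming a JSONL transcript and a `generate_summary` callable standing in for the compaction API call (both names are hypothetical; this is not Claude Code's actual implementation):

```python
import json
import os
import tempfile


def compact_transcript(path, generate_summary):
    """Compact a JSONL transcript without risking data loss.

    Order of operations:
      1. Read the original messages; the file on disk is untouched.
      2. Call the (possibly failing) summary API. If it raises, e.g. on a
         rate limit, we propagate the error and the transcript is intact.
      3. Write the compacted transcript to a temp file in the same
         directory, then atomically rename it over the original.
    """
    with open(path, "r", encoding="utf-8") as f:
        messages = [json.loads(line) for line in f if line.strip()]

    # Step 2: any exception here leaves the original file untouched.
    summary = generate_summary(messages)

    # Step 3: temp file + os.replace, so readers never observe a
    # half-written or emptied transcript. (A real implementation would
    # likely also keep recent messages; this sketch keeps only the summary.)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            record = {"type": "compaction_summary", "content": summary}
            f.write(json.dumps(record) + "\n")
        os.replace(tmp, path)  # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp)
        raise
```

The key property is that the destructive step (the rename) happens only after the summary is in hand, so a rate-limit failure degrades to "compaction postponed" instead of "conversation destroyed."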
Evidence from transcript analysis
Transcript file: 3947b03f-6560-4d7e-8563-b40c33a88495.jsonl
File size: 11,318,452 bytes (11 MB of mostly empty JSON structures)
Total messages: 4,319
Messages with empty content: 4,252
Messages with any content: 67 (all metadata/queue-operations)
Compaction marker at index: 4253
Rate-limit queue-operations immediately after: 5 occurrences
Post-compaction user/assistant messages: all empty strings
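The counts above came from a pass over the JSONL file. A sketch of that analysis, assuming each line is a JSON object with the message body under `message.content` or a top-level `content` field (the exact schema is an assumption from inspecting one transcript and may vary across versions):

```python
import json


def analyze_transcript(path):
    """Count empty vs. non-empty message content in a JSONL transcript.

    Returns a dict with total entries, entries whose content is empty,
    and entries that still carry any content.
    """
    total = empty = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            entry = json.loads(line)
            total += 1
            # Content may live under entry["message"]["content"] or
            # entry["content"] depending on the record type.
            content = (entry.get("message") or {}).get(
                "content", entry.get("content", "")
            )
            if not content:
                empty += 1
    return {"total": total, "empty": empty, "with_content": total - empty}
```

Run against the transcript above, this is the kind of check that produced the 4,252-empty-of-4,319 figure.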
Impact
- Data loss is total and unrecoverable. The entire conversation history — all user messages, all assistant responses, all tool calls and results — is gone.
- The `--continue` mechanism offers a "full transcript" path for the new session, but that file contains only empty messages.
- This was a long-running planning/development session with significant accumulated context and decisions. The user had to start from scratch.
Environment
- Claude Code version: 2.1.86
- Model: claude-opus-4-6 (1M context)
- OS: macOS (Darwin 25.0.0, arm64)
- Date: 2026-03-28
Severity
This is a data-loss bug. The compaction feature, which is supposed to transparently extend conversation lifetime, instead silently destroys the conversation when it races with a rate limit. Users on the Max plan with 1M context windows are especially likely to hit this because they run long sessions that approach rate limits and compaction thresholds simultaneously.