feat: complete phase 8 runtime schema freeze #306
Conversation
📝 Walkthrough

Adds a new crate.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant WasmABI as "echo-wasm-abi (ABI)"
    participant WarpCore as "warp-core"
    participant Schema as "echo-runtime-schema"
    Client->>WasmABI: pack ControlIntentV1{ AbiWriterHeadKey(AbiWorldlineId, AbiHeadId) }
    WasmABI->>Schema: AbiWorldlineId -> WorldlineId (from_bytes)
    WasmABI->>Schema: AbiHeadId -> HeadId (from_bytes)
    WasmABI->>WarpCore: deliver ControlIntent{ WriterHeadKey(WorldlineId, HeadId) }
    WarpCore->>Schema: reference GlobalTick/RunId/WorldlineTick types
    WarpCore-->>Client: acknowledge / process result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

✅ Passed checks (2 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2af8725a51
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.gitignore:
- Line 24: Remove the .gitignore exception for .vscode/settings.json or replace
the hardcoded Unix path in that settings file (/tmp/echo-rust-analyzer-target)
with a cross-platform solution: either use a relative path (e.g.,
target/rust-analyzer) or reference an environment-variable fallback (e.g.,
${RA_TARGET_DIR}) so Windows users aren’t broken; then update CONTRIBUTING.md to
document the chosen approach and any editor setup steps and expected optional vs
committed settings so contributors know whether to commit their .vscode settings
or keep local overrides.
In @.vscode/settings.json:
- Around line 2-4: Replace the absolute /tmp path in the rust-analyzer config so
it is repository-relative and portable: update the
"rust-analyzer.cargo.extraEnv" entry to set CARGO_TARGET_DIR to a repo-local
directory name (e.g. "target-ra") instead of "/tmp/echo-rust-analyzer-target" to
avoid platform, collision and permission issues; also add the chosen directory
(e.g. target-ra/) to .gitignore near the existing target-* entries so build
artifacts are not committed.
In `@crates/echo-runtime-schema/src/lib.rs`:
- Around line 75-89: WorldlineId exposes as_bytes() but lacks a symmetric
constructor like HeadId::from_bytes, forcing callers to construct it via the
public tuple field; add a const constructor WorldlineId::from_bytes(bytes:
RuntimeIdBytes) -> Self to provide a stable API surface matching HeadId and make
construction explicit and ergonomic alongside the existing as_bytes() method.
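The requested constructor pair can be sketched as follows. This is a minimal, hypothetical sketch: the real crate defines `RuntimeIdBytes` and additional derives, and the actual `WorldlineId` lives in `echo-runtime-schema`.

```rust
// Sketch of the suggested API shape; names mirror the review comment,
// not the crate's actual source.
pub type RuntimeIdBytes = [u8; 32];

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct WorldlineId(RuntimeIdBytes);

impl WorldlineId {
    /// Const constructor, symmetric with `HeadId::from_bytes`.
    pub const fn from_bytes(bytes: RuntimeIdBytes) -> Self {
        Self(bytes)
    }

    /// Borrow the raw 32-byte identifier.
    pub const fn as_bytes(&self) -> &RuntimeIdBytes {
        &self.0
    }
}

fn main() {
    // Round-trip: construct from bytes, read the same bytes back.
    let id = WorldlineId::from_bytes([99u8; 32]);
    assert_eq!(*id.as_bytes(), [99u8; 32]);
}
```

With both methods `const`, callers can build fixture ids in `const` contexts, which also helps the test-fixture suggestion later in this review.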
In `@crates/echo-wasm-abi/Cargo.toml`:
- Line 21: The Cargo.toml for echo-wasm-abi is forwarding the crate feature
"std" to serde, ciborium and half but not to echo-runtime-schema, so when
building with --features std the schema crate still compiles no-std; fix this by
updating the echo-runtime-schema dependency entry in Cargo.toml to enable its
std feature (mirror how serde/ciborium/half are forwarded) — e.g. change the
echo-runtime-schema declaration to include features = ["std"] (and workspace =
true as already present) so the schema types compile with std when the crate's
std feature is enabled.
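The feature forwarding being requested would look roughly like this in `crates/echo-wasm-abi/Cargo.toml` (a sketch; the exact dependency entries and feature names in the workspace may differ):

```toml
[features]
# Forward this crate's `std` feature to every dependency that exposes one,
# including the schema crate that was previously missed.
std = ["serde/std", "ciborium/std", "half/std", "echo-runtime-schema/std"]

[dependencies]
echo-runtime-schema = { workspace = true, default-features = false }
```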
In `@schemas/runtime/artifact-b-routing-and-admission.graphql`:
- Around line 111-114: The schema's InboxPolicyBudgeted.maxPerTick field is
documented to be positive but uses plain Int, so enforce the positivity either
by introducing/using a PositiveInt custom scalar in the schema and replacing
maxPerTick: Int! with maxPerTick: PositiveInt! (and add its implementation), or
add explicit runtime validation in the resolver/input handling for the
InboxPolicyBudgeted type that throws a GraphQLError when maxPerTick <= 0;
reference the type name InboxPolicyBudgeted and the field name maxPerTick when
making the change so callers/validators are updated accordingly.
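The custom-scalar route would look like this in SDL (a sketch; the positivity rule itself must still be enforced by whatever server or codegen consumes the frozen schema, since SDL alone cannot validate values):

```graphql
"Integer constrained to values strictly greater than zero."
scalar PositiveInt

type InboxPolicyBudgeted {
  maxPerTick: PositiveInt!
}
```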
In `@scripts/generate-dependency-dags.js`:
- Around line 562-565: The fallback never runs because parseArgs sets
args.tasksDagPath to the string "TASKS-DAG.md"; update the parseArgs option for
--tasks-dag (the default value for tasksDagPath) to undefined (or remove the
default) so args.tasksDagPath is nullish when the flag is not provided, and keep
the existing tasksDagPath resolution using: const tasksDagPath =
path.resolve(process.cwd(), args.tasksDagPath ??
path.join("docs","archive","tasks","TASKS-DAG.md")); this ensures the fallback
to docs/archive/tasks/TASKS-DAG.md is used.
In `@scripts/verify-local.sh`:
- Around line 710-717: The loop variable name md_file is misleading because the
loop iterates CHANGED_FILES looking for .graphql/.mjs files; rename md_file to a
clearer identifier like changed_file or file_in_changes inside the while loop
and update all references (the case pattern matching and any uses that set
should_validate_runtime_schema) so the logic remains identical but the variable
name accurately reflects it (look for occurrences of CHANGED_FILES, md_file, and
should_validate_runtime_schema to update).
- Around line 732-735: The script calls "node
scripts/validate-runtime-schema-fragments.mjs" when the variable
should_validate_runtime_schema is set but does not verify Node is installed;
update the if-block that checks should_validate_runtime_schema to first verify
Node is available (e.g., use a command existence check against "node"), and if
missing print a clear error and exit non‑zero before attempting to run
validate-runtime-schema-fragments.mjs; reference the existing
should_validate_runtime_schema check and the node invocation so you modify that
branch only.
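A guard along the requested lines could be sketched like this (illustrative only; `require_cmd` is a hypothetical helper name, not something already in `verify-local.sh`):

```shell
#!/bin/sh
# Illustrative helper: fail fast when a required command is missing.
require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "error: '$1' is required but not installed" >&2
    return 1
  fi
}

# Guard the node invocation; only runs when the flag variable is set.
if [ -n "${should_validate_runtime_schema:-}" ]; then
  require_cmd node || exit 1
  node scripts/validate-runtime-schema-fragments.mjs
fi
```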
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2cd2f967-fe13-426c-abc7-972cd5c1af2d
⛔ Files ignored due to path filters (3)
`Cargo.lock` is excluded by `!**/*.lock` · `docs/assets/dags/tasks-dag.dot` is excluded by `!**/*.dot` · `docs/assets/dags/tasks-dag.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (34)
`.gitignore`, `.vscode/settings.json`, `Cargo.toml`, `crates/echo-runtime-schema/Cargo.toml`, `crates/echo-runtime-schema/README.md`, `crates/echo-runtime-schema/src/lib.rs`, `crates/echo-wasm-abi/Cargo.toml`, `crates/echo-wasm-abi/src/kernel_port.rs`, `crates/echo-wasm-abi/src/lib.rs`, `crates/warp-core/Cargo.toml`, `crates/warp-core/src/clock.rs`, `crates/warp-core/src/head.rs`, `crates/warp-core/src/observation.rs`, `crates/warp-core/src/worldline.rs`, `crates/warp-wasm/src/lib.rs`, `crates/warp-wasm/src/warp_kernel.rs`, `docs/dependency-dags.md`, `docs/plans/adr-0008-and-0009.md`, `docs/plans/phase-8-runtime-schema-conformance.md`, `docs/plans/phase-8-runtime-schema-mapping-contract.md`, `docs/plans/phase-8-schema-freeze-inventory.md`, `docs/spec/SPEC-0009-wasm-abi-v3.md`, `package.json`, `schemas/runtime/README.md`, `schemas/runtime/artifact-a-identifiers.graphql`, `schemas/runtime/artifact-b-routing-and-admission.graphql`, `schemas/runtime/artifact-c-playback-control.graphql`, `schemas/runtime/artifact-d-scheduler-results.graphql`, `scripts/check-append-only.js`, `scripts/generate-dependency-dags.js`, `scripts/generate-tasks-dag.js`, `scripts/validate-runtime-schema-fragments.mjs`, `scripts/verify-local.sh`, `tests/hooks/test_runtime_schema_validation.sh`
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/echo-runtime-schema/src/lib.rs`:
- Around line 76-89: Make WorldlineId's inner field private (remove pub from the
tuple field) and add a constructor method
WorldlineId::from_bytes(RuntimeIdBytes) -> Self (preferably const fn) that
constructs the new opaque id; keep the existing as_bytes(&self) returning
&RuntimeIdBytes. This mirrors HeadId's opacity and provides a canonical factory
to replace direct tuple construction (update tests and call sites from
WorldlineId([...]) to WorldlineId::from_bytes([...])). Ensure all references use
the new constructor and that RuntimeIdBytes remains the parameter/return type.
In `@scripts/generate-dependency-dags.js`:
- Around line 173-175: The fallback date label uses local-time formatting via
formatDateYYYYMMDD(new Date()), causing non-deterministic labels across
timezones; change this to produce a UTC-based date string instead—either update
formatDateYYYYMMDD to use getUTCFullYear/getUTCMonth/getUTCDate or call a new
UTC-aware helper (e.g., formatDateYYYYMMDDUTC) from the fallback assignment that
uses new Date() but formats with UTC methods; ensure the code referencing
generatedAt and the fallback const still returns { mode: "date", label:
<UTC-formatted-string> } so snapshot labels are consistent across environments.
In `@tests/hooks/test_dependency_dags.sh`:
- Line 92: The test's grep is brittle because it looks for the exact attribute
ordering 'i1 -> i2 \[color="red"' in tests/hooks/test_dependency_dags.sh; change
the check that uses that literal grep to a more flexible pattern that matches an
edge line for "i1 -> i2" with a bracketed attribute list that contains
color="red" anywhere (e.g., use a regex like i1 -> i2 \[[^]]*color="red"[^]]*\]
or equivalent POSIX/egrep form) so the assertion no longer depends on attribute
ordering while still verifying the color attribute is present.
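The more flexible pattern can be exercised directly (a sketch; the edge line shown is a plausible DOT output line, not taken verbatim from the generator):

```shell
#!/bin/sh
# An edge line whose attribute order differs from the literal the test expects.
line='i1 -> i2 [style=dashed, color="red"]'

# Order-insensitive check: match any bracketed attribute list that
# contains color="red" anywhere inside the brackets.
printf '%s\n' "$line" | grep -E 'i1 -> i2 \[[^]]*color="red"[^]]*\]'
```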
- Around line 88-91: Before invoking node on scripts/generate-dependency-dags.js
in the temp workspace ($tmpdir), create a package.json that sets the module type
so Node treats files as ESM; add a step to write a package.json with
{"type":"module"} into $tmpdir (i.e., create the file in the same block that cds
into "$tmpdir" before running node scripts/generate-dependency-dags.js
>"$output_file" 2>&1) so the import-based script runs under ESM semantics.
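The ESM marker being requested is the one-line manifest Node consults when deciding module semantics for `.js` files in that directory tree:

```json
{ "type": "module" }
```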
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 685201e4-296b-43db-a931-e4069b76a073
📒 Files selected for processing (8)
`crates/echo-runtime-schema/Cargo.toml`, `crates/echo-runtime-schema/README.md`, `crates/echo-runtime-schema/src/lib.rs`, `crates/echo-wasm-abi/Cargo.toml`, `crates/warp-core/Cargo.toml`, `docs/guide/cargo-features.md`, `scripts/generate-dependency-dags.js`, `tests/hooks/test_dependency_dags.sh`
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CONTRIBUTING.md`:
- Line 106: Replace the machine-specific absolute path
"/Users/james/git/echo/.vscode/settings.json" in the CONTRIBUTING.md sentence
with a repo-relative link to the tracked config, e.g. use
[.vscode/settings.json] (or ./.vscode/settings.json) so the link works for all
users and does not expose a local filesystem path.
In `@crates/ttd-browser/src/lib.rs`:
- Around line 1206-1210: Update the wasm32 test accessors to match the new
WorldlineId API: replace any use of tuple-field access like wl.0 with the new
method call *wl.as_bytes(). Specifically, change occurrences in
wasm_tests::test_parse_helpers (and related wasm32 tests) that read wl.0 to use
*wl.as_bytes(), and adjust test_parse_worldline_id_inner_valid expectations to
remain consistent with that API.
In `@crates/warp-core/src/provenance_store.rs`:
- Line 2564: Multiple fork tests repeat the literal [99u8; 32] when constructing
WorldlineId via WorldlineId::from_bytes([99u8; 32]); introduce a single test
fixture (either a const like TEST_WORLDLINE_BYTES: [u8; 32] = [99u8; 32] or a
helper fn like test_worldline_id() -> WorldlineId that calls
WorldlineId::from_bytes(TEST_WORLDLINE_BYTES)) and replace each occurrence of
WorldlineId::from_bytes([99u8; 32]) (seen in the file where
WorldlineId::from_bytes is used around the fork tests) with the shared
constant/helper to avoid duplication and reduce drift risk.
In `@schemas/runtime/artifact-a-identifiers.graphql`:
- Around line 49-55: The PositiveInt scalar currently only guarantees "greater
than zero" but must be documented to map to Rust's u32 range; update the
mapping/docs for PositiveInt to explicitly state it represents the full u32
range (1–4,294,967,295) and note that cycle_limit (Option<u32>) and fields
cycleLimit / maxPerTick consume this scalar, so generators that treat GraphQL
Int as 32-bit signed need the explicit u32 upper-bound to accept values above
2,147,483,647; reference PositiveInt, cycle_limit, cycleLimit, maxPerTick and
Option<u32> in the doc update.
In `@schemas/runtime/artifact-b-routing-and-admission.graphql`:
- Around line 67-69: Unify the field name for the exact-head coordinate across
the schema by renaming ExactHeadTarget.key to match IngressTargetInput.headKey
(or vice versa) before the schema freeze; update the GraphQL type definitions
(e.g., ExactHeadTarget and IngressTargetInput) so both use the identical field
name (headKey or key), adjust any associated input/output types and usages in
the same file (including the occurrences around lines 84-89), and ensure all
references, resolvers, and generated DTO/mapper expectations are updated
accordingly so the schema is consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: eb4deab0-eee9-46a7-a5e6-0b13108d7c51
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (36)
`.gitignore`, `.vscode/settings.json`, `CHANGELOG.md`, `CONTRIBUTING.md`, `crates/echo-dind-harness/tests/digest_golden_vectors.rs`, `crates/echo-runtime-schema/src/lib.rs`, `crates/echo-wasm-abi/Cargo.toml`, `crates/echo-wasm-abi/src/codec.rs`, `crates/ttd-browser/src/lib.rs`, `crates/warp-core/src/coordinator.rs`, `crates/warp-core/src/engine_impl.rs`, `crates/warp-core/src/head.rs`, `crates/warp-core/src/head_inbox.rs`, `crates/warp-core/src/observation.rs`, `crates/warp-core/src/playback.rs`, `crates/warp-core/src/provenance_store.rs`, `crates/warp-core/src/snapshot.rs`, `crates/warp-core/src/worldline.rs`, `crates/warp-core/src/worldline_registry.rs`, `crates/warp-core/src/worldline_state.rs`, `crates/warp-core/tests/checkpoint_fork_tests.rs`, `crates/warp-core/tests/common/mod.rs`, `crates/warp-core/tests/golden_vectors_phase0.rs`, `crates/warp-core/tests/inbox.rs`, `crates/warp-core/tests/invariant_property_tests.rs`, `crates/warp-core/tests/outputs_playback_tests.rs`, `crates/warp-core/tests/slice_theorem_proof.rs`, `crates/warp-wasm/src/warp_kernel.rs`, `docs/guide/cargo-features.md`, `package.json`, `schemas/runtime/artifact-a-identifiers.graphql`, `schemas/runtime/artifact-b-routing-and-admission.graphql`, `schemas/runtime/artifact-d-scheduler-results.graphql`, `scripts/generate-dependency-dags.js`, `scripts/verify-local.sh`, `tests/hooks/test_dependency_dags.sh`
### Shared Workspace Settings

- The repo tracks a minimal [.vscode/settings.json](/Users/james/git/echo/.vscode/settings.json) for project-safe tooling settings only.
Replace machine-specific absolute path with a repo-relative link.
Line 106 currently points to /Users/james/git/echo/.vscode/settings.json, which is broken for everyone except one local machine and leaks local path details.
🔧 Proposed fix
```diff
-- The repo tracks a minimal [.vscode/settings.json](/Users/james/git/echo/.vscode/settings.json) for project-safe tooling settings only.
+- The repo tracks a minimal [.vscode/settings.json](.vscode/settings.json) for project-safe tooling settings only.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CONTRIBUTING.md` at line 106, Replace the machine-specific absolute path
"/Users/james/git/echo/.vscode/settings.json" in the CONTRIBUTING.md sentence
with a repo-relative link to the tracked config, e.g. use
[.vscode/settings.json] (or ./ .vscode/settings.json) so the link works for all
users and does not expose a local filesystem path.
```diff
 let mut store = LocalProvenanceStore::new();
 let source = test_worldline_id();
-let target = WorldlineId([99u8; 32]);
+let target = WorldlineId::from_bytes([99u8; 32]);
```
🧹 Nitpick | 🔵 Trivial
Reduce repeated literal worldline IDs in fork tests.
[99u8; 32] is repeated across multiple tests. A small test helper/const would reduce drift risk if this fixture value changes later.
Also applies to: 2614-2614, 2782-2782
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/warp-core/src/provenance_store.rs` at line 2564, Multiple fork tests
repeat the literal [99u8; 32] when constructing WorldlineId via
WorldlineId::from_bytes([99u8; 32]); introduce a single test fixture (either a
const like TEST_WORLDLINE_BYTES: [u8; 32] = [99u8; 32] or a helper fn like
test_worldline_id() -> WorldlineId that calls
WorldlineId::from_bytes(TEST_WORLDLINE_BYTES)) and replace each occurrence of
WorldlineId::from_bytes([99u8; 32]) (seen in the file where
WorldlineId::from_bytes is used around the fork tests) with the shared
constant/helper to avoid duplication and reduce drift risk.
```graphql
"""
Positive signed integer used where GraphQL `Int` needs an explicit positivity
contract.

Semantic rule: values must be greater than zero.
"""
scalar PositiveInt
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
rg -n "PositiveInt|cycleLimit|maxPerTick|u32|2_147_483_647" \
  schemas/runtime docs/plans crates/warp-wasm/src/warp_kernel.rs crates/echo-wasm-abi/src
```

Repository: flyingrobots/echo

Length of output: 6998
Pin PositiveInt to explicit u32 range (1–4,294,967,295) in mapping documentation.
cycle_limit is Option<u32> and cycleLimit/maxPerTick consume it as PositiveInt. The scalar is defined as "greater than zero" with no upper bound. Without explicit documentation pinning this to u32 semantics, generators treating this as GraphQL Int will reject values above 2,147,483,647. Documentation acknowledges the mapping but labels the positive-value rule as "semantic validation"—clarify the full u32 range constraint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@schemas/runtime/artifact-a-identifiers.graphql` around lines 49 - 55, The
PositiveInt scalar currently only guarantees "greater than zero" but must be
documented to map to Rust's u32 range; update the mapping/docs for PositiveInt
to explicitly state it represents the full u32 range (1–4,294,967,295) and note
that cycle_limit (Option<u32>) and fields cycleLimit / maxPerTick consume this
scalar, so generators that treat GraphQL Int as 32-bit signed need the explicit
u32 upper-bound to accept values above 2,147,483,647; reference PositiveInt,
cycle_limit, cycleLimit, maxPerTick and Option<u32> in the doc update.
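The ceiling in question is easy to see numerically; this standalone Rust check is independent of the schema code and just demonstrates why generators treating the scalar as a signed 32-bit `Int` need the explicit u32 bound:

```rust
fn main() {
    // GraphQL's built-in Int is a signed 32-bit value.
    let graphql_int_max: i64 = i32::MAX as i64; // 2_147_483_647

    // Rust's u32 (the type behind cycle_limit / max_per_tick) goes higher.
    let u32_max: i64 = u32::MAX as i64; // 4_294_967_295

    // Every value in (graphql_int_max, u32_max] is a valid u32 but
    // overflows a signed 32-bit Int.
    assert!(u32_max > graphql_int_max);
}
```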
```graphql
type ExactHeadTarget {
  key: WriterHeadKey!
}
```
🛠️ Refactor suggestion | 🟠 Major
Normalize key vs headKey before the schema freeze.
ExactHeadTarget.key and IngressTargetInput.headKey represent the same exact-head coordinate with two different field names. Freezing that asymmetry now will leak into every generated DTO, mapper, and doc set.
Patch

```diff
 type ExactHeadTarget {
-  key: WriterHeadKey!
+  headKey: WriterHeadKey!
 }
```

Also applies to: 84-89
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@schemas/runtime/artifact-b-routing-and-admission.graphql` around lines 67 -
69, Unify the field name for the exact-head coordinate across the schema by
renaming ExactHeadTarget.key to match IngressTargetInput.headKey (or vice versa)
before the schema freeze; update the GraphQL type definitions (e.g.,
ExactHeadTarget and IngressTargetInput) so both use the identical field name
(headKey or key), adjust any associated input/output types and usages in the
same file (including the occurrences around lines 84-89), and ensure all
references, resolvers, and generated DTO/mapper expectations are updated
accordingly so the schema is consistent.
@coderabbitai review please 🥕
🧠 Learnings used · ✅ Actions performed: full review triggered.
🧩 Analysis chain — 🏁 Script executed (length of output: 260485)
Executive summary
Blocking
High-priority follow-ups (pre-merge optional, post-merge acceptable)
```diff
diff --git a/scripts/validate-runtime-schema-fragments.mjs b/scripts/validate-runtime-schema-fragments.mjs
@@
-import { spawnSync } from "node:child_process";
+import { spawnSync } from "node:child_process";
+import { existsSync } from "node:fs";
@@
 function runPrettierCheck(files) {
   const formattingErrors = [];
+  // Try a few execution strategies in order of preference
+  const localPrettier = resolve(REPO_ROOT, "node_modules", ".bin", process.platform === "win32" ? "prettier.cmd" : "prettier");
+  const strategies = [
+    { cmd: "npx", args: ["prettier"] },
+    { cmd: "pnpm", args: ["exec", "prettier"] },
+    { cmd: localPrettier, args: [], onlyIfExists: true },
+  ];
+
+  const pickPrettier = () => {
+    for (const s of strategies) {
+      if (s.onlyIfExists && !existsSync(s.cmd)) continue;
+      const probe = spawnSync(s.cmd, [...s.args, "--version"], { encoding: "utf8" });
+      if (probe.error || probe.status !== 0) continue;
+      return s;
+    }
+    return null;
+  };
+
+  const runner = pickPrettier();
+  if (!runner) {
+    throw new Error("failed to find a runnable prettier (tried npx, pnpm exec, and local node_modules/.bin/prettier)");
+  }
+
   for (const file of files) {
     const source = readFileSync(file, "utf8");
     const syntheticFilePath = join(
       REPO_ROOT,
       "schemas/runtime",
       basename(file),
     );
-    const result = spawnSync(
-      "npx",
-      [
-        "prettier",
-        "--parser",
-        "graphql",
-        "--stdin-filepath",
-        syntheticFilePath,
-      ],
-      {
-        encoding: "utf8",
-        input: source,
-      },
-    );
+    const result = spawnSync(runner.cmd, [...runner.args, "--parser", "graphql", "--stdin-filepath", syntheticFilePath], {
+      encoding: "utf8",
+      input: source,
+    });
@@
     if (result.error) {
-      throw new Error(
-        `failed to run npx prettier for schema validation: ${result.error.message}`,
-      );
+      throw new Error(`failed to run prettier for schema validation: ${result.error.message}`);
     }
```
```rust
#[test]
fn test_worldline_id_rejects_33_byte_sequence() {
    use crate::kernel_port::WorldlineId;
    use ciborium::value::Value;

    #[derive(Debug, PartialEq, Eq, Deserialize)]
    struct Wrapper {
        id: WorldlineId,
    }

    // Sequence form: [u8; 33]
    let arr = Value::Array((0..33).map(|_| Value::Integer(0u8.into())).collect());
    let bytes = encode_value(&Value::Map(vec![(Value::Text("id".into()), arr)])).unwrap();
    let err = decode_cbor::<Wrapper>(&bytes).unwrap_err();
    assert!(err.to_string().contains("32 bytes"));
}
```
```rust
#[test]
fn test_worldline_id_accepts_array_sequence_exactly_32() {
    use crate::kernel_port::WorldlineId;
    use ciborium::value::Value;

    #[derive(Debug, PartialEq, Eq, Deserialize)]
    struct Wrapper {
        id: WorldlineId,
    }

    let arr = Value::Array((0..32).map(|_| Value::Integer(7u8.into())).collect());
    let bytes = encode_value(&Value::Map(vec![(Value::Text("id".into()), arr)])).unwrap();
    let decoded: Wrapper = decode_cbor(&bytes).unwrap();
    assert_eq!(decoded.id.as_bytes(), &[7u8; 32]);
}
```
Design and correctness review (OK/notes by area)
Non-blocking nits and small polish
What I verified explicitly
I retained prior context from earlier reviews:
If you want, I can open follow-up issues for: (a) schema-validator portability (npx fallback), (b) ABI id negative decode cases, and (c) an optional no_std smoke lane for echo-wasm-abi.

🧠 Learnings used · ✅ Actions performed: review triggered.
Actionable comments posted: 10
♻️ Duplicate comments (4)
crates/ttd-browser/src/lib.rs (1)
1206-1209: ⚠️ Potential issue | 🟠 Major

The wasm32 test module still uses the old private-field accessor. This native test was migrated, but `wasm_tests::test_parse_helpers` at line 1110 still does `wl.0`. `WorldlineId` is opaque now, so wasm32 test builds remain broken until that assertion also switches to `*wl.as_bytes()`.

🔧 Required follow-up

```diff
- assert_eq!(wl.0, [0u8; 32]);
+ assert_eq!(*wl.as_bytes(), [0u8; 32]);
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@crates/ttd-browser/src/lib.rs` around lines 1206-1209, update the wasm32 test in `wasm_tests::test_parse_helpers` to stop using the old private-field accessor on `WorldlineId` (`wl.0`) and use the public accessor instead: call `parse_worldline_id_inner` (or reuse the existing test variable `wl`) and assert against `*wl.as_bytes()` similarly to the native test; replace any occurrence of `wl.0` with `*wl.as_bytes()` so the opaque `WorldlineId` is accessed via its `as_bytes()` method.

docs/plans/phase-8-runtime-schema-conformance.md (1)
85-87: ⚠️ Potential issue | 🟠 Major

`u32` does not map cleanly to GraphQL `Int!`. GraphQL `Int` tops out at 2,147,483,647, so `max_per_tick: u32` cannot be described as a clean `Int!` mapping across its full runtime range. The audit should either call out the signed-32-bit ceiling explicitly or constrain the allowed range instead of blessing the mapping as-is.

What range does the GraphQL specification allow for the built-in `Int` scalar?

As per coding guidelines, "Echo is a deterministic simulation engine. Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/plans/phase-8-runtime-schema-conformance.md` around lines 85-87, the doc incorrectly states that `InboxPolicy::Budgeted { max_per_tick: u32 }` maps cleanly to GraphQL `maxPerTick: Int!` — GraphQL `Int` is a signed 32-bit value with range -2,147,483,648 to 2,147,483,647, so update the text to either (a) explicitly call out the 2,147,483,647 positive ceiling and note the semantic validation required for u32 values above that, or (b) change the schema to use a constrained/custom scalar or validation rule that limits `maxPerTick` to <= 2,147,483,647 (and document that constraint) so the mapping is accurate; reference `InboxPolicy::Budgeted`, `max_per_tick` and `maxPerTick` in the update.

crates/echo-runtime-schema/src/lib.rs (1)
19-27: 🛠️ Refactor suggestion | 🟠 Major

Hide the raw `u64` field on the shared counters. `pub struct $name(pub u64)` makes the representation itself part of the cross-crate API, bypasses the checked helper surface, and leaves the public field undocumented. Before this crate becomes the canonical shared owner, make the field private and force external code onto `from_raw(...)`/`as_u64()`.

As per coding guidelines, "Every public API across crates (warp-core, warp-wasm, etc.) must carry rustdoc comments that explain intent, invariants, and usage."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/echo-runtime-schema/src/lib.rs` around lines 19-27, the public tuple field on counters created by the `logical_counter!` macro exposes the `u64` representation and bypasses helpers; change the macro so the generated `pub struct $name` has a private field (e.g., `struct $name(u64)`) and expose a documented public API: a checked constructor `from_raw(...)` and an accessor `as_u64()` (or similar), and ensure both functions have rustdoc comments describing invariants/usage; update any existing impls or derives inside `logical_counter!` to use the private field and refer to these helpers so external crates must use `from_raw`/`as_u64` rather than accessing the raw `u64` directly.

schemas/runtime/artifact-b-routing-and-admission.graphql (1)
67-69: ⚠️ Potential issue | 🟠 Major

Normalize the exact-head field name before freezing this schema. Line 68 uses `key`, while Line 88 uses `headKey` for the same exact-head coordinate. Freezing both names bakes an avoidable asymmetry into every DTO, mapper, and doc that touches this surface. Pick one name on both input and output types.

Patch

```diff
 type ExactHeadTarget {
-  key: WriterHeadKey!
+  headKey: WriterHeadKey!
 }
```

Also applies to: 84-89
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@schemas/runtime/artifact-b-routing-and-admission.graphql` around lines 67 - 69, The ExactHeadTarget GraphQL type uses inconsistent field names for the same coordinate (field "key" on ExactHeadTarget vs "headKey" elsewhere); pick one canonical name (e.g., headKey) and update all schema definitions and related input/output types so the field name is identical across ExactHeadTarget and every related type (including the other occurrence at lines 84–89 referenced in the review), then regenerate DTOs/mappers/docs so the symbol ExactHeadTarget and any usages reference the chosen field name uniformly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/echo-runtime-schema/README.md`:
- Around line 18-19: Update the README sentence about serde feature gating to
reflect the crate's default behavior: state that serde derives are feature-gated
and consumers must enable the crate's `serde` feature unless they are using the
crate with default features enabled (or unless they compile with
`default-features = false` to force explicit opt-in); reference the `serde`
feature name and the `default-features = false` flag so readers know when
explicit enabling is required.
In `@crates/echo-runtime-schema/src/lib.rs`:
- Around line 79-80: WorldlineId and HeadId newtypes currently use
#[serde(transparent)] which serializes the inner [u8;32] as a 32-element integer
array instead of a byte string; implement explicit serde::Serialize and
serde::Deserialize for WorldlineId and HeadId to serialize/deserialze as
canonical bytes(32) (e.g., as a 32-byte byte string for CBOR/JSON) and update
WriterHeadKey usage to rely on those impls; add a unit test that serializes and
deserializes WorldlineId and HeadId (and a WriterHeadKey instance) to verify
round-trip produces the spec-compliant bytes(32) representation.
In `@crates/echo-wasm-abi/src/lib.rs`:
- Around line 519-535: The test test_worldline_id_rejects_non_32_byte_payloads
only checks a 31-byte underflow; add a symmetric overlong case to ensure
deserialization rejects >32 bytes too by constructing a CBOR map with "id" set
to 33 bytes and asserting decode_cbor::<Wrapper>(&bytes).unwrap_err() contains
"32 bytes"; modify the test (which uses WorldlineId and Wrapper and calls
decode_cbor) to include both the 31-byte and 33-byte failure assertions to pin
strict 32-byte enforcement from both sides.
In `@crates/warp-wasm/src/warp_kernel.rs`:
- Around line 170-176: The helpers parse_worldline_id and parse_head_id are not
parsers and should be renamed to to_core_worldline_id and to_core_head_id (or
similar) to reflect direct conversion; change their signatures to return
WorldlineId and HeadId directly (no Result) by returning
WorldlineId::from_bytes(*worldline_id.as_bytes()) and
HeadId::from_bytes(*head_id.as_bytes()); likewise rename and change
parse_head_key (and the similar function referenced around lines 460-464) to
to_core_head_key and return the HeadKey directly instead of Result, and update
all call sites (including the call at the former line 439) to remove dead error
handling/unwraps accordingly.
In `@docs/dependency-dags.md`:
- Line 89: The docs line referencing the scheduled workflow uses incorrect
platform casing; update the sentence that mentions
`.github/workflows/refresh-dependency-dags.yml` so the platform name reads
"GitHub" instead of "GITHUB" (search for the Generator line mentioning
`scripts/generate-tasks-dag.js` and
`.github/workflows/refresh-dependency-dags.yml` in docs/dependency-dags.md and
replace the uppercase token with the proper "GitHub" casing).
In `@docs/plans/phase-8-runtime-schema-conformance.md`:
- Around line 127-133: Update the blocker paragraph to reflect the current
typed-id migration: remove or reword the claim that `HeadId`, `WorldlineId`, and
`WriterHeadKey` are still exposed as raw `Vec<u8>` and instead list only the
actual runtime schema types that remain as raw byte vectors (if any); reference
the typed ABI wrappers for `HeadId`, `WorldlineId`, and `WriterHeadKey` so the
note is narrowed to the true remaining raw-byte ids and mark the previous
statement as stale/updated to avoid misleading the audit.
In `@docs/plans/phase-8-schema-freeze-inventory.md`:
- Around line 97-105: Artifact A's identifier list omits WorldlineId, causing
inconsistency with the Phase 8 freeze candidates and downstream artifacts;
update the Artifact A identifier enumeration (the block listing HeadId,
WriterHeadKey, IntentKind, WorldlineTick, GlobalTick) to include WorldlineId so
the artifact definition matches the freeze candidate set and downstream
dependencies that reference WorldlineId.
In `@scripts/validate-runtime-schema-fragments.mjs`:
- Around line 70-88: The script currently hard-codes spawning "npx" (see the
spawnSync call that runs prettier) which breaks pnpm-only environments; change
the launcher resolution to try a cascade: first "npx", then "pnpm exec", and
finally fall back to the local node_modules/.bin prettier executable if present,
using the same args and input so pnpm users can run the check. Also fix
extractTypeNames (the function that parses GraphQL field type expressions) so it
stops parsing when it encounters directive markers (@) and ignores text inside
string literals (respecting quoted boundaries), ensuring identifiers inside
directives or strings aren't collected as type names; update the
tokenization/state machine in extractTypeNames to exit on '@' and to treat
quoted strings as atomic tokens.
- Around line 112-147: The current sanitizeLines and extractTypeNames
implementations are brittle string-based heuristics; replace them by parsing the
SDL with a GraphQL parser (e.g., graphql-js parse/visit) and derive the
information from the AST: use parse(source, { noLocation: false }) to get
DocumentNode, iterate DocumentNode.definitions to collect type/operation
definitions (replacing sanitizeLines' role) and use a visitor to collect
NamedType nodes (replacing extractTypeNames) while ignoring descriptions,
comments and directive argument content; ensure you import and use graphql's
parse/visit functions, return the same shape expected by downstream code, and
remove the fragile line-based logic in sanitizeLines and extractTypeNames.
In `@tests/hooks/test_dependency_dags.sh`:
- Around line 98-103: The test currently calls fail and then immediately cats
"$tmpdir/docs/assets/dags/issue-deps.dot", which under set -e can hide the
original assertion if the DOT file is missing; change the branch so that after
the grep condition fails you first call fail with a clear message and only
attempt to dump the DOT file conditionally (e.g. check for existence or make the
cat a non-failing operation) to avoid a secondary “file not found” error; target
the block that references "$tmpdir/docs/assets/dags/issue-deps.dot" and the grep
check (the if grep -Eq ... then ... else ... fi) and ensure the fail() call runs
before any optional diagnostic dump or that the dump is guarded so it cannot
cause the test to exit with a different error.
---
Duplicate comments:
In `@crates/echo-runtime-schema/src/lib.rs`:
- Around line 19-27: The public tuple field on counters created by the
logical_counter! macro exposes the u64 representation and bypasses helpers;
change the macro so the generated pub struct $name has a private field (e.g.,
struct $name(u64)) and expose a documented public API: a checked constructor
from_raw(...) and an accessor as_u64() (or similar) and ensure both functions
have rustdoc comments describing invariants/usage; update any existing impls or
derives inside logical_counter! to use the private field and refer to these
helpers so external crates must use from_raw/as_u64 rather than accessing the
raw u64 directly.
In `@crates/ttd-browser/src/lib.rs`:
- Around line 1206-1209: Update the wasm32 test in
wasm_tests::test_parse_helpers to stop using the old private-field accessor on
WorldlineId (wl.0) and use the public accessor instead: call
parse_worldline_id_inner (or reuse the existing test variable wl) and assert
against *wl.as_bytes() similarly to the native test; replace any occurrence of
wl.0 with *wl.as_bytes() so the opaque WorldlineId is accessed via its
as_bytes() method.
In `@docs/plans/phase-8-runtime-schema-conformance.md`:
- Around line 85-87: The doc incorrectly states that InboxPolicy::Budgeted {
max_per_tick: u32 } maps cleanly to GraphQL maxPerTick: Int! — GraphQL Int is a
signed 32-bit value with range -2,147,483,648 to 2,147,483,647, so update the
text to either (a) explicitly call out the 2,147,483,647 positive ceiling and
note the semantic validation required for u32 values above that, or (b) change
the schema to use a constrained/custom scalar or validation rule that limits
maxPerTick to <= 2,147,483,647 (and document that constraint) so the mapping is
accurate; reference InboxPolicy::Budgeted, max_per_tick and maxPerTick in the
update.
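
The u32-to-Int ceiling described in that note can be pinned with a small conversion guard. This is an illustrative sketch only; the helper name and error shape are assumptions, not the project's actual validation code:

```rust
// Sketch (hypothetical helper): GraphQL Int is a signed 32-bit value, so a
// Rust u32 max_per_tick only maps losslessly when it fits at or below
// i32::MAX (2_147_483_647). Anything above that must be rejected (or the
// schema must use a wider custom scalar).
fn validate_max_per_tick(max_per_tick: u32) -> Result<i32, String> {
    i32::try_from(max_per_tick).map_err(|_| {
        format!(
            "maxPerTick {} exceeds GraphQL Int max of {}",
            max_per_tick,
            i32::MAX
        )
    })
}

fn main() {
    // In range: converts cleanly to the GraphQL Int representation.
    assert_eq!(validate_max_per_tick(1_000), Ok(1_000));
    // Out of range: a u32 value above 2_147_483_647 cannot be an Int.
    assert!(validate_max_per_tick(3_000_000_000).is_err());
    println!("ok");
}
```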
In `@schemas/runtime/artifact-b-routing-and-admission.graphql`:
- Around line 67-69: The ExactHeadTarget GraphQL type uses inconsistent field
names for the same coordinate (field "key" on ExactHeadTarget vs "headKey"
elsewhere); pick one canonical name (e.g., headKey) and update all schema
definitions and related input/output types so the field name is identical across
ExactHeadTarget and every related type (including the other occurrence at lines
84–89 referenced in the review), then regenerate DTOs/mappers/docs so the symbol
ExactHeadTarget and any usages reference the chosen field name uniformly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 33ede45b-a8aa-4f5f-b399-91a409595a16
⛔ Files ignored due to path filters (4)
- `Cargo.lock` is excluded by `!**/*.lock`
- `docs/assets/dags/tasks-dag.dot` is excluded by `!**/*.dot`
- `docs/assets/dags/tasks-dag.svg` is excluded by `!**/*.svg`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (58)
`.gitignore`, `.vscode/settings.json`, `CHANGELOG.md`, `CONTRIBUTING.md`, `Cargo.toml`, `crates/echo-dind-harness/tests/digest_golden_vectors.rs`, `crates/echo-runtime-schema/Cargo.toml`, `crates/echo-runtime-schema/README.md`, `crates/echo-runtime-schema/src/lib.rs`, `crates/echo-wasm-abi/Cargo.toml`, `crates/echo-wasm-abi/src/codec.rs`, `crates/echo-wasm-abi/src/kernel_port.rs`, `crates/echo-wasm-abi/src/lib.rs`, `crates/ttd-browser/src/lib.rs`, `crates/warp-core/Cargo.toml`, `crates/warp-core/src/clock.rs`, `crates/warp-core/src/coordinator.rs`, `crates/warp-core/src/engine_impl.rs`, `crates/warp-core/src/head.rs`, `crates/warp-core/src/head_inbox.rs`, `crates/warp-core/src/observation.rs`, `crates/warp-core/src/playback.rs`, `crates/warp-core/src/provenance_store.rs`, `crates/warp-core/src/snapshot.rs`, `crates/warp-core/src/worldline.rs`, `crates/warp-core/src/worldline_registry.rs`, `crates/warp-core/src/worldline_state.rs`, `crates/warp-core/tests/checkpoint_fork_tests.rs`, `crates/warp-core/tests/common/mod.rs`, `crates/warp-core/tests/golden_vectors_phase0.rs`, `crates/warp-core/tests/inbox.rs`, `crates/warp-core/tests/invariant_property_tests.rs`, `crates/warp-core/tests/outputs_playback_tests.rs`, `crates/warp-core/tests/slice_theorem_proof.rs`, `crates/warp-wasm/src/lib.rs`, `crates/warp-wasm/src/warp_kernel.rs`, `docs/ROADMAP/backlog/README.md`, `docs/ROADMAP/backlog/tooling-misc.md`, `docs/dependency-dags.md`, `docs/guide/cargo-features.md`, `docs/plans/adr-0008-and-0009.md`, `docs/plans/phase-8-runtime-schema-conformance.md`, `docs/plans/phase-8-runtime-schema-mapping-contract.md`, `docs/plans/phase-8-schema-freeze-inventory.md`, `docs/spec/SPEC-0009-wasm-abi-v3.md`, `package.json`, `schemas/runtime/README.md`, `schemas/runtime/artifact-a-identifiers.graphql`, `schemas/runtime/artifact-b-routing-and-admission.graphql`, `schemas/runtime/artifact-c-playback-control.graphql`, `schemas/runtime/artifact-d-scheduler-results.graphql`, `scripts/check-append-only.js`, `scripts/generate-dependency-dags.js`, `scripts/generate-tasks-dag.js`, `scripts/validate-runtime-schema-fragments.mjs`, `scripts/verify-local.sh`, `tests/hooks/test_dependency_dags.sh`, `tests/hooks/test_runtime_schema_validation.sh`
> Serde derives are feature-gated. Consumers that need serialization support must
> enable this crate's `serde` feature explicitly.
Clarify serde feature semantics to match current defaults.
This wording implies explicit opt-in is always required, but in this PR serde is documented as default-enabled. Make this conditional on default-features = false.
Suggested patch

```diff
-Serde derives are feature-gated. Consumers that need serialization support must
-enable this crate's `serde` feature explicitly.
+Serde derives are feature-gated. The `serde` feature is enabled by default;
+consumers using `default-features = false` must enable `serde` explicitly.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/echo-runtime-schema/README.md` around lines 18 - 19, Update the README
sentence about serde feature gating to reflect the crate's default behavior:
state that serde derives are feature-gated and consumers must enable the crate's
`serde` feature unless they are using the crate with default features enabled
(or unless they compile with `default-features = false` to force explicit
opt-in); reference the `serde` feature name and the `default-features = false`
flag so readers know when explicit enabling is required.
```rust
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "serde", serde(transparent))]
```
🧩 Analysis chain
🌐 Web query:
> Does `#[serde(transparent)]` on a newtype wrapping `[u8; 32]` serialize as a byte string or as a sequence of 32 integers by default?
💡 Result:
No, #[serde(transparent)] on a newtype wrapping [u8; 32] serializes as a sequence of 32 integers (array in JSON), not as a byte string, by default.
Citations:
- 1: https://serde.rs/container-attrs.html
- 2: https://serde.rs/json.html
- 3: https://stackoverflow.com/questions/67168397/how-does-serde-bincode-serialize-byte-arrays
- 4: https://docs.rs/serde-encoded-bytes
🏁 Script executed:

```shell
# First, find and examine the lib.rs file to see the actual definitions at lines 79-80 and 100-101
cat -n crates/echo-runtime-schema/src/lib.rs | sed -n '75,105p'
```

Repository: flyingrobots/echo
Length of output: 1322

🏁 Script executed:

```shell
# Search for WorldlineId and HeadId definitions to understand their structure
rg -A 5 "struct WorldlineId|struct HeadId" crates/echo-runtime-schema/src/
```

Repository: flyingrobots/echo
Length of output: 974

🏁 Script executed:

```shell
# Look for any bytes(32) contract references in the codebase
rg "bytes\(32\)" --type rust
```

Repository: flyingrobots/echo
Length of output: 43

🏁 Script executed:

```shell
# Search for WriterHeadKey to see what it contains
rg -B 2 -A 5 "struct WriterHeadKey|WriterHeadKey" crates/
```

Repository: flyingrobots/echo
Length of output: 50373

🏁 Script executed:

```shell
# Check for any serde-related tests or specifications about serialization format
rg -B 2 -A 5 "bytes\(" crates/echo-runtime-schema/src/
```

Repository: flyingrobots/echo
Length of output: 3208

🏁 Script executed:

```shell
# Look for protobuf files
fd -e proto
```

Repository: flyingrobots/echo
Length of output: 43

🏁 Script executed:

```shell
# Search for any references to "bytes" in comments or code that might indicate a schema
rg -i "bytes.*32|schema.*32" --type rust --type toml
```

Repository: flyingrobots/echo
Length of output: 16793

🏁 Script executed:

```shell
# Check for any protocol or schema documentation
find . -name "*.md" -type f -exec grep -l "bytes(32)" {} \;
```

Repository: flyingrobots/echo
Length of output: 97

🏁 Script executed:

```shell
# Look for RuntimeIdBytes definition to understand the actual type
rg -B 3 -A 3 "type RuntimeIdBytes|RuntimeIdBytes\s*="
```

Repository: flyingrobots/echo
Length of output: 509

🏁 Script executed:

```shell
cat -n docs/spec/SPEC-0009-wasm-abi-v3.md | grep -A 5 -B 5 "bytes(32)\|bytes\(32\)"
```

Repository: flyingrobots/echo
Length of output: 9569

🏁 Script executed:

```shell
# Also search for any mention of serde contract in the spec
rg "serde|Serialize|Deserialize" docs/spec/ --type md
```

Repository: flyingrobots/echo
Length of output: 160

🏁 Script executed:

```shell
# Check the full lib.rs to see if WriterHeadKey has its own serde derive
cat -n crates/echo-runtime-schema/src/lib.rs | sed -n '1,200p'
```

Repository: flyingrobots/echo
Length of output: 8010
Add explicit byte-oriented Serialize/Deserialize impls to WorldlineId and HeadId, and test serde round-trip.
The #[serde(transparent)] on these newtypes wrapping [u8; 32] serializes as a 32-element integer array by default, not as bytes(32). This violates the ABI spec (SPEC-0009-wasm-abi-v3.md, lines 184–185), which requires these types to serialize as byte strings. WriterHeadKey inherits this mismatch when its fields serialize.
Implement custom Serialize and Deserialize for both ID types to produce canonical bytes(32) in CBOR/JSON, then add a test verifying the round-trip matches the spec format.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/echo-runtime-schema/src/lib.rs` around lines 79 - 80, WorldlineId and
HeadId newtypes currently use #[serde(transparent)] which serializes the inner
[u8;32] as a 32-element integer array instead of a byte string; implement
explicit serde::Serialize and serde::Deserialize for WorldlineId and HeadId to
serialize/deserialze as canonical bytes(32) (e.g., as a 32-byte byte string for
CBOR/JSON) and update WriterHeadKey usage to rely on those impls; add a unit
test that serializes and deserializes WorldlineId and HeadId (and a
WriterHeadKey instance) to verify round-trip produces the spec-compliant
bytes(32) representation.
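
For intuition on why the representation matters on the wire, the following sketch hand-encodes both CBOR shapes for a 32-byte id: the byte-string form the spec requires versus the integer-array form that `#[serde(transparent)]` over `[u8; 32]` produces by default. The helper names are illustrative and not part of the crate:

```rust
// Hypothetical helpers, not the crate's real codec.
// CBOR byte string (major type 2): header 0x58 (one-byte length) + 32 bytes.
fn cbor_byte_string(id: &[u8; 32]) -> Vec<u8> {
    let mut out = vec![0x58, 32];
    out.extend_from_slice(id);
    out
}

// CBOR array of 32 unsigned ints (major type 4): header 0x98 (one-byte
// length), then each element; values >= 24 need their own uint8 header.
fn cbor_int_array(id: &[u8; 32]) -> Vec<u8> {
    let mut out = vec![0x98, 32];
    for &b in id {
        if b < 24 {
            out.push(b);
        } else {
            out.push(0x18); // uint8 follows
            out.push(b);
        }
    }
    out
}

fn main() {
    let id = [0xABu8; 32];
    let bytes = cbor_byte_string(&id);
    let array = cbor_int_array(&id);
    assert_eq!(bytes.len(), 34); // 2-byte header + 32-byte payload
    assert_eq!(array.len(), 66); // every 0xAB element costs two bytes
    println!(
        "byte string: {} bytes, int array: {} bytes",
        bytes.len(),
        array.len()
    );
}
```

Beyond the size difference, the two encodings are distinct CBOR values, so a decoder pinned to bytes(32) will reject the array form outright — which is exactly the mismatch the review flags.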
```rust
fn test_worldline_id_rejects_non_32_byte_payloads() {
    use crate::kernel_port::WorldlineId;
    use ciborium::value::Value;

    #[derive(Debug, PartialEq, Eq, Deserialize)]
    struct Wrapper {
        id: WorldlineId,
    }

    let bytes = encode_value(&Value::Map(vec![(
        Value::Text("id".into()),
        Value::Bytes(vec![9u8; 31]),
    )]))
    .unwrap();

    let err = decode_cbor::<Wrapper>(&bytes).unwrap_err();
    assert!(err.to_string().contains("32 bytes"));
```
🧹 Nitpick | 🔵 Trivial
Only the short opaque-id failure is covered.
This locks the 31-byte underflow path, but a decoder that accidentally accepts or truncates 33-byte payloads would still pass. Add the symmetric overlong case so strict 32-byte enforcement is pinned from both sides.
♻️ Proposed test addition
```diff
 let err = decode_cbor::<Wrapper>(&bytes).unwrap_err();
 assert!(err.to_string().contains("32 bytes"));
+
+let bytes = encode_value(&Value::Map(vec![(
+    Value::Text("id".into()),
+    Value::Bytes(vec![9u8; 33]),
+)]))
+.unwrap();
+
+let err = decode_cbor::<Wrapper>(&bytes).unwrap_err();
+assert!(err.to_string().contains("32 bytes"));
```

📝 Committable suggestion
+ assert!(err.to_string().contains("32 bytes"));📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn test_worldline_id_rejects_non_32_byte_payloads() {
    use crate::kernel_port::WorldlineId;
    use ciborium::value::Value;

    #[derive(Debug, PartialEq, Eq, Deserialize)]
    struct Wrapper {
        id: WorldlineId,
    }

    let bytes = encode_value(&Value::Map(vec![(
        Value::Text("id".into()),
        Value::Bytes(vec![9u8; 31]),
    )]))
    .unwrap();

    let err = decode_cbor::<Wrapper>(&bytes).unwrap_err();
    assert!(err.to_string().contains("32 bytes"));

    let bytes = encode_value(&Value::Map(vec![(
        Value::Text("id".into()),
        Value::Bytes(vec![9u8; 33]),
    )]))
    .unwrap();

    let err = decode_cbor::<Wrapper>(&bytes).unwrap_err();
    assert!(err.to_string().contains("32 bytes"));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/echo-wasm-abi/src/lib.rs` around lines 519 - 535, The test
test_worldline_id_rejects_non_32_byte_payloads only checks a 31-byte underflow;
add a symmetric overlong case to ensure deserialization rejects >32 bytes too by
constructing a CBOR map with "id" set to 33 bytes and asserting
decode_cbor::<Wrapper>(&bytes).unwrap_err() contains "32 bytes"; modify the test
(which uses WorldlineId and Wrapper and calls decode_cbor) to include both the
31-byte and 33-byte failure assertions to pin strict 32-byte enforcement from
both sides.
```rust
fn parse_worldline_id(worldline_id: &AbiWorldlineId) -> WorldlineId {
    WorldlineId::from_bytes(*worldline_id.as_bytes())
}

fn parse_head_id(head_id: &AbiHeadId) -> HeadId {
    HeadId::from_bytes(*head_id.as_bytes())
}
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
rg -n -C2 'fn parse_worldline_id|fn parse_head_id|fn parse_head_key|parse_head_key\(' crates/warp-wasm/src/warp_kernel.rs
```

Repository: flyingrobots/echo
Length of output: 862

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Get full content of parse_head_key function
echo "=== parse_head_key full implementation ==="
sed -n '460,465p' crates/warp-wasm/src/warp_kernel.rs

# Search for all calls to parse_head_key
echo ""
echo "=== All calls to parse_head_key ==="
rg 'parse_head_key' crates/warp-wasm/src/warp_kernel.rs -n
```

Repository: flyingrobots/echo
Length of output: 545

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Search for any external references to these helpers
echo "=== External references to parse_worldline_id ==="
rg 'parse_worldline_id' crates/ --type rust | grep -v 'crates/warp-wasm/src/warp_kernel.rs'
echo ""
echo "=== External references to parse_head_id ==="
rg 'parse_head_id' crates/ --type rust | grep -v 'crates/warp-wasm/src/warp_kernel.rs'
echo ""
echo "=== External references to parse_head_key ==="
rg 'parse_head_key' crates/ --type rust | grep -v 'crates/warp-wasm/src/warp_kernel.rs'
```

Repository: flyingrobots/echo
Length of output: 1321
These helpers are conversions, not parsers.
AbiWorldlineId and AbiHeadId carry fixed-width wrappers. Naming these as parse_* and returning Result from parse_head_key advertise a failure mode that cannot happen. Rename to to_core_* and return the value directly to eliminate dead error-handling plumbing at the call site (line 439).
♻️ Proposed cleanup
```diff
-    fn parse_worldline_id(worldline_id: &AbiWorldlineId) -> WorldlineId {
+    fn to_core_worldline_id(worldline_id: &AbiWorldlineId) -> WorldlineId {
         WorldlineId::from_bytes(*worldline_id.as_bytes())
     }

-    fn parse_head_id(head_id: &AbiHeadId) -> HeadId {
+    fn to_core_head_id(head_id: &AbiHeadId) -> HeadId {
         HeadId::from_bytes(*head_id.as_bytes())
     }
@@
-    fn parse_head_key(head: &AbiWriterHeadKey) -> Result<WriterHeadKey, AbiError> {
-        Ok(WriterHeadKey {
-            worldline_id: Self::parse_worldline_id(&head.worldline_id),
-            head_id: Self::parse_head_id(&head.head_id),
-        })
+    fn to_core_head_key(head: &AbiWriterHeadKey) -> WriterHeadKey {
+        WriterHeadKey {
+            worldline_id: Self::to_core_worldline_id(&head.worldline_id),
+            head_id: Self::to_core_head_id(&head.head_id),
+        }
     }
@@
-    let key = Self::parse_head_key(&head)?;
+    let key = Self::to_core_head_key(&head);
```

Also applies to: 460-464
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/warp-wasm/src/warp_kernel.rs` around lines 170 - 176, The helpers
parse_worldline_id and parse_head_id are not parsers and should be renamed to
to_core_worldline_id and to_core_head_id (or similar) to reflect direct
conversion; change their signatures to return WorldlineId and HeadId directly
(no Result) by returning WorldlineId::from_bytes(*worldline_id.as_bytes()) and
HeadId::from_bytes(*head_id.as_bytes()); likewise rename and change
parse_head_key (and the similar function referenced around lines 460-464) to
to_core_head_key and return the HeadKey directly instead of Result, and update
all call sites (including the call at the former line 439) to remove dead error
handling/unwraps accordingly.
```diff
-- Source data: `TASKS-DAG.md`
-- Generator: `scripts/generate-tasks-dag.js` (scheduled by `.github/workflows/refresh-dependency-dags.yml` to keep the rendered output aligned with `TASKS-DAG.md`)
+- Source data: `docs/archive/tasks/TASKS-DAG.md`
+- Generator: `scripts/generate-tasks-dag.js` (scheduled by `.github/workflows/refresh-dependency-dags.yml` to keep the rendered output aligned with `docs/archive/tasks/TASKS-DAG.md`)
```
Fix platform name casing in workflow reference.
GITHUB should be GitHub in the workflow sentence.
Suggested patch

```diff
-- Generator: `scripts/generate-tasks-dag.js` (scheduled by `.github/workflows/refresh-dependency-dags.yml` to keep the rendered output aligned with `docs/archive/tasks/TASKS-DAG.md`)
+- Generator: `scripts/generate-tasks-dag.js` (scheduled by `.github/workflows/refresh-dependency-dags.yml` to keep the rendered output aligned with `docs/archive/tasks/TASKS-DAG.md`)
```

As per coding guidelines, docs/**/*.md must prioritize documentation accuracy and stale-reference correctness.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~89-~89: The official name of this software platform is spelled with a capital "H".
Context: `...scripts/generate-tasks-dag.js` (scheduled by `.github/workflows/refresh-dependency-dags.yml`) ...
(GITHUB)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/dependency-dags.md` at line 89, The docs line referencing the scheduled
workflow uses incorrect platform casing; update the sentence that mentions
`.github/workflows/refresh-dependency-dags.yml` so the platform name reads
"GitHub" instead of "GITHUB" (search for the Generator line mentioning
`scripts/generate-tasks-dag.js` and
`.github/workflows/refresh-dependency-dags.yml` in docs/dependency-dags.md and
replace the uppercase token with the proper "GitHub" casing).
> ### 1. Opaque ids should not dissolve into raw byte vectors at the ABI edge
>
> `HeadId`, `WorldlineId`, and `WriterHeadKey` are frozen as typed opaque runtime
> concepts. The current ABI still exposes some of them as raw `Vec<u8>` fields.
>
> That is workable for hand-written DTOs, but it is the wrong source-of-truth
> shape for generated runtime-schema types.
This blocker text is stale after the typed-id migration.
The branch now carries typed ABI wrappers for HeadId, WorldlineId, and WriterHeadKey. Saying the “current ABI still exposes some of them as raw Vec<u8> fields” misstates the state of this branch and makes the audit inaccurate. Narrow this note to whatever raw-byte ids actually remain.
As per coding guidelines, "Echo is a deterministic simulation engine. Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/phase-8-runtime-schema-conformance.md` around lines 127 - 133,
Update the blocker paragraph to reflect the current typed-id migration: remove
or reword the claim that `HeadId`, `WorldlineId`, and `WriterHeadKey` are still
exposed as raw `Vec<u8>` and instead list only the actual runtime schema types
that remain as raw byte vectors (if any); reference the typed ABI wrappers for
`HeadId`, `WorldlineId`, and `WriterHeadKey` so the note is narrowed to the true
remaining raw-byte ids and mark the previous statement as stale/updated to avoid
misleading the audit.
> ### Artifact A: Runtime identifiers and logical counters
>
> - `HeadId`
> - `WriterHeadKey`
> - `IntentKind`
> - `WorldlineTick`
> - `GlobalTick`
>
> Source file: `schemas/runtime/artifact-a-identifiers.graphql`
Artifact A is missing WorldlineId.
This section reads like the concrete contents of Artifact A, but WorldlineId is omitted even though the same document lists it as a Phase 8 freeze candidate and later artifacts depend on it. That leaves the artifact breakdown internally inconsistent.
Patch

```diff
 ### Artifact A: Runtime identifiers and logical counters

 - `HeadId`
+- `WorldlineId`
 - `WriterHeadKey`
 - `IntentKind`
 - `WorldlineTick`
 - `GlobalTick`
```

As per coding guidelines, "Documentation accuracy matters — especially anything touching determinism guarantees, hash stability, or canonical ordering. Flag factual errors and stale cross-references."
```markdown
### Artifact A: Runtime identifiers and logical counters

- `HeadId`
- `WorldlineId`
- `WriterHeadKey`
- `IntentKind`
- `WorldlineTick`
- `GlobalTick`

Source file: `schemas/runtime/artifact-a-identifiers.graphql`
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/plans/phase-8-schema-freeze-inventory.md` around lines 97 - 105,
Artifact A's identifier list omits WorldlineId, causing inconsistency with the
Phase 8 freeze candidates and downstream artifacts; update the Artifact A
identifier enumeration (the block listing HeadId, WriterHeadKey, IntentKind,
WorldlineTick, GlobalTick) to include WorldlineId so the artifact definition
matches the freeze candidate set and downstream dependencies that reference
WorldlineId.
```js
const result = spawnSync(
  "npx",
  [
    "prettier",
    "--parser",
    "graphql",
    "--stdin-filepath",
    syntheticFilePath,
  ],
  {
    encoding: "utf8",
    input: source,
  },
);

if (result.error) {
  throw new Error(
    `failed to run npx prettier for schema validation: ${result.error.message}`,
  );
```
🧩 Analysis chain
🏁 Script executed:

```shell
git ls-files | grep -E '(package\.json|validate-runtime-schema-fragments)' | head -20
```

Repository: flyingrobots/echo
Length of output: 300

🏁 Script executed:

```shell
cat -n package.json | grep -A 2 -B 2 'schema:runtime'
```

Repository: flyingrobots/echo
Length of output: 281

🏁 Script executed:

```shell
wc -l scripts/validate-runtime-schema-fragments.mjs
```

Repository: flyingrobots/echo
Length of output: 110

🏁 Script executed:

```shell
cat -n scripts/validate-runtime-schema-fragments.mjs
```

Repository: flyingrobots/echo
Length of output: 11041
Hard-coded npx breaks pnpm entrypoint; SDL parser cannot handle directives.
Line 71 hard-codes npx as the Prettier launcher. On systems with pnpm but no npm, this fails immediately with spawn npx ENOENT, breaking the documented pnpm schema:runtime:check path. Resolve Prettier through a cascade (npx → pnpm exec → node_modules/.bin) instead.
Additionally, the SDL validator incorrectly parses field types. extractTypeNames() at line 173 extracts all identifiers from the type expression, including those inside directives and string literals. A field like name: String @deprecated(reason: "old") will extract and validate deprecated, reason, and old as type names, causing false rejections of valid GraphQL. The parser must stop at directive markers (@) and respect string literal boundaries.
Prettier launcher fix

```diff
+function resolvePrettierInvocation() {
+  const candidates = [
+    { command: "npx", prefix: ["prettier"] },
+    { command: "pnpm", prefix: ["exec", "prettier"] },
+    {
+      command: join(
+        REPO_ROOT,
+        "node_modules",
+        ".bin",
+        process.platform === "win32" ? "prettier.cmd" : "prettier",
+      ),
+      prefix: [],
+    },
+  ];
+
+  for (const candidate of candidates) {
+    const probe = spawnSync(
+      candidate.command,
+      [...candidate.prefix, "--version"],
+      { encoding: "utf8" },
+    );
+    if (!probe.error && probe.status === 0) {
+      return candidate;
+    }
+  }
+
+  throw new Error(
+    "failed to locate prettier via npx, pnpm exec, or node_modules/.bin",
+  );
+}
+
 function runPrettierCheck(files) {
   const formattingErrors = [];
+  const prettier = resolvePrettierInvocation();
   for (const file of files) {
     const source = readFileSync(file, "utf8");
@@
     const result = spawnSync(
-      "npx",
+      prettier.command,
       [
-        "prettier",
+        ...prettier.prefix,
         "--parser",
         "graphql",
         "--stdin-filepath",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/validate-runtime-schema-fragments.mjs` around lines 70 - 88, The
script currently hard-codes spawning "npx" (see the spawnSync call that runs
prettier) which breaks pnpm-only environments; change the launcher resolution to
try a cascade: first "npx", then "pnpm exec", and finally fall back to the local
node_modules/.bin prettier executable if present, using the same args and input
so pnpm users can run the check. Also fix extractTypeNames (the function that
parses GraphQL field type expressions) so it stops parsing when it encounters
directive markers (@) and ignores text inside string literals (respecting quoted
boundaries), ensuring identifiers inside directives or strings aren't collected
as type names; update the tokenization/state machine in extractTypeNames to exit
on '@' and to treat quoted strings as atomic tokens.
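
The state machine that prompt describes can be sketched compactly. This is a Rust port for illustration only — the real `extractTypeNames` lives in the JavaScript validator, and the function below is an assumption about the intended fix, not the project's code:

```rust
// Sketch of the fixed tokenizer: collect identifiers from a GraphQL type
// expression, stopping at the first directive marker ('@') and treating
// quoted strings as atomic tokens whose contents are never collected.
fn extract_type_names(type_expression: &str) -> Vec<String> {
    let mut names = Vec::new();
    let mut current = String::new();
    let mut chars = type_expression.chars();

    while let Some(c) = chars.next() {
        if c == '@' {
            break; // everything after a directive marker is not a type name
        }
        if c == '"' {
            // consume the quoted string without emitting identifiers
            while let Some(inner) = chars.next() {
                if inner == '\\' {
                    chars.next(); // skip escaped character
                } else if inner == '"' {
                    break;
                }
            }
            continue;
        }
        if c.is_ascii_alphanumeric() || c == '_' {
            current.push(c);
        } else if !current.is_empty() {
            names.push(std::mem::take(&mut current));
        }
    }
    if !current.is_empty() {
        names.push(current);
    }
    names
}

fn main() {
    // Only the real type name survives; the directive and its string
    // argument are ignored.
    let names = extract_type_names("[String!]! @deprecated(reason: \"old\")");
    assert_eq!(names, vec!["String"]);
    println!("{:?}", names);
}
```

With the original regex-only approach, the same input would also yield `deprecated`, `reason`, and `old`, which is exactly the false-rejection mode the review describes.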
```js
function sanitizeLines(source) {
  const lines = source.split(/\r?\n/u);
  const sanitized = [];
  let inDescription = false;

  for (const rawLine of lines) {
    let line = rawLine;

    if (inDescription) {
      if (line.includes('"""')) {
        inDescription = false;
      }
      sanitized.push("");
      continue;
    }

    const firstTripleQuote = line.indexOf('"""');
    if (firstTripleQuote !== -1) {
      const secondTripleQuote = line.indexOf('"""', firstTripleQuote + 3);
      if (secondTripleQuote === -1) {
        inDescription = true;
      }
      sanitized.push("");
      continue;
    }

    const commentStart = line.indexOf("#");
    if (commentStart !== -1) {
      line = line.slice(0, commentStart);
    }

    sanitized.push(line.trimEnd());
  }

  return sanitized;
}
```
🧩 Analysis chain
🏁 Script executed:
# First, let's see the full file to understand context
wc -l scripts/validate-runtime-schema-fragments.mjs
Repository: flyingrobots/echo
Length of output: 110
🏁 Script executed:
# Read the entire file to see lines 112-147 and 172-275
cat -n scripts/validate-runtime-schema-fragments.mjs
🏁 Script executed:
# Find schema files to see what patterns are actually used
find . -name "*.graphql" -type f | head -20Repository: flyingrobots/echo
Length of output: 277
🏁 Script executed:
# Look for test files related to this validation script
find . -name "*test*" -o -name "*spec*" | grep -i schemaRepository: flyingrobots/echo
Length of output: 108
🏁 Script executed:
# Examine the actual schema files to understand patterns
for file in ./schemas/runtime/*.graphql; do
  echo "=== $file ==="
  cat -n "$file"
done
🏁 Script executed:
# Look at the test file
cat -n ./tests/hooks/test_runtime_schema_validation.sh
🏁 Script executed:
# Test the extractTypeNames function with the examples mentioned in the review
node <<'EOF'
function extractTypeNames(typeExpression) {
  return typeExpression.match(/[A-Za-z_][A-Za-z0-9_]*/gu) ?? [];
}
console.log("Test 1: field with directive");
console.log("Input: 'String @deprecated(reason: \"old\")'");
console.log("Output:", extractTypeNames('String @deprecated(reason: "old")'));
console.log();
console.log("Test 2: inline description then type");
console.log("For '\"\"\"desc\"\"\" type Foo', sanitizeLines would push empty string");
console.log("Then next line would be parsed");
console.log();
console.log("Test 3: scalar with specifiedBy directive");
console.log("Input: 'scalar PositiveInt @specifiedBy(url: \"...\")'");
console.log("Output:", extractTypeNames('scalar PositiveInt @specifiedBy(url: "...")'));
EOF
🏁 Script executed:
# Check if any of the actual schema files use directives or inline descriptions
grep -n '@' schemas/runtime/*.graphql
🏁 Script executed:
# Check for any inline description patterns
grep -E '""".*""".*\b(type|scalar|input|enum|union)\b' schemas/runtime/*.graphqlRepository: flyingrobots/echo
Length of output: 43
🏁 Script executed:
# Test what happens with an inline description pattern
node <<'EOF'
function sanitizeLines(source) {
  const lines = source.split(/\r?\n/u);
  const sanitized = [];
  let inDescription = false;
  for (const rawLine of lines) {
    let line = rawLine;
    if (inDescription) {
      if (line.includes('"""')) {
        inDescription = false;
      }
      sanitized.push("");
      continue;
    }
    const firstTripleQuote = line.indexOf('"""');
    if (firstTripleQuote !== -1) {
      const secondTripleQuote = line.indexOf('"""', firstTripleQuote + 3);
      if (secondTripleQuote === -1) {
        inDescription = true;
      }
      sanitized.push("");
      continue;
    }
    const commentStart = line.indexOf("#");
    if (commentStart !== -1) {
      line = line.slice(0, commentStart);
    }
    sanitized.push(line.trimEnd());
  }
  return sanitized;
}
console.log("Test: inline description");
const source = '"""description""" type Foo { id: String! }';
const result = sanitizeLines(source);
console.log("Input:", source);
console.log("Output:", JSON.stringify(result));
console.log();
console.log("Test: multiline description then type on next line");
const source2 = `"""
description
"""
type Foo { id: String! }`;
const result2 = sanitizeLines(source2);
console.log("Input:", source2);
console.log("Output:", JSON.stringify(result2));
EOF
🏁 Script executed:
# Check if prettier would reject inline descriptions or directives
# Create a test file with problematic patterns
cat > /tmp/test-schema.graphql << 'EOF'
"""inline description""" type TestInline {
id: String!
}
type TestDirective {
field: String `@deprecated`(reason: "old")
}
EOF
# See what prettier says about it
npx prettier --parser graphql --check /tmp/test-schema.graphql 2>&1 || trueRepository: flyingrobots/echo
Length of output: 214
🏁 Script executed:
# Let's verify what happens if we bypass prettier and directly process a problematic schema
node <<'EOF'
function sanitizeLines(source) {
  const lines = source.split(/\r?\n/u);
  const sanitized = [];
  let inDescription = false;
  for (const rawLine of lines) {
    let line = rawLine;
    if (inDescription) {
      if (line.includes('"""')) {
        inDescription = false;
      }
      sanitized.push("");
      continue;
    }
    const firstTripleQuote = line.indexOf('"""');
    if (firstTripleQuote !== -1) {
      const secondTripleQuote = line.indexOf('"""', firstTripleQuote + 3);
      if (secondTripleQuote === -1) {
        inDescription = true;
      }
      sanitized.push("");
      continue;
    }
    const commentStart = line.indexOf("#");
    if (commentStart !== -1) {
      line = line.slice(0, commentStart);
    }
    sanitized.push(line.trimEnd());
  }
  return sanitized;
}
function extractTypeNames(typeExpression) {
  return typeExpression.match(/[A-Za-z_][A-Za-z0-9_]*/gu) ?? [];
}
// Simulate what validateReferences would do
const schemaWithDirective = `type User {
  field: String @deprecated(reason: "old")
}`;
const lines = sanitizeLines(schemaWithDirective);
console.log("Schema with directive:");
console.log("Original:");
console.log(schemaWithDirective);
console.log("\nAfter sanitizeLines:");
lines.forEach((l, i) => console.log(`  [${i}]: "${l}"`));
// Check what type names are extracted
const typeExpr = "String @deprecated(reason: \"old\")";
const extracted = extractTypeNames(typeExpr);
console.log("\nextractTypeNames on 'String @deprecated(reason: \"old\")':");
console.log("  Returns:", extracted);
console.log("  Problem: 'deprecated', 'reason', 'old' are not types but would be validated as such");
EOF
🏁 Script executed:
# Check if there are any other validation or test files that might indicate expected behavior
find . -type f \( -name "*.mjs" -o -name "*.js" -o -name "*.test.*" \) | xargs grep -l "sanitizeLines\|extractTypeNames" 2>/dev/null
Replace string-based SDL processing with proper GraphQL AST parsing.
The `sanitizeLines()` and `extractTypeNames()` functions use regex and string manipulation that fails on valid GraphQL constructs. Directives like `field: String @deprecated(reason: "...")` cause `extractTypeNames()` to incorrectly extract directive names/arguments as type references (`['String', 'deprecated', 'reason']`). Inline descriptions like `"""desc""" type Foo` blank out the entire definition line.
While prettier's formatting enforcement currently prevents these patterns in checked-in schemas, the validation logic is fragile. Use a proper GraphQL parser (via graphql-js AST or similar) to extract definitions and NamedType references instead of string surgery. This eliminates edge cases and makes the validator deterministic.
Also applies to: 172-275
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/validate-runtime-schema-fragments.mjs` around lines 112 - 147, The
current sanitizeLines and extractTypeNames implementations are brittle
string-based heuristics; replace them by parsing the SDL with a GraphQL parser
(e.g., graphql-js parse/visit) and derive the information from the AST: use
parse(source, { noLocation: false }) to get DocumentNode, iterate
DocumentNode.definitions to collect type/operation definitions (replacing
sanitizeLines' role) and use a visitor to collect NamedType nodes (replacing
extractTypeNames) while ignoring descriptions, comments and directive argument
content; ensure you import and use graphql's parse/visit functions, return the
same shape expected by downstream code, and remove the fragile line-based logic
in sanitizeLines and extractTypeNames.
if grep -Eq 'i1 -> i2 \[[^]]*color="red"' "$tmpdir/docs/assets/dags/issue-deps.dot"; then
  pass "generator reads archived TASKS-DAG source by default"
else
  fail "generator should render a reality-only edge from the archived TASKS-DAG source"
  cat "$tmpdir/docs/assets/dags/issue-deps.dot"
fi
Don’t let the diagnostic dump mask the real failure.
If the generator exits 0 but never writes issue-deps.dot, grep drops into this branch and cat then exits non-zero under set -e. That replaces the real assertion failure with a secondary “file not found” error and skips the clean FAIL summary.
Patch
else
fail "generator should render a reality-only edge from the archived TASKS-DAG source"
- cat "$tmpdir/docs/assets/dags/issue-deps.dot"
+ if [[ -f "$tmpdir/docs/assets/dags/issue-deps.dot" ]]; then
+ cat "$tmpdir/docs/assets/dags/issue-deps.dot"
+ else
+ cat "$output_file"
+ fi
fi
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if grep -Eq 'i1 -> i2 \[[^]]*color="red"' "$tmpdir/docs/assets/dags/issue-deps.dot"; then
  pass "generator reads archived TASKS-DAG source by default"
else
  fail "generator should render a reality-only edge from the archived TASKS-DAG source"
  if [[ -f "$tmpdir/docs/assets/dags/issue-deps.dot" ]]; then
    cat "$tmpdir/docs/assets/dags/issue-deps.dot"
  else
    cat "$output_file"
  fi
fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/hooks/test_dependency_dags.sh` around lines 98 - 103, The test
currently calls fail and then immediately cats
"$tmpdir/docs/assets/dags/issue-deps.dot", which under set -e can hide the
original assertion if the DOT file is missing; change the branch so that after
the grep condition fails you first call fail with a clear message and only
attempt to dump the DOT file conditionally (e.g. check for existence or make the
cat a non-failing operation) to avoid a secondary “file not found” error; target
the block that references "$tmpdir/docs/assets/dags/issue-deps.dot" and the grep
check (the if grep -Eq ... then ... else ... fi) and ensure the fail() call runs
before any optional diagnostic dump or that the dump is guarded so it cannot
cause the test to exit with a different error.
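The guard this prompt asks for can be factored into a small helper so that dumping a missing diagnostic file never aborts the run under `set -e`. The helper name and message below are illustrative, not taken from the repo:

```shell
#!/usr/bin/env bash
set -e

# Dump a diagnostic file only when it exists, so `cat` on a missing file
# cannot abort the test run under `set -e`.
dump_if_present() {
  if [ -f "$1" ]; then
    cat "$1"
  else
    echo "diagnostic file missing: $1" >&2
  fi
}

dump_if_present /tmp/definitely-missing-issue-deps.dot
echo "script continued past missing diagnostic"
```

Calling the helper before or after `fail` is then safe either way, since the helper always returns 0 and only the `fail` call determines the test's verdict.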
Summary

- schemas/runtime/ with validation and conformance docs
- echo-runtime-schema as the shared home for the frozen logical counters and core runtime ids/key

Verification
Summary by CodeRabbit

- New Features
- Refactor
- Documentation
- Tests
- Chores