feat(cli): locks for agents running dev and build commands (#1265)
Conversation
📝 Walkthrough: Introduces a file-based lock mechanism. Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes. 🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning).
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nuxi/src/commands/build.ts`:
- Around lines 84-94: The build lock written by writeLock(...) is removed by
clearBuildDir() because clearDir(...) only preserves
['cache','analyze','nuxt.json'] and omits 'nuxt.lock', so add 'nuxt.lock' to the
preservation list inside clearBuildDir()/clearDir call or alternatively move the
call to writeLock(...) to after clearBuildDir() runs; update references in this
file to use checkLock(...)/formatLockError(...) as-is to detect existing locks
before writing, and ensure lockCleanup remains set to the writeLock(...) return
value when you move it.
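The ordering fix described above can be sketched as follows. Note that `clearBuildDir` and `writeLock` here are simplified stand-ins written for illustration, not nuxi's actual implementations:

```typescript
import { existsSync, mkdirSync, rmSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

// Simplified stand-in for clearBuildDir/clearDir (the real helper keeps a
// preservation list such as ['cache', 'analyze', 'nuxt.json']).
function clearBuildDir(buildDir: string): void {
  rmSync(buildDir, { recursive: true, force: true })
  mkdirSync(buildDir, { recursive: true })
}

// Simplified stand-in for writeLock: creates nuxt.lock and returns a cleanup fn.
function writeLock(buildDir: string): () => void {
  const lockPath = join(buildDir, 'nuxt.lock')
  writeFileSync(lockPath, JSON.stringify({ pid: process.pid, command: 'build' }))
  return () => rmSync(lockPath, { force: true })
}

const buildDir = join(tmpdir(), `nuxt-lock-order-demo-${process.pid}`)
clearBuildDir(buildDir)                  // 1. clear the build directory first...
const lockCleanup = writeLock(buildDir)  // 2. ...then take the lock, so clearing cannot delete it
if (!existsSync(join(buildDir, 'nuxt.lock'))) {
  throw new Error('nuxt.lock should survive when written after the clear')
}
```

Either fix works (preserve `nuxt.lock` in the clear, or reorder the calls); reordering avoids touching the preservation list at all.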
In `@packages/nuxi/src/dev/utils.ts`:
- Around lines 488-490: The close() method removes nuxt.lock too early, allowing
new dev/build to start while the current process is still reloading; modify
NuxtDevServer.close() so it does not remove the lock up-front but instead waits
until the server/listeners and reload path are fully stopped (i.e., stop/await
the HTTP server, remove any file watchers/listeners and run `#lockCleanup` only
after those shutdown steps complete). Specifically, in NuxtDevServer.close() and
the reload path used by NuxtDevServer.#load(), reorder shutdown: first stop
listeners/servers and await their completion, then invoke this.#lockCleanup() to
remove nuxt.lock so no concurrent nuxi dev/build can slip past during reload.
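The teardown order requested above can be sketched roughly like this; the class and method names are illustrative, not nuxi's actual API:

```typescript
// Sketch of the requested shutdown order: stop listeners/servers first,
// release nuxt.lock last, so no concurrent nuxi dev/build can start
// while the current process is still shutting down or reloading.
class DevServerSketch {
  order: string[] = []

  // stand-in for `#lockCleanup` (removes nuxt.lock in the real code)
  private lockCleanup = (): void => {
    this.order.push('lock-released')
  }

  async close(): Promise<void> {
    await this.stopListeners() // 1. await full HTTP server / watcher shutdown
    this.lockCleanup()         // 2. only then remove the lock
  }

  private async stopListeners(): Promise<void> {
    // stand-in for closing the HTTP server and file watchers
    this.order.push('listeners-stopped')
  }
}
```

The key property is that the lock release happens strictly after the awaited shutdown completes, never concurrently with it.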
In `@packages/nuxi/src/utils/lockfile.ts`:
- Around lines 95-112: The exit/signal handlers added in writeLock()
(process.on('exit', cleanup) and process.once for SIGTERM/SIGINT/SIGQUIT/SIGHUP)
are never removed, causing listener accumulation; update cleanup() (and/or
writeLock()) to detach those handlers when the lock is released by calling
process.off/process.removeListener for the exact bound listener functions (store
the cleanup handler and each bound signal callback so they can be removed),
ensure unlinkSync(lockPath) still runs inside cleanup, and only add the process
listeners once per active lock to prevent duplicates when writeLock() is invoked
multiple times.
- Around lines 92-93: The current writeLock flow is racy because callers do
checkLock() then writeLock() using a plain writeFileSync; change writeLock() to
acquire the lock atomically (e.g., write the JSON to the lockPath with an
exclusive flag such as 'wx' or write to a temp file and atomically rename) so
the write fails if the lock file already exists rather than relying on a prior
check; keep using dirname(lockPath) mkdir(..., { recursive: true }) before the
atomic write. Also fix cleanup() so it removes the previously registered
process.on('exit') handler instead of leaving handlers to accumulate: store the
exit handler reference when you register it in writeLock()/registerCleanup and
call process.removeListener('exit', handler) inside cleanup() (and ensure you
don't register duplicate handlers). Reference functions/idents: writeLock,
checkLock, cleanup, lockPath, and the exit handler registered via
process.on('exit').
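The atomic-create approach the comment describes can be sketched like this; `tryAcquire` is a hypothetical helper name for illustration, not nuxi's actual API:

```typescript
import { mkdirSync, writeFileSync } from 'node:fs'
import { dirname, join } from 'node:path'
import { tmpdir } from 'node:os'

interface LockInfo {
  pid: number
  command: string
  startedAt: number
}

// Returns true if this process created the lock file, false if another
// holder already exists. The 'wx' flag makes check-and-create a single
// atomic operation, so two concurrent callers can never both succeed.
function tryAcquire(lockPath: string, info: LockInfo): boolean {
  mkdirSync(dirname(lockPath), { recursive: true })
  try {
    writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
    return true
  }
  catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
      return false // lock contention: someone else holds the lock
    }
    throw error
  }
}

// demo: the first acquire succeeds, the second sees contention
const lockPath = join(tmpdir(), `nuxt-wx-demo-${process.pid}`, 'nuxt.lock')
const first = tryAcquire(lockPath, { pid: process.pid, command: 'dev', startedAt: Date.now() })
const second = tryAcquire(lockPath, { pid: process.pid, command: 'dev', startedAt: Date.now() })
```

With this shape there is no separate check-then-write window, so the pre-check becomes advisory only.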
📒 Files selected for processing (4)
- packages/nuxi/src/commands/build.ts
- packages/nuxi/src/dev/utils.ts
- packages/nuxi/src/utils/lockfile.ts
- packages/nuxi/test/unit/lockfile.spec.ts
♻️ Duplicate comments (2)
packages/nuxi/src/utils/lockfile.ts (2)
Lines 97-99: ⚠️ Potential issue | 🔴 Critical: Use atomic lock creation to prevent concurrent writers.

Line 98 does a normal write after a separate pre-check, so two processes can pass `checkLock()` and both write `nuxt.lock`. Acquire the lock atomically with exclusive create (`flag: 'wx'`) and treat `EEXIST` as lock contention.

Proposed fix:

```diff
  await mkdir(dirname(lockPath), { recursive: true })
- writeFileSync(lockPath, JSON.stringify(info, null, 2))
+ try {
+   writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
+ }
+ catch (error) {
+   if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
+     throw new Error(`Lock file already exists: ${lockPath}`)
+   }
+   throw error
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nuxi/src/utils/lockfile.ts` around lines 97 - 99, The current non-atomic writeFileSync(lockPath, ...) after mkdir allows race conditions; replace the plain write with an exclusive-create atomic write (e.g. use write/open with flag: 'wx' or equivalent) so the file is only created if it does not already exist, and catch the EEXIST error to treat it as lock contention (handle/retry/propagate accordingly). Keep the existing mkdir(dirname(lockPath), { recursive: true }) before attempting the atomic create and reference lockPath, writeFileSync/open/write/close (or the chosen atomic API) and EEXIST handling in your changes.
Lines 100-117: ⚠️ Potential issue | 🟠 Major: Detach exit/signal handlers in `cleanup()` to avoid listener accumulation.

Lines 111-116 add process listeners on every `writeLock()` call, but `cleanup()` (lines 101-109) only unlinks the file. In repeated dev reload flows, this can accumulate listeners and eventually trigger `MaxListenersExceededWarning`.

Proposed fix:

```diff
  let cleaned = false
+ const onExit = () => cleanup()
+ const signalHandlers: Array<[NodeJS.Signals, () => void]> = []
  function cleanup() {
    if (cleaned) return
    cleaned = true
+   process.off('exit', onExit)
+   for (const [signal, handler] of signalHandlers) {
+     process.off(signal, handler)
+   }
    try { unlinkSync(lockPath) } catch {}
  }
- process.on('exit', cleanup)
+ process.on('exit', onExit)
  for (const signal of ['SIGTERM', 'SIGINT', 'SIGQUIT', 'SIGHUP'] as const) {
-   process.once(signal, () => {
+   const handler = () => {
      cleanup()
      process.exit()
-   })
+   }
+   signalHandlers.push([signal, handler])
+   process.once(signal, handler)
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nuxi/src/utils/lockfile.ts` around lines 100 - 117, The cleanup logic only unlinks the lock file but does not detach the process listeners added in writeLock(), causing listener accumulation; modify writeLock() to register signal handlers using named/bound functions (e.g., keep references for the exit handler and each signal handler) and in cleanup() call process.off/process.removeListener for 'exit' and the same signals ('SIGTERM','SIGINT','SIGQUIT','SIGHUP') to remove those handlers before unlinking; alternatively ensure listeners are only added once (guarded by a module-level flag) and still remove them in cleanup() to prevent MaxListenersExceededWarning—refer to the existing cleanup() function and the code that calls process.on('exit', cleanup) and process.once(signal, ...) to implement this.
📒 Files selected for processing (2)
- packages/nuxi/src/utils/lockfile.ts
- packages/nuxi/test/unit/lockfile.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/nuxi/test/unit/lockfile.spec.ts
I love the idea @harlan-zw! Do you think it would also be possible, and useful, to provide a way to read the logs for the current process? EDIT: it seems that Next.js has a path to a log file.
When running inside an AI agent environment, write a lock file to `<buildDir>/nuxt.lock` containing process info (PID, port, URL, command). On startup, check for existing locks and show actionable error messages with the running server URL and kill command. Stale locks from crashed processes are auto-cleaned via PID liveness checking. Gated behind `isAgent` from std-env, completely invisible to normal users.
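The stale-lock detection described above (PID liveness via signal 0) can be sketched as follows. `isProcessAlive` is an assumed helper name; the EPERM handling reflects the rule that a permission error still means the process exists:

```typescript
// Probes a PID without sending a real signal: process.kill(pid, 0)
// performs error checking only, no signal is delivered.
function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0)
    return true
  }
  catch (error) {
    // EPERM: the process exists but belongs to another user, so the
    // lock must not be treated as stale and clobbered.
    return (error as NodeJS.ErrnoException).code === 'EPERM'
  }
}
```

A non-existent PID makes `process.kill` throw `ESRCH`, which this sketch maps to "dead", so the crashed process's lock can be cleaned up on the next startup.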
- Use atomic file creation (`wx` flag) to prevent race conditions between concurrent checkLock/writeLock calls
- Detach process exit/signal handlers in cleanup() to prevent listener accumulation during dev server reloads
- Move writeLock() after clearBuildDir() in build command so the lock file isn't deleted immediately after creation
- Don't remove lock during dev server reloads (only on final shutdown via releaseLock())
- Atomic acquireLock API replaces separate checkLock/writeLock (closes silent
race where writeLock returned noop on EEXIST and caller falsely believed it
held the lock)
- isProcessAlive returns true on EPERM so locks held by other users aren't
clobbered as "stale"
- updateLock writes port/url metadata in place, eliminating the
release+reacquire window on dev server reload
- PID-recycling safety: locks older than 24h are treated as stale
- cwd is stored in LockInfo and used in the error message instead of the
checking process's cwd (was misleading in monorepos / symlinks)
- Throw on conflict instead of process.exit(1) so nitro/vite cleanup runs
- No signal handlers registered; rely on process.on('exit') which fires after
Node's default signal handling, so sibling SIGINT/SIGTERM handlers are not
bypassed
- nuxt.lock is preserved across clearBuildDir by adding it to the exclusion
list in fs.ts (per @danielroe review)
- NUXT_LOCK=1 forces locking on for non-agents (symmetric with
NUXT_IGNORE_LOCK=1)
- Dev URL uses getAddressURL so IPv6 / 0.0.0.0 binds show a reachable host
instead of hardcoded 127.0.0.1
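The in-place metadata update mentioned above (updateLock writing port/url without a release+reacquire window) could look roughly like this sketch; the helper name `updateLockMeta` and the demo fields are assumptions based on the bullet points, not the actual nuxi source:

```typescript
import { mkdirSync, readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

// Rewrites the existing lock file with additional metadata (e.g. port/url
// once the dev server has bound), keeping pid/startedAt intact. Because the
// file is never deleted, there is no window in which no lock exists.
function updateLockMeta(buildDir: string, patch: Record<string, unknown>): void {
  const lockPath = join(buildDir, 'nuxt.lock')
  const current = JSON.parse(readFileSync(lockPath, 'utf8'))
  writeFileSync(lockPath, JSON.stringify({ ...current, ...patch }, null, 2))
}

// demo: create a minimal lock, then patch in the bound address
const buildDir = join(tmpdir(), `nuxt-updatelock-demo-${process.pid}`)
mkdirSync(buildDir, { recursive: true })
writeFileSync(join(buildDir, 'nuxt.lock'), JSON.stringify({ pid: process.pid, command: 'dev', startedAt: Date.now() }))
updateLockMeta(buildDir, { port: 3000, url: 'http://127.0.0.1:3000' })
const updated = JSON.parse(readFileSync(join(buildDir, 'nuxt.lock'), 'utf8'))
```

Contrast this with release-then-reacquire, where a second `nuxi dev` racing the reload could grab the lock in between.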
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nuxi/src/dev/utils.ts`:
- Around lines 200-211: The lock is currently acquired once against
this.#currentNuxt!.options.buildDir via acquireLock and its release saved to
this.#lockCleanup, but when the resolved buildDir changes on reload (e.g., in
updateLock / reload flow) you must re-acquire an exclusive lock for the new
buildDir and swap the release handler atomically to avoid stale locks or dual
servers. Modify the reload/update flow to: compute the new buildDir from
this.#currentNuxt!.options, call acquireLock(newBuildDir, {command:'dev',
cwd:this.options.cwd}), check lock.existing and handle/log/throw if taken, set a
newRelease = newLock.release, then replace this.#lockCleanup with newRelease and
only after successful swap call the oldRelease to free the previous lock; ensure
all references to acquireLock, this.#lockCleanup and buildDir in
updateLock/reload paths are updated accordingly.
In `@packages/nuxi/test/unit/lockfile.spec.ts`:
- Around lines 70-77: The test "returns existing lock when another live process
holds it" currently writes a lock with pid: 1 and relies on real
process.liveness; instead stub the liveness check by using vi.spyOn(process,
'kill') to mock the behavior for that pid (e.g., vi.spyOn(process,
'kill').mockImplementation((pid, sig) => { if (pid === 1) return undefined;
throw new Error('no such process') })) so the lock is treated as held; ensure
you restore the spy after the test with mockRestore to avoid leaking state.
📒 Files selected for processing (6)
- packages/nuxi/src/commands/build.ts
- packages/nuxi/src/dev/index.ts
- packages/nuxi/src/dev/utils.ts
- packages/nuxi/src/utils/fs.ts
- packages/nuxi/src/utils/lockfile.ts
- packages/nuxi/test/unit/lockfile.spec.ts
✅ Files skipped from review due to trivial changes (1)
- packages/nuxi/src/utils/fs.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/nuxi/src/utils/lockfile.ts
♻️ Duplicate comments (2)
packages/nuxi/test/unit/lockfile.spec.ts (1)
Lines 70-77: ⚠️ Potential issue | 🟠 Major: Avoid PID 1 assumptions; mock liveness checks for deterministic tests.

`pid: 1` is not reliably "always alive" across environments, so this test can be flaky.

Deterministic test pattern:

```diff
  it('returns existing lock when another live process holds it', async () => {
+   const killSpy = vi.spyOn(process, 'kill').mockImplementation(((pid: number, signal?: number | NodeJS.Signals) => {
+     if (pid === 424242 && signal === 0) { return true }
+     throw Object.assign(new Error('ESRCH'), { code: 'ESRCH' })
+   }) as typeof process.kill)
+   try {
    await mkdir(tempDir, { recursive: true })
    writeFileSync(join(tempDir, 'nuxt.lock'), JSON.stringify({
-     pid: 1, // init/systemd, always alive
+     pid: 424242,
      command: 'dev',
      cwd: '/other',
      startedAt: Date.now(),
    }))
    const lock = acquireLock(tempDir, { command: 'build', cwd: '/project' })
    expect(lock.existing).toBeDefined()
-   expect(lock.existing!.pid).toBe(1)
+   expect(lock.existing!.pid).toBe(424242)
    expect(lock.existing!.cwd).toBe('/other')
    expect(lock.release).toBeUndefined()
    expect(existsSync(join(tempDir, 'nuxt.lock'))).toBe(true)
+   } finally {
+     killSpy.mockRestore()
+   }
  })
```

Open question: is PID 1 guaranteed to exist on Windows/CI, and what is the documented behavior of Node.js `process.kill(pid, 0)` when the PID does not exist?

Also applies to: lines 79-83.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nuxi/test/unit/lockfile.spec.ts` around lines 70-77: the test "returns existing lock when another live process holds it" (and the similar case at lines 79-83) assumes pid 1 is always alive. Make the test deterministic by mocking the process liveness check instead of relying on PID 1: locate the lock-checking logic (the function that uses process.kill(pid, 0) or an isProcessAlive utility) and stub it to return true for this test case (and false for the counterpart test), or inject a fake implementation. Update the test to write any PID (e.g., 12345) to the lock file and assert behavior while the mocked liveness function controls the "alive" result, so the test does not depend on platform-specific PID semantics.

packages/nuxi/src/dev/utils.ts (1)
Lines 200-212: ⚠️ Potential issue | 🟠 Major: Re-acquire the lock when `buildDir` changes across reloads.

Line 202 acquires once for the initial `buildDir`, but line 481 updates lock metadata using the current `buildDir` after reload. If config changes `buildDir`, you can end up without exclusive ownership on the new path and with a stale lock on the old path.

Suggested direction:

```diff
 export class NuxtDevServer extends EventEmitter<DevServerEventMap> {
   #lockCleanup?: () => void
+  #lockedBuildDir?: string

   async init(): Promise<void> {
     const buildDir = this.#currentNuxt!.options.buildDir
     const lock = acquireLock(buildDir, {
       command: 'dev',
       cwd: this.options.cwd,
     })
     @@
     this.#lockCleanup = lock.release
+    this.#lockedBuildDir = buildDir
   }

   async #initializeNuxt(reload: boolean): Promise<void> {
     @@
+    const buildDir = this.#currentNuxt.options.buildDir
+    if (this.#lockedBuildDir && buildDir !== this.#lockedBuildDir) {
+      const next = acquireLock(buildDir, { command: 'dev', cwd: this.options.cwd })
+      if (next.existing) {
+        console.error(formatLockError(next.existing))
+        throw new Error(`Another Nuxt ${next.existing.command} is already running (PID ${next.existing.pid}).`)
+      }
+      const prevRelease = this.#lockCleanup
+      this.#lockCleanup = next.release
+      this.#lockedBuildDir = buildDir
+      prevRelease?.()
+    }
-    updateLock(this.#currentNuxt.options.buildDir, {
+    updateLock(buildDir, {
       command: 'dev',
       cwd: this.options.cwd,
       port: addr.port,
       hostname: addr.address,
       url: serverUrl,
     })
   }

   releaseLock(): void {
     this.#lockCleanup?.()
+    this.#lockCleanup = undefined
   }
 }
```

Also applies to: lines 481-487, 498-501.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nuxi/src/dev/utils.ts` around lines 200 - 212, The code only acquires a lock once using acquireLock(buildDir, ...) and stores release in this.#lockCleanup, but never re-acquires if this.#currentNuxt.options.buildDir changes on reload; update logic so whenever the configured buildDir changes (compare current buildDir to the path stored on the active lock), call the existing lock.release() to free the old lock and then call acquireLock(newBuildDir, { command: 'dev', cwd: this.options.cwd }) to obtain exclusive ownership for the new path, handle lock.existing the same way as the initial acquisition (console.error(formatLockError(...)) and throw), and replace this.#lockCleanup with the new lock.release reference so metadata updates later (the code around the later metadata update that touches lines where lock info is written) operate on the correct lock.
📒 Files selected for processing (6)
- packages/nuxi/src/commands/build.ts
- packages/nuxi/src/dev/index.ts
- packages/nuxi/src/dev/utils.ts
- packages/nuxi/src/utils/fs.ts
- packages/nuxi/src/utils/lockfile.ts
- packages/nuxi/test/unit/lockfile.spec.ts
✅ Files skipped from review due to trivial changes (1)
- packages/nuxi/src/utils/fs.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/nuxi/src/dev/index.ts
- packages/nuxi/src/utils/lockfile.ts
- Re-acquire lock when buildDir changes on reload (nuxt.config edits can relocate it). Lock swap is atomic: new release installed before old release runs, so we're never unlocked in between.
- Test: mock process.kill for liveness instead of depending on PID 1 being alive (Windows/CI sandboxes may differ).
Seems like a good idea. I think the scope of that may go beyond the locks to get it up and running properly. I'd say we can tackle that in a follow-up PR.
🔗 Linked issue
nuxt/nuxt#34629
❓ Type of change
📚 Description
Inspired by Next.js 16.2's dev server lock file & nuxt/nuxt#34629, this adds a `nuxt.lock` file to the build directory during `nuxi dev` and `nuxi build`. The lock file contains process info (PID, port, URL, command) so that a second invocation can detect the existing process and print an actionable error message with the running server's URL and a kill command.

This only runs for agents (using `std-env` agent detection). Agents can set `NUXT_IGNORE_LOCK=1` to bypass the check if they have a good reason to.

Stale locks from crashed processes are automatically cleaned up on the next startup via PID liveness checking (`process.kill(pid, 0)`). Pure Node.js, zero new dependencies.

Lock file format (`<buildDir>/nuxt.lock`):

```json
{
  "pid": 12345,
  "command": "dev",
  "port": 3000,
  "hostname": "127.0.0.1",
  "url": "http://127.0.0.1:3000",
  "startedAt": 1711540800000
}
```

Error output when a dev server is already running:
Limitations
Future considerations
Rust bindings for native OS file locking.