
feat(cli): locks for agents running dev and build commands#1265

Open
harlan-zw wants to merge 7 commits into main from feat/dev-server-lockfile

Conversation

@harlan-zw
Contributor

@harlan-zw harlan-zw commented Mar 27, 2026

🔗 Linked issue

nuxt/nuxt#34629

❓ Type of change

  • 📖 Documentation
  • 🐞 Bug fix
  • 👌 Enhancement
  • ✨ New feature
  • 🧹 Chore
  • ⚠️ Breaking change

📚 Description

Inspired by Next.js 16.2's dev server lock file & nuxt/nuxt#34629, this adds a nuxt.lock file to the build directory during nuxi dev and nuxi build. The lock file contains process info (PID, port, URL, command) so that a second invocation can detect the existing process and print an actionable error message with the running server's URL and a kill command.

This only runs for agents (using std-env agent detection). Agents can set NUXT_IGNORE_LOCK=1 to bypass the check if they have a good reason to.
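The gating described above (agent detection plus the `NUXT_IGNORE_LOCK=1` escape hatch, and the `NUXT_LOCK=1` force-on flag added later in this PR) can be sketched roughly as follows. The function shape and the `agentDetected` parameter are illustrative placeholders, not the PR's exact API:

```typescript
// Sketch of the lock-gating logic. `agentDetected` stands in for
// std-env's agent detection; names here are illustrative only.
function isLockEnabled(env: Record<string, string | undefined>, agentDetected: boolean): boolean {
  if (env.NUXT_IGNORE_LOCK === '1')
    return false // explicit opt-out always wins
  if (env.NUXT_LOCK === '1')
    return true // force locking on, even outside agent environments
  return agentDetected
}
```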

Stale locks from crashed processes are automatically cleaned up on the next startup via PID liveness checking (process.kill(pid, 0)). Pure Node.js, zero new dependencies.
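The liveness check mentioned above can be sketched as follows. The EPERM handling reflects the later revision in this PR that treats locks held by other users as live; the function name is illustrative:

```typescript
// Minimal sketch of PID liveness checking. Signal 0 performs
// existence/permission checks without actually sending a signal.
function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0)
    return true
  }
  catch (error) {
    // EPERM: the process exists but is owned by another user, so a
    // lock it holds must not be treated as stale.
    return (error as { code?: string }).code === 'EPERM'
  }
}
```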

Lock file format (<buildDir>/nuxt.lock)

{
  "pid": 12345,
  "command": "dev",
  "port": 3000,
  "hostname": "127.0.0.1",
  "url": "http://127.0.0.1:3000",
  "startedAt": 1711540800000
}
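A minimal reader for this format might look like the following; the `LockInfo` shape mirrors the JSON above, but `parseLockInfo` and its validation are illustrative, not the PR's exact helper:

```typescript
interface LockInfo {
  pid: number
  command: string
  port?: number
  hostname?: string
  url?: string
  startedAt: number
}

// Parse a nuxt.lock payload, returning undefined for corrupt or
// incomplete files so callers can treat them as absent locks.
function parseLockInfo(raw: string): LockInfo | undefined {
  try {
    const info = JSON.parse(raw) as Partial<LockInfo>
    if (typeof info.pid !== 'number' || typeof info.startedAt !== 'number')
      return undefined
    return info as LockInfo
  }
  catch {
    return undefined
  }
}
```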

Error output when a dev server is already running

Another Nuxt dev server is already running:

  URL:     http://127.0.0.1:3000
  PID:     12345
  Dir:     /path/to/project
  Started: 3/27/2026, 2:00:00 PM

Run `kill 12345` to stop it, or connect to http://127.0.0.1:3000
Set NUXT_IGNORE_LOCK=1 to bypass this check.
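A message along those lines can be assembled directly from the lock contents. This sketch approximates the sample output above and is not the PR's exact `formatLockError` implementation:

```typescript
interface RunningLock {
  pid: number
  url: string
  cwd: string
  startedAt: number
}

// Build the conflict message from the lock contents. The layout mirrors
// the sample output above; exact wording in the PR may differ.
function formatLockConflict(lock: RunningLock): string {
  return [
    'Another Nuxt dev server is already running:',
    '',
    `  URL:     ${lock.url}`,
    `  PID:     ${lock.pid}`,
    `  Dir:     ${lock.cwd}`,
    `  Started: ${new Date(lock.startedAt).toLocaleString()}`,
    '',
    `Run \`kill ${lock.pid}\` to stop it, or connect to ${lock.url}`,
    'Set NUXT_IGNORE_LOCK=1 to bypass this check.',
  ].join('\n')
}
```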

Limitations

  • These aren't OS-level file locks; supporting those would require native bindings. Edge cases remain around forced process exits and hard crashes, but the recovery logic cleans up stale locks on the next startup.

Future considerations

Rust bindings for native OS file locking.

@harlan-zw harlan-zw requested a review from danielroe as a code owner March 27, 2026 04:05
@github-actions
Contributor

github-actions bot commented Mar 27, 2026

📦 Bundle Size Comparison

📈 nuxi

| Metric | Base | Head | Diff |
| --- | --- | --- | --- |
| Rendered | 3385.35 KB | 3389.35 KB | +4.00 KB (+0.12%) |

📈 nuxt-cli

| Metric | Base | Head | Diff |
| --- | --- | --- | --- |
| Rendered | 149.88 KB | 153.83 KB | +3.95 KB (+2.63%) |

➡️ create-nuxt

| Metric | Base | Head | Diff |
| --- | --- | --- | --- |
| Rendered | 1640.37 KB | 1640.37 KB | 0.00 KB (0.00%) |

@pkg-pr-new

pkg-pr-new bot commented Mar 27, 2026

  • nuxt-cli-playground

    npm i https://pkg.pr.new/create-nuxt@1265
    
    npm i https://pkg.pr.new/nuxi@1265
    
    npm i https://pkg.pr.new/@nuxt/cli@1265
    

commit: 1a3acd8

@codspeed-hq

codspeed-hq bot commented Mar 27, 2026

Merging this PR will not alter performance

✅ 2 untouched benchmarks


Comparing feat/dev-server-lockfile (1a3acd8) with main (47c6773)

Open in CodSpeed

@harlan-zw harlan-zw changed the title from "feat(cli): add lock file for dev and build commands" to "feat(cli): locks for agents running dev and build commands" on Mar 27, 2026
@coderabbitai

coderabbitai bot commented Mar 27, 2026


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bab1a746-7212-43fa-8fef-181ba7458df0

📥 Commits

Reviewing files that changed from the base of the PR and between b245924 and 1a3acd8.

📒 Files selected for processing (2)
  • packages/nuxi/src/dev/utils.ts
  • packages/nuxi/test/unit/lockfile.spec.ts
📝 Walkthrough

Walkthrough

Introduces a file-based lock mechanism (nuxt.lock) to coordinate dev and build runs. Adds packages/nuxi/src/utils/lockfile.ts with LockInfo, isLockEnabled, acquireLock, updateLock, and formatLockError. build acquires/releases a lock around execution and aborts with a formatted error if another process holds the lock. Dev server acquires a lock on init, updates it with runtime server metadata when ready, exposes releaseLock(), and the dev shutdown path calls that release. clearBuildDir() preserves nuxt.lock. New vitest unit tests cover lock behavior, edge cases, and error formatting.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 28.57%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and specifically summarizes the main feature: adding lock file support for dev and build commands in agent environments. |
| Description check | ✅ Passed | The description is comprehensive and directly related to the changeset, explaining the feature, use cases, lock file format, error output, and limitations. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/nuxi/src/commands/build.ts`:
- Around line 84-94: The build lock written by writeLock(...) is removed by
clearBuildDir() because clearDir(...) only preserves
['cache','analyze','nuxt.json'] and omits 'nuxt.lock', so add 'nuxt.lock' to the
preservation list inside clearBuildDir()/clearDir call or alternatively move the
call to writeLock(...) to after clearBuildDir() runs; update references in this
file to use checkLock(...)/formatLockError(...) as-is to detect existing locks
before writing, and ensure lockCleanup remains set to the writeLock(...) return
value when you move it.

In `@packages/nuxi/src/dev/utils.ts`:
- Around line 488-490: The close() method removes nuxt.lock too early allowing
new dev/build to start while the current process is still reloading; modify
NuxtDevServer.close() so it does not remove the lock up-front but instead waits
until the server/listeners and reload path are fully stopped (i.e., stop/await
the HTTP server, remove any file watchers/listeners and run `#lockCleanup` only
after those shutdown steps complete). Specifically, in NuxtDevServer.close() and
the reload path used by NuxtDevServer.#load(), reorder shutdown: first stop
listeners/servers and await their completion, then invoke this.#lockCleanup() to
remove nuxt.lock so no concurrent nuxi dev/build can slip past during reload.

In `@packages/nuxi/src/utils/lockfile.ts`:
- Around line 95-112: The exit/signal handlers added in writeLock()
(process.on('exit', cleanup) and process.once for SIGTERM/SIGINT/SIGQUIT/SIGHUP)
are never removed, causing listener accumulation; update cleanup() (and/or
writeLock()) to detach those handlers when the lock is released by calling
process.off/process.removeListener for the exact bound listener functions (store
the cleanup handler and each bound signal callback so they can be removed),
ensure unlinkSync(lockPath) still runs inside cleanup, and only add the process
listeners once per active lock to prevent duplicates when writeLock() is invoked
multiple times.
- Around line 92-93: The current writeLock flow is racy because callers do
checkLock() then writeLock() using a plain writeFileSync; change writeLock() to
acquire the lock atomically (e.g., write the JSON to the lockPath with an
exclusive flag such as 'wx' or write to a temp file and atomically rename) so
the write fails if the lock file already exists rather than relying on a prior
check; keep using dirname(lockPath) mkdir(..., { recursive: true }) before the
atomic write. Also fix cleanup() so it removes the previously registered
process.on('exit') handler instead of leaving handlers to accumulate: store the
exit handler reference when you register it in writeLock()/registerCleanup and
call process.removeListener('exit', handler) inside cleanup() (and ensure you
don't register duplicate handlers). Reference functions/idents: writeLock,
checkLock, cleanup, lockPath, and the exit handler registered via
process.on('exit').

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3c7c5c90-d9b5-4897-be94-cf65f5648f65

📥 Commits

Reviewing files that changed from the base of the PR and between 1c2ef0c and c545543.

📒 Files selected for processing (4)
  • packages/nuxi/src/commands/build.ts
  • packages/nuxi/src/dev/utils.ts
  • packages/nuxi/src/utils/lockfile.ts
  • packages/nuxi/test/unit/lockfile.spec.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
packages/nuxi/src/utils/lockfile.ts (2)

97-99: ⚠️ Potential issue | 🔴 Critical

Use atomic lock creation to prevent concurrent writers.

Line 98 does a normal write after a separate pre-check, so two processes can pass checkLock() and both write nuxt.lock. Acquire the lock atomically with exclusive create (flag: 'wx') and treat EEXIST as lock contention.

Proposed fix
   await mkdir(dirname(lockPath), { recursive: true })
-  writeFileSync(lockPath, JSON.stringify(info, null, 2))
+  try {
+    writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
+  }
+  catch (error) {
+    if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
+      throw new Error(`Lock file already exists: ${lockPath}`)
+    }
+    throw error
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nuxi/src/utils/lockfile.ts` around lines 97 - 99, The current
non-atomic writeFileSync(lockPath, ...) after mkdir allows race conditions;
replace the plain write with an exclusive-create atomic write (e.g. use
write/open with flag: 'wx' or equivalent) so the file is only created if it does
not already exist, and catch the EEXIST error to treat it as lock contention
(handle/retry/propagate accordingly). Keep the existing mkdir(dirname(lockPath),
{ recursive: true }) before attempting the atomic create and reference lockPath,
writeFileSync/open/write/close (or the chosen atomic API) and EEXIST handling in
your changes.

100-117: ⚠️ Potential issue | 🟠 Major

Detach exit/signal handlers in cleanup() to avoid listener accumulation.

Lines 111-116 add process listeners on every writeLock() call, but cleanup() (Lines 101-109) only unlinks the file. In repeated dev reload flows, this can accumulate listeners and eventually trigger MaxListenersExceededWarning.

Proposed fix
   let cleaned = false
+  const onExit = () => cleanup()
+  const signalHandlers: Array<[NodeJS.Signals, () => void]> = []
   function cleanup() {
     if (cleaned)
       return
     cleaned = true
+    process.off('exit', onExit)
+    for (const [signal, handler] of signalHandlers) {
+      process.off(signal, handler)
+    }
     try {
       unlinkSync(lockPath)
     }
     catch {}
   }

-  process.on('exit', cleanup)
+  process.on('exit', onExit)
   for (const signal of ['SIGTERM', 'SIGINT', 'SIGQUIT', 'SIGHUP'] as const) {
-    process.once(signal, () => {
+    const handler = () => {
       cleanup()
       process.exit()
-    })
+    }
+    signalHandlers.push([signal, handler])
+    process.once(signal, handler)
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nuxi/src/utils/lockfile.ts` around lines 100 - 117, The cleanup
logic only unlinks the lock file but does not detach the process listeners added
in writeLock(), causing listener accumulation; modify writeLock() to register
signal handlers using named/bound functions (e.g., keep references for the exit
handler and each signal handler) and in cleanup() call
process.off/process.removeListener for 'exit' and the same signals
('SIGTERM','SIGINT','SIGQUIT','SIGHUP') to remove those handlers before
unlinking; alternatively ensure listeners are only added once (guarded by a
module-level flag) and still remove them in cleanup() to prevent
MaxListenersExceededWarning—refer to the existing cleanup() function and the
code that calls process.on('exit', cleanup) and process.once(signal, ...) to
implement this.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@packages/nuxi/src/utils/lockfile.ts`:
- Around line 97-99: The current non-atomic writeFileSync(lockPath, ...) after
mkdir allows race conditions; replace the plain write with an exclusive-create
atomic write (e.g. use write/open with flag: 'wx' or equivalent) so the file is
only created if it does not already exist, and catch the EEXIST error to treat
it as lock contention (handle/retry/propagate accordingly). Keep the existing
mkdir(dirname(lockPath), { recursive: true }) before attempting the atomic
create and reference lockPath, writeFileSync/open/write/close (or the chosen
atomic API) and EEXIST handling in your changes.
- Around line 100-117: The cleanup logic only unlinks the lock file but does not
detach the process listeners added in writeLock(), causing listener
accumulation; modify writeLock() to register signal handlers using named/bound
functions (e.g., keep references for the exit handler and each signal handler)
and in cleanup() call process.off/process.removeListener for 'exit' and the same
signals ('SIGTERM','SIGINT','SIGQUIT','SIGHUP') to remove those handlers before
unlinking; alternatively ensure listeners are only added once (guarded by a
module-level flag) and still remove them in cleanup() to prevent
MaxListenersExceededWarning—refer to the existing cleanup() function and the
code that calls process.on('exit', cleanup) and process.once(signal, ...) to
implement this.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e20f6c9d-6c34-4247-8da7-c7b4be5ce9de

📥 Commits

Reviewing files that changed from the base of the PR and between c545543 and fabedc5.

📒 Files selected for processing (2)
  • packages/nuxi/src/utils/lockfile.ts
  • packages/nuxi/test/unit/lockfile.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/nuxi/test/unit/lockfile.spec.ts

@atinux
Member

atinux commented Mar 31, 2026

I love the idea @harlan-zw

Do you think it would also be possible somehow, and would it be useful to also provide a way to read the logs for the current process?

EDIT: seems that Next.js has a path to a log file

When running inside an AI agent environment, write a lock file to
`<buildDir>/nuxt.lock` containing process info (PID, port, URL, command).
On startup, check for existing locks and show actionable error messages
with the running server URL and kill command.

Stale locks from crashed processes are auto-cleaned via PID liveness
checking. Gated behind `isAgent` from std-env, completely invisible
to normal users.
- Use atomic file creation (`wx` flag) to prevent race conditions
  between concurrent checkLock/writeLock calls
- Detach process exit/signal handlers in cleanup() to prevent
  listener accumulation during dev server reloads
- Move writeLock() after clearBuildDir() in build command so the
  lock file isn't deleted immediately after creation
- Don't remove lock during dev server reloads (only on final
  shutdown via releaseLock())
- Atomic acquireLock API replaces separate checkLock/writeLock (closes silent
  race where writeLock returned noop on EEXIST and caller falsely believed it
  held the lock)
- isProcessAlive returns true on EPERM so locks held by other users aren't
  clobbered as "stale"
- updateLock writes port/url metadata in place, eliminating the
  release+reacquire window on dev server reload
- PID-recycling safety: locks older than 24h are treated as stale
- cwd is stored in LockInfo and used in the error message instead of the
  checking process's cwd (was misleading in monorepos / symlinks)
- Throw on conflict instead of process.exit(1) so nitro/vite cleanup runs
- No signal handlers registered; rely on process.on('exit') which fires after
  Node's default signal handling, so sibling SIGINT/SIGTERM handlers are not
  bypassed
- nuxt.lock is preserved across clearBuildDir by adding it to the exclusion
  list in fs.ts (per @danielroe review)
- NUXT_LOCK=1 forces locking on for non-agents (symmetric with
  NUXT_IGNORE_LOCK=1)
- Dev URL uses getAddressURL so IPv6 / 0.0.0.0 binds show a reachable host
  instead of hardcoded 127.0.0.1
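The staleness rule described in these notes (PID liveness plus a 24-hour age cap to guard against PID recycling, with EPERM counted as alive) can be sketched as follows; the identifiers are illustrative, not the PR's exact names:

```typescript
const MAX_LOCK_AGE_MS = 24 * 60 * 60 * 1000

// A lock is stale when its age exceeds the cap (PID-recycling guard) or
// when its owning process is gone. EPERM means the process exists but is
// owned by another user, so it still counts as alive.
function isLockStale(lock: { pid: number, startedAt: number }, now: number = Date.now()): boolean {
  if (now - lock.startedAt > MAX_LOCK_AGE_MS)
    return true
  try {
    process.kill(lock.pid, 0)
    return false
  }
  catch (error) {
    return (error as { code?: string }).code !== 'EPERM'
  }
}
```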
@harlan-zw harlan-zw force-pushed the feat/dev-server-lockfile branch from 4bf0630 to b245924 on April 12, 2026 06:01

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/nuxi/src/dev/utils.ts`:
- Around line 200-211: The lock is currently acquired once against
this.#currentNuxt!.options.buildDir via acquireLock and its release saved to
this.#lockCleanup, but when the resolved buildDir changes on reload (e.g., in
updateLock / reload flow) you must re-acquire an exclusive lock for the new
buildDir and swap the release handler atomically to avoid stale locks or dual
servers. Modify the reload/update flow to: compute the new buildDir from
this.#currentNuxt!.options, call acquireLock(newBuildDir, {command:'dev',
cwd:this.options.cwd}), check lock.existing and handle/log/throw if taken, set a
newRelease = newLock.release, then replace this.#lockCleanup with newRelease and
only after successful swap call the oldRelease to free the previous lock; ensure
all references to acquireLock, this.#lockCleanup and buildDir in
updateLock/reload paths are updated accordingly.

In `@packages/nuxi/test/unit/lockfile.spec.ts`:
- Around line 70-77: The test "returns existing lock when another live process
holds it" currently writes a lock with pid: 1 and relies on real
process.liveness; instead stub the liveness check by using vi.spyOn(process,
'kill') to mock the behavior for that pid (e.g., vi.spyOn(process,
'kill').mockImplementation((pid, sig) => { if (pid === 1) return undefined;
throw new Error('no such process') })) so the lock is treated as held; ensure
you restore the spy after the test with mockRestore to avoid leaking state.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2cfcd838-9535-4416-9c7d-a2172d24a753

📥 Commits

Reviewing files that changed from the base of the PR and between fabedc5 and 4bf0630.

📒 Files selected for processing (6)
  • packages/nuxi/src/commands/build.ts
  • packages/nuxi/src/dev/index.ts
  • packages/nuxi/src/dev/utils.ts
  • packages/nuxi/src/utils/fs.ts
  • packages/nuxi/src/utils/lockfile.ts
  • packages/nuxi/test/unit/lockfile.spec.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/nuxi/src/utils/fs.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/nuxi/src/utils/lockfile.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
packages/nuxi/test/unit/lockfile.spec.ts (1)

70-77: ⚠️ Potential issue | 🟠 Major

Avoid PID 1 assumptions; mock liveness checks for deterministic tests.

pid: 1 is not reliably “always alive” across environments, so this test can be flaky.

Deterministic test pattern
 it('returns existing lock when another live process holds it', async () => {
+  const killSpy = vi.spyOn(process, 'kill').mockImplementation(((pid: number, signal?: number | NodeJS.Signals) => {
+    if (pid === 424242 && signal === 0) { return true }
+    throw Object.assign(new Error('ESRCH'), { code: 'ESRCH' })
+  }) as typeof process.kill)
+  try {
     await mkdir(tempDir, { recursive: true })
     writeFileSync(join(tempDir, 'nuxt.lock'), JSON.stringify({
-      pid: 1, // init/systemd, always alive
+      pid: 424242,
       command: 'dev',
       cwd: '/other',
       startedAt: Date.now(),
     }))

     const lock = acquireLock(tempDir, { command: 'build', cwd: '/project' })
     expect(lock.existing).toBeDefined()
-    expect(lock.existing!.pid).toBe(1)
+    expect(lock.existing!.pid).toBe(424242)
     expect(lock.existing!.cwd).toBe('/other')
     expect(lock.release).toBeUndefined()
     expect(existsSync(join(tempDir, 'nuxt.lock'))).toBe(true)
+  } finally {
+    killSpy.mockRestore()
+  }
 })
Is PID 1 guaranteed to exist on Windows/CI, and what is the documented behavior of Node.js `process.kill(pid, 0)` when the PID does not exist?

Also applies to: 79-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nuxi/test/unit/lockfile.spec.ts` around lines 70 - 77, The test
"returns existing lock when another live process holds it" (and the similar case
at lines 79-83) assumes pid: 1 is always alive—make the test deterministic by
mocking the process liveness check instead of relying on PID 1; locate the
lock-checking logic (the function that uses process.kill(pid, 0) or an
isProcessAlive utility) and stub that to return true for the test case (and
false for the counterpart test), or inject a fake implementation; update the
test to write any PID (e.g., 12345) to the lock file and assert behavior while
the mocked liveness function controls the "alive" result so the test does not
depend on platform-specific PID semantics.
packages/nuxi/src/dev/utils.ts (1)

200-212: ⚠️ Potential issue | 🟠 Major

Re-acquire the lock when buildDir changes across reloads.

Line 202 acquires once for the initial buildDir, but Line 481 updates lock metadata using the current buildDir after reload. If config changes buildDir, you can end up without exclusive ownership on the new path and with a stale lock on the old path.

Suggested direction
 export class NuxtDevServer extends EventEmitter<DevServerEventMap> {
   `#lockCleanup`?: () => void
+  `#lockedBuildDir`?: string

   async init(): Promise<void> {
     const buildDir = this.#currentNuxt!.options.buildDir
     const lock = acquireLock(buildDir, {
       command: 'dev',
       cwd: this.options.cwd,
     })
@@
     this.#lockCleanup = lock.release
+    this.#lockedBuildDir = buildDir
   }

   async `#initializeNuxt`(reload: boolean): Promise<void> {
@@
+    const buildDir = this.#currentNuxt.options.buildDir
+    if (this.#lockedBuildDir && buildDir !== this.#lockedBuildDir) {
+      const next = acquireLock(buildDir, { command: 'dev', cwd: this.options.cwd })
+      if (next.existing) {
+        console.error(formatLockError(next.existing))
+        throw new Error(`Another Nuxt ${next.existing.command} is already running (PID ${next.existing.pid}).`)
+      }
+      const prevRelease = this.#lockCleanup
+      this.#lockCleanup = next.release
+      this.#lockedBuildDir = buildDir
+      prevRelease?.()
+    }
-    updateLock(this.#currentNuxt.options.buildDir, {
+    updateLock(buildDir, {
       command: 'dev',
       cwd: this.options.cwd,
       port: addr.port,
       hostname: addr.address,
       url: serverUrl,
     })
   }

   releaseLock(): void {
     this.#lockCleanup?.()
+    this.#lockCleanup = undefined
   }
 }

Also applies to: 481-487, 498-501

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nuxi/src/dev/utils.ts` around lines 200 - 212, The code only
acquires a lock once using acquireLock(buildDir, ...) and stores release in
this.#lockCleanup, but never re-acquires if this.#currentNuxt.options.buildDir
changes on reload; update logic so whenever the configured buildDir changes
(compare current buildDir to the path stored on the active lock), call the
existing lock.release() to free the old lock and then call
acquireLock(newBuildDir, { command: 'dev', cwd: this.options.cwd }) to obtain
exclusive ownership for the new path, handle lock.existing the same way as the
initial acquisition (console.error(formatLockError(...)) and throw), and replace
this.#lockCleanup with the new lock.release reference so metadata updates later
(the code around the later metadata update that touches lines where lock info is
written) operate on the correct lock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@packages/nuxi/src/dev/utils.ts`:
- Around line 200-212: The code only acquires a lock once using
acquireLock(buildDir, ...) and stores release in this.#lockCleanup, but never
re-acquires if this.#currentNuxt.options.buildDir changes on reload; update
logic so whenever the configured buildDir changes (compare current buildDir to
the path stored on the active lock), call the existing lock.release() to free
the old lock and then call acquireLock(newBuildDir, { command: 'dev', cwd:
this.options.cwd }) to obtain exclusive ownership for the new path, handle
lock.existing the same way as the initial acquisition
(console.error(formatLockError(...)) and throw), and replace this.#lockCleanup
with the new lock.release reference so metadata updates later (the code around
the later metadata update that touches lines where lock info is written) operate
on the correct lock.

In `@packages/nuxi/test/unit/lockfile.spec.ts`:
- Around line 70-77: The test "returns existing lock when another live process
holds it" (and the similar case at lines 79-83) assumes pid: 1 is always
alive—make the test deterministic by mocking the process liveness check instead
of relying on PID 1; locate the lock-checking logic (the function that uses
process.kill(pid, 0) or an isProcessAlive utility) and stub that to return true
for the test case (and false for the counterpart test), or inject a fake
implementation; update the test to write any PID (e.g., 12345) to the lock file
and assert behavior while the mocked liveness function controls the "alive"
result so the test does not depend on platform-specific PID semantics.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9852fe31-169a-4dc1-a4fb-d490c3fde1c9

📥 Commits

Reviewing files that changed from the base of the PR and between 4bf0630 and b245924.

📒 Files selected for processing (6)
  • packages/nuxi/src/commands/build.ts
  • packages/nuxi/src/dev/index.ts
  • packages/nuxi/src/dev/utils.ts
  • packages/nuxi/src/utils/fs.ts
  • packages/nuxi/src/utils/lockfile.ts
  • packages/nuxi/test/unit/lockfile.spec.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/nuxi/src/utils/fs.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/nuxi/src/dev/index.ts
  • packages/nuxi/src/utils/lockfile.ts

- Re-acquire lock when buildDir changes on reload (nuxt.config edits can
  relocate it). Lock swap is atomic: new release installed before old release
  runs, so we're never unlocked in between.
- Test: mock process.kill for liveness instead of depending on PID 1 being
  alive (Windows/CI sandboxes may differ).
@harlan-zw
Contributor Author

I love the idea @harlan-zw

Do you think it would also be possible somehow, and would it be useful to also provide a way to read the logs for the current process?

EDIT: seems that Next.js has a path to a log file

Seems like a good idea. I think the scope of that goes beyond the locks to get it up and running properly, so I'd say we can tackle it in a follow-up PR.
