feat: add web dashboard — visual application tracking, Kanban pipeline, job feed (#585)
OmarShayya wants to merge 24 commits into santifer:main from
Conversation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Postgres 16 via Docker Compose (port 5434) - Fastify + Drizzle + Zod server package - React 19 + TanStack + Tailwind + shadcn/ui client package - Vite dev server with API proxy to :3000 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…trip Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…outes Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…kdown sync Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…tions (TDD) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…peline Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…atus badges Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…nsitions Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…dismiss Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add optional API calls to merge-tracker.mjs and scan.mjs (fail silently) - Add web:* npm scripts to root package.json - Add docs/WEB_DASHBOARD.md with setup and usage guide - Update .gitignore for web build artifacts Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ards - Remove forward-only restriction on pipeline status transitions - Add external link icon on Kanban cards to open original job posting - Add dotenv to server for .env loading - Update tests to match new transition logic Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Welcome to career-ops, @OmarShayya! Thanks for your first PR. A few things to know:
We'll review your PR soon. Join our Discord if you have questions.
📝 Walkthrough

Adds a full-stack web dashboard: Vite+React client and Fastify+Drizzle server, Postgres schema and migrations, import/export sync between Markdown/YAML and the DB, job ingestion and pipeline services, UI routes/components, CLI scrapers and batch tools, CI/dev wiring, and docs; scan/merge scripts now attempt local dashboard notifications.

Changes
- Web Dashboard Implementation
- Tooling, Docs & Integrations
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant Client as React Dashboard
    participant Server as Fastify Server
    participant DB as PostgreSQL
    participant FS as File System
    participant Scanner as Scanner/CLI
    User->>Client: Open Applications page
    Client->>Server: GET /api/applications
    Server->>DB: SELECT applications
    DB-->>Server: rows
    Server-->>Client: paginated apps
    User->>Client: Move card to Interview
    Client->>Server: PATCH /api/pipeline/:id/move
    Server->>DB: UPDATE application status
    DB-->>Server: updated
    Server->>FS: write applications.md
    FS-->>Server: written
    Server-->>Client: updated application
    Scanner->>Server: POST /api/sync/import
    Server->>FS: read applications.md and portals.yml
    FS-->>Server: file contents
    Server->>DB: upsert apps and sources
    DB-->>Server: upsert summary
    Server-->>Scanner: import result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 56
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@docs/WEB_DASHBOARD.md`:
- Line 26: The docs' Pipeline description is missing the "Responded" stage;
update the sentence describing `/pipeline` to list five stages matching
KANBAN_COLUMNS (Evaluated → Applied → Responded → Interview → Offer) so the
documentation aligns with the code's KANBAN_COLUMNS constant.
In `@merge-tracker.mjs`:
- Around lines 369-374: The optional dashboard POST inside the DRY_RUN check can hang indefinitely and silently ignores server errors. Wrap the fetch('http://localhost:3000/api/sync/import', { method: 'POST' }) call with an AbortController-based timeout: create a controller, set a short timeout, pass controller.signal to fetch, await the response, and check response.ok (optionally parsing the body) before treating the sync as successful. On a non-ok response, timeout, or exception, log a warning and continue rather than relying on an empty catch.
In `@package.json`:
- Around line 24-25: The npm scripts "web:migrate" and "web:seed" in
package.json currently hardcode DATABASE_URL with credentials; remove the inline
DATABASE_URL from those scripts and make them read the environment variable
instead (leave the commands as "cd web/server && npx drizzle-kit migrate" and
"cd web/server && CAREER_OPS_ROOT=../.. npx tsx src/db/seed.ts" or similar), add
a .env.example file in web/server documenting
DATABASE_URL=postgres://career_ops:career_ops_dev@localhost:5434/career_ops for
local dev, ensure web/server/.env is gitignored and update README/dev setup to
instruct developers to copy .env.example to .env or export DATABASE_URL in their
shell before running "web:migrate" and "web:seed".
- Line 19: The "web:dev" npm script calls "concurrently" but the root
package.json devDependencies does not include it; add "concurrently" to the root
package.json devDependencies (so the "web:dev" script can run) and then
install/update devDependencies to ensure it's available locally—edit
package.json devDependencies to include the dependency and run the package
manager to sync.
In `@scan.mjs`:
- Around lines 358-369: The optional dashboard ingest (newOffers.map(...) plus fetch('http://localhost:3000/api/jobs/ingest', ...)) can hang and treats non-2xx responses as success. Use an AbortController with a short timeout (e.g., 2-5s) to abort a stalled fetch, clear the timer in finally, and after awaiting the response check response.ok, treating non-2xx as a failure so the catch path handles it. Preserve the existing silent-fail behavior by catching and ignoring errors, but ensure the timeout is always enforced and cleared.
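Both optional POSTs (merge-tracker.mjs and scan.mjs) could share a helper along these lines. This is a sketch only: the URL, payload shape, and 3-second budget are illustrative, not taken from the scripts.

```typescript
// Sketch: optional dashboard notification with a hard timeout and an
// explicit response.ok check, preserving silent-fail semantics.
async function postWithTimeout(url: string, body: unknown, ms = 3000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    if (!res.ok) {
      console.warn(`dashboard sync failed: HTTP ${res.status}`);
      return false;
    }
    return true;
  } catch (err) {
    // Unreachable server or timeout: warn and carry on (silent-fail behavior).
    console.warn(`dashboard unreachable, continuing: ${(err as Error).message}`);
    return false;
  } finally {
    clearTimeout(timer); // always clear so the process can exit promptly
  }
}
```

The caller would simply `await postWithTimeout('http://localhost:3000/api/sync/import', {})` and ignore the boolean if it only wants best-effort delivery.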
In `@web/.env.example`:
- Around line 3-4: The comment above CAREER_OPS_ROOT incorrectly says "Absolute
path" while the value uses a relative path (".."); update the comment to
accurately describe the variable (e.g., "Path to the career-ops project root
(parent of web/) — can be relative or absolute") or replace the sample value
with an absolute placeholder (e.g., "/path/to/career-ops") so the
CAREER_OPS_ROOT example and its comment match; change the text around the
CAREER_OPS_ROOT entry accordingly.
In `@web/client/src/components/applications/applications-table.tsx`:
- Around line 12-14: Replace the column accessorKey "createdAt" with "appliedAt"
so the "Date" column shows the application submission time (use the existing
cell renderer: cell: ({ getValue }) => { const d = getValue<string>(); return d
? new Date(d).toLocaleDateString() : ""; } ). Update any references to
accessorKey: "createdAt" in applications-table.tsx to accessorKey: "appliedAt"
and ensure the Application type's appliedAt field is used.
In `@web/client/src/components/applications/filters.tsx`:
- Around line 23-42: The search input currently relies only on a placeholder and
needs an explicit aria-label for screen readers: add an aria-label (e.g.,
aria-label="Search applications") to the input used with value={search} and
onChange={(e) => onSearchChange(e.target.value)}; for the status filter buttons
generated from ALL_STATUSES and using toggleStatus(status) and
statusFilter.includes(status), add an aria-pressed attribute set to the boolean
statusFilter.includes(status) so assistive tech can announce which toggles are
active.
- Line 10: Replace the hardcoded extras in ALL_STATUSES with a derived list from
the shared STATUS_COLORS constant: import STATUS_COLORS from '@/lib/constants'
and set ALL_STATUSES to use the keys/entries from STATUS_COLORS (or its array of
status names) instead of [...KANBAN_COLUMNS, "Rejected", "Discarded", "SKIP"];
keep KANBAN_COLUMNS usage if needed (e.g., preserve order by merging
KANBAN_COLUMNS with STATUS_COLORS-derived statuses while deduplicating) so
ALL_STATUSES always reflects the authoritative STATUS_COLORS values.
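A sketch of the derivation, with stand-in values for STATUS_COLORS (the real constant lives in the shared lib; only the key set matters here):

```typescript
// Kanban order first, then any remaining statuses from the authoritative
// STATUS_COLORS map, deduplicated via a Set.
const KANBAN_COLUMNS = ["Evaluated", "Applied", "Responded", "Interview", "Offer"] as const;
const STATUS_COLORS: Record<string, string> = {
  Evaluated: "bg-blue-100", Applied: "bg-sky-100", Responded: "bg-teal-100",
  Interview: "bg-amber-100", Offer: "bg-green-100",
  Rejected: "bg-red-100", Discarded: "bg-gray-100", SKIP: "bg-gray-100",
};
const ALL_STATUSES = [...new Set<string>([...KANBAN_COLUMNS, ...Object.keys(STATUS_COLORS)])];
```

Adding a status to STATUS_COLORS then automatically surfaces it in the filter UI.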
In `@web/client/src/components/applications/status-badge.tsx`:
- Line 6: The fallback class string used in the className expression (where
STATUS_COLORS[status] || "bg-gray-100 text-gray-700") lacks dark-mode variants;
update the fallback to include matching dark:* classes so unknown statuses
render correctly in dark mode (adjust the fallback used in the span's className
expression alongside STATUS_COLORS and status to include dark:bg-... and
dark:text-... variants consistent with entries in STATUS_COLORS).
In `@web/client/src/components/feed/feed-filters.tsx`:
- Around line 10-13: The search input (controlled by the search variable and
onSearchChange) and the source select (controlled by sourceFilter and
onSourceFilterChange) lack accessible labels; add explicit labels by either
rendering visible <label> elements tied via htmlFor to the input/select ids or
by adding aria-label attributes (e.g., aria-label="Search jobs" and
aria-label="Filter by source") to those controls, ensuring ids (or label text)
uniquely reference the input and select so screen readers can identify them.
In `@web/client/src/components/feed/job-card.tsx`:
- Around line 19-21: The anchor containing only the ExternalLink icon in
job-card.tsx is inaccessible; update the <a> in the JobCard (the anchor wrapping
ExternalLink) to include an accessible label by adding an aria-label (and
optionally a title) that includes the job context (e.g., job.title or job.url)
such as "Open [job.title] in new tab" so screen readers can identify the link;
alternatively add visually hidden text inside the anchor if you prefer
text-based labeling instead of aria-label.
In `@web/client/src/components/pipeline/kanban-board.tsx`:
- Around line 16-26: handleDragEnd currently treats over.id as a column status
but with pointerWithin + SortableContext it can be a card UUID; update
handleDragEnd to detect when over.id is a card by searching data.columns for an
application with id === String(over.id) and, if found, derive targetStatus from
that application's parent column (e.g., const targetColumn = data.columns.find(c
=> c.applications.some(a => a.id === String(over.id))); if targetColumn then
targetStatus = targetColumn.status), otherwise fall back to String(over.id); if
no column can be resolved, bail out to avoid calling onMoveCard with an invalid
status. Ensure references to handleDragEnd, DragEndEvent, active.id, over.id,
data.columns and onMoveCard are used so the fix targets the right code paths.
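The resolution step can be isolated in a pure helper; a sketch, with minimal stand-in types for the board data (names are assumptions):

```typescript
type App = { id: string };
type Column = { status: string; applications: App[] };

// over.id may be a card UUID (pointerWithin + SortableContext) or a column
// status; resolve the target column either way, or return null so the caller
// can bail out instead of passing an invalid status to onMoveCard.
function resolveTargetStatus(columns: Column[], overId: string): string | null {
  const byCard = columns.find((c) => c.applications.some((a) => a.id === overId));
  if (byCard) return byCard.status;
  return columns.some((c) => c.status === overId) ? overId : null;
}
```

handleDragEnd would then call `resolveTargetStatus(data.columns, String(over.id))` and return early on null.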
- Around line 10-14: The handleDragStart function currently types its parameter
inline; replace that manual type with the exported DragStartEvent from
`@dnd-kit/core` for consistency with DragEndEvent. Import DragStartEvent from
'@dnd-kit/core' (alongside DragEndEvent) and change the function signature of
handleDragStart to use (event: DragStartEvent), keeping the existing body that
derives id and looks up the app via data.columns.flatMap(...). This ensures
consistent, accurate typing across drag handlers.
In `@web/client/src/components/pipeline/kanban-card.tsx`:
- Line 11: daysInStage currently uses app.updatedAt which resets on any field
change; update the KanbanCard to compute daysInStage from the timestamp when the
current status was entered instead. Replace the use of app.updatedAt in the
daysInStage calculation with either (a) a new stageEnteredAt field on the
Application type (populated server-side) or (b) derive it client-side by reading
the most recent entry in app.status_history (find last item where status equals
app.status and use its changedAt). Adjust the KanbanCard props/signature to
accept status_history or stageEnteredAt accordingly and update the daysInStage
calculation to use new Date(stageEnteredAt || derivedChangedAt).
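Option (b) can be sketched client-side like this; the history entry shape (`status`, `changedAt`) mirrors the review suggestion and is an assumption:

```typescript
type HistoryEntry = { status: string; changedAt: string };

// Timestamp at which the current status was entered: the most recent history
// entry matching the current status, else a caller-provided fallback.
function stageEnteredAt(history: HistoryEntry[], currentStatus: string, fallback: string): string {
  for (let i = history.length - 1; i >= 0; i--) {
    if (history[i].status === currentStatus) return history[i].changedAt;
  }
  return fallback;
}

// Whole days spent in the current stage.
function daysInStage(enteredAt: string, now = Date.now()): number {
  return Math.floor((now - new Date(enteredAt).getTime()) / 86_400_000);
}
```

KanbanCard would pass `app.status_history` (or a server-populated `stageEnteredAt`) instead of `app.updatedAt`.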
In `@web/client/src/components/sources/source-list.tsx`:
- Around line 33-43: The toggle button used in the row (the element that calls
onToggleEnabled with source.id and toggles source.enabled) is missing
accessibility attributes and should be exposed as a switch: add role="switch",
set aria-checked based on source.enabled, and provide an accessible name
(aria-label or aria-labelledby) that includes the source name or purpose; also
ensure the onClick handler remains intact. The delete button (the icon button
that calls onDelete with source.id and renders <Trash2/>) needs an accessible
name (e.g., aria-label="Delete source" or aria-label={`Delete ${source.name}`})
so screen readers announce its purpose; keep the existing onDelete usage and
visual markup. Ensure both buttons retain keyboard focusability and visible
focus outlines as before.
In `@web/client/src/lib/queries.ts`:
- Around line 75-81: The onSuccess handler in useSendToPipeline currently only
invalidates ["jobs"] causing Pipeline and Applications views to remain stale;
update the onSuccess in the useSendToPipeline mutation (same function and
mutationFn) to also invalidate the relevant query keys for the pipeline and
applications (e.g. add qc.invalidateQueries({ queryKey: ["pipeline"] }) and
qc.invalidateQueries({ queryKey: ["applications"] }) or a combined
invalidateQueries call) so that the Pipeline board and Applications table
refresh after the /jobs/:id/to-pipeline mutation succeeds.
In `@web/client/src/lib/types.ts`:
- Around line 72-76: The AppStats interface's avgScore should allow null to
match server behavior; update the type of avgScore in AppStats (in
web/client/src/lib/types.ts) from number to number | null and then audit caller
sites that use AppStats.avgScore (components or helpers that call .toFixed(),
arithmetic, or render it) to guard against null (e.g., conditional rendering,
null-coalescing, or default values) so runtime errors no longer occur when the
server returns null.
In `@web/client/src/routes/__root.tsx`:
- Around line 17-20: The root nav Link currently uses prefix matching and stays
active on nested routes; update the Link (the component rendering item.to) to
pass activeOptions={{ exact: true }} when item.to === "/" (i.e., the root link)
so it only becomes active on the exact root path; locate the Link usage in the
navigation render (the element with key={item.to} and className=...) and
conditionally add the activeOptions prop for the root entry.
In `@web/client/src/routes/feed.tsx`:
- Around line 29-36: The rendering assumes data.data is always an array; replace
the unsafe uses of data?.data.map(...) and data?.data.length by first
normalizing/guarding the payload so map/length never run on null/undefined—e.g.,
treat jobs = Array.isArray(data?.data) ? data.data : [] and iterate
jobs.map(...) and check jobs.length === 0; update the JSX that renders <JobCard
...> (and the "No new jobs" check) to use that guarded jobs variable
(references: data, JobCard, sendToPipeline, dismiss).
- Around line 9-39: FeedPage currently only checks isLoading from useJobs so
failed fetches render an empty grid; update FeedPage to also check useJobs'
isError and error and render a clear error state (e.g., a centered message
showing error.message or a generic "Failed to load jobs" with optional retry)
instead of rendering the job grid when isError is true; ensure the conditional
rendering uses isLoading first, then isError, then the success branch that uses
data?.data (also guard data?.data?.length) and keep JobCard rendering and
dismiss/send handlers unchanged.
In `@web/client/src/routes/index.tsx`:
- Around line 32-36: The current render collapses missing or failed fetches into
an empty list by passing data?.data || [] to ApplicationsTable, hiding errors;
update the JSX to add an explicit branch that checks for error (e.g., a
fetchError or !data) and an explicit empty-state when data exists but has zero
items, instead of defaulting to [], so in the component containing isLoading and
ApplicationsTable you should render three distinct states: loading (isLoading),
error/failure (show an error message or retry UI when data is undefined or an
error flag is set), and empty-data (show a “no results” indicator when
data?.data is an empty array), then only pass the real array to
ApplicationsTable when data?.data is present.
In `@web/client/src/routes/pipeline.tsx`:
- Around line 14-18: The current render uses "isLoading || !data" which treats
missing data as perpetual loading; change the conditional to only show Loading
when isLoading is true, and handle errors/empty data separately by checking
isError and/or error (from your query hook) to render an error message or
fallback UI, and only render <KanbanBoard data={data} ... /> when data is
defined; update the JSX in pipeline.tsx to use isLoading for the spinner,
isError/error for an error view, and a clear empty-state when data is
null/undefined rather than falling back to loading (references: isLoading,
isError, error, data, KanbanBoard, moveCard.mutate).
In `@web/client/src/routes/sources.tsx`:
- Line 28: The form currently closes immediately because setShowForm(false) is
called right after createSource.mutate; change this so the form is closed only
on a successful mutation by using the mutation's onSuccess callback (either by
configuring createSource with an onSuccess handler or by passing { onSuccess: ()
=> setShowForm(false) } as the second argument to createSource.mutate). Update
the SourceForm onSubmit handler to remove the immediate setShowForm(false) and
ensure errors from createSource.mutate do not close the form.
In `@web/client/src/styles/globals.css`:
- Around line 1-3: The lint failures are due to Tailwind directives like
"@tailwind" and "@apply" in globals.css not being recognized by the current
Biome/Stylelint configuration; update the linter settings to be Tailwind-aware
(enable the Tailwind plugin/processor for Stylelint or Biome) or add an explicit
ignore/exclude rule for globals.css (and other files referenced at lines 51-52)
so the directives are not parsed as invalid CSS; ensure the change targets the
linter config (e.g., stylelint/biome config) rather than modifying the
"@tailwind" or "@apply" usage in globals.css.
In `@web/docker-compose.yml`:
- Around line 8-10: The compose file currently contains hardcoded credentials
(POSTGRES_PASSWORD, POSTGRES_USER, POSTGRES_DB); replace the inline
POSTGRES_PASSWORD with environment interpolation (e.g.
POSTGRES_PASSWORD=${POSTGRES_PASSWORD}) or use an env_file reference so the
secret is loaded from an external .env, then add a .env.example with a
placeholder value and ensure the real .env is gitignored; update any code or CI
that relies on those vars to load from the environment and validate
POSTGRES_PASSWORD is set at runtime.
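A minimal sketch of the interpolated service, assuming the existing single-service layout (defaults and the `:?` required-variable guard are illustrative):

```yaml
# web/docker-compose.yml (sketch): password comes from the environment/.env;
# compose fails fast if POSTGRES_PASSWORD is unset.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-career_ops}
      POSTGRES_DB: ${POSTGRES_DB:-career_ops}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set}
    ports:
      - "5434:5432"
```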
In `@web/server/drizzle.config.ts`:
- Around line 1-9: Replace the non-null assertion on process.env.DATABASE_URL in
the defineConfig dbCredentials with an explicit runtime check: import
dotenv/config at the top of the file, read DATABASE_URL into a const, and if
it's missing throw a clear Error (e.g. "DATABASE_URL is required") before
calling defineConfig so defineConfig/dbCredentials uses a validated url;
reference the defineConfig call and the dbCredentials object to locate where to
replace process.env.DATABASE_URL! and mirror the guard pattern used in seed.ts.
In `@web/server/src/config/env.ts`:
- Line 6: The default CAREER_OPS_ROOT value in env.ts is wrong (currently "..")
and causes join(careerOpsRoot, "data", "applications.md") in SyncService to
point at web/data instead of the repo root; update the z.string().default for
CAREER_OPS_ROOT to "../.." so that process.cwd() = web/server resolves correctly
to the repository root for file lookups (references: CAREER_OPS_ROOT in env.ts
and the SyncService usage of join(careerOpsRoot, "data", "applications.md")).
In `@web/server/src/db/client.ts`:
- Around line 5-8: createDb currently constructs the underlying postgres client
(variable client) but returns only the drizzle instance, preventing callers from
calling client.end() and leaking connection pools; change createDb to return
both the drizzle instance and a teardown method (e.g., an end or close function
that calls client.end()), so callers (for example server bootstrap or tests) can
call db.end() in Fastify's onClose hook or test teardown; reference the createDb
function and the local client variable when making this change.
In `@web/server/src/db/migrations/0000_icy_marvel_zombies.sql`:
- Around line 1-54: The migration lacks indexes on foreign-key and
high-selectivity filter columns which will cause sequential scans as data grows;
add b-tree indexes for status_history.application_id, discovered_jobs.source_id,
discovered_jobs.status, and applications.status (i.e., create indexes on those
columns in the same migration or a follow-up) so FK lookups, JOINs and frequent
WHERE status = ... filters use indexes instead of sequential scans.
In `@web/server/src/db/migrations/meta/0000_snapshot.json`:
- Line 95: The migration shows no indexes defined for tables that are frequently
queried; update the Drizzle table definitions in schema.ts by adding index
declarations inside the pgTable callbacks for the relevant tables (e.g., add an
index entry in the status_history pgTable definition referenced by the symbol
statusHistory—use a descriptive symbol like applicationIdIdx that calls
index(...).on(t.applicationId)—and add an index entry in the applications
pgTable definition referenced by the symbol applications—use a symbol like
statusIdx that calls index(...).on(t.status)); regenerate migrations so Drizzle
emits the corresponding index creation statements.
In `@web/server/src/index.ts`:
- Line 23: The CORS setup currently reflects any Origin via app.register(cors, {
origin: true });; change it to read a CORS_ORIGIN environment variable, parse it
into an allowlist (comma-separated or single origin), and register the cors
plugin with an origin function that returns the origin when it is in the
allowlist and false otherwise. Update the registration call in index.ts (where
app.register(cors, ... ) is used) to use that origin-checking function and
ensure endpoints that mutate tracker/source/sync state rely on this restricted
CORS policy.
- Line 41: The not-found handler currently always serves the SPA index via
app.setNotFoundHandler((_request, reply) => reply.sendFile("index.html"));
change it to detect API requests (e.g., check request.url or request.raw.url and
see if it startsWith "/api/") and for those return a JSON 404 (e.g.,
reply.code(404).send({ error: "Not Found" })); otherwise continue to serve the
SPA index.html as before. Update the handler referenced by
app.setNotFoundHandler to perform this conditional branching and use
reply.code(...).send(...) for API responses instead of reply.sendFile.
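The branch can hang off a tiny predicate; a sketch (the helper name is an assumption, not existing code):

```typescript
// Decide whether a not-found URL belongs to the API (JSON 404) or should fall
// through to the SPA's index.html.
function isApiRequest(url: string | undefined): boolean {
  return typeof url === "string" && url.startsWith("/api/");
}

// Inside the Fastify handler this would drive the branch, e.g.:
//   app.setNotFoundHandler((request, reply) =>
//     isApiRequest(request.raw.url)
//       ? reply.code(404).send({ error: "Not Found" })
//       : reply.sendFile("index.html"));
```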
In `@web/server/src/modules/applications/schema.ts`:
- Around line 14-21: Replace the freeform status validator in listQuerySchema
with the same constrained validator used in patchBodySchema so only values in
CANONICAL_STATUSES are allowed; specifically, change the status line from
z.string().optional() to a validator built from CANONICAL_STATUSES (e.g.
z.enum(CANONICAL_STATUSES).optional() or z.union(CANONICAL_STATUSES.map(s =>
z.literal(s))).optional()) to ensure invalid statuses produce a 400 instead of
silently returning empty results.
In `@web/server/src/modules/applications/service.ts`:
- Around line 187-191: The code writes raw app.status and app.notes into the
Markdown table row (in the block updating updatedLines[lineIdx]), which corrupts
parsing if notes contain '|' or newlines; before assigning to cells[6] and
cells[9] (in the update loop in service.ts) escape pipe characters (e.g.,
replace '|' with '\|') and replace newlines with '<br>' (or strip them) for both
app.notes and app.status, and mirror the inverse transformation inside
parseApplicationsMd so it unescapes '\|' back to '|' and converts '<br>' back to
newline when reading; ensure the escape/unescape logic is centralized (or shared
helper) and applied consistently to avoid shifting cells during split.
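A sketch of the shared escape/unescape pair (function names are assumptions; note `\r\n` is normalized to `\n` on the round trip):

```typescript
// Escape a value for a one-line Markdown table cell: backslashes first so
// later escapes are unambiguous, then pipes, then newlines.
function escapeCell(value: string): string {
  return value
    .replace(/\\/g, "\\\\")
    .replace(/\|/g, "\\|")
    .replace(/\r?\n/g, "<br>");
}

// Inverse transformation, applied in parseApplicationsMd when reading.
function unescapeCell(value: string): string {
  return value
    .replace(/<br>/g, "\n")
    .replace(/\\\|/g, "|")
    .replace(/\\\\/g, "\\");
}
```

Both the writer (cells[6], cells[9]) and parseApplicationsMd would import these so the split on `|` never shifts cells.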
- Around lines 119-139: The getStats method currently selects every row (materializing the entire applications table) and computes counts and the average in memory. Push the aggregation into SQL: replace the full-table select with an aggregate query that groups by applications.status to produce per-status counts and uses AVG(score) for avgScore (totalCount can come from SUM(count) or a separate scalar aggregate); then build byStatus from the grouped rows and set avgScore/totalCount from the aggregate results so no rows are loaded into memory.
- Around line 141-195: syncToMarkdown currently only updates status and notes
when patching existing table rows (see syncToMarkdown, updatedLines, cells[...]
reconstruction) which silently drops DB changes to other fields; either (A)
change the row-reconstruction to write the entire serialized row for each app
(use serializeApplicationsMd or format all fields from parsed/app into the table
row) so the markdown fully reflects the DB, or (B) add a clear comment in
syncToMarkdown documenting that only status and notes are authoritative from the
DB and all other fields remain sourced from the markdown; additionally, to
prevent concurrent file races on appsMdPath (also targeted by
PipelineService.syncStatusToMarkdown and SyncService.exportToMarkdown), add a
per-file mutex or a single async sync queue around writes/read-modify-write
(wrap the readFile/writeFile and updatedLines logic with the mutex) or ensure
callers await a single shared sync promise instead of fire-and-forget.
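The per-file mutex can be a small promise-chain queue; a sketch (class and method names are assumptions):

```typescript
// Serialize read-modify-write cycles on a file (e.g., applications.md) so
// ApplicationsService, PipelineService.syncStatusToMarkdown, and
// SyncService.exportToMarkdown cannot interleave writes.
class FileMutex {
  private tail: Promise<unknown> = Promise.resolve();

  runExclusive<T>(task: () => Promise<T>): Promise<T> {
    // Start the task only after the previous one settles (success or failure).
    const run = this.tail.then(task, task);
    // Keep the chain alive even if this task rejects.
    this.tail = run.catch(() => undefined);
    return run;
  }
}
```

Each service would share one instance per path and wrap its readFile/updatedLines/writeFile sequence in `mutex.runExclusive(async () => { ... })`, awaiting the result instead of fire-and-forget.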
In `@web/server/src/modules/jobs/router.ts`:
- Around line 33-45: Replace the inline status check in the app.patch handler
with a Zod-based validation to match existing endpoints: add a
patchStatusBodySchema in schema.ts (e.g., z.object({ status:
z.enum(["dismissed","reviewed"]) })), import it into router.ts, call
patchStatusBodySchema.safeParse(request.body) in the app.patch handler, throw
the same ValidationError with the zod parse error when parsing fails, and then
pass the validated data.status into getService().updateStatus(request.params.id,
status) as before.
In `@web/server/src/modules/jobs/schema.ts`:
- Around line 12-19: The object schema's required string fields "title" and
"company" need non-empty guards and "postedAt" needs date-format validation:
update the z.object(...) field validators so title and company use
z.string().min(1) to reject empty strings, and make postedAt validate ISO
timestamps (e.g. z.string().datetime().optional() or use a z.preprocess that
coerces/parses Date and then validates ISO) so only valid TIMESTAMP-compatible
values pass; apply these changes to the schema literal where title, company, and
postedAt are defined.
- Around line 3-9: Replace the permissive status validator in
jobsListQuerySchema with an enum validator that restricts values to the four
known discovered_jobs statuses: update the status field (currently
z.string().optional()) to use z.enum([...]).optional() or
z.nativeEnum(MyStatusEnum).optional() so only the documented statuses are
accepted; reference the jobsListQuerySchema and the status property when making
the change and ensure the chosen enum values exactly match the documented
discovered_jobs statuses.
In `@web/server/src/modules/jobs/service.ts`:
- Around line 31-38: The ILIKE patterns built for discoveredJobs.title/company
interpolate raw user search terms so LIKE metacharacters are not escaped; create
a small helper (e.g., escapeLike or escapeSearch) that replaces \, % and _ with
escaped versions (e.g., prefix with backslash), call it on the incoming search
string, and then use ilike(discoveredJobs.title, `%${escaped}%`) and
ilike(discoveredJobs.company, `%${escaped}%`) so both title and company patterns
use the escaped value; ensure the same helper is used wherever search is turned
into a LIKE/ILIKE pattern so matching is deterministic.
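A sketch of the helper, assuming Postgres's default backslash escape character for LIKE/ILIKE:

```typescript
// Escape LIKE/ILIKE metacharacters (backslash, %, _) in user-supplied search
// terms before interpolating them into a pattern.
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => `\\${ch}`);
}

// Usage (Drizzle): ilike(discoveredJobs.title, `%${escapeLike(search)}%`)
```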
- Around line 122-164: sendToPipeline currently appends blindly and updates the
DB separately, causing duplicates and inconsistency; change it to first check
for idempotency by verifying job.status !== "sent_to_pipeline" and scanning
content for job.url (use pendientesSectionHeader and pipelinePath) and if
already present return the existing job without modification. Then perform the
DB status update and file write as an atomic operation: start a DB transaction
(use this.db transaction APIs around
update(discoveredJobs).set(...).where(...)), update the job to
"sent_to_pipeline" only if it wasn't already, write the new pipeline content to
pipelinePath (using writeFile) inside the transaction scope, and on any failure
rollback the DB transaction and restore the original file content variable
(content) so the filesystem and DB remain consistent; ensure functions and
variables referenced include sendToPipeline, pendientesSectionHeader,
pipelinePath, writeFile, and update(discoveredJobs) so the changes are applied
in the right spots.
- Around line 145-153: The code appends under the "## Pendientes" header but
assumes a trailing newline; when content.indexOf("\n", idx) returns -1 it sets
afterHeader to 0 and inserts at file start. Fix by computing a newLineIdx =
content.indexOf("\n", idx) and if newLineIdx === -1 set afterHeader = idx +
pendientesSectionHeader.length (so insertion happens immediately after the
header), otherwise set afterHeader = newLineIdx + 1; update the block that
calculates afterHeader (the variables idx and afterHeader around
pendientesSectionHeader) before slicing content.
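The corrected index computation as a pure helper; a sketch (the missing-header fallback to end-of-file is an added assumption):

```typescript
// Insertion point immediately after the section header line; safe when the
// header has no trailing newline (end of file).
function insertionIndexAfterHeader(content: string, header: string): number {
  const idx = content.indexOf(header);
  if (idx === -1) return content.length; // header missing: append at end
  const newLineIdx = content.indexOf("\n", idx);
  return newLineIdx === -1 ? idx + header.length : newLineIdx + 1;
}
```

sendToPipeline would slice `content` at this index instead of trusting `indexOf("\n", idx) + 1`.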
- Around line 91-99: Replace the substring-based unique check in the catch block
that inspects (err as Error).message with a precise SQLSTATE check using the
error's code property (e.g., test (err as { code?: string }).code === "23505");
update the catch around the block that increments skipped to use this code
equality check and keep the else branch to rethrow the error, ensuring you add a
narrow type assertion or guard for err.code to satisfy TypeScript.
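The guard is a one-liner; a sketch (23505 is the standard Postgres SQLSTATE for unique_violation):

```typescript
// Precise unique-violation check via SQLSTATE instead of message matching.
function isUniqueViolation(err: unknown): boolean {
  return (
    typeof err === "object" &&
    err !== null &&
    (err as { code?: string }).code === "23505"
  );
}
```

The catch block would increment `skipped` when `isUniqueViolation(err)` and rethrow otherwise.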
In `@web/server/src/modules/pipeline/router.ts`:
- Around line 18-28: The route in app.patch currently performs a loose
hand-rolled check for toStatus and lets any non-empty string reach
service.moveCard; replace this with a zod-based validation: create a co-located
schema (export moveCardSchema and VALID_STATUSES as suggested) that uses z.enum
for toStatus, import moveCardSchema into router.ts, call
moveCardSchema.safeParse(request.body) and throw a ValidationError with the zod
error if parsing fails, then pass the validated value (parsed.data.toStatus)
into service.moveCard instead of using request.body directly.
In `@web/server/src/modules/pipeline/service.ts`:
- Around line 18-20: The declared types KanbanStatus, ArchiveStatus, and
AnyStatus are unused and should either be removed or applied to tighten APIs;
update the signatures to use them (e.g., change validateTransition(from:
AnyStatus, to: AnyStatus) and moveCard(id: string, toStatus: AnyStatus)) and
adjust any internal calls to pass the correctly-typed status values, or if you
prefer removal, delete the three type aliases and keep validateTransition and
moveCard using string consistently; ensure imports/constants KANBAN_COLUMNS and
ARCHIVE_STATUSES remain the single source of truth for allowed statuses.
- Around line 22-32: The function validateTransition currently allows any change
between statuses as long as from !== to and both are in KANBAN_COLUMNS or
ARCHIVE_STATUSES; update the code to make intent explicit by either renaming
validateTransition to something like isAllowedStatusTransition or
isBidirectionalStatusChange, or add a clear JSDoc/comment above
validateTransition that states it intentionally permits any bidirectional move
between Kanban and archive statuses (referencing KANBAN_COLUMNS and
ARCHIVE_STATUSES) so future maintainers won't expect state-machine rules; adjust
any callers if you rename the function.
- Around line 67-99: moveCard performs the UPDATE on applications and INSERT
into statusHistory as separate DB calls, risking divergence; wrap the update and
insert in a single transaction (use this.db.transaction(async (tx) => { ... }))
and use the transaction handle (tx.update(...).where(...).returning() and await
tx.insert(statusHistory).values(...)) so both statements commit or rollback
together; keep the NotFoundError/validateTransition checks before the
transaction, return the transactional update result, and call
this.syncStatusToMarkdown(...) after the transaction completes (outside the tx)
so the sync runs only on successful commit.
In `@web/server/src/modules/sources/service.ts`:
- Around line 52-54: Calls to syncToPortalsYml are currently fire-and-forget
(e.g., this.syncToPortalsYml().catch(...)), which swallows errors and allows
concurrent read/modify/write races; change callers to await the Promise and
propagate or handle errors instead of just logging (replace the .catch pattern
with an awaited call and proper error handling), and modify syncToPortalsYml
itself to serialize concurrent invocations by introducing a simple mutex/queue
(e.g., a private _syncLock Promise or semaphore inside the same class) so that
read/modify/write is atomic and failures are returned to callers (ensure
syncToPortalsYml rejects on error rather than always resolving).
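The mutex/queue described above can be as small as a promise chain. A sketch under assumed names (`SyncSerializer` and `run` are illustrative): inside the service, `syncToPortalsYml` would route its read-modify-write body through `run(...)`, and the call sites would `await` the returned promise so errors propagate instead of being swallowed.

```typescript
// Minimal per-instance write serializer. Each call chains onto the previous
// one, so read-modify-write sections never overlap; the returned promise
// rejects if the task fails, so callers see the error.
class SyncSerializer {
  private tail: Promise<unknown> = Promise.resolve();

  run<T>(task: () => Promise<T>): Promise<T> {
    // Run after the current tail whether the previous task resolved or rejected.
    const next = this.tail.then(task, task);
    // Keep the chain alive even when `next` rejects; the caller still
    // observes the rejection through the returned promise.
    this.tail = next.catch(() => undefined);
    return next;
  }
}
```

This serializes concurrent invocations without any external locking library, at the cost of strictly sequential syncs, which is fine for a single local YAML file.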
In `@web/server/src/modules/sync/parsers.ts`:
- Around line 23-24: The parser and serializer must handle pipe characters
inside cell values: add escape/unescape helpers (e.g., escapeCell and
unescapeCell) that escape backslashes and pipes when serializing in
serializeApplicationsMd and unescape them when parsing; update the parsing logic
that currently uses line.split("|") to split on unescaped pipes (use a regex
split on '|' not preceded by backslash or temporarily replace escaped pipes with
a placeholder before splitting), and ensure serializeApplicationsMd calls
escapeCell for each field so round-trips preserve literal '|' and '\'
characters.
- Around line 72-75: The function parsePortalsYml uses yaml.load(content) and
assumes a non-null object; guard against yaml.load returning null by checking
the result before accessing properties: after calling yaml.load in
parsePortalsYml, treat its return as possibly null and default doc to an empty
object (so that doc.tracked_companies and doc.title_filter are safe), e.g.
replace direct casts/usage of doc with a null-coalesced object and then derive
companies and titleFilter from that safe object (refer to parsePortalsYml,
yaml.load, doc.tracked_companies, and title_filter).
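The null guard is easiest to see in isolation. In `parsePortalsYml`, `raw` below would be the return value of `yaml.load(content)`, which js-yaml documents as `undefined`/`null` for empty or null documents; the interface fields mirror the portals.yml keys, and the helper names are illustrative:

```typescript
// Shape assumed from the review; field names mirror portals.yml keys.
interface PortalsDoc {
  tracked_companies?: Array<{ name: string; enabled?: boolean }>;
  title_filter?: { positive?: string[]; negative?: string[] };
}

// yaml.load("") returns undefined and yaml.load("null") returns null,
// so default to an empty object before touching any properties.
function normalizePortalsDoc(raw: unknown): PortalsDoc {
  return (raw ?? {}) as PortalsDoc;
}

function companiesOf(raw: unknown): Array<{ name: string; enabled?: boolean }> {
  const doc = normalizePortalsDoc(raw);
  return doc.tracked_companies ?? [];
}
```

With this in place, an empty portals.yml parses to an empty source list instead of throwing on property access.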
In `@web/server/src/modules/sync/router.ts`:
- Around line 9-19: The import/export routes (app.post("/api/sync/import") and
app.post("/api/sync/export")) call getService().importFromMarkdown and
exportToMarkdown with no auth checks; protect these mutations by enforcing
authentication and authorization. Update these handlers to require an
authenticated user (use the project's auth middleware or validate request auth
token) and verify a proper role/permission (e.g., admin or sync:write) before
calling getService(); if the check fails return 401/403. Use existing auth
helpers if available (or a middleware wrapper) so both routes perform the same
auth/role check prior to invoking service.importFromMarkdown or
service.exportToMarkdown.
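A sketch of the shared-token comparison such a preHandler could use; `isAuthorized` is a hypothetical helper, and the Fastify route wiring, the 401 reply, and the `SYNC_API_TOKEN` sourcing are omitted. `timingSafeEqual` avoids leaking token contents through comparison timing (note it throws on length mismatch, hence the length guard, which only reveals that lengths differ):

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time check of a presented token (from an Authorization or
// x-sync-token header) against the configured SYNC_API_TOKEN value.
function isAuthorized(presented: string | undefined, expected: string | undefined): boolean {
  if (!presented || !expected) return false; // unset token => always deny
  const a = Buffer.from(presented);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Registering one such preHandler on both routes keeps the import and export mutations behind the same check.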
In `@web/server/src/modules/sync/service.ts`:
- Around line 95-118: exportToMarkdown currently writes directly to
applications.md which can be truncated or clobbered; change it to write to a
temp file in the same directory (e.g., `${appsMdPath}.tmp`), fsync the temp,
then atomically swap with fs.rename to replace applications.md, and ensure you
serialize concurrent writes by adding a per-path mutex/lock around
exportToMarkdown (and any other writers such as
ApplicationsService.syncToMarkdown or PipelineService.syncStatusToMarkdown) so
only one writer touches the file at a time; keep use of serializeApplicationsMd
and the appsMdPath/careerOpsRoot values unchanged except for the new
safe-write/rename and locking logic.
- Around line 95-118: exportToMarkdown currently overwrites the entire
applications.md (using serializeApplicationsMd(parsed)) which destroys any
non-table content; change it to follow ApplicationsService.syncToMarkdown's
approach: read the existing applications.md from appsMdPath, parse and locate
the table region, replace only the table rows with the newly serialized table
generated from parsed (or splice in the output of serializeApplicationsMd for
the table section), and write back the merged content; update exportToMarkdown
to use appsMdPath, readFile, the same table-detection/splice logic used by
ApplicationsService.syncToMarkdown, and only overwrite the table portion
(alternatively, add an explicit force flag/confirmation parameter to
exportToMarkdown if preserving content is not desired).
- Around line 18-93: The importFromMarkdown function currently does per-item
upserts causing N+M round-trips and no atomicity; wrap the applications and
sources import blocks each in this.db.transaction(async (tx) => { ... }) and
perform a single multi-row upsert using
tx.insert(applications).values(parsedApplications).onConflictDoUpdate({...}) and
similarly tx.insert(sources).values(parsedSources).onConflictDoUpdate({...}) to
reduce round-trips and ensure rollback on error; keep the existing ENOENT
handling but switch DB calls inside the transaction to use the tx instance and
ensure the onConflict target (applications.number, sources.name) and
updatedAt/apply date mapping are preserved.
In `@web/server/tests/sync.test.ts`:
- Around line 83-121: Add a regression test for parsePortalsYml to assert it
returns an empty array when given empty or "null" YAML input; update
web/server/tests/sync.test.ts by adding a new it block (e.g., "returns empty
array for empty YAML content") that calls parsePortalsYml("") and
parsePortalsYml("null") and expects [] for both, ensuring parsePortalsYml's
null/empty guard remains covered.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 90899685-d57b-4ac5-a321-8a1bd400f201
📒 Files selected for processing (71)

.claude-plugin/plugin.json
.gitignore
CLAUDE.md
docs/WEB_DASHBOARD.md
docs/superpowers/plans/2026-05-04-web-dashboard.md
docs/superpowers/specs/2026-05-04-web-dashboard-design.md
merge-tracker.mjs
package.json
scan.mjs
web/.env.example
web/client/index.html
web/client/package.json
web/client/postcss.config.js
web/client/src/components/applications/applications-table.tsx
web/client/src/components/applications/filters.tsx
web/client/src/components/applications/status-badge.tsx
web/client/src/components/feed/feed-filters.tsx
web/client/src/components/feed/job-card.tsx
web/client/src/components/pipeline/kanban-board.tsx
web/client/src/components/pipeline/kanban-card.tsx
web/client/src/components/pipeline/kanban-column.tsx
web/client/src/components/sources/source-form.tsx
web/client/src/components/sources/source-list.tsx
web/client/src/lib/api.ts
web/client/src/lib/constants.ts
web/client/src/lib/queries.ts
web/client/src/lib/types.ts
web/client/src/lib/utils.ts
web/client/src/main.tsx
web/client/src/routeTree.gen.ts
web/client/src/routes/__root.tsx
web/client/src/routes/feed.tsx
web/client/src/routes/index.tsx
web/client/src/routes/pipeline.tsx
web/client/src/routes/sources.tsx
web/client/src/styles/globals.css
web/client/tailwind.config.ts
web/client/tsconfig.json
web/client/tsconfig.node.json
web/client/vite.config.ts
web/docker-compose.yml
web/server/drizzle.config.ts
web/server/package.json
web/server/src/config/env.ts
web/server/src/db/client.ts
web/server/src/db/migrations/0000_icy_marvel_zombies.sql
web/server/src/db/migrations/meta/0000_snapshot.json
web/server/src/db/migrations/meta/_journal.json
web/server/src/db/schema.ts
web/server/src/db/seed.ts
web/server/src/index.ts
web/server/src/modules/applications/router.ts
web/server/src/modules/applications/schema.ts
web/server/src/modules/applications/service.ts
web/server/src/modules/jobs/router.ts
web/server/src/modules/jobs/schema.ts
web/server/src/modules/jobs/service.ts
web/server/src/modules/pipeline/router.ts
web/server/src/modules/pipeline/service.ts
web/server/src/modules/sources/router.ts
web/server/src/modules/sources/schema.ts
web/server/src/modules/sources/service.ts
web/server/src/modules/sync/parsers.ts
web/server/src/modules/sync/router.ts
web/server/src/modules/sync/service.ts
web/server/src/shared/errors.ts
web/server/src/shared/middleware.ts
web/server/src/types.ts
web/server/tests/pipeline.test.ts
web/server/tests/sync.test.ts
web/server/tsconfig.json
    "scan": "node scan.mjs",
-   "gemini:eval": "node gemini-eval.mjs"
+   "gemini:eval": "node gemini-eval.mjs",
+   "web:dev": "concurrently \"npm run web:server:dev\" \"npm run web:client:dev\"",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if concurrently is declared as a root-level dependency
rg -n "concurrently" package.json
# Also check if it's available as a transitive dep or in web sub-packages
fd "package.json" --max-depth 3 --exec grep -l "concurrently" {}

Repository: santifer/career-ops
Length of output: 192
🏁 Script executed:
cat package.json | head -100

Repository: santifer/career-ops
Length of output: 2002
🏁 Script executed:
tail -20 package.json

Repository: santifer/career-ops
Length of output: 528
🏁 Script executed:
rg -l "workspaces|lerna" package.json && cat package.json | grep -A 5 "workspaces"

Repository: santifer/career-ops
Length of output: 45
🏁 Script executed:
ls -la | grep -E "lerna|package-lock|yarn.lock" && \
  rg "concurrently" --type json

Repository: santifer/career-ops
Length of output: 45
concurrently is missing from the root devDependencies, so web:dev will fail.
The web:dev script at line 19 invokes concurrently directly, but it is not declared in the root package.json dependencies or devDependencies. Add concurrently to the root devDependencies to resolve this.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@package.json` at line 19, The "web:dev" npm script calls "concurrently" but
the root package.json devDependencies does not include it; add "concurrently" to
the root package.json devDependencies (so the "web:dev" script can run) and then
install/update devDependencies to ensure it's available locally—edit
package.json devDependencies to include the dependency and run the package
manager to sync.
// Notify dashboard (optional — fails silently if dashboard is not running)
try {
  const payload = newOffers.map(o => ({
    title: o.title, company: o.company, url: o.url,
    source: o.source, location: o.location || undefined,
  }));
  await fetch('http://localhost:3000/api/jobs/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
} catch { /* dashboard not running — no-op */ }
Add timeout + response guard for optional dashboard ingestion.
This optional call can still stall script completion because it has no timeout, and non-2xx responses are currently treated as success.
Proposed fix

  try {
    const payload = newOffers.map(o => ({
      title: o.title, company: o.company, url: o.url,
      source: o.source, location: o.location || undefined,
    }));
-   await fetch('http://localhost:3000/api/jobs/ingest', {
+   const controller = new AbortController();
+   const timer = setTimeout(() => controller.abort(), 2000);
+   const res = await fetch('http://localhost:3000/api/jobs/ingest', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
+     signal: controller.signal,
    });
+   clearTimeout(timer);
+   if (!res.ok) throw new Error(`Dashboard ingest failed: HTTP ${res.status}`);
  } catch { /* dashboard not running — no-op */ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Notify dashboard (optional — fails silently if dashboard is not running)
try {
  const payload = newOffers.map(o => ({
    title: o.title, company: o.company, url: o.url,
    source: o.source, location: o.location || undefined,
  }));
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 2000);
  const res = await fetch('http://localhost:3000/api/jobs/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
    signal: controller.signal,
  });
  clearTimeout(timer);
  if (!res.ok) throw new Error(`Dashboard ingest failed: HTTP ${res.status}`);
} catch { /* dashboard not running — no-op */ }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scan.mjs` around lines 358 - 369, The optional dashboard ingest using
newOffers.map(...) and fetch('http://localhost:3000/api/jobs/ingest', ...) can
hang and treats non-2xx responses as success; update this block to use an
AbortController with a short timeout (e.g., 2–5s) to abort the fetch if it
stalls, clear the timeout in finally, and after awaiting fetch check response.ok
and treat non-2xx as a failure (throw or log) so the catch path handles it;
preserve the existing silent-fail behavior by catching and ignoring errors but
ensure the timeout is enforced and cleared.
const cells = line.split("|").map((s) => s.trim());
const num = parseInt(cells[1]);
Pipe characters in cell values corrupt both parsing and serialization.
line.split("|") (line 23) assumes no | appears inside a cell value. Likewise, serializeApplicationsMd (line 59) concatenates field values without escaping. Since markdown remains the source of truth, a company name, role, or notes value containing | will:
- Parse the row with wrong field alignment (silently producing bad data).
- Serialize a row that breaks the table structure on round-trip.
🐛 Proposed fix — add an escape/unescape helper

+function escapeCell(value: string): string {
+  return value.replace(/\|/g, "\\|").replace(/\n/g, " ");
+}
+
+function unescapeCell(value: string): string {
+  return value.replace(/\\\|/g, "|");
+}

// In parseApplicationsMd, unescape each cell after splitting:
-const cells = line.split("|").map((s) => s.trim());
+const cells = line.split("|").map((s) => unescapeCell(s.trim()));

// In serializeApplicationsMd, escape values before interpolating:
-return `| ${a.number} | ${a.date} | ${a.company} | ${a.role} | ${score} | ${a.status} | ${pdf} | ${report} | ${a.notes} |`;
+return `| ${a.number} | ${a.date} | ${escapeCell(a.company)} | ${escapeCell(a.role)} | ${score} | ${a.status} | ${pdf} | ${report} | ${escapeCell(a.notes)} |`;

Also applies to: 55-59
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/sync/parsers.ts` around lines 23 - 24, The parser and
serializer must handle pipe characters inside cell values: add escape/unescape
helpers (e.g., escapeCell and unescapeCell) that escape backslashes and pipes
when serializing in serializeApplicationsMd and unescape them when parsing;
update the parsing logic that currently uses line.split("|") to split on
unescaped pipes (use a regex split on '|' not preceded by backslash or
temporarily replace escaped pipes with a placeholder before splitting), and
ensure serializeApplicationsMd calls escapeCell for each field so round-trips
preserve literal '|' and '\' characters.
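The placeholder-based split described in that prompt can be sketched as follows. `escapeCell` and `splitRow` are the suggested (not yet existing) helpers, and the control characters are arbitrary sentinels assumed never to appear in real cell text:

```typescript
// Sentinels assumed absent from real cell content.
const ESC_SLASH = "\u0000";
const ESC_PIPE = "\u0001";

// Escape backslashes first, so escaping "|" cannot double-escape them.
function escapeCell(value: string): string {
  return value.replace(/\\/g, "\\\\").replace(/\|/g, "\\|");
}

// Split a row on unescaped pipes by hiding escaped sequences first,
// then restoring them inside each cell (handles a trailing "\" in a
// cell, which a naive lookbehind split would get wrong).
function splitRow(line: string): string[] {
  return line
    .replace(/\\\\/g, ESC_SLASH) // hide escaped backslashes
    .replace(/\\\|/g, ESC_PIPE) // hide escaped pipes
    .split("|")
    .map((cell) =>
      cell
        .trim()
        .replace(new RegExp(ESC_PIPE, "g"), "|")
        .replace(new RegExp(ESC_SLASH, "g"), "\\"),
    );
}
```

Serializing every field through `escapeCell` and parsing rows with `splitRow` makes the Markdown round trip lossless for literal `|` and `\` characters.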
describe("parsePortalsYml", () => {
  it("parses tracked companies into source records", () => {
    const yml = `
title_filter:
  positive: ["AI", "ML"]
  negative: ["Junior"]

tracked_companies:
  - name: OpenAI
    careers_url: https://jobs.ashbyhq.com/openai
    enabled: true
  - name: Anthropic
    api: https://boards-api.greenhouse.io/v1/boards/anthropic/jobs
    enabled: false
`;

    const result = parsePortalsYml(yml);

    expect(result).toHaveLength(2);
    expect(result[0]).toEqual({
      name: "OpenAI",
      type: "ashby",
      config: {
        careers_url: "https://jobs.ashbyhq.com/openai",
        title_filter: { positive: ["AI", "ML"], negative: ["Junior"] },
      },
      enabled: true,
    });
    expect(result[1]).toEqual({
      name: "Anthropic",
      type: "greenhouse",
      config: {
        api: "https://boards-api.greenhouse.io/v1/boards/anthropic/jobs",
        title_filter: { positive: ["AI", "ML"], negative: ["Junior"] },
      },
      enabled: false,
    });
  });
});
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Add a test for parsePortalsYml with empty/null YAML input.
The current test suite has no coverage for empty or null YAML content. Once the null-guard fix is applied to parsePortalsYml, a regression test will prevent it from being removed:
it("returns empty array for empty YAML content", () => {
  expect(parsePortalsYml("")).toEqual([]);
  expect(parsePortalsYml("null")).toEqual([]);
});

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/tests/sync.test.ts` around lines 83 - 121, Add a regression test
for parsePortalsYml to assert it returns an empty array when given empty or
"null" YAML input; update web/server/tests/sync.test.ts by adding a new it block
(e.g., "returns empty array for empty YAML content") that calls
parsePortalsYml("") and parsePortalsYml("null") and expects [] for both,
ensuring parsePortalsYml's null/empty guard remains covered.
- RemoteOK: fetches https://remoteok.com/api (JSON, first element is metadata)
- Remotive: fetches https://remotive.com/api/remote-jobs?category=software-dev
- WeWorkRemotely: fetches RSS feed, parses "Company: Title" format
- All 3 sources apply the same title_filter from portals.yml
- Deduplicates against existing scan history and pipeline
- Can be disabled with `open_boards: false` in portals.yml
- Ported from Job Hunter project's discovery module

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New scan-scrape.mjs script complements scan.mjs with browser-based scraping:

- Indeed (es.indeed.com) — Spain job search with Cloudflare detection
- InfoJobs (infojobs.net) — Spain's largest job board, cookie consent handling
- Tecnoempleo (tecnoempleo.com) — Spanish tech job board
- Sequential scraping (one page at a time) to avoid detection
- Same title filter and dedup logic as scan.mjs
- Dashboard sync on completion
- Ported from Job Hunter project's discovery module

Usage: node scan-scrape.mjs [--dry-run] [--board indeed] [--query "python"]

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ltiple jobs
Generates tailored CVs (PDF) and draft application answers for N jobs in parallel:
- Parses applications.md for jobs above score threshold
- Fetches JD from report URLs
- Tailors CV summary per role/company
- Generates PDFs via Playwright (parallel, not sequential)
- Drafts common application answers (why company, hard problem, salary, etc.)
- Saves everything to output/{num}-{company-slug}/
Usage: node batch-apply.mjs [--min-score 3.5] [--nums 1,2,5] [--concurrency 5]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adzuna:
- Added to scan.mjs as API source (requires ADZUNA_APP_ID + ADZUNA_APP_KEY)
- Configurable country (default: Spain) and query via portals.yml
- Skips silently if no credentials set
- Free API key from developer.adzuna.com

LinkedIn:
- Added to scan-scrape.mjs with persistent browser profile
- First run: `node scan-scrape.mjs --linkedin-login` opens visible browser for manual login
- Session saved to data/browser-profiles/linkedin/, reused on subsequent runs
- Scrapes job cards from search results (title, company, location, URL)
- Skips if no saved session (won't crash)

Both ported from Job Hunter project's discovery module.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 40
♻️ Duplicate comments (11)
web/server/src/modules/sync/router.ts (1)
9-19: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Guard `/api/sync/import` and `/api/sync/export` with auth.

Both endpoints rewrite Markdown source-of-truth and DB state with no authentication. With `origin: true` on the CORS layer, these are reachable from any origin via a browser. Add a `preHandler` that requires a shared token (`SYNC_API_TOKEN`) or proper auth before invoking the service.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/sync/router.ts` around lines 9 - 19, The POST routes for /api/sync/import and /api/sync/export call getService().importFromMarkdown() and exportToMarkdown() without authentication; add a preHandler to both route definitions that validates a shared secret (process.env.SYNC_API_TOKEN) or other auth before calling the service. Implement a single reusable preHandler function (e.g., validateSyncToken) that reads the incoming Authorization header or x-sync-token, compares it securely to SYNC_API_TOKEN, returns a 401/403 on failure, and attach that preHandler to the route options for the handlers invoking importFromMarkdown and exportToMarkdown.

web/server/src/index.ts (2)
23-23: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restrict CORS origins instead of allowing all origins.

`origin: true` reflects any incoming Origin header back as `Access-Control-Allow-Origin`, which is unsafe for endpoints that mutate tracker/source/sync state. Drive this from a `CORS_ORIGIN` env var with an explicit allowlist.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/index.ts` at line 23, The current cors registration uses origin: true which reflects any Origin header; change the cors setup at the app.register(cors, ...) call to read a CORS_ORIGIN environment variable (e.g. comma-separated allowlist), build a Set of allowed origins, and provide an origin validation function (origin => allowed.has(origin) || false) or callback that only returns the incoming origin when it matches the allowlist; update the code around app.register(cors, { origin: ... }) to use that allowlist function and handle undefined/no-origin requests safely.
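A dependency-free sketch of that allowlist check (`buildOriginChecker` is an illustrative name; with `@fastify/cors`, the returned predicate would back the callback form of the `origin` option):

```typescript
// CORS_ORIGIN would hold a comma-separated allowlist, e.g.
// "http://localhost:5173,https://dashboard.example".
function buildOriginChecker(
  corsOriginEnv: string | undefined,
): (origin: string | undefined) => boolean {
  const allowed = new Set(
    (corsOriginEnv ?? "")
      .split(",")
      .map((o) => o.trim())
      .filter(Boolean),
  );
  // Requests without an Origin header (curl, same-origin navigation) are not
  // CORS requests; returning false just omits the CORS headers for them.
  return (origin) => origin !== undefined && allowed.has(origin);
}
```

Wired into Fastify this might look like `app.register(cors, { origin: (origin, cb) => cb(null, check(origin)) })`, with `check = buildOriginChecker(process.env.CORS_ORIGIN)`.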
38-42: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Return JSON 404 for unknown `/api/*` routes; SPA fallback should only apply to non-API paths.

The current `setNotFoundHandler` serves `index.html` for every miss, so API clients hitting an unknown endpoint get HTML back. Branch on `request.url.startsWith("/api/")` and respond with a JSON 404 in that case.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/index.ts` around lines 38 - 42, The SPA fallback currently served by app.setNotFoundHandler will return index.html for all unknown routes; change the handler (the setNotFoundHandler registered alongside clientDist and fastifyStatic) to check if request.url.startsWith("/api/") and for API paths respond with a JSON 404 (e.g. reply.code(404).send({ error: "Not Found" })) while preserving reply.sendFile("index.html") for non-API requests so SPA routing still works.

web/server/src/modules/sources/service.ts (1)
52-54: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Await and serialize portals.yml sync; don't fire-and-forget.

These three call sites swallow sync failures and return success regardless. Combined with the read-modify-write inside `syncToPortalsYml` (lines 101-138), overlapping mutations can race and write a stale `portals.yml`. Await an in-instance promise queue so syncs are serialized and errors propagate.

Also applies to: 79-81, 96-98

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/sources/service.ts` around lines 52 - 54, Call sites that currently do fire-and-forget via this.syncToPortalsYml().catch(...) must instead await a serialized in-instance queue so errors bubble and concurrent syncs don't race; implement a simple promise-queue on the Service instance (e.g. a private property like this._portalsSyncQueue initialized to Promise.resolve()) and replace the three fire-and-forget invocations with awaiting a queued runner that does this._portalsSyncQueue = this._portalsSyncQueue.then(() => this.syncToPortalsYml()); then await that resulting promise and let errors propagate (remove the .catch(...) that swallows errors) so read-modify-write in syncToPortalsYml is serialized and failures surface.

web/server/drizzle.config.ts (1)
1-9: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Validate `DATABASE_URL` at runtime instead of using `!`.

The non-null assertion will silently pass `undefined` to drizzle-kit when the env var is missing. Import `dotenv/config` and throw a clear error when `DATABASE_URL` is not set, mirroring the guard pattern used in `seed.ts`.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/drizzle.config.ts` around lines 1 - 9, Replace the non-null assertion on DATABASE_URL by validating it at runtime: import 'dotenv/config' at top, read process.env.DATABASE_URL into a local const, throw a clear Error if it's undefined (same guard pattern as in seed.ts), and pass that validated const into the defineConfig dbCredentials.url instead of using process.env.DATABASE_URL!; reference defineConfig and dbCredentials in the file when applying this change.

web/client/src/routes/pipeline.tsx (1)
14-18: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Distinguish loading from failure in the render branch.

`isLoading || !data` will keep showing "Loading..." indefinitely if the query errors or returns undefined. Use `isError`/`error` from the query for a separate failure state.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/routes/pipeline.tsx` around lines 14 - 18, Replace the single combined check (isLoading || !data) with distinct branches: show the loading placeholder only when isLoading is true; render an error UI (using isError and error) when isError is true (include error.message); and handle the empty/undefined data case separately (e.g., show a "No data" message) before rendering <KanbanBoard data={data} onMoveCard={...}>; update the render logic around isLoading, isError, error, and data so moveCard.mutate is only called when data exists.

web/client/src/routes/sources.tsx (1)
28-28: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Close the form only on successful mutation.

`setShowForm(false)` runs immediately after `createSource.mutate(data)`, so failures dismiss the form and hide the error. Move the close into the per-call `onSuccess` callback.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/routes/sources.tsx` at line 28, The form is being closed immediately because setShowForm(false) is called right after createSource.mutate(data); change this to only close on a successful mutation by removing the immediate setShowForm(false) and instead provide an onSuccess handler for the mutation call (e.g., call createSource.mutate(data, { onSuccess: () => setShowForm(false) }) or await createSource.mutateAsync(data).then(() => setShowForm(false))). Update the SourceForm onSubmit usage so the form is dismissed only in the createSource onSuccess callback and not unconditionally.

web/client/src/routes/feed.tsx (1)
29-36: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`data?.data.map(...)` throws if `data.data` is null/undefined — fix still pending.

Optional chaining on `data?.data` only guards against `data` itself being nullish; `.map()` on a null/undefined `data.data` will throw a `TypeError`.

🔧 Proposed fix

- {data?.data.map((job) => (
+ {data?.data?.map((job) => (
    <JobCard key={job.id} job={job} onSendToPipeline={(id) => sendToPipeline.mutate(id)} onDismiss={(id) => dismiss.mutate(id)} />
  ))}
- {data?.data.length === 0 && (
+ {data?.data?.length === 0 && (
    <div className="col-span-full text-center py-8 text-muted-foreground">No new jobs. Run a scan to discover more.</div>
  )}

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/routes/feed.tsx` around lines 29 - 36, The current data?.data.map(...) can throw if data.data is null/undefined; change the rendering to use a safe jobs array (e.g., const jobs = data?.data ?? []) and map over that instead of data?.data, and use jobs.length (not data?.data.length) for the empty-state check so JobCard rendering and the "No new jobs" message (related symbols: JobCard, sendToPipeline.mutate, dismiss.mutate) work when API returns null/undefined.

web/server/src/db/client.ts (1)
5-8: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Postgres connection pool is never exposed for teardown — fix still pending.

`client` is created inside `createDb` but swallowed; callers cannot invoke `client.end()`, leaving the pool open on server shutdown and causing test workers to hang.

🔧 Proposed fix

  export function createDb(databaseUrl: string) {
    const client = postgres(databaseUrl);
-   return drizzle(client, { schema });
+   const db = drizzle(client, { schema });
+   return Object.assign(db, { end: () => client.end() });
  }

Then in the Fastify bootstrap (`index.ts`):

  fastify.addHook("onClose", async () => { await fastify.db.end(); });

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/db/client.ts` around lines 5 - 8, The createDb function currently hides the underlying postgres client so callers cannot close the pool; update createDb (exported function createDb) to return both the drizzle instance and the raw postgres client (or attach an end/close method onto the returned object) so callers can call client.end()/close on shutdown; locate createDb in client.ts and change its return from just drizzle(client, { schema }) to an object { db: drizzle(...), client } (or equivalent) and update callers (e.g., fastify.db usage in bootstrap) to call the client's end method in onClose.

web/client/src/routes/__root.tsx (1)
17-20: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add `activeOptions={{ exact: true }}` to the root nav link.

The `"/"` link still uses TanStack Router's default prefix matching, so it stays active on all nested routes (`/pipeline`, `/feed`, `/sources`).

🔧 Proposed fix

```diff
 {NAV_ITEMS.map((item) => (
   <Link
     key={item.to}
     to={item.to}
+    activeOptions={item.to === "/" ? { exact: true } : undefined}
     className="flex items-center gap-3 px-3 py-2 rounded-md text-sm text-muted-foreground hover:bg-accent hover:text-accent-foreground transition-colors [&.active]:bg-accent [&.active]:text-accent-foreground [&.active]:font-medium"
   >
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/routes/__root.tsx` around lines 17 - 20: the root navigation Link (the one rendering with props key={item.to} and to={item.to}) is using TanStack Router's prefix matching, so the "/" link remains active on nested routes; update the Link JSX to pass `activeOptions={{ exact: true }}` (only for the root "/" item) so the active styling applies only on the exact path — add a conditional that when `item.to === "/"` you include `activeOptions={{ exact: true }}` on that Link component.

web/client/src/routes/index.tsx (1)
32-36: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fetch failures silently render as an empty table — still unaddressed.

`data?.data || []` collapses both "no results" and "request failed" into the same empty state, making network errors invisible to the user.

🔧 Proposed fix

```diff
-  const { data, isLoading } = useApplications(params);
+  const { data, isLoading, isError } = useApplications(params);
```

```diff
-      {isLoading ? (
-        <div className="text-muted-foreground py-8 text-center">Loading...</div>
-      ) : (
-        <ApplicationsTable data={data?.data || []} />
-      )}
+      {isLoading ? (
+        <div className="text-muted-foreground py-8 text-center">Loading...</div>
+      ) : isError || !data ? (
+        <div className="text-destructive py-8 text-center">Failed to load applications.</div>
+      ) : (
+        <ApplicationsTable data={data.data} />
+      )}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/routes/index.tsx` around lines 32 - 36, The UI currently treats a failed fetch the same as an empty result because it renders ApplicationsTable with data?.data || []; update the render to check the query error state (e.g., use the query's error or isError alongside isLoading) and show an error message/UI instead of rendering ApplicationsTable when the request failed; specifically, use the same query result that provides isLoading/isError/error (the variable named data and its accompanying error flag) to conditionally render an error block (or pass an explicit status prop) and only pass a real array to ApplicationsTable when there is no error and data?.data is defined.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.gitignore:
- Line 47: The `.gitignore` entry "package-lock.json" is too broad and
unintentionally ignores package-lock.json in subdirectories (e.g.,
web/server/package-lock.json); change the rule to "/package-lock.json" if you
only want to ignore the repo root lockfile, or move the "package-lock.json" line
into the existing "Generated" section (consistent with "bun.lock") if you intend
to ignore all package-lock.json files project-wide; update the
"package-lock.json" entry accordingly.
In `@batch-apply.mjs`:
- Line 32: TEMPLATE_PATH is declared but unused; modify generateCvHtml to load
and use the HTML template file instead of building markup inline: read
TEMPLATE_PATH (the 'templates/cv-template.html' file) at startup or inside
generateCvHtml, parse placeholders in the template (e.g., {{personal}},
{{work}}, {{education}} or similar tokens you add), and inject the parsed CV
sections produced by the existing parsing logic into those placeholders; remove
the duplicated layout strings from generateCvHtml and ensure TEMPLATE_PATH is
actually referenced so the template file drives layout and formatting.
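A minimal sketch of the placeholder substitution this item describes, assuming `{{token}}`-style placeholders such as `{{personal}}` and `{{work}}`; the token names and the `renderTemplate` helper are illustrative, not the script's actual API:

```typescript
// Replace each {{token}} in the template with its rendered CV section.
// Unknown tokens are left visible so missing data is easy to spot in
// the generated HTML. Token names here are hypothetical examples.
export function renderTemplate(
  template: string,
  sections: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, token) =>
    token in sections ? sections[token] : match,
  );
}
```

`generateCvHtml` would then read TEMPLATE_PATH once and call `renderTemplate(template, parsedSections)` instead of concatenating layout strings inline.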
- Around line 208-322: The file incorrectly hardcodes user-specific data in the
HTML and answer drafts; update generateCvHtml to read and use TEMPLATE_PATH
(templates/cv-template.html) and inject parsed sections (contact-row, education,
technical skills, header name/location/links) from cv.md rather than embedding
literals, and refactor draftAnswers to pull salary/location/language/profile
fields from config/profile.yml and derive employer and impact bullets from the
parsed CV data (not hardcoded "Autonomous Enterprise Management Systems");
locate and replace the inline header/contact/education/skills HTML blocks and
the draftAnswers object in the current module (functions generateCvHtml and
draftAnswers, constant TEMPLATE_PATH) so the script becomes data-driven from
templates and profile YAML.
- Around line 115-137: The fetchJd function accepts untrusted URLs and must
validate and constrain them to prevent SSRF: before calling fetch (in fetchJd),
parse the URL and require a safe scheme (https: only, or http: only for explicit
allow-listed hostnames), resolve the hostname to an IP and reject
private/loopback/link-local ranges (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12,
192.168.0.0/16, 169.254.0.0/16, ::1, fc00::/7), and reject unresolved/invalid
hosts; also protect against large downloads by streaming the response body and
enforcing a byte cap (smaller than the current 5KB slice) as you read, aborting
when the limit is reached; keep existing timeout/AbortController behavior and
ensure fetch errors return null. Use these checks where URLs from
extractUrlFromReport are passed into fetchJd so untrusted inputs are validated
before any network call.
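A sketch of the pre-fetch checks, covering the scheme and private-range rejection; the DNS resolution of hostnames and the byte-capped streaming read described above are omitted here, and the function names are illustrative:

```typescript
import { isIP } from "node:net";

// Private/loopback IPv4 ranges from the review: 127/8, 10/8, 172.16/12,
// 192.168/16, 169.254/16.
const PRIVATE_V4 = [
  /^127\./, /^10\./, /^192\.168\./, /^169\.254\./,
  /^172\.(1[6-9]|2\d|3[01])\./,
];

export function isPrivateAddress(ip: string): boolean {
  if (isIP(ip) === 4) return PRIVATE_V4.some((re) => re.test(ip));
  // Loopback, unique-local (fc00::/7), and link-local IPv6 count as private.
  const v6 = ip.toLowerCase();
  return v6 === "::1" || v6.startsWith("fc") || v6.startsWith("fd") || v6.startsWith("fe80");
}

export function isFetchableUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable input is rejected outright
  }
  if (url.protocol !== "https:") return false; // https-only per the review
  // Literal-IP hosts are checked immediately; hostnames would additionally
  // need DNS resolution to an IP before the real fetch.
  return isIP(url.hostname) === 0 || !isPrivateAddress(url.hostname);
}
```

`fetchJd` would call `isFetchableUrl` (plus the resolve-and-recheck step for hostnames) before constructing the request.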
- Around line 463-475: Wrap the browser lifecycle so the Playwright browser
opened by chromium.launch(...) is always closed: create the browser before
running parallelProcess(targetApps, concurrency, (app) => processApp(app, cv,
browser)) and ensure that parallelProcess (and any work using browser) runs
inside a try block with a finally that awaits browser.close() to prevent leaks
if processApp or parallelProcess throws. For the dashboard notify call
(fetch('http://localhost:3000/api/sync/import', { method: 'POST' })), use an
AbortController with a short timeout (reuse the pattern from fetchJd) and pass
its signal to fetch so the request is aborted if the dashboard doesn't respond;
ensure the controller is cleared/canceled after the fetch attempt and wrap the
fetch in try/catch to swallow errors as before.
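The browser-lifecycle half of this item boils down to a try/finally wrapper; a generic sketch, where the Playwright browser would be the acquired resource (`withResource` is an illustrative helper, not existing code):

```typescript
// Acquire a closable resource, run work with it, and guarantee close()
// even when the work throws — the shape suggested for wrapping
// chromium.launch(...) around parallelProcess.
export async function withResource<R extends { close(): Promise<void> }, T>(
  acquire: () => Promise<R>,
  use: (resource: R) => Promise<T>,
): Promise<T> {
  const resource = await acquire();
  try {
    return await use(resource);
  } finally {
    // Runs on both success and failure, so the browser never leaks.
    await resource.close();
  }
}
```

Usage would look like `withResource(() => chromium.launch(...), (browser) => parallelProcess(targetApps, concurrency, (app) => processApp(app, cv, browser)))`, with the dashboard notify using `AbortSignal.timeout(...)` as described.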
- Around line 103-111: The extractUrlFromReport function currently reads
reportPath directly and is vulnerable to path traversal; fix it by importing
resolve and sep from 'path' and normalizing/anchoring reportPath to the
repository root (reject absolute paths and paths that resolve outside the
project root), only allow files under an explicit whitelist directory (e.g.,
"reports" or "data/reports") and expected extensions (e.g., ".md"), and if the
resolved path does not start with projectRoot + sep or is not inside an allowed
subfolder return null before calling existsSync/readFileSync; keep the rest of
the URL-extraction logic unchanged.
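The anchoring check above can be sketched as a small validator, assuming the "reports" whitelist directory and ".md" extension suggested in the item (`safeReportPath` is an illustrative name):

```typescript
import { extname, resolve, sep } from "node:path";

// Return the resolved path only if it stays inside <root>/reports and is a
// markdown file; anything else (absolute paths, ../ traversal, wrong
// extension) yields null before any filesystem access happens.
export function safeReportPath(projectRoot: string, reportPath: string): string | null {
  const allowedDir = resolve(projectRoot, "reports");
  const resolved = resolve(projectRoot, reportPath);
  if (!resolved.startsWith(allowedDir + sep)) return null;
  if (extname(resolved) !== ".md") return null;
  return resolved;
}
```

`extractUrlFromReport` would then only call `existsSync`/`readFileSync` on a non-null result.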
In `@docs/superpowers/specs/2026-05-04-web-dashboard-design.md`:
- Line 20: Several fenced code blocks in the spec are missing language labels
(markdownlint MD040); update each bare triple-backtick block to include the
appropriate language specifier: add "text" or "plaintext" for the directory tree
and API endpoint blocks, "bash" for the first-run snippet, and "json" for the
scripts block so tools like markdownlint can correctly parse and lint them (look
for the directory tree, API endpoint samples, the first-run example, and the
scripts JSON block in the document and replace ``` with
```text/```plaintext/```bash/```json as appropriate).
In `@scan-scrape.mjs`:
- Around line 267-271: In main(), replace the single PORTALS_PATH existence
check with a unified onboarding gate that verifies all core files exist (cv.md,
config/profile.yml, modes/_profile.md, and portals.yml); if any are missing,
call the onboarding routine (e.g., enterOnboarding()/runOnboarding()) and abort
further work so the process never proceeds to evaluations/scans or writes
pipeline.md or scan-history.tsv until onboarding completes; update references in
main() to use this composite check and ensure any early exits (process.exit or
returns) occur immediately when onboarding is triggered.
- Around line 385-396: The optional dashboard ingest fetch in scan-scrape.mjs
should use a timeout and verify response.ok so hangs and HTTP errors don't stall
the scan: wrap the POST to 'http://localhost:3000/api/jobs/ingest' in an
AbortController with a short timeout (e.g. 1–3s), pass signal to fetch, await
the response and check response.ok (and optionally response.status) before
treating it as success, and catch AbortError/other errors to silently skip the
dashboard without blocking the rest of the scan; update the code around the
payload/newOffers/fetch block accordingly.
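A sketch of the bounded ingest call: short timeout via `AbortSignal.timeout`, an explicit `response.ok` check, and silent failure so a missing dashboard never stalls the scan (`postJsonQuietly` and the injectable `fetchFn` parameter are illustrative, not existing code):

```typescript
// POST a JSON payload with a hard time budget; returns true only on a 2xx
// response. Any abort, network error, or HTTP error counts as a silent skip.
export async function postJsonQuietly(
  url: string,
  payload: unknown,
  timeoutMs = 2000,
  fetchFn: typeof fetch = fetch, // injectable for testing
): Promise<boolean> {
  try {
    const res = await fetchFn(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(payload),
      signal: AbortSignal.timeout(timeoutMs), // aborts a hung dashboard
    });
    return res.ok; // HTTP errors are failure, not success
  } catch {
    return false; // AbortError, ECONNREFUSED, etc.
  }
}
```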
- Around line 93-112: appendToPipeline currently calls
readFileSync(PIPELINE_PATH) unconditionally which throws if data/pipeline.md (or
its directory) doesn't exist; update appendToPipeline to first ensure the
containing directory exists and the file exists (use fs.existsSync or try/catch
around readFileSync), and if the file is missing initialize text to a sensible
starting content that includes the "## Pendientes" and "## Procesadas" sections
(so the existing insertion logic still works), then proceed to insert the new
offers and use writeFileSync to create the file; reference appendToPipeline,
PIPELINE_PATH, readFileSync, and writeFileSync when making the changes.
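The missing-file guard can be sketched as a read-or-initialize helper; the seed content below is an assumption about what "sensible starting content" looks like, kept to the two section headers the insertion logic needs:

```typescript
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname } from "node:path";

// Minimal seed so the existing "## Pendientes" insertion logic works on a
// fresh checkout. The exact heading text matches the review's description.
const EMPTY_PIPELINE = "# Pipeline\n\n## Pendientes\n\n## Procesadas\n";

export function readOrInitPipeline(pipelinePath: string): string {
  if (!existsSync(pipelinePath)) {
    mkdirSync(dirname(pipelinePath), { recursive: true }); // ensure data/
    writeFileSync(pipelinePath, EMPTY_PIPELINE, "utf-8");
  }
  return readFileSync(pipelinePath, "utf-8");
}
```

`appendToPipeline` would start with `const text = readOrInitPipeline(PIPELINE_PATH)` instead of the unconditional `readFileSync`.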
In `@scan.mjs`:
- Around line 111-132: The helper fetchAdzuna currently applies titleFilter
before returning, causing the overall scan summary (which uses jobs.length as
raw results and only increments totalFiltered later) to undercount total jobs
and filtered-out jobs; modify fetchAdzuna (and the other helpers mentioned: the
RemoteOK/Remotive/WeWorkRemotely functions at the referenced ranges) to return
the raw results array (unfiltered) or return an object like { jobs: rawResults,
filteredCount } so the caller can compute totals correctly, and ensure callers
use the raw jobs for jobs.length and apply titleFilter later (or consume
filteredCount) so the summary metrics remain consistent with existing logic.
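The suggested return shape can be sketched as a split helper, so callers recover the raw total as `jobs.length + filteredCount` (the `RawJob` type and helper name are illustrative):

```typescript
interface RawJob { title: string; url: string }

// Apply the title filter once and report how many jobs it removed, so the
// scan summary can count raw results and filtered-out jobs consistently.
export function splitByTitleFilter(
  jobs: RawJob[],
  titleFilter: (title: string) => boolean,
): { jobs: RawJob[]; filteredCount: number } {
  const kept = jobs.filter((j) => titleFilter(j.title));
  return { jobs: kept, filteredCount: jobs.length - kept.length };
}
```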
In `@web/client/src/components/applications/applications-table.tsx`:
- Around line 40-52: The table header <th> elements render sorting only via
icons; add an aria-sort attribute driven by header.column.getIsSorted() so
screen readers can announce sort direction—map getIsSorted() === "asc" ->
"ascending", "desc" -> "descending", and null/undefined -> "none" (or omit if
preferred), and attach it to the same <th> that uses
header.column.getToggleSortingHandler() and header.column.getIsSorted(); ensure
the attribute is updated whenever header.getContext()/header.column state
changes.
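The mapping itself is tiny; a sketch, using TanStack Table's real `getIsSorted()` return type (`false | "asc" | "desc"`):

```typescript
// Map TanStack Table's sort state to the aria-sort attribute value.
export function ariaSortFor(
  sorted: "asc" | "desc" | false,
): "ascending" | "descending" | "none" {
  return sorted === "asc" ? "ascending"
    : sorted === "desc" ? "descending"
    : "none";
}
```

Attached in JSX as `<th aria-sort={ariaSortFor(header.column.getIsSorted())} ...>`.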
In `@web/client/src/components/pipeline/kanban-board.tsx`:
- Line 35: The DragOverlay currently renders <KanbanCard app={activeApp} />
which causes two instances to call useSortable({ id: app.id }) and both report
isDragging; fix this by adding an isOverlay prop to KanbanCard and, when
isOverlay is true, skip/avoid calling useSortable (or bypass the opacity logic)
inside KanbanCard so it renders as the preview only; update the DragOverlay
render to <KanbanCard app={activeApp} isOverlay /> and adjust the KanbanCard
component to accept isOverlay and conditionally not use useSortable (or not
apply the 0.5 opacity) when isOverlay is true.
In `@web/client/src/components/pipeline/kanban-card.tsx`:
- Around line 26-30: The icon button in kanban-card.tsx (the button rendering
ExternalLink and using handleClick) relies on title instead of an accessible
name; replace or supplement the title with an explicit aria-label (e.g.,
aria-label="Open job posting") or an aria-labelledby that points to visible or
visually-hidden text so screen readers and keyboard users get a clear name for
the button; keep the existing handleClick and ExternalLink usage but ensure the
button has the aria-label/aria-labelledby attribute and remove reliance on title
alone.
In `@web/client/src/components/sources/source-form.tsx`:
- Around line 20-26: The inputs in SourceForm (the name input bound to
name/setName, the type select bound to type/setType and SOURCE_TYPES, and the
careersUrl input bound to careersUrl/setCareersUrl) rely only on placeholders;
add explicit <label> elements for each control and associate them via id/htmlFor
attributes (give each input/select a unique id, e.g.,
nameInput/typeSelect/careersUrlInput) so screen readers can identify them;
update the JSX in source-form.tsx to include these labels (visually hidden if
needed) without changing existing state handlers.
In `@web/client/src/lib/api.ts`:
- Around line 18-19: The post helper currently drops falsy-but-valid JSON bodies
because it checks truthiness before JSON.stringify; update the post
implementation (the post: <T>(path: string, body?: unknown) => request<T>(...))
to only omit stringifying when body is strictly undefined (e.g., use body ===
undefined ? undefined : JSON.stringify(body)), so values like 0, false, and ""
are preserved and sent to request.
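The fix reduces to one strict-equality check; a sketch (`serializeBody` is an illustrative extraction of the post helper's body handling):

```typescript
// Only `undefined` skips serialization; 0, false, "", and null are all
// valid JSON values and must reach JSON.stringify.
export function serializeBody(body?: unknown): string | undefined {
  return body === undefined ? undefined : JSON.stringify(body);
}
```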
- Around line 12-13: The code always calls res.json() which throws on successful
responses with empty bodies; change the return logic to detect empty responses
before parsing (e.g. if res.status === 204 || res.headers.get('Content-Length')
=== '0' || !res.headers.get('Content-Type')) return undefined as T, otherwise
call res.json(); alternatively wrap res.json() in a try/catch and return
undefined on a SyntaxError to avoid crashing—update the branch that currently
uses res.status and res.json() accordingly.
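A sketch of the tolerant branch, combining both suggested strategies (status/header detection plus a parse fallback); `parseJsonResponse` is an illustrative name:

```typescript
// Resolve 204s and bodyless responses to undefined instead of letting
// res.json() throw on an empty body.
export async function parseJsonResponse<T>(res: Response): Promise<T | undefined> {
  if (res.status === 204 || res.headers.get("content-length") === "0") {
    return undefined;
  }
  try {
    return (await res.json()) as T;
  } catch {
    // Empty or non-JSON body on an otherwise successful response.
    return undefined;
  }
}
```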
In `@web/client/src/lib/constants.ts`:
- Around line 1-10: STATUS_COLORS is typed as Record<string,string>, allowing
silent misspellings or missing statuses; change its typing to a stricter type so
the compiler enforces the exact status keys (e.g., create a union type like type
Status =
'Evaluated'|'Applied'|'Responded'|'Interview'|'Offer'|'Rejected'|'Discarded'|'SKIP'
and then declare STATUS_COLORS as Record<Status,string> or use a const assertion
and derive a Status type from the object's keys); update the declaration of
STATUS_COLORS accordingly so missing or extra keys produce compile errors and
keep the same key names and values.
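The const-assertion variant can be sketched as follows; the color values here are placeholders, not the dashboard's actual classes:

```typescript
// `as const` freezes the keys, and Status is derived from them, so a
// misspelled or missing status is a compile-time error wherever
// STATUS_COLORS is indexed with a Status value.
export const STATUS_COLORS = {
  Evaluated: "bg-slate-100",   // placeholder classes — not the real values
  Applied: "bg-blue-100",
  Responded: "bg-cyan-100",
  Interview: "bg-amber-100",
  Offer: "bg-green-100",
  Rejected: "bg-red-100",
  Discarded: "bg-gray-100",
  SKIP: "bg-gray-50",
} as const;

export type Status = keyof typeof STATUS_COLORS;
```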
In `@web/client/src/main.tsx`:
- Line 22: The code uses a non-null assertion on document.getElementById("root")
when calling ReactDOM.createRoot which yields an unclear runtime error if the
element is missing; change it to first fetch the element into a variable (e.g.,
rootEl = document.getElementById("root")), check if rootEl is null and throw a
clear Error mentioning the missing mount element (or fallback appropriately),
and then pass the validated rootEl into ReactDOM.createRoot(...). This updates
the ReactDOM.createRoot and removes the trailing non-null assertion so failures
produce an explicit, helpful message.
In `@web/client/src/routes/__root.tsx`:
- Around line 11-32: The root route currently created by createRootRoute (export
const Route) only supplies a component and lacks an errorComponent, so any
render error in child routes can blank out the entire app; add an errorComponent
to the createRootRoute call (alongside the existing component) that renders a
sensible fallback UI for the app shell (e.g., keeps the aside/navigation visible
and shows an error message/Retry button in the main area, and exposes error
details for debugging) and ensure the errorComponent uses the same layout and
renders the routed error (or a reset/ retry action) so child-route failures
don't remove the shell; update the Route definition to include this
errorComponent property.
In `@web/server/src/db/migrations/0000_icy_marvel_zombies.sql`:
- Line 7: Add CHECK constraints to the migrations so applications.status and
discovered_jobs.status can only contain the canonical values: modify the
migration that defines "status" for applications (column applications.status) to
include a CHECK limiting values to the allowed canonical states per the design
spec, and similarly update the discovered_jobs.status column
(discovered_jobs.status) to include a CHECK restricting values to
'new','reviewed','sent_to_pipeline','dismissed'; implement these as CHECK (...)
clauses in the CREATE/ALTER statements in 0000_icy_marvel_zombies.sql so the DB
enforces the permitted enums at the schema level.
In `@web/server/src/db/schema.ts`:
- Line 11: The status column in the table schema (status: varchar("status", {
length: 50 }).notNull().default("Evaluated")) is currently unconstrained; change
it to use a pgEnum (or add a CHECK constraint) that enumerates the allowed
values from the KANBAN_COLUMNS set so the DB enforces valid statuses. Update the
schema.ts definition for the status column to reference a new/existing pgEnum
type (or append a CHECK comparing status against the allowed list) and ensure
the default value "Evaluated" is one of the enum members; keep the enum values
synchronized with the KANBAN_COLUMNS constant used by the application.
- Line 11: The status column currently uses an open varchar which allows invalid
values; change it to a DB-enforced enum or add a CHECK constraint in schema.ts:
create a pgEnum (e.g., pgEnum("kanban_status", [...allowedValues])) or a CHECK
using the same KANBAN_COLUMNS set, then replace varchar("status", ...) with the
enum column (or add .check(...) referencing the allowed set) and keep
.notNull().default("Evaluated"); reference the existing KANBAN_COLUMNS constant
to populate the allowed values so application and DB constraints stay in sync.
- Around line 5-52: The schema lacks indexes on frequently-filtered status
columns which will cause full-table scans; add explicit indexes for
applications.status and discoveredJobs.status in the schema definitions (e.g.,
create indexes after the pgTable declarations for the applications and
discovered_jobs tables or use your ORM's index helper) and consider adding a
composite index for common filter/sort patterns (for example status + created_at
or status + applied_at) to speed list and pipeline grouping queries; update the
symbols applications and discoveredJobs (and the physical columns "status" /
"status" in those tables) accordingly.
- Around line 5-52: Add B-tree indexes for the high-cardinality filter columns
to prevent full-table scans: create an index on applications.status (e.g.,
idx_applications_status) and on discovered_jobs.status (e.g.,
idx_discovered_jobs_status) — ideally implement these either by adding index
definitions to the schema export or, better, via a migration that runs CREATE
INDEX CONCURRENTLY to avoid locking; for discovered_jobs consider a partial
index for the common predicate (CREATE INDEX CONCURRENTLY
idx_discovered_jobs_status_new ON discovered_jobs(status) WHERE status = 'new')
to optimize the feed query.
- Around line 17-18: The updatedAt column currently uses .defaultNow() so it
never changes on updates; for both the applications and sources table
definitions in schema.ts, change updatedAt to also call .$onUpdate(() => new
Date()) (or the equivalent timestamp-on-update factory your ORM offers) so the
column gets a fresh timestamp on row updates while keeping .defaultNow() for
inserts; update the updatedAt definitions referenced as applications.updatedAt
and sources.updatedAt accordingly.
- Around line 17-18: The schema's updatedAt columns for applications and sources
are not using Drizzle's .$onUpdate() so timestamps are being updated manually in
the service layer; modify the table column definitions for updatedAt (e.g., the
updatedAt timestamp("updated_at", { withTimezone: true }) entries in the
applications and sources table defs) to chain .$onUpdate(sql`now()` or
equivalent Drizzle function) so Drizzle auto-updates the timestamp, and then
remove manual updatedAt: new Date() assignments from service update methods to
rely on the schema-level behavior.
In `@web/server/src/index.ts`:
- Around line 44-50: Add graceful shutdown handlers for SIGTERM and SIGINT that
call your server and DB cleanup routines: make the Fastify instance (app) and
any DB client accessible in main's scope (or export a shutdown function) then
register process.on('SIGTERM'/'SIGINT') handlers that: stop accepting new
requests by calling await app.close(), flush and close DB connections (e.g.,
await db.close() or similar client shutdown), log outcome, and only then exit
the process. Ensure the handlers are idempotent, set a hard timeout fallback to
force exit if cleanup hangs, and remove immediate process.exit(1) from
main().catch so errors are logged but shutdown handlers run first.
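The handler shape can be sketched generically; `app.close`/`db.end` mirror the Fastify and postgres-client shapes discussed above, and the 10-second hard fallback is an assumed budget:

```typescript
// Idempotent shutdown: the first signal runs cleanup, later signals reuse
// the same promise; a hard timeout force-exits if cleanup hangs.
export function registerShutdown(
  app: { close(): Promise<void> },
  db: { end(): Promise<void> },
  exit: (code: number) => void = process.exit, // injectable for testing
): () => Promise<void> {
  let closing: Promise<void> | undefined;
  const shutdown = async () => {
    if (closing) return closing; // second SIGTERM/SIGINT reuses the first run
    closing = (async () => {
      const timer = setTimeout(() => exit(1), 10_000); // hard fallback
      await app.close(); // stop accepting new requests first
      await db.end();    // then release the connection pool
      clearTimeout(timer);
    })();
    return closing;
  };
  process.on("SIGTERM", shutdown);
  process.on("SIGINT", shutdown);
  return shutdown;
}
```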
In `@web/server/src/modules/applications/router.ts`:
- Around line 30-37: The route handlers do not validate request.params.id as a
UUID, so invalid IDs reach service.getById/service.update and cause a 500; add a
UUID guard at the start of both app.get and app.patch handlers: validate
request.params.id (e.g., with a UUID regex or a utility isUuid function) and if
invalid return reply.status(400).send({ error: "Invalid id" }) before calling
getService()/service.getById or service.update; update the handlers that
reference request.params.id (app.get<{ Params: { id: string } }>,
app.patch(...)) to perform this check.
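The guard itself is a one-line regex check; a sketch using a deliberately lenient UUID shape (any version/variant nibble) so legitimate ids aren't rejected:

```typescript
// 8-4-4-4-12 hex shape; lenient on version/variant nibbles.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

export function isUuid(value: string): boolean {
  return UUID_RE.test(value);
}
```

At the top of each handler: `if (!isUuid(request.params.id)) return reply.status(400).send({ error: "Invalid id" });`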
In `@web/server/src/modules/applications/service.ts`:
- Around line 112-114: The fire-and-forget call to this.syncToMarkdown() from
update() (and analogous calls to syncStatusToMarkdown in pipeline/service.ts)
can cause concurrent reads/writes that silently overwrite applications.md;
change the implementation so markdown writes are serialized: either (a)
introduce an in-process async mutex/queue keyed by the appsMdPath and acquire it
inside syncToMarkdown (and syncStatusToMarkdown) before
readFile/modify/writeFile, or (b) await syncToMarkdown in the request handler so
the request serializes the operation, or (c) push row-update events to a single
background worker that performs writes sequentially; also add a brief comment in
syncToMarkdown near the logic that updates cells[6] and cells[9] documenting
that only status/notes are intentionally modified to avoid future silent
desyncs.
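Option (a), the in-process mutex keyed by file path, can be sketched as a promise chain; `withFileLock` is an illustrative helper, not the service's actual code:

```typescript
// One promise tail per file path: each task chains onto the previous one
// for the same path, so read/modify/write cycles never interleave.
const tails = new Map<string, Promise<unknown>>();

export function withFileLock<T>(path: string, task: () => Promise<T>): Promise<T> {
  const prev = tails.get(path) ?? Promise.resolve();
  // Chain regardless of whether the previous task failed.
  const next = prev.catch(() => undefined).then(task);
  tails.set(path, next);
  return next;
}
```

`syncToMarkdown` (and `syncStatusToMarkdown`) would wrap their body in `withFileLock(this.appsMdPath, async () => { ...read/modify/write... })`.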
- Around line 85-117: The update method mutates applications and conditionally
inserts statusHistory outside a transaction; wrap both the update and the
conditional insert in a single db.transaction so they commit or rollback
together: inside ApplicationService.update use this.db.transaction(async (tx) =>
{ const [existing] = await
tx.select().from(applications).where(eq(applications.id, id)); perform the
tx.update(applications).set(updateData).where(...).returning(); if (body.status
changed) await tx.insert(statusHistory).values(...); return the updated row from
the transaction }), and call this.syncToMarkdown() after the transaction
resolves (so sync runs only after successful commit). Ensure all references to
applications update and statusHistory insert use the transaction object (tx) not
this.db.
- Around line 35-42: The search string is being interpolated into ilike patterns
so user-supplied '%' and '_' act as wildcards; before building the pattern for
ilike in applications/service.ts (the block that pushes
or(ilike(applications.company, `%${search}%`), ilike(applications.role,
`%${search}%`))), sanitize search by escaping '%' and '_' (replace '%' with '\%'
and '_' with '\_') and then wrap the escaped value with '%' for the contains
match; apply the same fix in jobs/service.ts where ilike is used similarly so
literal '%' and '_' from users are treated as characters not wildcards.
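The escaping step can be sketched as a small pure function; note the backslash itself must be escaped first, or the later replacements double-escape:

```typescript
// Escape LIKE/ILIKE metacharacters so user input matches literally.
export function escapeLike(input: string): string {
  return input.replace(/\\/g, "\\\\").replace(/[%_]/g, (ch) => "\\" + ch);
}
```

Used as `ilike(applications.company, `%${escapeLike(search)}%`)` (and likewise for `applications.role` and the jobs service).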
In `@web/server/src/modules/jobs/service.ts`:
- Around line 145-153: The code that appends under the pendientesSectionHeader
can corrupt content when the header is the last line because
content.indexOf("\n", idx) returns -1; modify the block that computes
afterHeader (and uses content.slice) to explicitly handle the “header at EOF”
case (e.g., detect idx !== -1 and nextNewlineIndex !== -1 and set afterHeader =
idx + pendientesSectionHeader.length + 1 otherwise set afterHeader =
content.length + 1 or insert a newline before inserting the bullet), ensuring
you use pendientesSectionHeader and job.url to build the insertion; additionally
make the operation resilient/atomic by either performing the
discoveredJobs.status DB update first (update discoveredJobs.status from "new"
to "discovered" before modifying content) or, if you must write the file first,
check for an existing bullet in content (detect job.url or an existing
"Pendientes" bullet) and skip duplicate appends so a failed DB update won't
create duplicates.
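The header-at-EOF fix can be sketched as an insertion helper that handles both the missing-header and missing-trailing-newline cases explicitly (`insertAfterHeader` is an illustrative name):

```typescript
// Insert `line` on the line immediately after `header`. Handles the two
// edge cases the naive slice math corrupts: header absent entirely, and
// header as the file's last line (indexOf("\n", idx) === -1).
export function insertAfterHeader(content: string, header: string, line: string): string {
  const idx = content.indexOf(header);
  if (idx === -1) return content + "\n" + header + "\n" + line + "\n";
  const nl = content.indexOf("\n", idx);
  if (nl === -1) {
    // Header sits at EOF: terminate it, then append the new line.
    return content + "\n" + line + "\n";
  }
  return content.slice(0, nl + 1) + line + "\n" + content.slice(nl + 1);
}
```

`sendToPipeline` would build the bullet from `job.url` and pass it as `line`, after the duplicate check described above.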
- Around line 79-99: Replace the fragile error-string sniffing in the insert
block by using the DB's conflict-ignoring primitive: change the insert call at
this.db.insert(discoveredJobs).values(...) to use the ORM's onConflictDoNothing
(or equivalent) so duplicate-URL unique constraint violations are skipped by the
DB rather than detected from err.message; remove the try/catch that checks for
"unique"/"duplicate"/"23505". Also fix sendToPipeline's header-parsing edge case
in sendToPipeline: when locating the "## Pendientes" header, handle
indexOf("\n", idx) returning -1 (e.g., treat -1 as end-of-string or ensure the
header is normalized to always end with a newline) so afterHeader is set
correctly and content is appended after the header, not before it.
In `@web/server/src/modules/pipeline/router.ts`:
- Around line 17-29: The route handler for app.patch("/api/pipeline/:id/move")
currently skips strict validation: ensure request.params.id is validated as a
UUID and rejected with a ValidationError if invalid, and check
request.body.toStatus against the canonical set of allowed pipeline
statuses/columns (e.g., an exported array/enum like ALLOWED_PIPELINE_STATUSES or
PipelineColumns) before calling service.moveCard; if toStatus is not one of the
allowed values throw a ValidationError. Update the handler (the app.patch
callback that reads request.params.id and const { toStatus } = request.body) to
perform these two checks and only call service.moveCard(request.params.id,
toStatus) after both validations pass.
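The membership check can be sketched with a const-asserted allowlist; ALLOWED_PIPELINE_STATUSES is the review's suggested export name, and the exact column values below are an assumption about the board's Kanban columns:

```typescript
// Canonical pipeline columns — values here are assumed, keep them in sync
// with the board's real KANBAN_COLUMNS constant.
export const ALLOWED_PIPELINE_STATUSES = [
  "Evaluated", "Applied", "Responded", "Interview", "Offer",
] as const;

export type PipelineStatus = (typeof ALLOWED_PIPELINE_STATUSES)[number];

// Type-guard form, so a passing check also narrows toStatus for moveCard.
export function isPipelineStatus(value: string): value is PipelineStatus {
  return (ALLOWED_PIPELINE_STATUSES as readonly string[]).includes(value);
}
```

In the handler: `if (!isPipelineStatus(toStatus)) throw new ValidationError(...)` before `service.moveCard(request.params.id, toStatus)`.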
In `@web/server/src/modules/sources/schema.ts`:
- Line 4: The schema currently allows names consisting only of whitespace;
update the Zod string validators for the "name" field(s) in the source schema
(the z.string().min(1) occurrences) to trim input and enforce non-empty content
(e.g., use a trimmed string or a refine that checks trimmed length) so
whitespace-only names are rejected at validation time; apply this change to both
places where z.string().min(1) is used for source names.
In `@web/server/src/modules/sources/service.ts`:
- Line 137: The write to portals.yml via writeFile(portalsPath, yaml.dump(doc),
"utf-8") must be made atomic: import rename from "fs/promises" (add to the
existing fs/promises import), write the YAML to a sibling temp file (e.g.
portalsPath + ".tmp" or a unique suffix) using the existing writeFile call, then
call rename(tempPath, portalsPath) to atomically replace the target; ensure any
temp file is cleaned up on error. Locate the writeFile usage in this module (the
call referencing portalsPath and yaml.dump(doc)) and update it to the
temp-write-then-rename pattern.
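The temp-write-then-rename pattern can be sketched as a generic helper; rename within the same directory is atomic on POSIX filesystems, so readers of portals.yml never observe a half-written file:

```typescript
import { rename, unlink, writeFile } from "node:fs/promises";

// Write to a sibling temp file, then atomically swap it into place;
// best-effort cleanup of the temp file if anything fails.
export async function writeFileAtomic(path: string, data: string): Promise<void> {
  const tmp = `${path}.${process.pid}.tmp`; // sibling file, same filesystem
  try {
    await writeFile(tmp, data, "utf-8");
    await rename(tmp, path);
  } catch (err) {
    await unlink(tmp).catch(() => {});
    throw err;
  }
}
```

The sources service would replace its direct `writeFile(portalsPath, yaml.dump(doc), "utf-8")` with `writeFileAtomic(portalsPath, yaml.dump(doc))`.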
In `@web/server/src/modules/sync/service.ts`:
- Around line 18-93: The importFromMarkdown function performs many individual
insert/onConflictDoUpdate calls; wrap each bulk set of operations in a single
transaction so the import is atomic and uses the transaction client for reduced
round trips. Specifically, in importFromMarkdown replace the sequential inserts
for applications and for sources with two transactions (or one enclosing
transaction) using this.db.transaction(async (tx) => { ... }) and call
tx.insert(...) / tx.onConflictDoUpdate(...) inside the loops (instead of
this.db); accumulate appsCount and sourcesCount inside the transaction and
return them after the transaction resolves; keep the existing ENOENT handling
around file reads but ensure DB operations are only inside the transaction scope
(refer to importFromMarkdown, parseApplicationsMd, parsePortalsYml, and the
usages of applications/sources insert blocks).
- Around line 116-117: The export is writing to appsMdPath using writeFile but
doesn't ensure the parent directory exists, causing ENOENT on fresh checkouts;
update the top import from fs/promises to include mkdir, then before calling
writeFile (around the appsMdPath assignment in the exportToMarkdown / service.ts
flow) call mkdir(join(this.careerOpsRoot, "data"), { recursive: true }) to
create the data directory if missing, then proceed to writeFile(appsMdPath,
content, "utf-8").
In `@web/server/tests/pipeline.test.ts`:
- Around line 4-19: Add a negative test that asserts validateTransition rejects
unknown status names by verifying that a transition with any invalid status
string returns false; specifically add expectations like
validateTransition("InvalidStatus", "Applied") === false and
validateTransition("Applied", "InvalidStatus") === false (and optionally both
invalid) to exercise the branch that checks membership in KANBAN_COLUMNS ∪
ARCHIVE_STATUSES, referencing the validateTransition function and the
KANBAN_COLUMNS/ARCHIVE_STATUSES constants used in the service.
---
Duplicate comments:
In `@web/client/src/routes/__root.tsx`:
- Around line 17-20: The root navigation Link (the one rendering with props
key={item.to} and to={item.to}) is using TanStack Router's prefix matching so
the "/" link remains active on nested routes; update the Link JSX to pass
activeOptions={{ exact: true }} (only for the root "/" item) so the active
styling applies only on the exact path — add a conditional that when item.to ===
"/" you include activeOptions={{ exact: true }} on that Link component.
In `@web/client/src/routes/feed.tsx`:
- Around line 29-36: The current data?.data.map(...) can throw if data.data is
null/undefined; change the rendering to use a safe jobs array (e.g., const jobs
= data?.data ?? []) and map over that instead of data?.data, and use jobs.length
(not data?.data.length) for the empty-state check so JobCard rendering and the
"No new jobs" message (related symbols: JobCard, sendToPipeline.mutate,
dismiss.mutate) work when API returns null/undefined.
In `@web/client/src/routes/index.tsx`:
- Around line 32-36: The UI currently treats a failed fetch the same as an empty
result because it renders ApplicationsTable with data?.data || []; update the
render to check the query error state (e.g., use the query's error or isError
alongside isLoading) and show an error message/UI instead of rendering
ApplicationsTable when the request failed; specifically, use the same query
result that provides isLoading/isError/error (the variable named data and its
accompanying error flag) to conditionally render an error block (or pass an
explicit status prop) and only pass a real array to ApplicationsTable when there
is no error and data?.data is defined.
In `@web/client/src/routes/pipeline.tsx`:
- Around line 14-18: Replace the single combined check (isLoading || !data) with
distinct branches: show the loading placeholder only when isLoading is true;
render an error UI (using isError and error) when isError is true (include
error.message); and handle the empty/undefined data case separately (e.g., show
a "No data" message) before rendering <KanbanBoard data={data}
onMoveCard={...}>; update the render logic around isLoading, isError, error, and
data so moveCard.mutate is only called when data exists.
In `@web/client/src/routes/sources.tsx`:
- Line 28: The form is being closed immediately because setShowForm(false) is
called right after createSource.mutate(data); change this to only close on a
successful mutation by removing the immediate setShowForm(false) and instead
provide an onSuccess handler for the mutation call (e.g., call
createSource.mutate(data, { onSuccess: () => setShowForm(false) }) or await
createSource.mutateAsync(data).then(() => setShowForm(false))). Update the
SourceForm onSubmit usage so the form is dismissed only in the createSource
onSuccess callback and not unconditionally.
In `@web/server/drizzle.config.ts`:
- Around line 1-9: Replace the non-null assertion on DATABASE_URL by validating
it at runtime: import 'dotenv/config' at top, read process.env.DATABASE_URL into
a local const, throw a clear Error if it's undefined (same guard pattern as in
seed.ts), and pass that validated const into the defineConfig dbCredentials.url
instead of using process.env.DATABASE_URL!; reference defineConfig and
dbCredentials in the file when applying this change.
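The fail-fast guard described above can be sketched in a few lines. This is a minimal illustration only: `requireEnv` is a hypothetical helper name, and the real file would pass the validated value into `defineConfig`'s `dbCredentials.url`.

```javascript
// Minimal sketch of the env guard pattern; `requireEnv` is a hypothetical
// helper name, not something that exists in the repo today.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Throwing at config-load time surfaces a missing `DATABASE_URL` immediately instead of producing a confusing downstream connection error.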
In `@web/server/src/db/client.ts`:
- Around line 5-8: The createDb function currently hides the underlying postgres
client so callers cannot close the pool; update createDb (exported function
createDb) to return both the drizzle instance and the raw postgres client (or
attach an end/close method onto the returned object) so callers can call
client.end()/close on shutdown; locate createDb in client.ts and change its
return from just drizzle(client, { schema }) to an object { db: drizzle(...),
client } (or equivalent) and update callers (e.g., fastify.db usage in
bootstrap) to call the client's end method in onClose.
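The return shape being asked for can be sketched generically. In this sketch, `makeClient` and `wrap` are stand-ins for the real `postgres()` and `drizzle()` factories, purely for illustration of the handle shape, not the actual library calls.

```javascript
// Hypothetical sketch of the { db, client } return shape; makeClient and
// wrap stand in for the real postgres() and drizzle() factories.
function createDb(makeClient, wrap) {
  const client = makeClient();
  const db = wrap(client);
  // Expose the raw client (plus a convenience close helper) so an onClose
  // hook in the server bootstrap can release the connection pool.
  return { db, client, close: () => client.end() };
}
```

The server would then register something like `app.addHook("onClose", () => handle.close())` so shutdown drains the pool instead of leaking connections.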
In `@web/server/src/index.ts`:
- Line 23: The current cors registration uses origin: true which reflects any
Origin header; change the cors setup at the app.register(cors, ...) call to read
a CORS_ORIGIN environment variable (e.g. comma-separated allowlist), build a Set
of allowed origins, and provide an origin validation function (origin =>
allowed.has(origin) || false) or callback that only returns the incoming origin
when it matches the allowlist; update the code around app.register(cors, {
origin: ... }) to use that allowlist function and handle undefined/no-origin
requests safely.
- Around line 38-42: The SPA fallback currently served by app.setNotFoundHandler
will return index.html for all unknown routes; change the handler (the
setNotFoundHandler registered alongside clientDist and fastifyStatic) to check
if request.url.startsWith("/api/") and for API paths respond with a JSON 404
(e.g. reply.code(404).send({ error: "Not Found" })) while preserving
reply.sendFile("index.html") for non-API requests so SPA routing still works.
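For the CORS finding above, the allowlist callback could look roughly like this. The `CORS_ORIGIN` comma-separated format and the allow-requests-without-an-Origin-header policy are assumptions for the sketch, not shipped behavior.

```javascript
// Sketch of an allowlist-based origin check for @fastify/cors; the env
// variable format ("http://a.com,https://b.com") is an assumption.
function buildOriginCheck(corsOriginEnv) {
  const allowed = new Set(
    (corsOriginEnv ?? "")
      .split(",")
      .map((o) => o.trim())
      .filter(Boolean),
  );
  // Callback form: cb(err, allow). Requests with no Origin header (curl,
  // same-origin navigation) pass through; cross-origin browser requests
  // always carry one and are checked against the allowlist.
  return (origin, cb) => {
    if (origin === undefined) return cb(null, true);
    cb(null, allowed.has(origin));
  };
}
```

This would be wired up as `app.register(cors, { origin: buildOriginCheck(process.env.CORS_ORIGIN) })`.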
In `@web/server/src/modules/sources/service.ts`:
- Around line 52-54: Call sites that currently do fire-and-forget via
this.syncToPortalsYml().catch(...) must instead await a serialized in-instance
queue so errors bubble and concurrent syncs don't race; implement a simple
promise-queue on the Service instance (e.g. a private property like
this._portalsSyncQueue initialized to Promise.resolve()) and replace the three
fire-and-forget invocations with awaiting a queued runner that does
this._portalsSyncQueue = this._portalsSyncQueue.then(() =>
this.syncToPortalsYml()); then await that resulting promise and let errors
propagate (remove the .catch(...) that swallows errors) so read-modify-write in
syncToPortalsYml is serialized and failures surface.
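The in-instance promise queue the comment describes can be sketched as below. One refinement over the comment's literal chaining: the previous run's failure is swallowed inside the chain (but still propagated to that run's own caller), so one failed sync does not poison every later sync. Names here (`PortalSync`, `enqueueSync`) are illustrative, not the real service's API.

```javascript
// Minimal serialized promise queue: each call chains onto the previous one,
// so read-modify-write of portals.yml never races, and the returned promise
// still rejects so the caller sees this run's own error.
class PortalSync {
  constructor(syncFn) {
    this._syncFn = syncFn;
    this._queue = Promise.resolve();
  }
  enqueueSync() {
    // Swallow the *previous* run's failure inside the chain so one error
    // does not permanently poison the queue; this run's error still surfaces.
    const run = this._queue.catch(() => {}).then(() => this._syncFn());
    this._queue = run;
    return run; // awaiting this propagates this run's error
  }
}
```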
In `@web/server/src/modules/sync/router.ts`:
- Around line 9-19: The POST routes for /api/sync/import and /api/sync/export
call getService().importFromMarkdown() and exportToMarkdown() without
authentication; add a preHandler to both route definitions that validates a
shared secret (process.env.SYNC_API_TOKEN) or other auth before calling the
service. Implement a single reusable preHandler function (e.g.,
validateSyncToken) that reads the incoming Authorization header or x-sync-token,
compares it securely to SYNC_API_TOKEN, returns a 401/403 on failure, and attach
that preHandler to the route options for the handlers invoking
importFromMarkdown and exportToMarkdown.
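The "compares it securely" part of the suggested `validateSyncToken` preHandler usually means a constant-time comparison. A sketch, assuming the token arrives as a plain string header: hashing both sides first yields equal-length buffers, which `crypto.timingSafeEqual` requires.

```javascript
import { createHash, timingSafeEqual } from "node:crypto";

// Sketch of the constant-time shared-secret check a validateSyncToken
// preHandler could use; tokenMatches is a hypothetical helper name.
function tokenMatches(presented, expected) {
  if (typeof presented !== "string" || typeof expected !== "string" || expected === "") {
    return false; // missing header, or SYNC_API_TOKEN not configured
  }
  const a = createHash("sha256").update(presented).digest();
  const b = createHash("sha256").update(expected).digest();
  return timingSafeEqual(a, b); // constant-time on equal-length buffers
}
```

The preHandler would read `request.headers["x-sync-token"]` (or the Authorization header), call `tokenMatches(header, process.env.SYNC_API_TOKEN)`, and reply 401 on failure.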
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9b8e6de5-f68f-41c3-aa15-90d428b2c7cc
📒 Files selected for processing (74)
.claude/memory/
career-dashboard
package-lock.json
🧹 Nitpick | 🔵 Trivial | 💤 Low value
package-lock.json is placed under # Claude Code local but the bare pattern matches all subdirectories.
Without a leading /, this rule also silently ignores web/server/package-lock.json and web/client/package-lock.json, preventing lock files for the new npm-based subpackages from ever being committed. If the intent is only to ignore the root-level file (e.g., one generated by Claude Code), use /package-lock.json. If the intent is project-wide (consistent with the existing bun.lock rule), moving it to the # Generated section would be clearer.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.gitignore at line 47, The `.gitignore` entry "package-lock.json" is too
broad and unintentionally ignores package-lock.json in subdirectories (e.g.,
web/server/package-lock.json); change the rule to "/package-lock.json" if you
only want to ignore the repo root lockfile, or move the "package-lock.json" line
into the existing "Generated" section (consistent with "bun.lock") if you intend
to ignore all package-lock.json files project-wide; update the
"package-lock.json" entry accordingly.
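The two alternatives the comment describes look roughly like this (which one is right depends on intent; the comments are illustrative):

```gitignore
# Option A: ignore only the repo-root lockfile (anchored with a leading slash)
/package-lock.json

# Option B: ignore lockfiles project-wide; if so, move the bare pattern
# next to bun.lock under the existing "Generated" section
package-lock.json
```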
const CV_PATH = 'cv.md';
const APPS_PATH = 'data/applications.md';
const TEMPLATE_PATH = 'templates/cv-template.html';
🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win
TEMPLATE_PATH is declared but never read.
templates/cv-template.html is referenced as a config constant but generateCvHtml builds the entire HTML string inline, so the template file is unused. This is the same root cause as the hardcoded-user-data issue above: load this template and substitute the parsed CV sections into it instead of duplicating layout in code.
🤖 Prompt for AI Agents
In `@batch-apply.mjs` at line 32, TEMPLATE_PATH is declared but unused; modify
generateCvHtml to load and use the HTML template file instead of building markup
inline: read TEMPLATE_PATH (the 'templates/cv-template.html' file) at startup or
inside generateCvHtml, parse placeholders in the template (e.g., {{personal}},
{{work}}, {{education}} or similar tokens you add), and inject the parsed CV
sections produced by the existing parsing logic into those placeholders; remove
the duplicated layout strings from generateCvHtml and ensure TEMPLATE_PATH is
actually referenced so the template file drives layout and formatting.
function extractUrlFromReport(reportPath) {
  if (!reportPath || !existsSync(reportPath)) return null;
  const content = readFileSync(reportPath, 'utf-8');
  const match = content.match(/\*\*URL:\*\*\s*(https?:\/\/\S+)/);
  if (match) return match[1];
  // Fallback: find first https URL in the file
  const urlMatch = content.match(/https?:\/\/jobs\.[^\s)]+/);
  return urlMatch ? urlMatch[0] : null;
}
Path traversal risk on reportPath.
reportPath is read directly from a cell of data/applications.md (line 84) and then passed unchecked to existsSync / readFileSync. A malicious or malformed report link such as [r](/etc/passwd) or [r](../../etc/shadow) would cause the script to read arbitrary files on disk and (via the URL regex) potentially exfiltrate them through the JD-fetch step.
As per coding guidelines (**/*.mjs: "Check for command injection, path traversal, and SSRF"), constrain reportPath to the project root and to expected report locations:
function extractUrlFromReport(reportPath) {
if (!reportPath || !existsSync(reportPath)) return null;
+ const resolved = resolve(process.cwd(), reportPath);
+ const root = resolve(process.cwd());
+ if (!resolved.startsWith(root + sep) || !resolved.endsWith('.md')) return null;
- const content = readFileSync(reportPath, 'utf-8');
+ const content = readFileSync(resolved, 'utf-8');

(Import `resolve` and `sep` from `path`.)
As per coding guidelines: **/*.mjs: Check for command injection, path traversal, and SSRF.
🤖 Prompt for AI Agents
In `@batch-apply.mjs` around lines 103 - 111, The extractUrlFromReport function
currently reads reportPath directly and is vulnerable to path traversal; fix it
by importing resolve and sep from 'path' and normalizing/anchoring reportPath to
the repository root (reject absolute paths and paths that resolve outside the
project root), only allow files under an explicit whitelist directory (e.g.,
"reports" or "data/reports") and expected extensions (e.g., ".md"), and if the
resolved path does not start with projectRoot + sep or is not inside an allowed
subfolder return null before calling existsSync/readFileSync; keep the rest of
the URL-extraction logic unchanged.
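The anchoring check the review asks for can be sketched as a small pure helper. The `.md`-only restriction and the `projectRoot` default mirror the review's suggestion, not the shipped code; `safeReportPath` is a hypothetical name.

```javascript
import { resolve } from "node:path";

// Sketch of a path-traversal guard: resolve the candidate against the
// project root and reject anything that escapes it or has an unexpected
// extension. Returns the resolved path, or null if the input is unsafe.
function safeReportPath(reportPath, projectRoot = process.cwd()) {
  if (!reportPath) return null;
  const root = resolve(projectRoot);
  const resolved = resolve(root, reportPath);
  if (!resolved.startsWith(root + "/")) return null; // escapes the repo
  if (!resolved.endsWith(".md")) return null;        // unexpected file type
  return resolved;
}
```

`extractUrlFromReport` would then call `existsSync`/`readFileSync` only on a non-null return value.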
async function fetchJd(url) {
  if (!url) return null;
  try {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 15_000);
    const res = await fetch(url, {
      signal: controller.signal,
      headers: { 'User-Agent': 'career-ops/1.0' },
    });
    clearTimeout(timer);
    if (!res.ok) return null;
    const html = await res.text();
    // Extract text content (rough — strip HTML tags)
    return html.replace(/<script[^>]*>[\s\S]*?<\/script>/gi, '')
      .replace(/<style[^>]*>[\s\S]*?<\/style>/gi, '')
      .replace(/<[^>]+>/g, ' ')
      .replace(/\s+/g, ' ')
      .trim()
      .slice(0, 5000); // Cap at 5K chars
  } catch {
    return null;
  }
}
SSRF risk on fetchJd.
The URL fed into fetch originates from arbitrary text in data/<report>.md (extracted by extractUrlFromReport) and is then issued from the host running the script. Without scheme/host validation this enables SSRF: a malicious report can point at http://169.254.169.254/... (cloud metadata), http://localhost:3000/api/... (the dashboard on the same machine), or other internal addresses.
As per coding guidelines (**/*.mjs: "Check for command injection, path traversal, and SSRF"), validate the URL before fetching:
- Require `https:` (or `http:` only for explicitly allow-listed hosts).
- Reject private/loopback/link-local addresses (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, ::1, fc00::/7).
- Cap the response size in addition to the existing 5K-char string slice (to avoid downloading huge bodies before truncation).
As per coding guidelines: **/*.mjs: Check for command injection, path traversal, and SSRF.
🤖 Prompt for AI Agents
In `@batch-apply.mjs` around lines 115 - 137, The fetchJd function accepts
untrusted URLs and must validate and constrain them to prevent SSRF: before
calling fetch (in fetchJd), parse the URL and require a safe scheme (https:
only, or http: only for explicit allow-listed hostnames), resolve the hostname
to an IP and reject private/loopback/link-local ranges (127.0.0.0/8, 10.0.0.0/8,
172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, ::1, fc00::/7), and reject
unresolved/invalid hosts; also protect against large downloads by streaming the
response body and enforcing a byte cap (smaller than the current 5KB slice) as
you read, aborting when the limit is reached; keep existing
timeout/AbortController behavior and ensure fetch errors return null. Use these
checks where URLs from extractUrlFromReport are passed into fetchJd so untrusted
inputs are validated before any network call.
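The literal-IP part of that rejection list can be sketched as a pure predicate. This is only the address-classification step: a production version would also resolve hostnames via DNS before checking (hostnames pass through here), and would re-check after redirects.

```javascript
import { isIP } from "node:net";

// Sketch of the private/loopback/link-local rejection list from the review.
// Hostnames return false here; DNS resolution is assumed to happen elsewhere.
function isForbiddenAddress(host) {
  if (host === "localhost" || host === "::1") return true;
  if (isIP(host) === 4) {
    const [a, b] = host.split(".").map(Number);
    if (a === 127 || a === 10) return true;           // loopback, 10.0.0.0/8
    if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
    if (a === 192 && b === 168) return true;          // 192.168.0.0/16
    if (a === 169 && b === 254) return true;          // link-local / cloud metadata
  }
  if (isIP(host) === 6 && /^f[cd]/i.test(host)) return true; // fc00::/7 (ULA)
  return false;
}
```

`fetchJd` would parse the URL with `new URL(url)`, check the scheme, and bail out when `isForbiddenAddress(parsed.hostname)` is true.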
return `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Carol El Souki -- CV</title>
<style>
@font-face { font-family: 'Space Grotesk'; src: url('./fonts/space-grotesk-latin.woff2') format('woff2'); font-weight: 300 700; font-display: swap; }
@font-face { font-family: 'DM Sans'; src: url('./fonts/dm-sans-latin.woff2') format('woff2'); font-weight: 100 1000; font-display: swap; }
* { margin: 0; padding: 0; box-sizing: border-box; }
html { -webkit-print-color-adjust: exact; print-color-adjust: exact; }
body { font-family: 'DM Sans', sans-serif; font-size: 11px; line-height: 1.5; color: #1a1a2e; }
.page { max-width: 800px; margin: 0 auto; padding: 2px 0; }
.header { margin-bottom: 20px; }
.header h1 { font-family: 'Space Grotesk', sans-serif; font-size: 28px; font-weight: 700; letter-spacing: -0.02em; margin-bottom: 6px; }
.header-gradient { height: 2px; background: linear-gradient(to right, hsl(187, 74%, 32%), hsl(270, 70%, 45%)); margin-bottom: 10px; }
.contact-row { display: flex; flex-wrap: wrap; gap: 8px 14px; font-size: 10.5px; color: #555; }
.contact-row a { color: #555; text-decoration: none; }
.separator { color: #ccc; }
.section { margin-bottom: 18px; }
.section-title { font-family: 'Space Grotesk', sans-serif; font-size: 12px; font-weight: 700; text-transform: uppercase; letter-spacing: 0.06em; color: hsl(187, 74%, 32%); border-bottom: 1.5px solid #e2e2e2; padding-bottom: 4px; margin-bottom: 10px; }
.summary-text { font-size: 11px; line-height: 1.7; color: #2f2f2f; }
.competencies-grid { display: flex; flex-wrap: wrap; gap: 8px; }
.competency-tag { font-size: 10px; font-weight: 500; color: hsl(187, 74%, 28%); background: hsl(187, 40%, 95%); padding: 4px 10px; border-radius: 3px; border: 1px solid hsl(187, 40%, 88%); }
.job { margin-bottom: 14px; break-inside: avoid; }
.job-header { display: flex; justify-content: space-between; align-items: baseline; margin-bottom: 4px; }
.job-company { font-family: 'Space Grotesk', sans-serif; font-size: 12.5px; font-weight: 600; color: hsl(270, 70%, 45%); }
.job-period { font-size: 10.5px; color: #777; }
.job-role { font-size: 11px; font-weight: 600; color: #333; margin-bottom: 6px; }
.job-location { font-size: 10px; color: #888; }
.job ul { padding-left: 18px; margin-top: 6px; }
.job li { font-size: 10.5px; line-height: 1.6; color: #333; margin-bottom: 4px; }
.job li strong { font-weight: 600; }
.edu-title { font-weight: 600; font-size: 11px; }
.edu-org { color: hsl(270, 70%, 45%); font-weight: 500; }
.edu-desc { font-size: 10px; color: #666; margin-top: 2px; }
.skills-grid { display: flex; flex-wrap: wrap; gap: 6px 14px; }
.skill-item { font-size: 10.5px; color: #444; }
.skill-category { font-weight: 600; color: #333; }
a { white-space: nowrap; }
@media print { body { -webkit-print-color-adjust: exact; } }
</style>
</head>
<body>
<div class="page">
<div class="header">
<h1>Carol El Souki</h1>
<div class="header-gradient"></div>
<div class="contact-row">
<span>(974) 71 85 0288</span>
<span class="separator">|</span>
<span>elsoukicarol@gmail.com</span>
<span class="separator">|</span>
<a href="https://www.linkedin.com/in/carol-el-souki-22a2a825b/">linkedin.com/in/carolelsouki</a>
<span class="separator">|</span>
<span>Doha, Qatar -- Open to Relocation</span>
</div>
</div>

<div class="section">
<div class="section-title">Professional Summary</div>
<div class="summary-text">${tailoredSummary}</div>
</div>

<div class="section">
<div class="section-title">Core Competencies</div>
<div class="competencies-grid">
${competenciesHtml}
</div>
</div>

<div class="section">
<div class="section-title">Professional Experience</div>
${experienceHtml}
</div>

<div class="section">
<div class="section-title">Education</div>
<div class="edu-title">Bachelor of Science in Computer Science -- <span class="edu-org">University of Balamand</span> <span style="font-size:10px;color:#777">May 2024</span></div>
<div class="edu-desc">Dean's List (Spring 2022) | Devathon Programming Competition -- 5th Place (2023)</div>
</div>

<div class="section">
<div class="section-title">Technical Skills</div>
<div class="skills-grid">
<span class="skill-item"><span class="skill-category">Languages:</span> Python, JavaScript, TypeScript, Java, C++</span>
<span class="skill-item"><span class="skill-category">Backend:</span> Django, Node.js, Express.js, REST APIs, JWT, OAuth 2.0, Redis</span>
<span class="skill-item"><span class="skill-category">Frontend:</span> React, React Native, HTML5, CSS3</span>
<span class="skill-item"><span class="skill-category">Cloud:</span> Azure (SDK, Blob Storage), GCP, AWS, CI/CD</span>
<span class="skill-item"><span class="skill-category">Databases:</span> PostgreSQL, MongoDB, MySQL, Redis, Firebase</span>
<span class="skill-item"><span class="skill-category">Spoken:</span> Spanish (Native), English (Professional), Arabic (Working)</span>
</div>
</div>
</div>
</body>
</html>`;
}

// ── Draft common application answers ────────────────────────────────

function draftAnswers(app, cv, jdText) {
  const answers = {
    company: app.company,
    role: app.role,
    score: app.score,

    whyCompany: `I'm drawn to ${app.company} because of the opportunity to work on ${app.role.toLowerCase().includes('backend') ? 'backend systems at scale' : 'impactful engineering challenges'}. My experience building a multi-tenant platform for 500+ users and architecting 5 Django ERP modules shows I can own end-to-end delivery. I'm excited to bring that same ownership mindset to ${app.company}.`,

    whyYou: `I bring 2+ years of hands-on backend engineering: Python/Django, Node.js, cloud platforms (Azure, AWS, GCP), and production API design. I've led a multi-tenant document platform serving 500+ users with OAuth 2.0 and RBAC, built asynchronous data pipelines processing 100+ files daily, and optimized API response times by 25% with Redis caching. I also mentor junior engineers and drive technical hiring — I own problems end to end.`,

    hardProblem: `At Autonomous Enterprise Management Systems, I built a document synchronization system between Google Drive and Azure Blob Storage for 500+ users across multiple tenants. The challenge was handling failures gracefully at scale — network timeouts, rate limits, permission conflicts. I designed an idempotent async pipeline with structured error handling (transient vs permanent failures), retry mechanisms, and real-time status tracking. The result: 100+ files daily with zero data loss. The patterns became the team standard.`,

    salary: '$3,000+/month — flexible on structure, focused on total package and growth opportunity.',

    location: 'Based in Doha, Qatar. Open to relocation (Spain preferred) and remote work. Spanish (native), English (professional), Arabic (working).',
  };
Hardcoded personal data in a script file violates project guidelines.
This file embeds the user’s personal data directly:
- Header HTML (lines ~212, 253–262, 285–298): name "Carol El Souki", phone, email, LinkedIn URL, "Doha, Qatar", "University of Balamand", "Devathon … 5th Place (2023)", and the entire Technical Skills section.
- `draftAnswers` (lines 307–325): employer name "Autonomous Enterprise Management Systems", "$3,000+/month", language proficiencies, etc.
As per coding guidelines (**/*.{mjs,js}: "Script files … are auto-updatable system files — NEVER put user data in these files"), all of this must come from cv.md, config/profile.yml, and templates/cv-template.html (which is already declared as TEMPLATE_PATH on line 32 but never actually read). Otherwise, any auto-update of this script will overwrite a different user’s data, and onboarding for any non-Carol user is broken.
Suggested direction:
- Load `templates/cv-template.html` and inject extracted CV sections (you already parse them in `generateCvHtml`); stop hardcoding the contact row and education/skills blocks.
- Drive `draftAnswers` from `config/profile.yml` (salary, location, languages) and from CV-derived content (employer + impact bullets).
🤖 Prompt for AI Agents
In `@batch-apply.mjs` around lines 208 - 322, The file incorrectly hardcodes
user-specific data in the HTML and answer drafts; update generateCvHtml to read
and use TEMPLATE_PATH (templates/cv-template.html) and inject parsed sections
(contact-row, education, technical skills, header name/location/links) from
cv.md rather than embedding literals, and refactor draftAnswers to pull
salary/location/language/profile fields from config/profile.yml and derive
employer and impact bullets from the parsed CV data (not hardcoded "Autonomous
Enterprise Management Systems"); locate and replace the inline
header/contact/education/skills HTML blocks and the draftAnswers object in the
current module (functions generateCvHtml and draftAnswers, constant
TEMPLATE_PATH) so the script becomes data-driven from templates and profile
YAML.
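The placeholder-substitution step that prompt describes can be sketched as a tiny renderer. The `{{placeholder}}` token convention is an assumption: the real `cv-template.html` may use different markers.

```javascript
// Toy sketch of template-driven rendering; the {{key}} token syntax is an
// assumed convention, not what cv-template.html necessarily uses today.
function renderTemplate(template, sections) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in sections ? sections[key] : match, // leave unknown tokens intact
  );
}
```

With this shape, `generateCvHtml` would read `TEMPLATE_PATH` once and call `renderTemplate(templateHtml, parsedCvSections)` instead of concatenating layout strings inline.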
// PATCH /api/pipeline/:id/move
app.patch<{ Params: { id: string }; Body: { toStatus: string } }>(
  "/api/pipeline/:id/move",
  async (request, reply) => {
    const { toStatus } = request.body ?? {};
    if (!toStatus || typeof toStatus !== "string") {
      throw new ValidationError("toStatus is required");
    }
    const service = getService();
    const result = await service.moveCard(request.params.id, toStatus);
    return reply.send(result);
  },
);
request.params.id is not validated as a UUID, and toStatus is not checked against the allowed column set.
- A non-UUID `:id` produces a Postgres type error instead of a clean 400 (same pattern as the applications router).
- `toStatus` is only checked for truthiness/type; an out-of-range value (e.g., `"Rejected"`) reaches `service.moveCard` unchecked at the router layer.
🔧 Proposed fix
+import { z } from "zod";
+import { KANBAN_COLUMNS } from "./service.js"; // or import the allowed statuses
+const uuidSchema = z.string().uuid();
+const toStatusSchema = z.enum(KANBAN_COLUMNS as unknown as [string, ...string[]]);
app.patch<{ Params: { id: string }; Body: { toStatus: string } }>(
"/api/pipeline/:id/move",
async (request, reply) => {
+ if (!uuidSchema.safeParse(request.params.id).success) {
+ throw new ValidationError("Invalid id format");
+ }
const { toStatus } = request.body ?? {};
- if (!toStatus || typeof toStatus !== "string") {
+ if (!toStatus || !toStatusSchema.safeParse(toStatus).success) {
throw new ValidationError("toStatus is required");
}

🤖 Prompt for AI Agents
In `@web/server/src/modules/pipeline/router.ts` around lines 17 - 29, The route
handler for app.patch("/api/pipeline/:id/move") currently skips strict
validation: ensure request.params.id is validated as a UUID and rejected with a
ValidationError if invalid, and check request.body.toStatus against the
canonical set of allowed pipeline statuses/columns (e.g., an exported array/enum
like ALLOWED_PIPELINE_STATUSES or PipelineColumns) before calling
service.moveCard; if toStatus is not one of the allowed values throw a
ValidationError. Update the handler (the app.patch callback that reads
request.params.id and const { toStatus } = request.body) to perform these two
checks and only call service.moveCard(request.params.id, toStatus) after both
validations pass.
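Stripped of the Fastify and zod plumbing, the two router-layer checks reduce to this sketch. The `KANBAN_COLUMNS` list and the `validateMoveInput` helper are illustrative stand-ins, not the module's real exports; the regex accepts any RFC 4122-shaped id.

```javascript
// Plain sketch of the two validations the review asks for: a UUID-shaped id
// and a toStatus drawn from the allowed column set (assumed values below).
const KANBAN_COLUMNS = ["Evaluated", "Applied", "Interview", "Offer"];
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function validateMoveInput(id, toStatus) {
  if (!UUID_RE.test(id)) return { ok: false, error: "Invalid id format" };
  if (!KANBAN_COLUMNS.includes(toStatus)) return { ok: false, error: "Invalid toStatus" };
  return { ok: true };
}
```

In the handler, a failed result would become a thrown `ValidationError` before `service.moveCard` is ever reached.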
import { z } from "zod";

export const createSourceSchema = z.object({
  name: z.string().min(1),
Reject whitespace-only source names at validation time.
Lines 4 and 11 accept `" "` as a valid name. Trimming in the schema keeps dirty values out of the DB.
🐛 Proposed fix
export const createSourceSchema = z.object({
- name: z.string().min(1),
+ name: z.string().trim().min(1),
type: z.enum(["greenhouse", "ashby", "lever", "custom"]),
config: z.record(z.unknown()).default({}),
enabled: z.boolean().default(true),
});
export const updateSourceSchema = z.object({
- name: z.string().min(1).optional(),
+ name: z.string().trim().min(1).optional(),
type: z.enum(["greenhouse", "ashby", "lever", "custom"]).optional(),
config: z.record(z.unknown()).optional(),
enabled: z.boolean().optional(),
});

Also applies to: 11-11
🤖 Prompt for AI Agents
In `@web/server/src/modules/sources/schema.ts` at line 4, The schema currently
allows names consisting only of whitespace; update the Zod string validators for
the "name" field(s) in the source schema (the z.string().min(1) occurrences) to
trim input and enforce non-empty content (e.g., use a trimmed string or a refine
that checks trimmed length) so whitespace-only names are rejected at validation
time; apply this change to both places where z.string().min(1) is used for
source names.
  tracked_companies,
};

await writeFile(portalsPath, yaml.dump(doc), "utf-8");
Write portals.yml atomically.
portals.yml is the user-facing source of truth. A direct writeFile is not atomic; a crash or signal mid-write can leave the file truncated or partially written, which then poisons subsequent imports and the next read-modify-write cycle. Write to a sibling temp file and rename into place.
🛡️ Suggested fix
- await writeFile(portalsPath, yaml.dump(doc), "utf-8");
+ const tmpPath = `${portalsPath}.${process.pid}.${Date.now()}.tmp`;
+ await writeFile(tmpPath, yaml.dump(doc), "utf-8");
+ await rename(tmpPath, portalsPath);

Add `rename` to the fs/promises import at the top:
-import { readFile, writeFile } from "fs/promises";
+import { readFile, writeFile, rename } from "fs/promises";

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { readFile, writeFile, rename } from "fs/promises";

// ... other code ...

private async syncToPortalsYml(): Promise<void> {
  // ... read and modify logic ...
  const tmpPath = `${portalsPath}.${process.pid}.${Date.now()}.tmp`;
  await writeFile(tmpPath, yaml.dump(doc), "utf-8");
  await rename(tmpPath, portalsPath);
}
🤖 Prompt for AI Agents
In `@web/server/src/modules/sources/service.ts` at line 137, The write to
portals.yml via writeFile(portalsPath, yaml.dump(doc), "utf-8") must be made
atomic: import rename from "fs/promises" (add to the existing fs/promises
import), write the YAML to a sibling temp file (e.g. portalsPath + ".tmp" or a
unique suffix) using the existing writeFile call, then call rename(tempPath,
portalsPath) to atomically replace the target; ensure any temp file is cleaned
up on error. Locate the writeFile usage in this module (the call referencing
portalsPath and yaml.dump(doc)) and update it to the temp-write-then-rename
pattern.
const appsMdPath = join(this.careerOpsRoot, "data", "applications.md");
await writeFile(appsMdPath, content, "utf-8");
writeFile will fail if data/ does not exist.
exportToMarkdown writes to <careerOpsRoot>/data/applications.md but never ensures data/ exists. On a fresh checkout (or in CI/sandbox) where the user has not yet created data/, the export will throw ENOENT instead of producing the file. Create the parent directory first:
+ await mkdir(join(this.careerOpsRoot, "data"), { recursive: true });
const appsMdPath = join(this.careerOpsRoot, "data", "applications.md");
await writeFile(appsMdPath, content, "utf-8");

(Add `mkdir` to the fs/promises import.)
🤖 Prompt for AI Agents
In `@web/server/src/modules/sync/service.ts` around lines 116 - 117, The export is
writing to appsMdPath using writeFile but doesn't ensure the parent directory
exists, causing ENOENT on fresh checkouts; update the top import from
fs/promises to include mkdir, then before calling writeFile (around the
appsMdPath assignment in the exportToMarkdown / service.ts flow) call
mkdir(join(this.careerOpsRoot, "data"), { recursive: true }) to create the data
directory if missing, then proceed to writeFile(appsMdPath, content, "utf-8").
```ts
describe("validateTransition", () => {
  it("allows Evaluated → Applied", () => { expect(validateTransition("Evaluated", "Applied")).toBe(true); });
  it("allows Applied → Interview", () => { expect(validateTransition("Applied", "Interview")).toBe(true); });
  it("allows Interview → Offer", () => { expect(validateTransition("Interview", "Offer")).toBe(true); });
  it("allows any status → Rejected", () => {
    expect(validateTransition("Applied", "Rejected")).toBe(true);
    expect(validateTransition("Interview", "Rejected")).toBe(true);
  });
  it("allows any status → Discarded", () => {
    expect(validateTransition("Evaluated", "Discarded")).toBe(true);
    expect(validateTransition("Applied", "Discarded")).toBe(true);
  });
  it("allows any status → SKIP", () => { expect(validateTransition("Evaluated", "SKIP")).toBe(true); });
  it("allows Offer → Evaluated (backwards)", () => { expect(validateTransition("Offer", "Evaluated")).toBe(true); });
  it("rejects same → same", () => { expect(validateTransition("Applied", "Applied")).toBe(false); });
});
```
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Add a negative case for invalid status names.
Per validateTransition in web/server/src/modules/pipeline/service.ts (lines 22-32), the only real branch logic besides same→same is whether both statuses exist in KANBAN_COLUMNS ∪ ARCHIVE_STATUSES. None of the current tests exercise that path, so a regression that lets arbitrary strings through (e.g., from a malformed client payload) would not be caught.
🧪 Suggested additions

```diff
  it("rejects same → same", () => { expect(validateTransition("Applied", "Applied")).toBe(false); });
+ it("rejects unknown 'from' status", () => {
+   expect(validateTransition("Bogus", "Applied")).toBe(false);
+ });
+ it("rejects unknown 'to' status", () => {
+   expect(validateTransition("Applied", "Bogus")).toBe(false);
+ });
+ it("rejects empty status strings", () => {
+   expect(validateTransition("", "Applied")).toBe(false);
+   expect(validateTransition("Applied", "")).toBe(false);
+ });
  });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
describe("validateTransition", () => {
  it("allows Evaluated → Applied", () => { expect(validateTransition("Evaluated", "Applied")).toBe(true); });
  it("allows Applied → Interview", () => { expect(validateTransition("Applied", "Interview")).toBe(true); });
  it("allows Interview → Offer", () => { expect(validateTransition("Interview", "Offer")).toBe(true); });
  it("allows any status → Rejected", () => {
    expect(validateTransition("Applied", "Rejected")).toBe(true);
    expect(validateTransition("Interview", "Rejected")).toBe(true);
  });
  it("allows any status → Discarded", () => {
    expect(validateTransition("Evaluated", "Discarded")).toBe(true);
    expect(validateTransition("Applied", "Discarded")).toBe(true);
  });
  it("allows any status → SKIP", () => { expect(validateTransition("Evaluated", "SKIP")).toBe(true); });
  it("allows Offer → Evaluated (backwards)", () => { expect(validateTransition("Offer", "Evaluated")).toBe(true); });
  it("rejects same → same", () => { expect(validateTransition("Applied", "Applied")).toBe(false); });
  it("rejects unknown 'from' status", () => {
    expect(validateTransition("Bogus", "Applied")).toBe(false);
  });
  it("rejects unknown 'to' status", () => {
    expect(validateTransition("Applied", "Bogus")).toBe(false);
  });
  it("rejects empty status strings", () => {
    expect(validateTransition("", "Applied")).toBe(false);
    expect(validateTransition("Applied", "")).toBe(false);
  });
});
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/tests/pipeline.test.ts` around lines 4 - 19, Add a negative test
that asserts validateTransition rejects unknown status names by verifying that a
transition with any invalid status string returns false; specifically add
expectations like validateTransition("InvalidStatus", "Applied") === false and
validateTransition("Applied", "InvalidStatus") === false (and optionally both
invalid) to exercise the branch that checks membership in KANBAN_COLUMNS ∪
ARCHIVE_STATUSES, referencing the validateTransition function and the
KANBAN_COLUMNS/ARCHIVE_STATUSES constants used in the service.
- Removed authenticated scraper (Google blocks Playwright login, LinkedIn detects automation)
- Now uses LinkedIn's public jobs-guest API endpoint (plain HTTP, no Playwright needed)
- No login, no cookies, no account risk
- Fetches up to 50 results with pagination
- Parses HTML fragments with regex (no DOM needed)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add 'Responded' to pipeline docs
- Add 2s timeout to all dashboard sync fetches
- Fix hardcoded DB credentials (use env vars)
- Add accessibility labels throughout UI
- Fix cache invalidation in useSendToPipeline
- Fix unsafe property access in feed page
- Fix CAREER_OPS_ROOT default path
- Add error states to route pages
- Use DragStartEvent type in kanban board
- Various type safety improvements

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 24
♻️ Duplicate comments (23)
.gitignore (1)
47-47: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`package-lock.json` without a leading `/` silently ignores lock files in all subdirectories.

Without the `/` prefix, this rule also suppresses `web/server/package-lock.json` and `web/client/package-lock.json`. If the intent is root-only (e.g., a Claude Code artefact), use `/package-lock.json`.

🔧 Proposed fix

```diff
-package-lock.json
+/package-lock.json
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@.gitignore` at line 47, The .gitignore entry "package-lock.json" currently
matches lockfiles in all subdirectories and therefore unintentionally ignores
web/server/package-lock.json and web/client/package-lock.json; update the
.gitignore by replacing the bare "package-lock.json" line with
"/package-lock.json" to restrict the ignore to the repository root (or
explicitly add the subdirectory paths if the intent is to ignore all lockfiles).

web/client/src/components/applications/applications-table.tsx (1)
42-49: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use a real button for sortable headers.

Line 42 makes sorting mouse-only because `<th>` is not focusable or keyboard-operable. Put a `<button type="button">` inside the header cell for the sort trigger, and expose the current direction with `aria-sort` on the `<th>`.

Possible fix

```diff
- <th key={header.id} onClick={header.column.getToggleSortingHandler()}
-     className="text-left px-4 py-3 text-sm font-medium cursor-pointer select-none">
-   <div className="flex items-center gap-1">
+ <th
+   key={header.id}
+   aria-sort={
+     header.column.getIsSorted() === "asc"
+       ? "ascending"
+       : header.column.getIsSorted() === "desc"
+         ? "descending"
+         : "none"
+   }
+   className="text-left px-4 py-3 text-sm font-medium"
+ >
+   <button
+     type="button"
+     onClick={header.column.getToggleSortingHandler()}
+     className="flex items-center gap-1 cursor-pointer select-none"
+   >
      {flexRender(header.column.columnDef.header, header.getContext())}
      {header.column.getIsSorted() === "asc"
        ? <ChevronUp className="h-3 w-3" />
        : header.column.getIsSorted() === "desc"
          ? <ChevronDown className="h-3 w-3" />
          : <ChevronsUpDown className="h-3 w-3 text-muted-foreground" />}
-   </div>
+   </button>
  </th>
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/client/src/components/applications/applications-table.tsx` around lines
42 - 49, Wrap the sort trigger in a real focusable control: move the onClick
handler from the <th> to a <button type="button"> inside the header cell (use
header.column.getToggleSortingHandler() as the button's onClick) so the sort
control is keyboard operable and has proper semantics; keep the visual structure
(flexRender(header.column.columnDef.header, header.getContext()) and the sort
icons driven by header.column.getIsSorted()) inside that button; also add
aria-sort on the <th> using header.column.getIsSorted() to expose "ascending",
"descending" or "none" to assistive tech.

web/server/src/modules/sync/service.ts (2)
28-57: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wrap each import snapshot in a transaction.

If one row fails midway, earlier upserts are already committed and the DB no longer reflects a single Markdown/YAML snapshot.

Possible fix

```diff
- for (const app of parsed) {
-   await this.db
+ await this.db.transaction(async (tx) => {
+   for (const app of parsed) {
+     await tx
        .insert(applications)
        .values({
          number: app.number,
@@
-     });
-   appsCount++;
- }
+     });
+     appsCount++;
+   }
+ });
@@
- for (const src of parsedSources) {
-   await this.db
+ await this.db.transaction(async (tx) => {
+   for (const src of parsedSources) {
+     await tx
        .insert(sources)
        .values({
          name: src.name,
@@
-     });
-   sourcesCount++;
- }
+     });
+     sourcesCount++;
+   }
+ });
```

Also applies to: 68-87
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/sync/service.ts` around lines 28 - 57, The import loop currently performs multiple upserts directly (for example the block using this.db.insert(applications).values(...).onConflictDoUpdate(...) over parsed), so partial commits can occur; wrap the entire import snapshot in a single transaction (use your DB client's transaction API, e.g. this.db.transaction or runInTransaction) and perform all inserts/upserts against the transaction object (replace this.db with the transactional client), ensuring errors cause a rollback; apply the same transactional wrap to the other import loop instance around lines 68-87 to guarantee the snapshot is atomic.
115-117: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Create `data/` before exporting.

Line 117 will throw `ENOENT` on a fresh checkout because `writeFile()` does not create parent directories.

Possible fix

```diff
-import { readFile, writeFile } from "fs/promises";
+import { mkdir, readFile, writeFile } from "fs/promises";
@@
 const content = serializeApplicationsMd(parsed);
 const appsMdPath = join(this.careerOpsRoot, "data", "applications.md");
+await mkdir(join(this.careerOpsRoot, "data"), { recursive: true });
 await writeFile(appsMdPath, content, "utf-8");
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/sync/service.ts` around lines 115 - 117, The
writeFile call that writes content from serializeApplicationsMd to appsMdPath
will fail if the "data" directory doesn't exist; before calling writeFile (in
the same method where serializeApplicationsMd is used and appsMdPath is built
from this.careerOpsRoot), ensure the parent directory exists by creating the
directory for path dirname(appsMdPath) (e.g., use fs.mkdir or an ensureDir
helper with { recursive: true }) and only then call writeFile to write the
content.

web/server/src/modules/sources/service.ts (2)
137-137: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Write `portals.yml` atomically.

Line 137 overwrites the source-of-truth file in place. If the process dies mid-write, `portals.yml` can be left truncated or partially written.

Possible fix

```diff
-import { readFile, writeFile } from "fs/promises";
+import { readFile, writeFile, rename, unlink } from "fs/promises";
@@
- await writeFile(portalsPath, yaml.dump(doc), "utf-8");
+ const tmpPath = `${portalsPath}.${process.pid}.${Date.now()}.tmp`;
+ try {
+   await writeFile(tmpPath, yaml.dump(doc), "utf-8");
+   await rename(tmpPath, portalsPath);
+ } catch (err) {
+   await unlink(tmpPath).catch(() => {});
+   throw err;
+ }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/sources/service.ts` at line 137, The current call to writeFile(portalsPath, yaml.dump(doc), "utf-8") can leave portals.yml truncated if the process dies during write; fix by writing atomically: serialize with yaml.dump(doc) to a temp file in the same directory (e.g., portalsPath + ".tmp."+process.pid or a random suffix), fs.writeFile the temp file, fsync the file descriptor (or use fsPromises.writeFile followed by fsync when using open), then fs.rename(tempPath, portalsPath) to atomically replace the original; update the call site that uses writeFile/portalsPath so it performs these steps (or swap in a well-maintained atomic-write helper library) to ensure crash-safe replacement.
52-54: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Await and serialize `portals.yml` sync.

These mutations return success before the source-of-truth file is updated, and concurrent requests can race inside the read/modify/write cycle in `syncToPortalsYml()`. A failed or overlapping sync can leave the DB and `portals.yml` out of sync.

Possible fix

```diff
 export class SourcesService {
+  private syncQueue: Promise<void> = Promise.resolve();
+
   constructor(
     private db: Database,
     private careerOpsRoot: string,
   ) {}
+
+  private enqueuePortalsSync(): Promise<void> {
+    this.syncQueue = this.syncQueue.then(() => this.syncToPortalsYml());
+    return this.syncQueue;
+  }

   async create(body: CreateSource) {
@@
-    this.syncToPortalsYml().catch((err) =>
-      console.error("Failed to sync to portals.yml:", err),
-    );
+    await this.enqueuePortalsSync();
     return created;
   }
@@
-    this.syncToPortalsYml().catch((err) =>
-      console.error("Failed to sync to portals.yml:", err),
-    );
+    await this.enqueuePortalsSync();
     return updated;
   }
@@
-    this.syncToPortalsYml().catch((err) =>
-      console.error("Failed to sync to portals.yml:", err),
-    );
+    await this.enqueuePortalsSync();
   }
```

Also applies to: 79-81, 96-98, 101-138

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/sources/service.ts` around lines 52 - 54, The current
fire-and-forget calls to this.syncToPortalsYml() cause races and return before
portals.yml is updated; change call sites to await this.syncToPortalsYml() and
propagate errors (do not swallow them with console.error), and make
syncToPortalsYml itself serialized by adding a per-instance lock (e.g.,
this.portalsSyncMutex or a promise chain like this.portalsSyncLock) so
concurrent invocations queue and perform a single read/modify/write at a time;
update all places that call this.syncToPortalsYml() to be async/await-aware and
handle/rethrow errors so mutations only succeed after the file write completes.

web/server/src/modules/sync/router.ts (1)
9-19: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Protect sync mutation endpoints with auth.

Anyone who can call these POST routes can import/export and rewrite dashboard state plus local source-of-truth files. Gate both handlers behind the same authentication/authorization check before invoking `SyncService`.

Possible direction

```diff
 export async function syncRouter(app: FastifyInstance) {
+  const requireSyncAccess = async (request: any, reply: any) => {
+    const token = request.headers["x-sync-token"];
+    if (!app.env.SYNC_API_TOKEN || token !== app.env.SYNC_API_TOKEN) {
+      return reply.status(401).send({ error: "Unauthorized" });
+    }
+  };
+
   const getService = () =>
     new SyncService(app.db, resolve(app.env.CAREER_OPS_ROOT));

-  app.post("/api/sync/import", async (_request, reply) => {
+  app.post("/api/sync/import", { preHandler: requireSyncAccess }, async (_request, reply) => {
     const service = getService();
     const result = await service.importFromMarkdown();
     return reply.send({ success: true, imported: result });
   });

-  app.post("/api/sync/export", async (_request, reply) => {
+  app.post("/api/sync/export", { preHandler: requireSyncAccess }, async (_request, reply) => {
     const service = getService();
     await service.exportToMarkdown();
     return reply.send({ success: true });
   });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/sync/router.ts` around lines 9 - 19, The POST
handlers for "/api/sync/import" and "/api/sync/export" are unprotected and must
require authentication/authorization before calling getService() or SyncService
methods; add an auth check (e.g., verifyAuth/ensureAuthenticated middleware or
inline token/role validation) at the start of each handler and return 401/403
when unauthorized, then only call getService().importFromMarkdown() and
getService().exportToMarkdown() if the caller is authorized; ensure the same
auth logic protects both handlers and reference the route handlers that call
getService(), importFromMarkdown, and exportToMarkdown.

web/client/src/components/pipeline/kanban-card.tsx (1)
27-29: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Provide an explicit accessible name for the icon button.

`title` is not a reliable accessible name. Add `aria-label` (and mark the icon decorative).

♿ Suggested fix

```diff
- <button onClick={handleClick} className="shrink-0 ml-1 text-muted-foreground hover:text-foreground" title="Open job posting">
-   <ExternalLink className="h-3.5 w-3.5" />
+ <button
+   onClick={handleClick}
+   aria-label="Open job posting"
+   className="shrink-0 ml-1 text-muted-foreground hover:text-foreground"
+ >
+   <ExternalLink className="h-3.5 w-3.5" aria-hidden="true" />
  </button>
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/client/src/components/pipeline/kanban-card.tsx` around lines 27 - 29,
The icon-only button lacks an explicit accessible name; update the button in the
KanbanCard component (the element using handleClick and ExternalLink) to include
an aria-label such as "Open job posting" and mark the SVG icon decorative
(aria-hidden="true" or pass a prop to ExternalLink to render aria-hidden) so
screen readers use the button's aria-label rather than the title.

web/client/src/components/sources/source-form.tsx (1)
20-26: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add explicit labels for form controls.

These controls currently rely on placeholders only, which is not sufficient for accessible form navigation.

♿ Suggested fix

```diff
 return (
   <form onSubmit={handleSubmit} className="flex flex-col gap-3 p-4 border rounded-lg">
-    <input placeholder="Company name" value={name} onChange={(e) => setName(e.target.value)} required
+    <label htmlFor="source-name" className="sr-only">Company name</label>
+    <input id="source-name" placeholder="Company name" value={name} onChange={(e) => setName(e.target.value)} required
       className="rounded-md border bg-background px-3 py-2 text-sm" />
-    <select value={type} onChange={(e) => setType(e.target.value)} className="rounded-md border bg-background px-3 py-2 text-sm">
+    <label htmlFor="source-type" className="sr-only">Source type</label>
+    <select id="source-type" value={type} onChange={(e) => setType(e.target.value)} className="rounded-md border bg-background px-3 py-2 text-sm">
       {SOURCE_TYPES.map((t) => <option key={t} value={t}>{t}</option>)}
     </select>
-    <input placeholder="Careers URL (e.g. https://jobs.ashbyhq.com/company)" value={careersUrl} onChange={(e) => setCareersUrl(e.target.value)}
+    <label htmlFor="careers-url" className="sr-only">Careers URL</label>
+    <input id="careers-url" placeholder="Careers URL (e.g. https://jobs.ashbyhq.com/company)" value={careersUrl} onChange={(e) => setCareersUrl(e.target.value)}
       className="rounded-md border bg-background px-3 py-2 text-sm" />
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/client/src/components/sources/source-form.tsx` around lines 20 - 26,
The inputs in source-form.tsx (the Company name input bound to name/setName, the
type select bound to type/setType and SOURCE_TYPES, and the Careers URL input
bound to careersUrl/setCareersUrl) rely only on placeholders and need explicit
labels for accessibility; add <label htmlFor="..."> elements for each control
and give each input/select a matching id, or use visually-hidden labels if you
want to preserve current visual styling, and keep existing attributes like
required and className unchanged so screen readers can correctly associate the
labels with the controls.

web/server/tests/sync.test.ts (1)
83-121: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add regression coverage for empty/null portals YAML input.

Please add a `parsePortalsYml("")` / `parsePortalsYml("null")` assertion to lock in expected empty-array behavior.

🧪 Suggested test

```diff
 describe("parsePortalsYml", () => {
+  it("returns empty array for empty YAML content", () => {
+    expect(parsePortalsYml("")).toEqual([]);
+    expect(parsePortalsYml("null")).toEqual([]);
+  });
+
   it("parses tracked companies into source records", () => {
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/tests/sync.test.ts` around lines 83 - 121, The test lacks
regression coverage for empty/null YAML inputs for parsePortalsYml; add
assertions calling parsePortalsYml("") and parsePortalsYml("null") in the
describe("parsePortalsYml") block (e.g., inside the existing it("parses tracked
companies into source records") or a new it) and assert both return an empty
array (expect(...).toEqual([]) or toHaveLength(0)) so parsePortalsYml's
empty-input behavior is locked in.

web/server/src/modules/applications/service.ts (3)
85-110: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wrap application update and status-history insert in one transaction.

These statements must commit/rollback together; otherwise a failed history insert leaves mutated application state without audit history.

🧩 Suggested fix

```diff
 async update(id: string, body: PatchBody) {
-  const [existing] = await this.db
-    .select()
-    .from(applications)
-    .where(eq(applications.id, id));
-
-  if (!existing) throw new NotFoundError("Application", id);
-
-  const updateData: Record<string, unknown> = { updatedAt: new Date() };
-  if (body.status !== undefined) updateData.status = body.status;
-  if (body.notes !== undefined) updateData.notes = body.notes;
-
-  const [updated] = await this.db
-    .update(applications)
-    .set(updateData)
-    .where(eq(applications.id, id))
-    .returning();
-
-  if (body.status !== undefined && body.status !== existing.status) {
-    await this.db.insert(statusHistory).values({
-      applicationId: id,
-      fromStatus: existing.status,
-      toStatus: body.status,
-      source: "dashboard",
-    });
-  }
+  const updated = await this.db.transaction(async (tx) => {
+    const [existing] = await tx
+      .select()
+      .from(applications)
+      .where(eq(applications.id, id));
+
+    if (!existing) throw new NotFoundError("Application", id);
+
+    const updateData: Record<string, unknown> = { updatedAt: new Date() };
+    if (body.status !== undefined) updateData.status = body.status;
+    if (body.notes !== undefined) updateData.notes = body.notes;
+
+    const [row] = await tx
+      .update(applications)
+      .set(updateData)
+      .where(eq(applications.id, id))
+      .returning();
+
+    if (body.status !== undefined && body.status !== existing.status) {
+      await tx.insert(statusHistory).values({
+        applicationId: id,
+        fromStatus: existing.status,
+        toStatus: body.status,
+        source: "dashboard",
+      });
+    }
+
+    return row;
+  });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/applications/service.ts` around lines 85 - 110, The update method currently performs the application update and the statusHistory insert separately; wrap the select/update/insert logic in a single DB transaction so both commit or rollback together. Use the database transaction API (e.g. this.db.transaction().execute(async (trx) => { ... })) and replace this.db calls inside the transaction with the transaction handle (trx.select()/trx.update()/trx.insert()) so the initial select, the .update(applications).set(updateData).returning(), and the conditional trx.insert(statusHistory) run inside the same transaction and any thrown error rolls everything back; return the updated row from inside the transaction.
112-114: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Serialize markdown sync writes to prevent lost updates.

Concurrent `syncToMarkdown` executions can race on read-modify-write and overwrite each other in `applications.md`.

Also applies to: 141-195
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/applications/service.ts` around lines 112 - 114, Concurrent calls to syncToMarkdown are causing read-modify-write races on applications.md; make syncToMarkdown executions serial by introducing a class-level async mutex/queue (e.g., this._markdownLock or a small FIFO Promise chain) and wrap the entire body of syncToMarkdown so it acquires the lock before reading/modifying/writing applications.md and releases it after the write; update all call sites that currently call this.syncToMarkdown().catch(...) (and the other callers in the same module) to call the locked wrapper or to await the promise returned by the serialized method so writes cannot overlap.
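The comment and prompt above describe the lock in prose only; here is a minimal, self-contained sketch of the promise-chain approach. The class and method names (`MarkdownSyncQueue`, `withLock`) are illustrative, not from the PR, and this is a sketch of the technique rather than the actual service code:

```typescript
// Sketch: serialize async read-modify-write cycles behind a promise chain.
// Each withLock() call queues its task after the previous one, so two
// concurrent "syncToMarkdown"-style operations can no longer interleave
// their read/modify/write steps on the same file.
class MarkdownSyncQueue {
  private tail: Promise<void> = Promise.resolve();

  withLock<T>(task: () => Promise<T>): Promise<T> {
    // Chain the task after the current tail; run it even if a prior task
    // rejected, and swallow errors in the tail so the chain never sticks.
    const run = this.tail.then(task, task);
    this.tail = run.then(() => undefined, () => undefined);
    return run;
  }
}

// Usage sketch: the slow first "sync" still finishes before the fast second.
const queue = new MarkdownSyncQueue();
const order: number[] = [];
await Promise.all([
  queue.withLock(async () => {
    await new Promise((r) => setTimeout(r, 20));
    order.push(1);
  }),
  queue.withLock(async () => {
    order.push(2);
  }),
]);
console.log(order); // [1, 2] — tasks ran strictly in submission order
```

Because callers `await` the queued promise, a mutation only reports success after its file write actually completed, which also addresses the fire-and-forget concern raised for `syncToPortalsYml()`.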
35-40: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Escape `%` and `_` in user search before building `ILIKE` patterns.

Current behavior treats user input as wildcard syntax, which broadens matches unexpectedly.

🔎 Suggested fix

```diff
 if (search) {
+  const safe = search.replace(/[\\%_]/g, (c) => `\\${c}`);
   conditions.push(
     or(
-      ilike(applications.company, `%${search}%`),
-      ilike(applications.role, `%${search}%`),
+      ilike(applications.company, `%${safe}%`),
+      ilike(applications.role, `%${safe}%`),
     ),
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/applications/service.ts` around lines 35 - 40, The
user-provided search string is being interpolated directly into ILIKE patterns
causing `%` and `_` in input to act as wildcards; before building the pattern
used in the ilike calls (where you push into conditions for applications.company
and applications.role), escape backslashes then escape `%` and `_` in the search
variable (e.g., replace `\` → `\\`, `%` → `\%`, `_` → `\_`), then wrap the
escaped value with the surrounding `%` for substring matching
(`%${escapedSearch}%`) and pass that into ilike so user text is matched
literally rather than treated as wildcard.

web/server/src/modules/applications/router.ts (1)
30-49: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate `:id` as UUID before calling service methods.

Invalid identifiers should be rejected as `400` at the router boundary, not forwarded to DB operations.

🔐 Suggested fix

```diff
 import type { FastifyInstance } from "fastify";
 import { resolve } from "path";
+import { z } from "zod";
 import { ApplicationsService } from "./service.js";
 import { listQuerySchema, patchBodySchema } from "./schema.js";
 import { ValidationError } from "../../shared/errors.js";

+const idSchema = z.string().uuid();
+
 export async function applicationsRouter(app: FastifyInstance) {
@@
   app.get<{ Params: { id: string } }>(
     "/api/applications/:id",
     async (request, reply) => {
+      if (!idSchema.safeParse(request.params.id).success) {
+        throw new ValidationError("Invalid id format");
+      }
       const service = getService();
       const result = await service.getById(request.params.id);
       return reply.send(result);
@@
   app.patch<{ Params: { id: string } }>(
     "/api/applications/:id",
     async (request, reply) => {
+      if (!idSchema.safeParse(request.params.id).success) {
+        throw new ValidationError("Invalid id format");
+      }
       const parsed = patchBodySchema.safeParse(request.body);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/applications/router.ts` around lines 30 - 49,
Validate the route param id as a UUID in both the GET and PATCH handlers before
calling getService()/service.getById or service.update: add a UUID check (e.g.,
using a UUID regex, validator.isUUID, or zod) at the start of the async handlers
for app.get and app.patch, and if invalid throw the same ValidationError (or
reply with status 400) with a clear message so invalid ids are rejected at the
router boundary; keep the existing patchBodySchema.safeParse logic for PATCH
after the id validation.

web/server/src/modules/applications/schema.ts (1)
14-25: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Constrain list `status` to canonical values.

`listQuerySchema` accepts any string while updates enforce canonical statuses; invalid filters should fail validation instead of returning silent empty results.

✅ Suggested fix

```diff
 export const listQuerySchema = z.object({
-  status: z.string().optional(),
+  status: z.enum(CANONICAL_STATUSES).optional(),
   minScore: z.coerce.number().optional(),
   maxScore: z.coerce.number().optional(),
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/applications/schema.ts` around lines 14 - 25,
Replace the free-form status field in listQuerySchema with the canonical enum so
filters validate against allowed statuses: change the status definition in
listQuerySchema to use z.enum(CANONICAL_STATUSES).optional() (same enum used by
patchBodySchema) so invalid status filters fail validation instead of returning
silent empty results.

web/server/tests/pipeline.test.ts (1)
4-19: 🧹 Nitpick | 🔵 Trivial | ⚡ Quick win

Negative cases for invalid status names still missing.

`validateTransition`'s membership check (KANBAN_COLUMNS ∪ ARCHIVE_STATUSES) is the only branch besides `from === to`, and it remains uncovered. A regression that lets arbitrary strings through (e.g., from a malformed router payload, see related comment in `router.ts`) would still pass this suite.

🧪 Suggested additions
```diff
   it("rejects same → same", () => {
     expect(validateTransition("Applied", "Applied")).toBe(false);
   });
+  it("rejects unknown 'from' status", () => {
+    expect(validateTransition("Bogus", "Applied")).toBe(false);
+  });
+  it("rejects unknown 'to' status", () => {
+    expect(validateTransition("Applied", "Bogus")).toBe(false);
+  });
+  it("rejects empty status strings", () => {
+    expect(validateTransition("", "Applied")).toBe(false);
+    expect(validateTransition("Applied", "")).toBe(false);
+  });
 });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/tests/pipeline.test.ts` around lines 4 - 19, add negative tests in pipeline.test.ts to cover the membership check of KANBAN_COLUMNS ∪ ARCHIVE_STATUSES for `validateTransition`: assert that unknown/invalid status strings (e.g., "Bogus", "", "123-INVALID") used as either from or to cause `validateTransition(...)` to return false, and also add cases where one side is valid and the other is invalid to ensure those are rejected; reference the `validateTransition` function and the KANBAN_COLUMNS/ARCHIVE_STATUSES sets when locating the logic to test.

scan.mjs (2)
425-475: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Filtered-by-title metric is still undercounted for the new sources.

`fetchAdzuna`, `fetchRemoteOk`, `fetchRemotive`, and `fetchWeWorkRemotely` all apply `titleFilter` internally and return only matches. The Section 4b/4c blocks then only do `totalFound += jobs.length` and never increment `totalFiltered`, so the `Total jobs found` and `Filtered by title` numbers in the summary are inconsistent with how the Greenhouse/Ashby/Lever loop above them reports them. Either return raw arrays from the helpers and filter in the caller (matching the pattern at lines 396-403), or return `{ jobs, filteredCount }` and add `totalFiltered += filteredCount` here.
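The second option can be sketched as follows — a minimal, runnable illustration of the `{ jobs, filteredCount }` shape; `applyTitleFilter` and the sample data are hypothetical stand-ins, not the actual scan.mjs helpers:

```javascript
// Hypothetical helper: filters internally but also reports how many it
// dropped, so the caller can keep both summary counters consistent.
function applyTitleFilter(jobs, titleFilter) {
  const matches = jobs.filter((j) => titleFilter.test(j.title));
  return { jobs: matches, filteredCount: jobs.length - matches.length };
}

let totalFound = 0;
let totalFiltered = 0;

const raw = [
  { title: 'Backend Engineer' },
  { title: 'Sales Manager' },
  { title: 'Software Engineer' },
];
const { jobs, filteredCount } = applyTitleFilter(raw, /engineer/i);
totalFound += raw.length;       // everything the source returned
totalFiltered += filteredCount; // what the title filter dropped

console.log(totalFound, totalFiltered, jobs.length); // 3 1 2
```

This keeps the Section 4b/4c totals consistent with the Greenhouse/Ashby/Lever loop without duplicating filter logic in every caller.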
510-525: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Dashboard ingest still treats non-2xx responses as success.

Timeout via `AbortController` was added (good), but there's still no `if (!res.ok)` guard, so a 4xx/5xx from `/api/jobs/ingest` (e.g., schema-mismatched payload) is silently ignored and offers will not be ingested without any signal to the user. Also, `clearTimeout(timeout)` should run in `finally` so an aborted/failed fetch doesn't leak the timer until it fires.

As per coding guidelines: `**/*.mjs`: Check for command injection, path traversal, and SSRF. Ensure scripts handle missing data/ directories gracefully.

🔧 Suggested patch
```diff
-  try {
-    const payload = newOffers.map(o => ({
-      title: o.title, company: o.company, url: o.url,
-      source: o.source, location: o.location || undefined,
-    }));
-    const controller = new AbortController();
-    const timeout = setTimeout(() => controller.abort(), 2000);
-    await fetch('http://localhost:3000/api/jobs/ingest', {
-      method: 'POST',
-      headers: { 'Content-Type': 'application/json' },
-      body: JSON.stringify(payload),
-      signal: controller.signal,
-    });
-    clearTimeout(timeout);
-  } catch { /* dashboard not running — no-op */ }
+  const controller = new AbortController();
+  const timeout = setTimeout(() => controller.abort(), 2000);
+  try {
+    const payload = newOffers.map(o => ({
+      title: o.title, company: o.company, url: o.url,
+      source: o.source, location: o.location || undefined,
+    }));
+    const res = await fetch('http://localhost:3000/api/jobs/ingest', {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify(payload),
+      signal: controller.signal,
+    });
+    if (!res.ok) throw new Error(`Dashboard ingest failed: HTTP ${res.status}`);
+  } catch { /* dashboard not running or rejected — no-op */
+  } finally {
+    clearTimeout(timeout);
+  }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@scan.mjs` around lines 510 - 525, the fetch to the dashboard currently ignores non-2xx responses and can leak the timeout; update the block that builds payload from newOffers and uses AbortController/`fetch('http://localhost:3000/api/jobs/ingest')` to (1) await the response into a variable (e.g., `res`) and check `if (!res.ok)` then log or surface the failure including `res.status` and `await res.text()` for diagnostics instead of treating it as success, (2) move `clearTimeout(timeout)` into a `finally` block so the timer is always cleared even on abort/error, and (3) in the catch keep ignoring the dashboard-not-running case but accept the error parameter so you can distinguish/conditionally log non-AbortErrors.

web/server/src/modules/jobs/service.ts (2)
79-99: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Duplicate detection still relies on substring sniffing of error messages.

This block still throws-and-catches and inspects `err.message` for `"unique" | "duplicate" | "23505"`. That is fragile across Postgres driver versions and locales (the message can be localized or reworded), can misclassify other constraint violations as duplicates, and incurs the cost of a thrown exception per duplicate. Drizzle 0.44 supports the explicit `onConflictDoNothing({ target: discoveredJobs.url })` primitive — use it and decide `inserted` vs `skipped` based on `.returning()` length.

```bash
#!/bin/bash
# Confirm `url` has a unique constraint/index that onConflictDoNothing can target.
rg -nP -C3 'discoveredJobs|discovered_jobs' web/server/src/db/schema.ts
rg -nP -C2 '\.unique\(\)|uniqueIndex|primaryKey\(\)' web/server/src/db/schema.ts
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/jobs/service.ts` around lines 79 - 99, Replace the thrown-exception duplicate detection in the this.db.insert(...) block (which currently inspects err.message) with Drizzle's onConflictDoNothing targeting discoveredJobs.url; call this.db.insert(discoveredJobs).values(...).onConflictDoNothing({ target: discoveredJobs.url }).returning({ id: discoveredJobs.id }) (or a minimal returning projection), then increment inserted by the number of returned rows and increment skipped by (1 - returnedRows) per attempted insert (or compute skipped = attempts - inserted after a batch), removing the try/catch and message sniffing entirely.
145-163: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`sendToPipeline` still has the header-on-last-line bug and is non-atomic.

Two issues persist:

- Lines 147-148: when `## Pendientes` is the last line of `content` without a trailing newline, `content.indexOf("\n", idx)` returns `-1`, so `afterHeader = 0`. The new bullet is then prepended to the entire file and the original header content is duplicated below it. Handle the `-1` case explicitly and ensure a separating newline exists.
- Lines 155-161: `writeFile` runs before the DB status update. If the DB update fails (or the URL is already in `## Pendientes` from a previous partial run), the markdown gets the URL appended again on the next call while the row stays in `status: "new"`, producing duplicate bullets. Either (a) update the DB first and write the file after, or (b) check whether `job.url` already appears in `content` before appending so retries are idempotent.

🔧 Minimal patch for the EOF + idempotency cases
```diff
-    } else {
-      // Append under the ## Pendientes section
-      const idx = content.indexOf(pendientesSectionHeader);
-      const afterHeader = content.indexOf("\n", idx) + 1;
-      content =
-        content.slice(0, afterHeader) +
-        `\n- ${job.url}\n` +
-        content.slice(afterHeader);
-    }
+    } else if (!content.includes(`- ${job.url}`)) {
+      // Append under the ## Pendientes section
+      const idx = content.indexOf(pendientesSectionHeader);
+      const nl = content.indexOf("\n", idx);
+      const afterHeader = nl === -1 ? content.length : nl + 1;
+      const prefix = content.slice(0, afterHeader);
+      const suffix = content.slice(afterHeader);
+      content = `${prefix}${prefix.endsWith("\n") ? "" : "\n"}\n- ${job.url}\n${suffix}`;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/jobs/service.ts` around lines 145 - 163, `sendToPipeline` currently miscomputes `afterHeader` when `pendientesSectionHeader` is the last line and is non-atomic, causing duplicated prepends and duplicate bullets on retries; fix by first performing the DB update (the `this.db.update(...).set({ status: "sent_to_pipeline" }).where(eq(discoveredJobs.id, id)).returning()`) before writing the file, then re-read `pipelinePath` content, check whether `job.url` already exists in `content` (idempotency) and only append if missing, and when computing `afterHeader` from `content.indexOf("\n", idx)` handle the `-1` case explicitly and ensure there is exactly one separating newline before the inserted bullet so the header isn't duplicated or the bullet prepended to the whole file; use `pendientesSectionHeader`, `pipelinePath`, `writeFile`, and `discoveredJobs` to locate the code to change.

web/server/src/modules/pipeline/router.ts (1)
18-29: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`request.params.id` UUID validation and `toStatus` enum check still missing.

`/:id/move` still passes `request.params.id` straight through to `service.moveCard` and only checks `toStatus` for truthiness/type. A malformed id will surface as a Postgres type error (HTTP 500) instead of a clean 400, and an arbitrary `toStatus` string reaches the service layer. Validate `id` with `z.string().uuid()` and `toStatus` with `z.enum([...KANBAN_COLUMNS, ...ARCHIVE_STATUSES])` (or a narrower allowed set) at the router boundary so error handling is consistent with the other routers.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/server/src/modules/pipeline/router.ts` around lines 18 - 29, the route handler is not validating `request.params.id` as a UUID nor constraining `toStatus` to the allowed enums before calling `service.moveCard`; update the handler registered with `app.patch` (the "/api/pipeline/:id/move" route) to parse and validate both values using z: validate `request.params.id` with `z.string().uuid()` and validate `request.body.toStatus` with `z.enum([...KANBAN_COLUMNS, ...ARCHIVE_STATUSES])` (or a narrower enum), throw a `ValidationError` on failure, and only call `getService().moveCard(request.params.id, toStatus)` after successful validation so malformed ids produce 400s and invalid statuses never reach the service layer.

web/client/src/main.tsx (1)
22-22: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Non-null assertion on `getElementById("root")` — still unaddressed from the previous review.

If the mount point is absent, React throws `Cannot read properties of null (reading '_internalRoot')` with no hint about what's missing. A guarded throw produces a clear message.

🐛 Proposed fix

```diff
-ReactDOM.createRoot(document.getElementById("root")!).render(
+const rootEl = document.getElementById("root");
+if (!rootEl) throw new Error("Mount point `#root` not found in index.html");
+ReactDOM.createRoot(rootEl).render(
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/main.tsx` at line 22, `ReactDOM.createRoot` is currently called with a non-null assertion on `document.getElementById("root")` which will throw a confusing NPE if the mount element is missing; change the code to first fetch `const rootEl = document.getElementById("root")`, check if `rootEl` is null and if so throw a clear Error like "Root element '#root' not found - ensure index.html contains <div id=\"root\">", then pass `rootEl` to `ReactDOM.createRoot`; update the site entry in main.tsx to use this guarded `rootEl` instead of the non-null assertion.

web/client/src/components/pipeline/kanban-board.tsx (1)
47-47: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`KanbanCard` rendered inside `DragOverlay` without `isOverlay` prop — previously flagged, still unaddressed.

`KanbanCard` calls `useSortable({ id: app.id })` internally. Rendering the same component under `DragOverlay` with an identical `id` causes both the ghost (in-column placeholder) and the floating preview to report `isDragging: true`, so both render at `opacity: 0.5` simultaneously.

Per the dnd-kit docs, components using `useSortable` should not be rendered inside `DragOverlay`. The fix is to accept an `isOverlay` prop in `KanbanCard` and bypass the opacity reduction when it is set:

🐛 Proposed fix (two-file change)
In `web/client/src/components/pipeline/kanban-card.tsx`:

```diff
-export function KanbanCard({ app }: { app: Application }) {
+export function KanbanCard({ app, isOverlay = false }: { app: Application; isOverlay?: boolean }) {
   const { attributes, listeners, setNodeRef, transform, transition, isDragging } =
     useSortable({ id: app.id, data: { status: app.status } });
-  const style = { transform: CSS.Transform.toString(transform), transition, opacity: isDragging ? 0.5 : 1 };
+  const style = { transform: CSS.Transform.toString(transform), transition, opacity: isDragging && !isOverlay ? 0.5 : 1 };
```

In `web/client/src/components/pipeline/kanban-board.tsx` (line 47):

```diff
-<DragOverlay>{activeApp && <KanbanCard app={activeApp} />}</DragOverlay>
+<DragOverlay>{activeApp && <KanbanCard app={activeApp} isOverlay />}</DragOverlay>
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@web/client/src/components/pipeline/kanban-board.tsx` at line 47, KanbanCard is being rendered inside DragOverlay while also using useSortable({ id: app.id }), causing both the ghost and overlay to think they are dragging and show reduced opacity; modify KanbanCard (component name KanbanCard) to accept an optional boolean prop isOverlay and, inside its render logic where it uses useSortable and applies the drag opacity (e.g., className or style driven by isDragging), skip or override the opacity reduction when isOverlay is true; then update the usage in the kanban-board component (where DragOverlay renders <KanbanCard app={activeApp} />) to pass isOverlay={true} to the overlay instance.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 87373e51-c94c-41ff-afe8-8c3cb38b2aaa
📒 Files selected for processing (78)
.claude-plugin/plugin.json
.env.example
.gitignore
CLAUDE.md
batch-apply.mjs
docs/WEB_DASHBOARD.md
docs/superpowers/plans/2026-05-04-web-dashboard.md
docs/superpowers/specs/2026-05-04-web-dashboard-design.md
merge-tracker.mjs
package.json
scan-scrape.mjs
scan.mjs
web/.env.example
web/client/index.html
web/client/package.json
web/client/postcss.config.js
web/client/src/components/applications/applications-table.tsx
web/client/src/components/applications/filters.tsx
web/client/src/components/applications/status-badge.tsx
web/client/src/components/feed/feed-filters.tsx
web/client/src/components/feed/job-card.tsx
web/client/src/components/pipeline/kanban-board.tsx
web/client/src/components/pipeline/kanban-card.tsx
web/client/src/components/pipeline/kanban-column.tsx
web/client/src/components/sources/source-form.tsx
web/client/src/components/sources/source-list.tsx
web/client/src/lib/api.ts
web/client/src/lib/constants.ts
web/client/src/lib/queries.ts
web/client/src/lib/types.ts
web/client/src/lib/utils.ts
web/client/src/main.tsx
web/client/src/routeTree.gen.ts
web/client/src/routes/__root.tsx
web/client/src/routes/feed.tsx
web/client/src/routes/index.tsx
web/client/src/routes/pipeline.tsx
web/client/src/routes/sources.tsx
web/client/src/styles/globals.css
web/client/tailwind.config.ts
web/client/tsconfig.json
web/client/tsconfig.node.json
web/client/tsconfig.node.tsbuildinfo
web/client/tsconfig.tsbuildinfo
web/client/vite.config.d.ts
web/client/vite.config.js
web/client/vite.config.ts
web/docker-compose.yml
web/server/drizzle.config.ts
web/server/package.json
web/server/src/config/env.ts
web/server/src/db/client.ts
web/server/src/db/migrations/0000_icy_marvel_zombies.sql
web/server/src/db/migrations/meta/0000_snapshot.json
web/server/src/db/migrations/meta/_journal.json
web/server/src/db/schema.ts
web/server/src/db/seed.ts
web/server/src/index.ts
web/server/src/modules/applications/router.ts
web/server/src/modules/applications/schema.ts
web/server/src/modules/applications/service.ts
web/server/src/modules/jobs/router.ts
web/server/src/modules/jobs/schema.ts
web/server/src/modules/jobs/service.ts
web/server/src/modules/pipeline/router.ts
web/server/src/modules/pipeline/service.ts
web/server/src/modules/sources/router.ts
web/server/src/modules/sources/schema.ts
web/server/src/modules/sources/service.ts
web/server/src/modules/sync/parsers.ts
web/server/src/modules/sync/router.ts
web/server/src/modules/sync/service.ts
web/server/src/shared/errors.ts
web/server/src/shared/middleware.ts
web/server/src/types.ts
web/server/tests/pipeline.test.ts
web/server/tests/sync.test.ts
web/server/tsconfig.json
```
# Web dashboard
web/server/dist/
web/client/dist/
web/.env
web/server/.env
```
Web dashboard block is missing web/client/.env and *.tsbuildinfo entries.
Two gaps in the new web dashboard section:

- `web/client/.env` — Vite projects commonly place a `.env` (e.g., `VITE_API_URL=…`) at the package root. Currently only `web/.env` and `web/server/.env` are covered; a `web/client/.env` would be committed unguarded.
- `*.tsbuildinfo` — TypeScript incremental build artifacts. `web/client/tsconfig.tsbuildinfo` is actively being committed in this PR as evidence that this entry is missing; it will produce noisy, always-changing diffs on every build.
🔧 Proposed additions

```diff
 # Web dashboard
 web/server/dist/
 web/client/dist/
 web/.env
 web/server/.env
+web/client/.env
+web/client/.env.local
+*.tsbuildinfo
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.gitignore around lines 49 - 53, Add missing ignore patterns to the web
dashboard section in .gitignore: include "web/client/.env" to prevent committing
client package env files and add "*.tsbuildinfo" (or specifically
"web/client/tsconfig.tsbuildinfo") to stop TypeScript incremental build
artifacts from being tracked; update the existing web dashboard block (the lines
listing web/server/dist/, web/client/dist/, web/.env, web/server/.env) to also
contain these two entries.
```js
const minScore = parseFloat(getArg('--min-score', '4.0'));
const statusFilter = getArg('--status', 'Evaluated');
const numsArg = getArg('--nums', null);
const selectedNums = numsArg ? numsArg.split(',').map(Number) : null;
const concurrency = parseInt(getArg('--concurrency', '5'));
```
Reject invalid `--concurrency` values.

`--concurrency 0`, a negative number, or NaN produces zero workers in `parallelProcess`, so the script can print a successful summary without processing any applications.
Suggested fix
```diff
-const concurrency = parseInt(getArg('--concurrency', '5'));
+const concurrency = Number.parseInt(getArg('--concurrency', '5'), 10);
+if (!Number.isInteger(concurrency) || concurrency < 1) {
+  console.error('Error: --concurrency must be a positive integer.');
+  process.exit(1);
+}
```

Also applies to: 410-423
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@batch-apply.mjs` around lines 45 - 50, The script currently accepts invalid
--concurrency values (0, negative, NaN) which lead to zero workers; validate the
parsed concurrency after calling parseInt(getArg('--concurrency', '5')) into the
concurrency variable and reject values <= 0 or NaN: when invalid, log a clear
error via your logger/console and exit with a non-zero code instead of
proceeding to parallelProcess; update any other places where concurrency is
parsed (the other block around lines 410-423) to use the same validation logic
so parallelProcess always receives a positive integer.
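A quick runnable sketch of the fail-fast behavior the suggestion describes — `parseConcurrency` is a hypothetical helper name, not code from batch-apply.mjs:

```javascript
// Hypothetical helper: parse --concurrency and reject anything that would
// silently yield zero workers (0, negatives, NaN from non-numeric input).
function parseConcurrency(raw, fallback = '5') {
  const value = Number.parseInt(raw ?? fallback, 10);
  if (!Number.isInteger(value) || value < 1) {
    throw new Error(`--concurrency must be a positive integer, got "${raw}"`);
  }
  return value;
}

console.log(parseConcurrency('8'));       // 8
console.log(parseConcurrency(undefined)); // 5 (default applies)

// Each of these would previously become 0/NaN workers; now they fail fast:
for (const bad of ['0', '-2', 'abc']) {
  try {
    parseConcurrency(bad);
  } catch (err) {
    console.log(err.message);
  }
}
```

In the script itself the invalid branch would log and `process.exit(1)` rather than throw, as the suggested fix shows.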
```js
const tailoredSummary = summary
  .replace(/Software Engineer/i, `Software Engineer targeting ${app.role} opportunities`)
  + (app.company ? ` Excited to bring this experience to ${app.company}.` : '');

// Convert markdown bullets to HTML
function bulletsToHtml(text) {
  return text.split('\n')
    .filter(l => l.trim().startsWith('-'))
    .map(l => `<li>${l.replace(/^-\s*/, '').replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')}</li>`)
    .join('\n');
```
Escape untrusted content before writing cv.html.

`app.company`, `app.role`, and CV-derived sections are interpolated straight into HTML that Playwright opens. A malicious value from scanned job data can inject script into the generated file.
Suggested fix
```diff
+function escapeHtml(value = '') {
+  return value
+    .replaceAll('&', '&amp;')
+    .replaceAll('<', '&lt;')
+    .replaceAll('>', '&gt;')
+    .replaceAll('"', '&quot;')
+    .replaceAll("'", '&#39;');
+}
+
+function formatInlineMarkdown(text) {
+  return escapeHtml(text).replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
+}
+
 const tailoredSummary = summary
   .replace(/Software Engineer/i, `Software Engineer targeting ${app.role} opportunities`)
   + (app.company ? ` Excited to bring this experience to ${app.company}.` : '');
@@
-    .map(l => `<li>${l.replace(/^-\s*/, '').replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')}</li>`)
+    .map(l => `<li>${formatInlineMarkdown(l.replace(/^-\s*/, ''))}</li>`)
@@
-  const bulletHtml = bullets
-    .map(l => `<li>${l.replace(/^-\s*/, '').replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')}</li>`)
+  const bulletHtml = bullets
+    .map(l => `<li>${formatInlineMarkdown(l.replace(/^-\s*/, ''))}</li>`)
     .join('\n');
@@
-    .map(s => `<span class="competency-tag">${s}</span>`)
+    .map(s => `<span class="competency-tag">${escapeHtml(s)}</span>`)
@@
-        <span class="job-company">${exp.company}</span>
-        <span class="job-period">${exp.date}</span>
+        <span class="job-company">${escapeHtml(exp.company)}</span>
+        <span class="job-period">${escapeHtml(exp.date)}</span>
@@
-      <div class="job-role">Backend Software Engineer <span class="job-location">| ${exp.location}</span></div>
+      <div class="job-role">Backend Software Engineer <span class="job-location">| ${escapeHtml(exp.location)}</span></div>
@@
-    <div class="summary-text">${tailoredSummary}</div>
+    <div class="summary-text">${escapeHtml(tailoredSummary)}</div>
```

Also applies to: 180-196, 208-302
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@batch-apply.mjs` around lines 156 - 165, The code directly interpolates
untrusted values (app.company, app.role and CV-derived text) into the generated
cv.html (see tailoredSummary and bulletsToHtml), enabling XSS; fix by
HTML-escaping all user/job/CV strings before inserting into HTML and when
building list items in bulletsToHtml (escape then apply safe markdown-to-HTML
transformations only for allowed tags like <strong>), and ensure any other HTML
assembly paths that write cv.html or its sections perform the same
escaping/sanitization so no raw input is written unescaped.
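To see the pattern end to end, here is a self-contained sketch of the escape-then-allow-`<strong>` approach this comment describes; the helper names mirror the ones in the suggested fix, but the snippet is an illustration, not the actual batch-apply.mjs code:

```javascript
// Escape the five HTML-significant characters so raw input can never open a tag.
function escapeHtml(value = '') {
  return value
    .replaceAll('&', '&amp;')
    .replaceAll('<', '&lt;')
    .replaceAll('>', '&gt;')
    .replaceAll('"', '&quot;')
    .replaceAll("'", '&#39;');
}

// Escape first, then re-introduce only the one tag we allow (<strong>).
function formatInlineMarkdown(text) {
  return escapeHtml(text).replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>');
}

// A malicious company name scraped from a job board:
const company = '<img src=x onerror=alert(1)>Acme';
console.log(`<span>${escapeHtml(company)}</span>`);
// → <span>&lt;img src=x onerror=alert(1)&gt;Acme</span>

console.log(formatInlineMarkdown('**bold** & <script>'));
// → <strong>bold</strong> &amp; &lt;script&gt;
```

The order matters: escaping after the markdown substitution would mangle the `<strong>` tags the generator intends to emit.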
```js
try {
  const page = await browser.newPage();
  await page.goto(`file://${join(process.cwd(), htmlPath)}`, { waitUntil: 'networkidle' });
  await page.pdf({
    path: pdfPath,
    format: 'A4',
    margin: { top: '40px', bottom: '40px', left: '40px', right: '40px' },
    printBackground: true,
  });
  await page.close();
```
Always close the per-job Playwright page.
If goto() or pdf() throws, page.close() is skipped. With concurrent jobs, failed runs will accumulate open tabs and eventually exhaust browser resources.
Suggested fix
```diff
-  try {
-    const page = await browser.newPage();
+  let page;
+  try {
+    page = await browser.newPage();
     await page.goto(`file://${join(process.cwd(), htmlPath)}`, { waitUntil: 'networkidle' });
     await page.pdf({
       path: pdfPath,
       format: 'A4',
       margin: { top: '40px', bottom: '40px', left: '40px', right: '40px' },
       printBackground: true,
     });
-    await page.close();
     console.log(`  ✓ PDF: ${pdfPath}`);
   } catch (err) {
     console.log(`  ✗ PDF failed: ${err.message}`);
+  } finally {
+    await page?.close().catch(() => {});
   }
```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@batch-apply.mjs` around lines 349 - 358, The code creates a Playwright page
with browser.newPage() and calls page.goto() and page.pdf(), but page.close() is
only in the try block so it won't run if goto() or pdf() throw; wrap the page
lifecycle in a try/finally (create the page, then in finally call page.close()
only if page is defined) so every job always closes its page even on errors;
reference the browser.newPage(), page.goto(), page.pdf(), and page.close() calls
and ensure the finally block handles a possibly undefined page and
suppresses/propagates errors appropriately.
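The acquire-in-try, release-in-finally shape generalizes beyond Playwright. A runnable sketch with a synchronous stand-in for the page object — `makePage` and `renderPdf` are hypothetical names, not the batch-apply.mjs code:

```javascript
// makePage stands in for browser.newPage(); goto() throws to simulate a
// navigation failure. The finally block must tolerate `page` being undefined
// (e.g., when acquisition itself failed).
let closedCount = 0;
const makePage = () => ({
  goto() { throw new Error('navigation failed'); },
  close() { closedCount++; },
});

function renderPdf() {
  let page;
  try {
    page = makePage();
    page.goto('file:///cv.html'); // throws before pdf() is ever reached
    return 'ok';
  } catch (err) {
    return `PDF failed: ${err.message}`;
  } finally {
    page?.close(); // runs on success *and* failure; no-op if page is undefined
  }
}

console.log(renderPdf()); // PDF failed: navigation failed
console.log(closedCount); // 1 — the page was closed despite the error
```

With concurrent jobs this is what keeps failed runs from accumulating open tabs.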
```
│ │ │ │ └── client.ts   # DB connection
│ │ │ ├── modules/
│ │ │ │ ├── applications/
��� │ │ │ │ ├── router.ts # GET list, GET :id, PATCH :id, GET stats
```
Replace the garbled character in the directory tree line.
The ��� character corrupts the rendered tree and should be replaced with a normal box-drawing pipe.
Proposed fix
```diff
-��� │ │ │ │ ├── router.ts # GET list, GET :id, PATCH :id, GET stats
+│ │ │ │ ├── router.ts # GET list, GET :id, PATCH :id, GET stats
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/superpowers/specs/2026-05-04-web-dashboard-design.md` at line 41,
Replace the garbled "���" characters at the start of the directory tree line
with a normal vertical box-drawing pipe so the tree renders properly; edit the
line containing "router.ts # GET list, GET :id, PATCH :id, GET stats" and
change the leading "���" to a single box-drawing vertical bar (pipe) character
so the ASCII tree uses consistent box-drawing symbols.
```ts
if (!line.startsWith("|")) continue;
if (line.includes("---")) continue;

const cells = line.split("|").map((s) => s.trim());
```
Don't skip any row that happens to contain ---.
Line 21 currently treats every table row containing `---` as a separator. A notes value like `follow-up --- recruiter requested portfolio` would be silently dropped during import. Match only actual separator rows instead.
💡 Minimal fix
```diff
 for (const line of lines) {
   if (!line.startsWith("|")) continue;
-  if (line.includes("---")) continue;
+  if (/^\|\s*:?-{3,}:?\s*(\|\s*:?-{3,}:?\s*)+\|$/.test(line.trim())) continue;
   const cells = line.split("|").map((s) => s.trim());
```
const cells = line.split("|").map((s) => s.trim());📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (!line.startsWith("|")) continue;
if (/^\|\s*:?-{3,}:?\s*(\|\s*:?-{3,}:?\s*)+\|$/.test(line.trim())) continue;
const cells = line.split("|").map((s) => s.trim());
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/sync/parsers.ts` around lines 20 - 23, The current
check uses line.includes("---") which incorrectly skips any row containing
triple dashes inside a cell; change this to detect actual Markdown table
separator rows by inspecting the parsed cells instead: replace the
line.includes("---") guard with a separator test on the generated cells
(variable cells) that returns true only when every cell is a table-separator
token (e.g., consists solely of dashes with optional leading/trailing
colons/spaces) and skip only when that test is true; update the logic around
line and cells (the spot where line.includes("---") and const cells =
line.split("|").map(...) appear) to apply this more precise separator check.
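A runnable check of the difference between the two guards — `isSeparatorRow` wraps the regex from the minimal fix above; the sample rows are hypothetical:

```javascript
// True only for genuine Markdown separator rows like "| --- | :---: |".
const isSeparatorRow = (line) =>
  /^\|\s*:?-{3,}:?\s*(\|\s*:?-{3,}:?\s*)+\|$/.test(line.trim());

const rows = [
  '| Company | Notes |',
  '| --- | --- |',                                          // real separator
  '| Acme | follow-up --- recruiter requested portfolio |', // data with ---
];

// The naive includes('---') guard would drop the Acme data row too:
console.log(rows.filter((l) => l.includes('---')).length); // 2

// The regex guard skips only the real separator:
const kept = rows.filter((l) => l.startsWith('|') && !isSeparatorRow(l));
console.log(kept.length); // 2 (header row + Acme row)
```

Note the regex as written requires at least two columns; a single-column `| --- |` separator would need a slightly relaxed pattern.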
```ts
const companies = (doc.tracked_companies || []) as Record<string, unknown>[];
const titleFilter = doc.title_filter || {};

return companies.map((c) => {
  const careersUrl = (c.careers_url as string) || "";
  const api = (c.api as string) || "";
  let type = "custom";

  if (api.includes("greenhouse") || careersUrl.match(/greenhouse\.io/)) {
    type = "greenhouse";
  } else if (careersUrl.match(/ashbyhq\.com/)) {
    type = "ashby";
  } else if (careersUrl.match(/lever\.co/)) {
    type = "lever";
  }

  const config: Record<string, unknown> = { title_filter: titleFilter };
  if (careersUrl) config.careers_url = careersUrl;
  if (api) config.api = api;

  return {
    name: c.name as string,
    type,
    config,
    enabled: c.enabled !== false,
  };
```
Validate `tracked_companies` before mapping it.

If `tracked_companies` is not an array, or one element is null or missing `name`, this either throws or returns `{ name: undefined }`, which will fail later during source sync. Since portals.yml is user-edited, coerce non-arrays to `[]` and skip invalid entries here.
💡 Minimal fix
```diff
-  const companies = (doc.tracked_companies || []) as Record<string, unknown>[];
+  const companies = Array.isArray(doc.tracked_companies) ? doc.tracked_companies : [];
   const titleFilter = doc.title_filter || {};

-  return companies.map((c) => {
-    const careersUrl = (c.careers_url as string) || "";
-    const api = (c.api as string) || "";
+  return companies.flatMap((c) => {
+    if (!c || typeof c !== "object") return [];
+    const source = c as Record<string, unknown>;
+    if (typeof source.name !== "string" || source.name.trim().length === 0) return [];
+
+    const careersUrl = typeof source.careers_url === "string" ? source.careers_url : "";
+    const api = typeof source.api === "string" ? source.api : "";
     let type = "custom";

     if (api.includes("greenhouse") || careersUrl.match(/greenhouse\.io/)) {
@@
-      name: c.name as string,
+      name: source.name.trim(),
       type,
       config,
-      enabled: c.enabled !== false,
+      enabled: source.enabled !== false,
     };
   });
```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@web/server/src/modules/sync/parsers.ts` around lines 74 - 99, The current
mapping over tracked_companies can throw or produce entries with undefined
names; update the code around the companies variable (where tracked_companies is
read and mapped) to coerce non-arrays to [] and filter out invalid entries
before mapping: ensure companies is set to Array.isArray(doc.tracked_companies)
? doc.tracked_companies : [], then filter out falsy items and objects missing a
valid string name (the item referenced as c and c.name) before the existing
mapping logic that uses c.careers_url, c.api and builds config/type/enabled.
This prevents nulls or entries without name from being returned or causing
runtime errors during source sync.
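As a sanity check, the defensive parsing suggested above can be exercised standalone. The function and sample entries below are illustrative, not the PR's actual exports; only the coerce-then-skip pattern comes from the suggested fix.

```typescript
// Hedged sketch of the defensive parsing: coerce non-arrays to [], then skip
// null/scalar entries and entries without a usable string name.
type ParsedSource = { name: string; enabled: boolean };

function parseTrackedCompanies(raw: unknown): ParsedSource[] {
  const companies = Array.isArray(raw) ? raw : [];
  return companies.flatMap((c): ParsedSource[] => {
    if (!c || typeof c !== "object") return []; // skip null / scalar entries
    const entry = c as Record<string, unknown>;
    if (typeof entry.name !== "string" || entry.name.trim().length === 0) {
      return []; // skip entries missing a valid name
    }
    return [{ name: entry.name.trim(), enabled: entry.enabled !== false }];
  });
}

// Malformed user-edited input: a null entry, an entry without a name, one valid entry.
const parsed = parseTrackedCompanies([null, { api: "greenhouse" }, { name: " Acme " }]);
// parsed → [{ name: "Acme", enabled: true }]
```

The invalid entries are dropped silently rather than aborting the whole sync, which matches the review's point that portals.yml is user-edited and should degrade gracefully.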
```ts
async importFromMarkdown(): Promise<{ apps: number; sources: number }> {
  const appsMdPath = join(this.careerOpsRoot, "data", "applications.md");
  const portalsYmlPath = join(this.careerOpsRoot, "portals.yml");

  // Import applications
  let appsCount = 0;
  try {
    const mdContent = await readFile(appsMdPath, "utf-8");
    const parsed = parseApplicationsMd(mdContent);

    for (const app of parsed) {
      await this.db
        .insert(applications)
        .values({
          number: app.number,
          company: app.company,
          role: app.role,
          score: app.score,
          status: app.status,
          reportPath: app.reportPath,
          pdfGenerated: app.pdfGenerated,
          notes: app.notes,
          appliedAt: app.date ? new Date(app.date) : null,
        })
        .onConflictDoUpdate({
          target: applications.number,
          set: {
            company: app.company,
            role: app.role,
            score: app.score,
            status: app.status,
            reportPath: app.reportPath,
            pdfGenerated: app.pdfGenerated,
            notes: app.notes,
            appliedAt: app.date ? new Date(app.date) : null,
            updatedAt: new Date(),
          },
        });
      appsCount++;
    }
  } catch (err: unknown) {
    if ((err as NodeJS.ErrnoException).code !== "ENOENT") throw err;
  }

  // Import sources
  let sourcesCount = 0;
  try {
    const ymlContent = await readFile(portalsYmlPath, "utf-8");
    const parsedSources = parsePortalsYml(ymlContent);

    for (const src of parsedSources) {
      await this.db
        .insert(sources)
        .values({
          name: src.name,
          type: src.type,
          config: src.config,
          enabled: src.enabled,
        })
        .onConflictDoUpdate({
          target: sources.name,
          set: {
            type: src.type,
            config: src.config,
            enabled: src.enabled,
            updatedAt: new Date(),
          },
        });
      sourcesCount++;
    }
  } catch (err: unknown) {
    if ((err as NodeJS.ErrnoException).code !== "ENOENT") throw err;
  }

  return { apps: appsCount, sources: sourcesCount };
}
```
Reconcile deletions and YAML-side renames during import.
importFromMarkdown() only upserts. If an application/source is removed from applications.md/portals.yml, or a source is renamed in YAML, the old DB row survives and the dashboard keeps showing data that no longer exists in the source-of-truth files.
After parsing each file that exists, delete or archive rows whose number/name is absent from that snapshot, in the same transaction. For sources, this likely needs an explicit plan for dependent discoveredJobs rows as well.
🤖 Prompt for AI Agents
In `@web/server/src/modules/sync/service.ts` around lines 18 - 93,
importFromMarkdown currently only upserts and never removes DB rows for entries
deleted or renamed in applications.md or portals.yml; change importFromMarkdown
to run the insert/upsert logic inside a single transaction and, after parsing
each file, compute the set of parsed keys (application numbers and source names)
and delete or archive any rows in the applications or sources tables whose
number/name are NOT in that parsed set (do this using this.db.transaction and
delete queries targeting applications.number and sources.name). For sources, add
a plan in the same transaction to handle dependent discoveredJobs (either delete
those rows, reassign them to the new source name when a rename is detected, or
mark them archived) to avoid orphaned FK data; ensure you identify renames by
comparing existing DB names to parsed names and treat them accordingly rather
than blind deletion.
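The reconciliation step described above reduces to a set difference between the keys in the DB and the keys in the parsed snapshot. A minimal sketch, with illustrative names; the Drizzle call that would consume the result inside the transaction (e.g. `tx.delete(sources).where(notInArray(sources.name, parsedKeys))`) is omitted:

```typescript
// Hedged sketch: after upserting the parsed snapshot, compute which DB keys
// are stale (present in the table but absent from the file) so they can be
// deleted or archived in the same transaction.
function staleKeys(dbKeys: string[], parsedKeys: string[]): string[] {
  const keep = new Set(parsedKeys);
  return dbKeys.filter((key) => !keep.has(key));
}

// portals.yml now only lists "Acme"; "Globex" was removed by the user.
const toRemove = staleKeys(["Acme", "Globex"], ["Acme"]);
// toRemove → ["Globex"]
```

Running this against application numbers and source names, inside the same transaction as the upserts, keeps the dashboard consistent with the markdown source of truth even across deletions and renames.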
```ts
export class NotFoundError extends AppError {
  constructor(resource: string, id: string) {
    super(`${resource} '${id}' not found`, 404, "NOT_FOUND");
  }
}

export class ValidationError extends AppError {
  constructor(message: string) {
    super(message, 400, "VALIDATION_ERROR");
  }
}
```
Subclasses inherit name = "AppError" — set this.name in each subclass.
AppError sets this.name = "AppError" and neither NotFoundError nor ValidationError overrides it, so stack traces and logs will always report AppError regardless of the actual subtype, making triage harder.
🐛 Proposed fix
export class NotFoundError extends AppError {
constructor(resource: string, id: string) {
super(`${resource} '${id}' not found`, 404, "NOT_FOUND");
+ this.name = "NotFoundError";
}
}
export class ValidationError extends AppError {
constructor(message: string) {
super(message, 400, "VALIDATION_ERROR");
+ this.name = "ValidationError";
}
}
🤖 Prompt for AI Agents
In `@web/server/src/shared/errors.ts` around lines 8 - 18, the NotFoundError and
ValidationError constructors don't override the base AppError name, so set
this.name explicitly after calling super to reflect the actual subclass; update
the NotFoundError constructor to set this.name = "NotFoundError" and the
ValidationError constructor to set this.name = "ValidationError" (ensure the
assignment happens after super(...) is called in each constructor).
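A minimal reproduction of the naming issue, self-contained for illustration. `AppError`'s signature here is reconstructed from the review excerpt and may differ slightly from the actual file:

```typescript
// Base error sets a fixed name; subclasses that don't override it will
// report "AppError" in logs and stack traces regardless of subtype.
class AppError extends Error {
  constructor(
    message: string,
    public statusCode: number,
    public code: string,
  ) {
    super(message);
    this.name = "AppError";
  }
}

class NotFoundError extends AppError {
  constructor(resource: string, id: string) {
    super(`${resource} '${id}' not found`, 404, "NOT_FOUND");
    this.name = "NotFoundError"; // without this line, err.name stays "AppError"
  }
}

const err = new NotFoundError("Application", "42");
// err.name → "NotFoundError"; stack traces now lead with the subtype.
```

An alternative is setting `this.name = new.target.name` once in the `AppError` constructor, which labels every subclass automatically without per-class assignments.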
| "module": "ESNext", | ||
| "moduleResolution": "bundler", |
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "1) Detect server package type (ESM/CommonJS):"
fd -a package.json | rg -n "web/server/package.json" || true
if [ -f web/server/package.json ]; then
  rg -n '"type"\s*:\s*"module"' web/server/package.json || true
fi

echo
echo "2) Find relative imports in server source that omit file extensions:"
# These are commonly tolerated by bundler resolution but problematic in Node ESM runtime.
rg -nP --type=ts --type=tsx \
  '^\s*import\s+.*from\s+["'\'']\./[^"'\''.]*(?:/[^"'\''.]*)?["'\''];?\s*$|^\s*import\s+["'\'']\./[^"'\''.]*(?:/[^"'\''.]*)?["'\''];?\s*$' \
  web/server/src || true

echo
echo "3) Check whether tsconfig in server is currently using bundler resolution:"
rg -n '"module"\s*:\s*"|moduleResolution"\s*:\s*"' web/server/tsconfig.json
```

Repository: santifer/career-ops

🏁 Script executed:

```bash
cat -n web/server/tsconfig.json
```

🏁 Script executed:

```bash
cat -n web/server/package.json | head -40
```

🏁 Script executed:

```bash
if [ -d web/server/src ]; then
  find web/server/src -type f \( -name "*.ts" -o -name "*.js" \) | head -5 \
    | xargs -I {} sh -c 'echo "=== {} ===" && head -15 "{}"'
fi
```

🏁 Script executed:

```bash
find web/server/src -type f -name "*.ts" -exec head -20 {} \; | head -40
```
Use Node-aligned module settings for the server runtime.
Line 5 uses `"moduleResolution": "bundler"`, which is semantically incorrect for direct Node ESM execution via `tsc` and `node dist/index.js`. While the codebase currently mitigates this by using `.js` extensions in relative imports, the configuration should align with the Node ESM runtime by using NodeNext settings. This prevents future issues if extensionless imports are added while relying on bundler resolution at type-check time.
Suggested config change
- "module": "ESNext",
- "moduleResolution": "bundler",
+ "module": "NodeNext",
+ "moduleResolution": "NodeNext",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "module": "ESNext", | |
| "moduleResolution": "bundler", | |
| "module": "NodeNext", | |
| "moduleResolution": "NodeNext", |
🤖 Prompt for AI Agents
In `@web/server/tsconfig.json` around lines 4 - 5, update the TypeScript compiler
options in tsconfig.json to use Node-aligned ESM settings: change
"moduleResolution" from "bundler" to "NodeNext" and also set "module" to a
Node-compatible target such as "NodeNext" (instead of "ESNext") so
tsc/type-checking matches direct Node ESM execution; update the
"moduleResolution" and "module" keys accordingly to avoid relying on bundler
resolution while running node dist/index.js.
Summary
- New `web/` directory with a full-stack dashboard (Fastify API + React client + PostgreSQL)
- Optional API calls from `merge-tracker.mjs` and `scan.mjs` (fail silently if dashboard is down)

Architecture
The dashboard layers on top of the existing markdown-based system without modifying core plugin functionality. A sync module keeps markdown files and the Postgres database in agreement. If the dashboard isn't running, everything works exactly as before.
Tech Stack
Pages
- `/`
- `/pipeline`
- `/feed`
- `/sources`

Job Sources (11 total)

- 3 existing sources via `scan.mjs`
- 4 new sources via `scan.mjs`
- 4 new sources via `scan-scrape.mjs`

New Scripts

- `scan-scrape.mjs`
- `batch-apply.mjs`

Changes to existing files

- `merge-tracker.mjs`: `fetch()` to sync API with 2s timeout (try/catch, silent fail)
- `scan.mjs`
- `CLAUDE.md`
- `package.json`: `web:*`, `scan:scrape`, `batch-apply` scripts + `concurrently` dev dependency
- `.gitignore`: `web/server/dist/`, `web/client/dist/`, `web/.env`
- `.claude-plugin/plugin.json`
- `.env.example`

No mode files, templates, evaluation logic, or plugin system files were modified.
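The fail-silent sync call added to `merge-tracker.mjs` can be sketched roughly as follows. The endpoint path and payload shape are assumptions; only the 2-second timeout and the silent catch come from the PR description:

```typescript
// Hedged sketch of a fail-silent dashboard notification. The /api/sync path
// and payload are illustrative, not the PR's actual API surface.
async function notifyDashboard(
  payload: unknown,
  baseUrl = "http://localhost:3000",
): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/sync`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
      signal: AbortSignal.timeout(2000), // give up after 2 seconds
    });
    return res.ok;
  } catch {
    // Dashboard not running (or slow): swallow the error so CLI scripts
    // keep working exactly as before.
    return false;
  }
}

// With nothing listening at the target, this resolves false instead of throwing.
const ok = await notifyDashboard({ jobs: [] }, "http://127.0.0.1:9");
```

The return value lets a caller log a one-line "dashboard sync skipped" note if desired, without ever propagating the failure.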
Quick start
Test plan
- `npm run web:test`: 14 server tests pass (sync parsers + pipeline transitions)
- `npm run web:db:up && npm run web:migrate && npm run web:seed && npm run web:dev`: dashboard loads
- `node test-all.mjs` still passes (64/65, same pre-existing Go dashboard build skip)
- `node scan.mjs` → finds jobs from 9 sources, new jobs appear in dashboard Feed
- `node scan-scrape.mjs --board tecnoempleo` → scrapes Spain job boards
- `node batch-apply.mjs --dry-run` → lists eligible jobs for batch prep

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Chores