## Code improvement description
The smoke workflow runs one job per test scenario via a matrix strategy. Each job independently calls `script/build <ecosystem>` before running its scenario.
For ecosystems with multiple scenarios, this means the same Docker image is built N times in parallel on N separate runners. `go_modules` currently has 11 smoke scenarios, so its ecosystem image is built 11 times per PR.
The image is identical across all jobs for a given ecosystem: `script/build` is deterministic given the same commit SHA, Dockerfile, and build args. There's an easy opportunity to build it once and share it.
### Proposed fix
Split into two stages:
- A `build-images` job (after `discover`, before `e2e`): build each unique ecosystem image once, export it via `docker save | gzip`, and upload it as a workflow artifact.
- `e2e` jobs: download and `docker load` the artifact instead of calling `script/build`.
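The two stages above could look roughly like this. This is a sketch only: the job names, the image tag (`dependabot/updater-<ecosystem>`), and the `discover` output shape are assumptions, not the workflow's actual identifiers.

```yaml
jobs:
  build-images:
    needs: discover
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Assumes discover exposes the unique ecosystems as a JSON list.
        ecosystem: ${{ fromJson(needs.discover.outputs.ecosystems) }}
    steps:
      - uses: actions/checkout@v4
      - name: Build ecosystem image
        run: script/build ${{ matrix.ecosystem }}
      - name: Export image
        # Image name is an assumption; use whatever tag script/build produces.
        run: docker save "dependabot/updater-${{ matrix.ecosystem }}" | gzip > image.tar.gz
      - uses: actions/upload-artifact@v4
        with:
          name: image-${{ matrix.ecosystem }}
          path: image.tar.gz

  e2e:
    needs: [discover, build-images]
    # ...existing scenario matrix unchanged...
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: image-${{ matrix.suite.ecosystem }}
      - name: Load prebuilt image
        run: gunzip -c image.tar.gz | docker load
      # ...run the scenario without calling script/build...
```

One artifact per ecosystem (not per scenario) keeps upload volume bounded by the number of unique ecosystems, and artifact download is typically much faster than a cold image build.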
### Simpler alternative
Add Docker BuildKit's GHA cache backend (`--cache-to type=gha,mode=max` / `--cache-from type=gha`) to `script/build`. The first job to complete populates a shared layer cache; subsequent parallel jobs get near-instant hits with no workflow restructuring needed.
The existing `Download cache` step already handles the Dependabot CLI proxy cache; Docker image caching is the missing piece. Happy to put together a PR for either approach if useful.
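A minimal sketch of the `script/build` change, assuming it currently wraps a plain `docker build` (the variable names below are illustrative, not the script's actual ones):

```shell
#!/usr/bin/env bash
set -euo pipefail

ECOSYSTEM="$1"

# buildx with the GHA cache backend; scoping the cache per ecosystem keeps
# parallel matrix entries from evicting each other's layers.
docker buildx build \
  --file "$ECOSYSTEM/Dockerfile" \
  --tag "dependabot/updater-$ECOSYSTEM" \
  --cache-from "type=gha,scope=$ECOSYSTEM" \
  --cache-to "type=gha,mode=max,scope=$ECOSYSTEM" \
  --load \
  .
```

Note that the `gha` backend needs the runner's cache credentials (`ACTIONS_RUNTIME_TOKEN` / `ACTIONS_CACHE_URL`) exposed to buildx, which `docker/setup-buildx-action` plus `crazy-max/ghaction-github-runtime` handle in the workflow, so a small workflow tweak is still required even though the job graph stays flat.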