# Fine-tuning the DevOps agent

While working with the DevOps agent, I discovered something uncomfortable: all three of its Docker CI build patterns had gaps. The service pattern, the CLI pattern, and the library pattern (which was missing entirely) all needed concrete, working examples to validate and improve them.
## The real problem: patterns without proof
The DevOps agent had documented patterns for building Go services and CLIs, but when I looked closely, I realized these patterns existed mostly as abstract descriptions. There were no concrete, tested examples demonstrating that they actually worked end-to-end. The service pattern had some rough edges. The CLI pattern had untested assumptions about cross-compilation and dependency handling. And the library pattern? It did not exist at all.
## Why Docker for CI builds?
Before diving into the patterns, here is why I use Docker and docker-compose as my CI foundation:
- Reproducibility - Same environment locally and in CI. No "works on my machine" problems.
- Isolation - No pollution from the host system. Clean slate every build.
- Portability - Works on any CI platform: GitHub Actions, GitLab, Jenkins, whatever.
- Self-documenting - The Dockerfile IS the build environment specification. No hidden dependencies.
Without Docker - every step lives in CI config:
```mermaid
flowchart LR
ci1[CI YAML] --> s1[Setup Go] --> s2[Install deps] --> s3[Run linter] --> s4[Run tests] --> s5[Build binary] --> s6[Start services] --> s7[Run E2E] --> s8[Build container]
style ci1 fill:#fecaca,stroke:#f43f5e,color:#0f172a
style s1 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s2 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s3 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s4 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s5 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s6 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s7 fill:#e2e8f0,stroke:#334155,color:#0f172a
style s8 fill:#e2e8f0,stroke:#334155,color:#0f172a
```
With Docker - CI just runs a script:
```mermaid
flowchart LR
ci2[CI YAML] --> script["./build/build.sh"] --> compose["Dockerfile + docker-compose"] --> done[Done]
style ci2 fill:#dcfce7,stroke:#22c55e
style script fill:#e0f2fe,stroke:#0284c7
style compose fill:#ccfbf1,stroke:#0d9488
style done fill:#dcfce7,stroke:#22c55e
```
These properties matter more to me than raw speed. I will take a slightly slower build that I can trust and debug over a faster one that breaks mysteriously.
## Building the test bed
I decided to build the docker-ci-build monorepo for two purposes:
- Demonstrate all three patterns with working code that actually builds and passes tests
- Provide a test bed for ongoing DevOps agent tuning - a place to validate pattern changes before rolling them into the agent's knowledge base
This project will be my go-to reference for fine-tuning the agent's patterns going forward.
## The missing library pattern
Of the three patterns, the library gap was the most obvious. The DevOps agent mostly knew how to:
- Build a Go service with multi-stage Dockerfile, run E2E tests, produce a Docker image
- Build a Go CLI with cross-compilation, export the binary, run integration tests
But what about a shared library? A library does not produce an artifact. There is no binary to export, no container to run, no E2E tests against a live service. The only question is: does the code compile and do the tests pass?
## Three patterns, one monorepo
The demo project uses a multi-module Go monorepo with a clear dependency chain:
```mermaid
flowchart LR
timelib["timelib<br/>(library)"] --> api["api<br/>(service)"] --> cli["cli<br/>(tool)"]
style timelib fill:#f1f5f9,stroke:#64748b
style api fill:#ccfbf1,stroke:#0d9488
style cli fill:#e0f2fe,stroke:#0284c7
```
Here is how the CI patterns differ:
| Aspect | Library (timelib) | Service (api) | CLI (cli) |
|---|---|---|---|
| Dockerfile stages | Single stage | Multi-stage (builder + runtime) | Multi-stage (builder + export) |
| Tests in CI | Unit only | Unit + E2E | Unit + E2E |
| E2E approach | N/A | docker-compose against container | Run binary against api container |
| Artifact | None | Docker image | Binary + test container |
| Build complexity | Simplest | Medium | Most complex (depends on api) |
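To make the "docker-compose against container" E2E approach concrete, here is a minimal compose sketch for the service pattern. The service names, ports, health-check endpoint, and stage names are my assumptions for illustration, not copied from the actual repo:

```yaml
# docker-compose.yml (illustrative sketch)
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile        # multi-stage: builder + runtime
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/healthz"]
      interval: 2s
      retries: 10
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
      target: builder               # reuse the builder stage, which has the Go toolchain
    command: go test -tags e2e ./e2e/...
    environment:
      API_URL: http://api:8080      # E2E tests hit the live container over the compose network
    depends_on:
      api:
        condition: service_healthy  # wait until the service actually answers
```

The `depends_on` + `service_healthy` combination is what makes this reliable: the test runner only starts once the service container passes its health check, instead of racing it at startup.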
## Technical decisions worth noting
Multi-module with go.work: Local development uses go.work to link the modules together. This keeps each module independent with its own go.mod while allowing them to import each other during development.
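For reference, the go.work file for this layout would look roughly like the sketch below (the Go version is illustrative; the module directories match the dependency diagram above):

```
go 1.22

use (
	./timelib
	./api
	./cli
)
```

Because go.work only affects local builds, each module's committed go.mod stays self-contained, and CI builds inside Docker are unaffected by it.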
go mod edit -replace in Dockerfile: The tricky part was handling local module resolution inside Docker builds. The solution: use go mod edit -replace inside the Dockerfile to point to the local paths within the build context. This keeps the committed go.mod files clean (no local path references that would break on other machines).
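A hedged sketch of what that looks like in the service's builder stage. The module path `example.com/timelib`, the directory layout, and the Go version are assumptions for illustration; the key point is that the replace directive is applied only inside the image, with the repo root as the build context so the sibling module can be copied in:

```dockerfile
# Builder-stage sketch for the api module (paths and module names are hypothetical)
FROM golang:1.22 AS builder
WORKDIR /src

# Copy the local library the service depends on into the build context
COPY timelib/ ./timelib/
COPY api/ ./api/

WORKDIR /src/api
# Point the dependency at the local copy for this build only;
# the committed go.mod never carries a machine-specific replace directive
RUN go mod edit -replace example.com/timelib=../timelib \
 && go mod tidy
RUN CGO_ENABLED=0 go build -o /out/api ./...
```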
Each component owns its build: Every module has its own build/build.sh script that orchestrates Docker build and E2E tests. The GitHub Actions workflows just call these scripts.
Path filters in CI: GitHub Actions workflows use path filters so changes to timelib/ only trigger the library build, changes to api/ trigger the service build, and so on. The CLI workflow is special: it builds the api image first since the CLI's E2E tests need a running API to call.
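A minimal trigger for the library workflow might look like this. The workflow name, file paths, and script location are assumptions based on the layout described above, not copied from the repository:

```yaml
# .github/workflows/timelib.yml (illustrative sketch)
name: timelib
on:
  push:
    paths:
      - "timelib/**"
  pull_request:
    paths:
      - "timelib/**"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # CI stays thin: all build logic lives in the module's own script
      - run: ./timelib/build/build.sh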
## The insight
Library CI is refreshingly simple. No multi-stage builds, no artifact management, no E2E test infrastructure. Just:
- Pull the code
- Run the tests
- Verify it compiles
That simplicity is worth documenting explicitly. When you are setting up CI for a new Go library, you do not need the complexity of service or CLI patterns. A single-stage Dockerfile that runs go test and go build is all you need.
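As a sketch of that single-stage pattern, the entire library CI Dockerfile can be this small (Go version and the `go vet` step are my additions for illustration):

```dockerfile
# Single-stage CI Dockerfile for a Go library:
# if this image builds, the code compiles and the tests pass.
FROM golang:1.22
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go vet ./... \
 && go test ./... \
 && go build ./...
```

There is no artifact to extract; the build succeeding is the entire signal.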
The DevOps agent now has this pattern in its knowledge base. Next time it is asked to set up CI for a Go library, it will not try to shoehorn in multi-stage builds or E2E test infrastructure that does not make sense for the use case.
Related: Claude Code Setup | Docker Golang Tooling