Building otelx with Agent Collaboration

I just finished a milestone with otelx, a Go library for unified OpenTelemetry initialization. What made this project interesting was not just the technical outcome, but how the agent workflow enabled rapid, high-quality development.
I had an old, outdated Go project for handling OpenTelemetry boilerplate that needed a complete rewrite. So I asked Claude to use the agents to explore the latest release of opentelemetry-go and determine what it would take to simplify using otel for observability within a Go project. You can see the initial, high-level plan (more of an outline) for the proposed solution: otelx-project.md. For some reason, though, I had to remind Claude to use the agents. Some days Claude does great and uses the agents without any "encouragement"; other days, like in the image below, I had to remind it. The only difference I can think of is that I switched from Sonnet to Opus 4.5, but I can't imagine why that would make a difference.

The project¶
otelx wraps the OpenTelemetry Go SDK to provide single-function initialization of logging, metrics, and tracing. Instead of configuring each signal separately, you call Initialize() once with functional options:
```go
tel, err := otelx.Initialize(ctx,
	otelx.WithService("my-service", "1.0.0", "production"),
	otelx.WithMetrics(9090),
	otelx.WithTracing(),
)
```
The library itself was straightforward. The interesting part was building two example projects demonstrating different integration patterns: one with embedded telemetry, another using the decorator pattern.
How agents contributed¶
Four specialist agents collaborated on this project:
| Agent | Contribution |
|---|---|
| go-software-agent | Code reviews, bug fixes, decorator refactoring |
| go-architect-agent | Go-specific design decisions |
| documentation-agent | README files, pattern documentation |
| go-devops-agent | Docker and docker-compose fixes |
The workflow was not agents writing code from scratch. Instead, I implemented the initial versions, then delegated reviews and refinements to the specialists.
Iterative review cycles¶
The decorator pattern example went through several review cycles that caught real bugs:
**Double JSON responses:** The initial implementation had the core handler calling c.JSON(), but so did one of the decorators. The agent caught this during review and identified that only the innermost handler should write the response.

**Missing parentheses:** A subtle syntax error in the decorator chain composition that compiled but did not work as expected. The agent's careful review caught what my eyes missed.

**Context propagation:** The tracing decorator was not properly propagating the span context to inner handlers. The agent identified this and showed the correct pattern:
```go
func (th *tracingHandler) GetUUID(c *gin.Context) {
	ctx, span := th.tracer.Start(c.Request.Context(), "generate-uuid")
	defer span.End()
	c.Request = c.Request.WithContext(ctx) // This line was missing
	th.innerHandler.GetUUID(c)
}
```
Splitting the observability handler¶
My original implementation of the gingonic-decorator example had a single observabilityHandler that combined tracing, metrics, and logging. Once it was working the way I wanted, I had the go-software-agent review it and split the single decorator into three separate decorators:
- `tracingHandler`: Creates spans and propagates context
- `metricsHandler`: Records counters and histograms
- `loggingHandler`: Structured logging with trace correlation
This made the example easier to read, and (at least in my opinion) provides a good example of using the decorator pattern in Go.
```go
// Full observability stack
engine.GET("/api/uuid", loggingHandler.GetUUID)

// Just tracing and metrics, no logging
engine.GET("/api/health", metricsHandler.Health)
```
Cross-concern collaboration¶
Different agents handled different aspects without stepping on each other:
The documentation-agent created a consolidated README in examples/ that documents both patterns with a comparison table. This replaced duplicate content that had started appearing in each example's individual README.
The go-devops-agent fixed Docker issues I introduced. The Dockerfile paths were incorrect for the monorepo structure, and the docker-compose networking was not configured correctly, so the OpenTelemetry Collector could not receive traces.
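The networking problem is a common one: inside a compose network, `localhost` refers to each container's own loopback, so the app has to address the collector by its compose service name on the standard OTLP ports. A hypothetical fragment illustrating the shape of such a fix (service names and image tag are illustrative, not the project's actual compose file):

```yaml
services:
  app:
    build: .
    environment:
      # Address the collector by service name, not localhost.
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
    depends_on:
      - otel-collector
  otel-collector:
    image: otel/opentelemetry-collector-contrib
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
```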
What worked well¶
**Human-first implementation:** I wrote the initial code, which meant I understood what I was building. The agents refined it rather than creating something unfamiliar.

**Specialist delegation:** Main Claude coordinated without doing implementation work. When I asked for a code review, it delegated to go-software-agent. When I needed Docker fixes, it delegated to go-devops-agent.

**Iterative feedback:** Multiple review cycles caught progressively subtler issues. The first review found obvious bugs, the second found architectural improvements, the third caught edge cases.

**Separation of concerns:** Different agents owned different concerns. The code agent did not try to write documentation; the docs agent did not try to fix code bugs.
What I learned¶
**Be specific about what you want:** "Review this code" is vague. "Review the decorator pattern implementation for proper context propagation and test coverage" gets better results.

**Let the agent finish its thought:** When an agent identifies a problem, let it propose the solution before jumping in. The context propagation fix came from the agent, not from me interrupting with my own guess.

**Trust the delegation:** When Main Claude says it will delegate to a specialist, let it. The specialists have domain-specific patterns that Main Claude does not.
The outcome¶
The otelx examples now have:
- Clean decorator pattern with three separate, testable decorators
- Working Docker Compose setup with Jaeger, Prometheus, and Grafana
- Comprehensive documentation comparing simple vs decorator patterns
- Consistent code style across both examples
The project is still in "emerging" maturity, but the foundation is solid. The agent workflow made it possible to iterate quickly while maintaining quality - something that usually requires either more time or more reviewers.
Related: OtelX project page | Claude Code Setup