Go 1.20 Coverage Profiling Support for Kubernetes Apps
During the pre-release phase of the new Go release (1.20), one particular feature caught my attention which I immediately tried out. Now, coverage profiles can be generated for programs (binaries), as opposed to just unit tests. This can be useful for debugging and, more importantly, for integration and end-to-end tests during local development and in CI pipelines.
The Go team also created a landing page with FAQs, explaining how to configure and enable this new thingy. In a nutshell, a Go binary compiled with GOFLAGS=-cover and executed with the environment variable GOCOVERDIR=<output_dir> will create a coverage metafile during execution (helpful with restarts) and individual coverage profiles when the application terminates, irrespective of the exit code.
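As a minimal sketch (the binary name and output directory are placeholders I picked for illustration), the workflow looks like this:

```shell
# build the binary with coverage instrumentation (requires Go 1.20+)
GOFLAGS=-cover go build -o myapp .

# the output directory must exist before the program starts
mkdir -p /tmp/coverage

# run the instrumented binary; profiles are written on termination
GOCOVERDIR=/tmp/coverage ./myapp
```

After the process exits, the directory contains binary covmeta.* and covcounters.* files, which can be inspected with the new go tool covdata command (more on that below).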
With the documentation provided by the Go team, I was quickly able to create coverage profiles on my local machine. Success 🥳
However, most of my work these days still happens in (on) Kubernetes, such as writing controllers for the AWS ACK project. And this is where using the new Go feature wasn’t as straightforward.
In a Kubernetes application (Pod), state is ephemeral unless it is written to an external location. I thought about using persistent volumes, but that would have complicated matters: I would have to know when the application had successfully terminated before extracting the data from the persistent volume with a helper application. Alternatively, I could have used a sidecar container running in the application Pod, but again I would have to write coordination logic to know when the application (test) was successful before reading and writing the coverage data to an external location, e.g. NFS.
There must be an easier solution… 🧐
My Development Flow
My typical setup to develop on Kubernetes involves kind (Kubernetes in Docker) to create local Kubernetes clusters and ko to build and deploy container images without having to create and worry about a Dockerfile 🤷
You might be familiar with kind as it’s also often used in CI, such as GitHub Actions. ko’s adoption is growing slowly but steadily, and I can only encourage you to take a look at it as I can’t live without it anymore.
For integration and end-to-end tests I use the excellent and minimalistic E2E Framework from Kubernetes SIG Testing (Special Interest Group), created by my friend Vladimir Vivien. Vladimir wrote a nice article on how easy it is to get started here.
But independent of the framework you use for your integration and end-to-end tests in Kubernetes, so far it wasn’t possible¹ to generate coverage reports for compiled Go applications. With the new 1.20 release of Go and a little bit of Docker and Kubernetes trickery, we now have all the pieces in place to create coverage reports for integration and end-to-end tests 🤩
Creating Coverage Reports in Kubernetes
The following steps describe how to create coverage profiles for Go binaries packaged as containers and deployed to a Kubernetes cluster using kind. To keep things simple and tangible, I’ll use a small Go package I created to interact with the VMware vSphere APIs as an example. You don’t have to be an expert in VMware technology or the package itself, as the focus is on creating coverage reports. The full code, including running end-to-end tests with coverage in GitHub Actions, is available in github.com/embano1/vsphere.
You might be wondering why I wrote E2E tests for this package since I have good unit test coverage already. Well, unit tests with client/server API mocks can only get us so far. Deploying and verifying an end-to-end setup in a container environment, such as Kubernetes, makes me more confident that users of my package won’t face the typical “works on my machine” issues, such as network permissions, authentication (secrets), keep-alives, etc. - while also avoiding writing and maintaining brittle mocks.
The simple E2E test used for our coverage example creates a vSphere server (simulator) and a client application deployed as a Kubernetes Job to perform a login using the vsphere package. Once the client has successfully connected, it will exit with code 0 and the Job moves to the completed state (which the test suite asserts). The full code is in the test folder.
As described earlier, the tricky part is getting the coverage data out of the Kubernetes test application. Luckily, with kind we can use Docker volumes to mount a folder from the host, such as my MacBook or a GitHub Actions runner, into a Kubernetes worker node, i.e. a container created by kind. Then we can mount that volume into a Kubernetes Job, i.e. our test application, using the Kubernetes HostPathVolumeSource. If you feel like being right inside the movie “Inception”, you are not alone…
Note that this mounting trick only works with local clusters created by kind. For remote Kubernetes clusters, you would have to use a persistent volume and some coordination logic. I’ll leave this one up to you, smart reader 😜
Step by Step
First we need to check out the example source code and create a folder where the final coverage data will be stored:
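If you want to follow along, the setup would look roughly like this (the folder name is my choice for this example):

```shell
# get the example code
git clone https://github.com/embano1/vsphere.git
cd vsphere

# folder on the host where the coverage files will eventually show up
mkdir -p coverage
```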
Next, we’ll create a Kubernetes cluster with a kind configuration file mapping our local coverage folder to the worker node:
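A sketch of such a kind configuration file (the file name and paths are examples; the hostPath must be an absolute path on your machine):

```yaml
# kind.yaml: map the local coverage folder into the worker node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    extraMounts:
      - hostPath: /path/to/vsphere/coverage # folder on your machine
        containerPath: /coverage            # path inside the kind worker node
```

The cluster is then created with kind create cluster --config kind.yaml.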
With a Kubernetes cluster running, we can compile the test application binary with coverage enabled, create a container image and upload it into the Kubernetes cluster. Sounds complicated? Meet ko!
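A sketch of the build step, assuming the test client’s main package lives at ./test/client (the package path is illustrative):

```shell
# instruct the Go toolchain (invoked by ko) to instrument the binary
export GOFLAGS=-cover

# kind.local tells ko to load the image directly into the kind cluster
export KO_DOCKER_REPO=kind.local
ko build ./test/client
```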
Next, we need to instruct the Kubernetes E2E test suite to create a Job for the test client which uses the container image created above, mounts the coverage volume and sets GOCOVERDIR accordingly. The relevant lines live in the E2E function which creates the test client Job.
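The test suite builds the Job programmatically with the E2E Framework; expressed as a plain manifest, the important parts look roughly like this (all names are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: test-client
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: client
          image: kind.local/client # image built and loaded by ko
          env:
            - name: GOCOVERDIR # where the instrumented binary writes its profiles
              value: /coverage
          volumeMounts:
            - name: coverage
              mountPath: /coverage
      volumes:
        - name: coverage
          hostPath:
            path: /coverage # the extraMounts target inside the kind worker node
            type: Directory
```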
Now we can run our E2E tests as usual:
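For the example repository this boils down to a standard go test invocation against the test folder:

```shell
# -count=1 disables test caching so the E2E suite always runs
go test -race -count=1 -v ./test/...
```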
Let’s check if we got some coverage data…
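The mounted folder on the host should now contain the binary coverage files (hashes, PIDs and timestamps will differ):

```shell
ls coverage/
# files along the lines of covmeta.<hash> and covcounters.<hash>.<pid>.<timestamp>
```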
Eureka! But wait, we’re not done yet. Since these are binary files, we need to create a human-readable coverage report.
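Go 1.20 ships the covdata tool for exactly this purpose:

```shell
# quick per-package summary printed to the terminal
go tool covdata percent -i=coverage

# convert the binary format into the classic text format…
go tool covdata textfmt -i=coverage -o=coverage.txt

# …which the existing cover tool renders as a browsable HTML report
go tool cover -html=coverage.txt -o coverage.html
```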
This is a damn cool new feature if you ask me! In fact, it spotted an error in one of the AWS ACK controllers I’m currently writing, where the E2E tests showed green but a critical code path was never executed. I wouldn’t have caught this without being able to inspect the HTML coverage report for the controller binary, now available with Go 1.20.
Bonus: since we’re using tools available in many CI environments, such as GitHub Actions, it’s super easy to create these coverage reports on pull requests and upload the coverage files to the CI check summary page.
Here’s an example from the vsphere repository.
If you enjoyed this post, share it with your friends, and hit me up on Twitter.
Credits
Photo by ShareGrid on Unsplash
¹ Well, of course Filippo Valsorda got it working even before 1.20…