Jun 24, 2025
5 min read

From CLI Tool to Cloud-Native Service: A Platform Engineering Journey

As software engineers, we often build useful command-line interface (CLI) tools to solve specific problems. But how do we take a simple tool and elevate it into a reliable, automated, and secure service fit for a modern cloud environment? This was the exact challenge I tackled with my cost-tracker project, a Go-based tool for monitoring AWS expenses.

This journey from a manual CLI tool to a scheduled, cloud-native service is a perfect showcase of the skills and mindset at the heart of Platform Engineering and Site Reliability Engineering (SRE). Here are the key learnings and takeaways from that process.

Step 1: The Foundation - Repeatable Builds with CI/CD

Before you can run a service, you need a reliable way to build and package it. The first step was to stop building the binary on my local machine and automate the process.

Containerization with Docker: I created a multi-stage Dockerfile. This is a crucial best practice for compiled languages like Go. The first stage uses the full Go toolchain to build the application, and the second, final stage copies just the single, compiled binary into a minimal alpine image. This results in a tiny, secure production container with a minimal attack surface.
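
Here is a minimal sketch of that kind of Dockerfile. The Go version, Alpine tag, and build path are illustrative, not the exact ones from the repo:

    # Stage 1: build the binary with the full Go toolchain
    FROM golang:1.22 AS builder
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    # CGO disabled so the static binary runs on a minimal base image
    RUN CGO_ENABLED=0 go build -o /cost-tracker .

    # Stage 2: ship only the compiled binary in a small Alpine image
    FROM alpine:3.20
    RUN apk add --no-cache ca-certificates
    COPY --from=builder /cost-tracker /usr/local/bin/cost-tracker
    ENTRYPOINT ["cost-tracker"]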

Automation with GitHub Actions: I built a CI/CD pipeline that triggers on every push to the main branch. It automatically runs the unit tests, and upon success, builds the Docker image and pushes it to the GitHub Container Registry (GHCR).
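
A trimmed sketch of such a workflow; the action versions, Go version, and image tag here are illustrative, not the exact ones from the repo:

    name: ci

    on:
      push:
        branches: [main]

    jobs:
      build-and-push:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          packages: write
        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-go@v5
            with:
              go-version: '1.22'

          - name: Run unit tests
            run: go test ./...

          - name: Log in to GHCR
            uses: docker/login-action@v3
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}

          - name: Build and push image
            uses: docker/build-push-action@v6
            with:
              push: true
              tags: ghcr.io/${{ github.repository }}:latest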

Key Takeaway: Platform engineering begins with automation. A solid CI/CD pipeline ensures that every change is tested and packaged in a consistent and repeatable way, eliminating “it works on my machine” problems and establishing a foundation for reliable deployments.

Step 2: Running on a Schedule with Kubernetes CronJob

With the container image now being published automatically, the next question was how to run it. Since cost-tracker needs to run periodically (e.g., once a day), the perfect tool for the job was a Kubernetes CronJob.

This involved creating a few Kubernetes manifests:

  • ConfigMap: To store non-sensitive configuration like the number of days to look back for costs. This externalizes configuration from the application code.

  • CronJob: To define the schedule (0 2 * * * for 2 AM daily) and specify which container image to run, pointing it at the image published to GHCR. It also links the ConfigMap to the container as environment variables. Both manifests are sketched below.
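
A trimmed sketch of those two manifests; the resource names, environment variable key, and image path are illustrative:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cost-tracker-config
    data:
      COST_LOOKBACK_DAYS: "7"
    ---
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: cost-tracker
    spec:
      schedule: "0 2 * * *"   # 2 AM daily
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: cost-tracker
                  image: ghcr.io/<owner>/cost-tracker:latest
                  # Inject the ConfigMap keys as environment variables
                  envFrom:
                    - configMapRef:
                        name: cost-tracker-config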

Key Takeaway: Thinking about how an application should run is as important as the application itself. Kubernetes provides powerful primitives like CronJob that allow you to manage the lifecycle of an application declaratively.

Step 3: Solving the Secrets Problem with Sealed Secrets

The cost-tracker needs a Slack webhook URL to send notifications. Committing this secret directly into a public Git repository is a major security risk. How do you manage secrets in a repository without exposing them?

This is where I introduced Sealed Secrets, a fantastic tool for GitOps workflows. The workflow looks like this:

  1. A controller running in the cluster holds a private key.

  2. On my local machine, I use the kubeseal CLI to fetch the controller’s public key.

  3. kubeseal encrypts my local secret.yaml file into a new SealedSecret manifest.

  4. This SealedSecret is safe to commit to Git because its data is encrypted.

  5. When I apply the SealedSecret to the cluster, only the controller can decrypt it with its private key, creating the regular Kubernetes Secret that my application can then use.
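
In practice, the local side of that workflow boils down to a few commands. A sketch, with a hypothetical secret name and key (the real webhook URL never touches Git):

    # Create the Secret manifest locally; this file is never committed
    kubectl create secret generic cost-tracker-slack \
      --from-literal=SLACK_WEBHOOK_URL='https://hooks.slack.com/services/...' \
      --dry-run=client -o yaml > secret.yaml

    # Encrypt it against the controller's public key
    kubeseal --format yaml --controller-namespace kube-system \
      < secret.yaml > sealed-secret.yaml

    # The SealedSecret is safe to commit; applying it lets the controller
    # decrypt it into a regular Secret inside the cluster
    kubectl apply -f sealed-secret.yaml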

Key Takeaway: Securely managing secrets is a non-negotiable part of production readiness. Tools like Sealed Secrets enable a true GitOps workflow where your entire application state, including secrets, can be version-controlled and deployed from a Git repository without compromising security.

Step 4: The Reality of Debugging in Kubernetes

Making all of this work was not a straight line. The debugging process was where the most valuable learning occurred. I encountered a classic chain of Kubernetes errors:

  • CrashLoopBackOff: The application was crashing. A quick kubectl logs showed it was an AWS authentication error. The application couldn’t find credentials inside the cluster.

  • ImagePullBackOff / ContainerCreating: Before I could even fix the auth issue, I realized the pod wasn’t even starting. kubectl describe pod revealed the cluster couldn’t pull my container image because its package visibility was set to “private” on GitHub by default.

  • cannot fetch certificate: When setting up Sealed Secrets, the kubeseal CLI couldn’t talk to the controller. A kubectl get pods -n kube-system showed the controller itself hadn’t started correctly.
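
These are the commands that did most of the work, roughly in the order I reached for them (pod names are placeholders):

    # Is the pod starting at all, and what state is it stuck in?
    kubectl get pods -w

    # Events explain ImagePullBackOff, missing ConfigMaps, and similar issues
    kubectl describe pod <pod-name>

    # Application output from the current or previously crashed container
    kubectl logs <pod-name>
    kubectl logs <pod-name> --previous

    # Is the Sealed Secrets controller itself healthy?
    kubectl get pods -n kube-system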

Key Takeaway: Debugging in a distributed system like Kubernetes is a layered process. You learn to use a core set of commands (kubectl logs, kubectl describe, kubectl get pods -w) to peel back the layers and find the root cause, moving from the application itself to its configuration, its packaging, and even the cluster components it depends on.

Conclusion and Next Steps

This project evolved from a simple tool into a robust, automated service. It now has a CI/CD pipeline, runs on a schedule in Kubernetes, and manages its secrets securely. This journey is a microcosm of the work a Platform Engineer does: providing the tools, automation, and infrastructure to run applications reliably and securely.

The next logical step? Solving the AWS authentication issue natively within the cluster using IAM Roles for Service Accounts (IRSA), which will complete the production-readiness story.