04 Oct Enabling developers to deploy cloud infrastructure with GitOps
Typically, developers in a company depend heavily on operators to set up new applications or to create the infrastructure their applications need (databases, load balancers, cloud storage buckets, etc.). This dependence on a dedicated platform team stems either from the restricted set of permissions developers are granted or simply from a lack of knowledge about how to set up infrastructure. This holds for cloud environments as well, even though a database can be created by filling out a web form in the online console. As a result, setting up prototypes and new applications, or extending existing ones, had become time-consuming, and the need for a better solution arose within our company.
We wanted to enable developers to set up new applications and the underlying infrastructure as quickly and with as little effort as possible. There should be no need for lengthy onboarding sessions, knowledge transfers, or dependencies on SREs to learn the internals of cloud services before getting everything up and running. At the same time, we wanted to maintain a certain amount of consistency and control over what the provisioned infrastructure looks like.
The solution we had in mind is as simple as executing a single CLI command for a new service plus one for each infrastructure component the service depends on, setting a few parameters for the service, and triggering a CI/CD pipeline.
Let’s dive deeper into all the moving parts involved to make this solution work.
Config sync
Config sync is a GKE add-on that keeps a Kubernetes cluster in sync with one or more git repositories. Each synced repository contains a set of declarative Kubernetes configs (Kubernetes YAML files). Config sync applies these configs to the cluster and continuously reconciles the resulting resources, so what runs in the cluster always matches what is described in git.
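To make this concrete, here is a minimal sketch of how such a sync could be declared; the repository URL, branch, and directory are hypothetical placeholders, not our actual setup:

```yaml
# Hypothetical RootSync: tells Config Sync which git repository,
# branch, and directory to reconcile against the cluster.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://example.com/infrastructure/deployment.git  # placeholder URL
    branch: main
    dir: /
    auth: none
```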
Kubernetes config connector
Kubernetes config connector (KCC) extends the Kubernetes cluster with a set of custom resources. Each of these resources represents a GCP resource (for example, a Cloud SQL database or a cloud storage bucket). KCC thus lets you manage and deploy GCP resources the same way you manage your ordinary Kubernetes resources (deployments, Kubernetes services, and so on). Combined with the previously introduced config sync, it is now possible to describe ordinary Kubernetes resources as well as cloud resources in a single git repository.
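As a sketch of what this looks like, a Cloud SQL instance expressed as a KCC custom resource could be declared as follows; the name, region, and tier are illustrative, not taken from our blueprints:

```yaml
# Hypothetical example: a Cloud SQL instance described as a Kubernetes
# custom resource, which KCC reconciles into a real GCP resource.
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: my-new-application-db   # illustrative name
spec:
  databaseVersion: POSTGRES_14
  region: europe-west1
  settings:
    tier: db-custom-1-3840
```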
KPT
The final element completing the holy trinity of config management is KPT, a package manager for Kubernetes configuration files. It offers functionality for fetching and updating packages from git repositories, as well as for rendering and transforming packages (sets of k8s YAML files) using containerized functions.
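For a feel of what such a package carries, here is a sketch of a Kptfile whose render pipeline applies setter values from a config.yaml file; the package name is illustrative, and this is only an assumed shape, not our actual blueprint:

```yaml
# Hypothetical Kptfile of a blueprint package. `kpt fn render` runs the
# pipeline below, replacing setter placeholders in the package's manifests
# with the values defined in config.yaml.
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: foundation   # illustrative package name
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.2
      configPath: config.yaml
```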
How everything fits together
- SREs provide application deployment and infrastructure blueprints. These blueprints are KPT packages that developers can later configure according to their needs. The blueprints are pushed to a git repository containing all the packages developers need to deploy their applications (for example, for provisioning databases, ingresses, Artemis, Kafka, …).
- Developers build their services in the corresponding application git repositories.
- Once an application is ready to deploy, developers pick the blueprints they need from the blueprints repository. Each blueprint is checked out with a KPT one-liner and configured via the config.yaml shipped with the KPT package. Finally, the developers push the resulting “application bundle” to the infrastructure source repository.
- To deploy the application with its infrastructure, the deployment pipeline of the infrastructure source repository is triggered from the application repository.
- The deployment pipeline uses KPT to render the KPT packages and their configurations into the final YAML files. In the end, the infrastructure deployment repository contains a declarative description of the complete cloud infrastructure and Kubernetes workloads.
- Config sync and the Kubernetes config connector frequently pull the infrastructure deployment repository, and their reconciliation processes take care of turning the declaratively described desired state into actual k8s workloads and cloud services.
And this is how it looks from a developer’s perspective
The following example shows the steps a developer has to execute in order to deploy a completely new service that needs a relational database, cloud storage, and external ingress to the service:
- Read the documentation of provided blueprints to get an idea of which blueprints are available and which parameters they offer for customization.
- Check out the infrastructure source repository
- Create the application deployment:
kpt pkg get firstname.lastname@example.org:infrastructure/blueprints.git/application-deployments/foundation ./my-new-application
- Set mandatory parameters in the package’s config.yaml like memory and CPU settings for the deployed container:
```yaml
apiVersion: v1
kind: ConfigMap
metadata: # kpt-merge: /setters
  name: setters
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  cpu-limits: 1500m
  cpu-requests: 100m
  memory-limits: 2000Mi
  memory-requests: 2000Mi
  prometheus-enabled: "true"
  probes-base-path: /actuator/health
  probes-port: "8081"
```
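Inside a blueprint's manifests, these setter values are referenced with kpt setter comments. The excerpt below is a hypothetical illustration of how a container's resource section could consume them, not a snippet from our actual blueprints:

```yaml
# Hypothetical excerpt of a blueprint deployment manifest. During
# `kpt fn render`, the apply-setters function rewrites every value
# marked with a kpt-set comment using the data from the setters ConfigMap.
resources:
  limits:
    cpu: 1500m      # kpt-set: ${cpu-limits}
    memory: 2000Mi  # kpt-set: ${memory-limits}
  requests:
    cpu: 100m       # kpt-set: ${cpu-requests}
    memory: 2000Mi  # kpt-set: ${memory-requests}
```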
- Create the database:
kpt pkg get email@example.com:infrastructure/blueprints.git/application-deployments/extensions/db-plugin ./
- Create the cloud storage:
kpt pkg get firstname.lastname@example.org:infrastructure/blueprints.git/application-deployments/extensions/cloudstorage ./
- Create ingress:
kpt pkg get email@example.com:infrastructure/blueprints.git/application-deployments/extensions/ingress ./
And that’s it! After triggering the deployment pipeline, the new application is reachable from the internet and can persist its data in a dedicated database and a cloud storage bucket created exclusively for it.
This was just a rough, high-level overview of our deployment infrastructure. If you are interested in more details, for example how we use this setup to dynamically provision short-lived environments (e.g. just for running tests against feature branches) or how our blueprints work in detail, just leave us a comment and we will be more than happy to address your questions in follow-up blog posts!