Providing sensitive information to applications in Kubernetes environments using init containers



The challenge
We need to pass sensitive information such as secrets, certificates, and API keys to applications that use Kubernetes (Google Kubernetes Engine, in our case) as their orchestration environment. This information needs to be stored in a secure place and must not be committed to any source code repository used for our GitOps approach.


The idea
Instead of using a third-party solution like HashiCorp Vault, we want to use Google Secret Manager as the place to store secret information for all our environments and provide this information to our application deployments during pod startup using an init container.


Secret Manager
Google Cloud Platform provides a managed service to securely store sensitive data and make it accessible using IAM permissions: Secret Manager. As we use GitOps to provision our infrastructure, stages, and applications, we decided to create a separate cloud project inside our organization structure solely for storing this secret information. This project and the information in it are set up and managed manually by our SRE team. More information about Google Secret Manager can be found in the official documentation.


We also have to enable the Secret Manager API in the Google Cloud project used to store secrets, which can be done in the Cloud Console.


Stages and environments
Storing information for multiple stages and environments in one place is achieved by using labels and, optionally, a prefix in the secret name followed by a UUIDv4 string. The process that injects secrets into the application containers then reads all secrets and decides which secret to use. For referencing a secret, we decided to use a URL with a custom scheme:


In our environment, this will be equivalent to a secret that has the following labels:

secret-name: maps-api-key
app: common
stage: stable

The “name” field of the secret itself is not important and is only used inside Secret Manager. We created a convention for the “secret-name” label: it is the only required label, and the init container uses it to identify the secret. This makes it possible to use the same secret name multiple times (which cannot be done with the name property of the secret inside Secret Manager).
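The selection logic the init container applies, reading all visible secrets and matching on the labels above, can be sketched like this. This is an illustrative Go sketch, not the actual secrets-initializer code; the type and function names, and the second stage name used in the demo data, are assumptions.

```go
package main

import "fmt"

// SecretMeta mirrors the metadata the init container reads from Secret
// Manager: the resource name plus its labels. (Illustrative type, not
// the real secrets-initializer implementation.)
type SecretMeta struct {
	Name   string
	Labels map[string]string
}

// pickSecret returns the first secret whose "secret-name" label matches
// the requested name and whose "app" and "stage" labels match the
// current environment. ok is false when nothing matches.
func pickSecret(all []SecretMeta, secretName, app, stage string) (s SecretMeta, ok bool) {
	for _, c := range all {
		if c.Labels["secret-name"] == secretName &&
			c.Labels["app"] == app &&
			c.Labels["stage"] == stage {
			return c, true
		}
	}
	return SecretMeta{}, false
}

func main() {
	// Demo data; the second stage name is made up for illustration.
	secrets := []SecretMeta{
		{Name: "maps-7cbc4be3-dbfd-4024-87cb-dc66fd630876",
			Labels: map[string]string{"secret-name": "maps-api-key", "app": "common", "stage": "stable"}},
		{Name: "maps-e172c8e7-e370-482e-982e-17d19f781f4f",
			Labels: map[string]string{"secret-name": "maps-api-key", "app": "common", "stage": "dev"}},
	}
	if s, ok := pickSecret(secrets, "maps-api-key", "common", "stable"); ok {
		fmt.Println(s.Name) // the stable-stage secret
	}
}
```

Because only the labels matter for selection, the Secret Manager name can stay an opaque, unique string.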


Getting vs. accessing secrets
For IAM permissions, there are two different operations to understand prior to using Secret Manager:

  • Getting a secret (+ Listing): Reading properties and metadata of the secret
  • Accessing a secret: Read the secret value stored in a given version of the secret

This is why all of the service accounts used by application deployments can get and list all secrets, but can only access the secrets we explicitly allow when creating the deployment for each application.


GitOps tasks and the init container
When it comes to GitOps, there are quite a few solutions to choose from. We use a tool called kpt to hydrate a source repository containing the configurations for all stages into one destination repository per stage, which is then ultimately used to configure that stage. This is done by a pipeline script inside the source repository. Our Kubernetes clusters on Google Cloud Platform (GKE) use Config Sync, which keeps the cluster workloads – such as Kubernetes deployments – in sync with the configuration it gets from the destination repository for the corresponding stage.


As we cannot inject secret information during the hydration step (it would then be visible inside the destination repository), we instead run an init container alongside our application containers. It has two ways to provide secrets:

  • Replace sm:// URLs inside application config files
  • Inject environment variables from a secret source file called .env (also defined as sm:// URLs)
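The first option, replacing sm:// references inside config templates, boils down to a substitution pass over each file. A minimal Go sketch, assuming a reference shape like sm://maps-api-key (the exact URL scheme and the function names are assumptions, and the resolver here is a stand-in for the real Secret Manager lookup):

```go
package main

import (
	"fmt"
	"regexp"
)

// smRef matches sm:// references; the exact URL shape is defined by the
// secrets-initializer, this pattern is only an illustrative stand-in.
var smRef = regexp.MustCompile(`sm://[A-Za-z0-9._/?=&-]+`)

// renderTemplate replaces every sm:// reference in a config template
// with the value returned by resolve (which would call Secret Manager
// in the real init container).
func renderTemplate(tmpl string, resolve func(ref string) string) string {
	return smRef.ReplaceAllStringFunc(tmpl, resolve)
}

func main() {
	tmpl := "maps_api_key: sm://maps-api-key\n"
	// Fake resolver standing in for the Secret Manager lookup.
	out := renderTemplate(tmpl, func(ref string) string { return "REDACTED-KEY" })
	fmt.Print(out)
}
```

In the real init container the rendered files are written to a target directory shared with the application container, so the templates with the raw sm:// references never reach it.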

Because file handling inside a Kubernetes cluster can also be done using ConfigMaps, we use this feature to provide the source directory with the application config files and the .env file. Additionally, an empty Kubernetes Secret has to be defined if there is a need to store environment variables securely. We can then reference this Secret for environment variables inside the application container.
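For the second option, the .env source file is just KEY=sm://... pairs that the init container resolves and writes into the (initially empty) Kubernetes Secret. A sketch of the parsing step, with assumed function names and an assumed comment/blank-line convention:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnvFile turns the .env source file (KEY=sm://... lines) into a
// map of variable names to secret references. Blank lines and lines
// starting with # are skipped (an assumed convention). The init
// container would then resolve each reference and store the results
// in the empty Kubernetes Secret.
func parseEnvFile(content string) map[string]string {
	env := make(map[string]string)
	for _, line := range strings.Split(content, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		env[strings.TrimSpace(key)] = strings.TrimSpace(value)
	}
	return env
}

func main() {
	src := "# Maps API key for the website app\nMAPS_SECRET_API_KEY=sm://maps-api-key\n"
	fmt.Println(parseEnvFile(src)["MAPS_SECRET_API_KEY"])
}
```

This is why the Role shown later grants get and update on Secrets: the init container needs to write the resolved values back into the empty Secret.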


For the task of providing secret information during startup of the pods – the last step where we can inject this information if we do not want to implement it in our applications – we use a small init container image running a utility we wrote in Go. It replaces sm:// URLs with the actual secret values right before the application container is started. With this secrets-initializer tool we can handle all kinds of config files we use, as well as environment variables.


Let’s get our hands dirty … demo time!
Let’s assume we have a web application (“website”) and run it in two stages:


The application uses the Google Maps API, and the customer therefore provided two different API keys (one for each stage) to access the Maps API. We want to store these API keys in Secret Manager and provide each of them to the corresponding stage as an environment variable, because our applications read the key from an environment variable called MAPS_SECRET_API_KEY.

Setting up the secrets in Secret Manager
We need to set up two secrets separated only by labels, one for each stage. The sm:// URLs would look like this:



The name properties in Secret Manager could be:



The names themselves are not important for our approach, but the maps prefix will help us set up IAM access permissions for the service accounts running the applications. We use UUIDv4 strings to make each name unique inside Secret Manager.

The API keys themselves will be stored in a so-called “version” inside each secret in Secret Manager.

We are using the gcloud CLI to create the secrets in our Secret Manager project, called secrets-project:


The user running these commands must have the IAM role Secret Manager Admin (roles/secretmanager.admin) in the project to be able to create secrets and add secret versions.

gcloud secrets create maps-7cbc4be3-dbfd-4024-87cb-dc66fd630876 \
  --project=secrets-project \
  --replication-policy=automatic

gcloud secrets create maps-e172c8e7-e370-482e-982e-17d19f781f4f \
  --project=secrets-project \
  --replication-policy=automatic

Next, we will add the actual secret values (the Maps API keys) as versions to the secrets. Let’s assume we have the two API keys in two text files, maps-key-1.txt and maps-key-2.txt, in the current directory:

gcloud secrets versions add maps-7cbc4be3-dbfd-4024-87cb-dc66fd630876 \
  --project=secrets-project \
  --data-file=maps-key-1.txt

gcloud secrets versions add maps-e172c8e7-e370-482e-982e-17d19f781f4f \
  --project=secrets-project \
  --data-file=maps-key-2.txt

Now we can check the secrets in the Google Cloud Console (you might have to select the project at the top of the page).

Setting up Workload Identity and IAM Service Accounts
We need Workload Identity set up in our GKE cluster so that Kubernetes service accounts are mapped to IAM service accounts, providing access to Google resources – in our case, Secret Manager. Setting this up is out of scope for this article, but the Google Cloud documentation describes how to do it. We presume the Kubernetes service account sa-website is correctly mapped to an IAM service account that has the necessary permissions to access the secrets in Secret Manager.

Defining the reference to the secrets inside the Kubernetes YAMLs
As noted earlier, we need at least the following Kubernetes resources for each stage:

  • A ConfigMap that holds the reference to the secrets
  • An empty Secret to provide the secrets to the environment of the application containers
  • A ServiceAccount to run the application
  • Role and RoleBinding for accessing and modifying ConfigMaps and Secrets inside the same namespace
  • A Deployment that makes use of the init container and to spin up the web application


ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: application-config-template
  namespace: website
data:
  .env: |

Empty Secret:

apiVersion: v1
kind: Secret
metadata:
  name: app-env
  namespace: website


ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-website
  namespace: website

Role and RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: website-application-role
  namespace: website
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: website-application-rolebinding
  namespace: website
subjects:
  - kind: ServiceAccount
    name: sa-website
    namespace: website
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: website-application-role




Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
  namespace: website
  labels:
    app: website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      serviceAccountName: sa-website
      volumes:
        - name: config-templates
          configMap:
            name: application-config-template
      initContainers:
        - name: website-init
          imagePullPolicy: Always
          env:
            - name: APP_LOG_LEVEL
              value: DEBUG
            - name: TEMPLATE_DIR
              value: "/data.tmpl"
            - name: ENV_SECRET
              value: "app-env"
          volumeMounts:
            - name: config-templates
              mountPath: "/data.tmpl"
      containers:
        - name: website
          image: debian:11
          command: ["/bin/bash", "-c", "--"]
          args: ["while true; do sleep 30; done;"]
          envFrom:
            - secretRef:
                name: app-env

We added a plain Debian container instead of the actual website container so we can check that everything works as expected and that the environment variable MAPS_SECRET_API_KEY is set to the value we defined in Secret Manager.


An extended example can be found in the examples directory of the git repository of the secrets-initializer.


Once familiar with Secret Manager and all the parts involved, we find this to be a solid solution that minimizes the security risks of providing secret information to application containers.
Assuming a properly secured Kubernetes cluster and cloud project, it should not be possible for attackers to access sensitive information in plain text (for example, by obtaining a command shell inside the deployed containers or by directly accessing Secret Manager data).