Convert Helm-based Kubernetes Development to Kustomize

Background

The Kustomize Guide outlines the steps for converting a service’s main, "static" Kubernetes manifests (stored in the /kubernetes directory) into a set of Kustomize overlays. In addition to that work, there is also a need to transition away from the strictly dev/test-oriented Kubernetes resources specified by each service’s Helm chart and its associated Kubernetes templates.

Maintaining these two distinct but similar sets of Kubernetes resources in parallel introduces the risk that the configurations diverge, which means the "official" Kubernetes resources can never be fully verified by deploying the Helm-based ones. Kustomize provides a unified path forward that enables both cases (and many more) to be specified in a consistent way.

It is highly recommended to check out the guides, presentations, and examples at https://kustomize.io to become acquainted with Kustomize before starting this work, and to consult https://kubectl.docs.kubernetes.io/references/kustomize/ frequently during development for the syntax of the various Kustomize file formats and for informative usage guides.

Goals

The steps described in this guide only apply to migrating from the Helm-based test Kubernetes resources to the dev/test-specific Kustomize overlay located at the standard project path kubernetes/overlays/dev, which defines the Kubernetes configurations for just the service itself, deployable in any namespace to a local Kubernetes cluster.

The intent of defining the dev overlay first for each service is to provide a simple way to deploy a minimally viable set of Kubernetes resources that represent the service itself, and which can be composed with other dev services to create a fully-functional deployment. This composition will be defined in each service’s dev-with-deps overlay and is out of scope here.

Project Setup

The latest ci-file-generator and ckm-deployment-templates versions should be used, as the Kustomize templates they download have undergone important refactoring and cleanup that results in a better experience.

Since these components are strictly only needed to retrieve the Kustomize templates, any updates to the pom.xml can be reverted immediately after initialization.

For jersey-service-parent-based projects, these components are already included in the Maven plugin configuration, so their versions just need to be overridden with the following properties set in the pom.xml:

<project>
    <!-- ... -->
    <properties>
        <ci-file-generator.version>5.0.0.RC1</ci-file-generator.version>
        <ckm-deployment-templates.branch>Release/3.18</ckm-deployment-templates.branch>
    </properties>
    <!-- ... -->
</project>

For mobile-framework-based projects or projects that do not specify a Maven parent, one of the following tiles should be included in the tiles-maven-plugin configuration in the <plugins> section. If the ngss-service-build-conventions-tile is already present, just update its version to 1.2.0; otherwise, add the kubernetes-generator-tile:1.2.0:

<project>
    <build>
        <plugins>
            <plugin>
                <groupId>io.repaint.maven</groupId>
                <artifactId>tiles-maven-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <tiles>
                        <!-- choose one -->
                        <tile>gov.va.mobile.tools.maven:kubernetes-generator-tile:1.2.0</tile>
                        <tile>gov.va.mobile.tools.maven:ngss-service-build-conventions-tile:1.2.0</tile>
                    </tiles>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Initializing the Local Kustomize Development Environment

As in the Kustomize Guide, the first step is to pull down the "starter" Kustomize template files from ckm-deployment-templates and interpolate all variables with actual project values. This is handled automatically by ci-file-generator when running mvn package with -Dkustomize.init (package is required because the generation of Kubernetes and Kustomize files is bound to the Maven package phase):

mvn clean package -Dkustomize.init

This should download the Kustomize templates and have all variables set according to the Maven pom.xml and metadata.yaml values. The following file structure should now exist:

📒 kubernetes
  📂 base (1)
    📄 deployment.yaml
    📄 kustomization.yaml
    📄 service.yaml
  📁 components
  📂 overlays
    📁 deploy
    📂 dev (2)
      📂 patches (3)
        📄 add-image-secrets.yaml
        📄 disable-consul.yaml
        📄 env-vars.env
        📄 update-imagePullPolicy.yaml
      📄 kustomization.yaml (4)
    📁 dev-with-deps
    📁 jenkins-staging
  📁 patches-optional
  📄 deployment.yaml (5)
  📄 horizontalpodautoscaler.yaml
  📄 service.yaml (6)
1 The base Kustomization which defines the Kubernetes manifests with common or standard configs that are transformed in other Kustomize overlays based on usage and/or environment. These can be updated if necessary.
2 The dev Kustomize overlay which defines the updates needed to the base resources in order to be deployed to the local Kubernetes cluster
3 Kustomize patch files which are transformation specifications that create or update any resources needed
4 The main configuration file for this overlay, which specifies which patches are included and any other transformations to apply to the base resources
5 Existing Kubernetes Deployment manifest that will not be updated but will be consulted in order to replicate its configuration as much as possible
6 Existing Kubernetes Service manifest that will not be updated but will be consulted in order to replicate its configuration as much as possible

Kubernetes Cluster Setup

Since we only need to verify that the service can be deployed successfully to the local cluster, the default namespace can be used, though using a different namespace is a matter of preference. If you do use one, it will need to be created first and every subsequent kubectl command will need to include -n <namespace>.

To create a new namespace, run:

kubectl create namespace <namespace>
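
Alternatively, rather than including -n on every command, the namespace can be made the default for the current kubectl context:

kubectl config set-context --current --namespace=<namespace>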

In order to be able to pull any images needed from the Sandbox DTR, a Kubernetes Secret needs to be set up before any deployments occur. This can be done in two ways: via the kubectl CLI or a .dockerconfigjson file with hard-coded credentials.

The first (more secure) method is via the kubectl CLI:

kubectl create secret docker-registry dtrsecret \
  --docker-server=https://dtr.mapsandbox.net --docker-username="$DTR_USER" \
  --docker-password="$DTR_PWD" [-n <namespace>]

This uses the DTR_USER and DTR_PWD environment variables to create the dtrsecret used to pull images from the Sandbox DTR (include -n <namespace> if you are not using the default namespace).
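
You can confirm the secret exists with:

kubectl get secret dtrsecret [-n <namespace>]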

The second method is via a Kustomize component that is included in dev/kustomization.yaml. This requires hard-coding the DTR username and password in the kubernetes/components/local-secret/secrets/.dockerconfigjson file like so:

.dockerconfigjson
{
  "auths": {
    "https://dtr.mapsandbox.net": {
      "username": "my_username",
      "password": "my_password"
    }
  }
}

To include the local-secret component, add it to the dev/kustomization.yaml components field:

dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

components:
  - ../../components/configmap-env-vars
  - ../../components/local-secret (1)
1 Configures the dtrsecret from your DTR credentials

Example Kustomize Development Workflow

Concepts

Now that the Kustomize starter files have been downloaded, they should provide an easier starting point from which to make changes.

The main goal is to recreate, as closely as is feasible, the Deployment and Service configurations from the existing Kubernetes YAML manifests at the top level of the kubernetes directory, minus any environment-specific configuration.

For example, given the following deployment.yaml,

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: $NAMESPACE
  labels:
    service: vista-500-v1
  name: vista-500-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      service: vista-500-v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        service: vista-500-v1
    spec:
      containers:
      - env:
        - name: SITE
          value: "500"
        - name: CACHE_KEY
          value: W0NvbmZpZ0ZpbGVdCkZp...
        - name: NAMESPACE
          value: $NAMESPACE
        image: $DTR_URL/ckm/vista:$VERSION
        imagePullPolicy: Always
        name: vista-500-v1
        ports:
        - containerPort: 9430
          protocol: TCP
        - containerPort: 9081
          protocol: TCP
        - containerPort: 8001
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 500m
            memory: 512Mi
        readinessProbe:
          httpGet:
            path: /vpr?dfn=1&domain=patient
            port: 9081
          initialDelaySeconds: 180
          periodSeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          httpGet:
            path: /vpr?dfn=1&domain=patient
            port: 9081
          initialDelaySeconds: 180
          periodSeconds: 10
          timeoutSeconds: 5
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: $IMAGE_PULL_SECRET
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30

…we know that the environment variable references (e.g., $DTR_URL, $VERSION) should be discarded in favor of concrete values, since Kustomize’s entire raison d’être is to be a "template-free" Kubernetes configuration tool (in contrast to a tool like Helm).

Therefore, each property that contains a template variable reference should be refactored to a "reasonable default" in the base Kustomization, with any variations specified in the corresponding overlay via patches and other transformations. In the downloaded templates, these defaults are set by hard-coding the DTR URL to dtr.mapsandbox.net, since this is the DTR used for local and sandbox deployments, and by replacing the $VERSION variable with the actual version set in the Maven pom.xml for this project.

Since namespace is a strictly environment-oriented property, any namespace: entries should be removed, especially because the target namespace can easily be provided via the kubectl CLI.
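
If a particular overlay ever does need the namespace baked into the rendered resources, Kustomize’s built-in namespace field can add it to every resource in that overlay. A minimal sketch (my-namespace is a placeholder value):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Sets metadata.namespace on every resource rendered by this overlay
namespace: my-namespace

resources:
  - ../../base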

Generate "First-Pass" Render of the base Kustomization

To be able to compare what needs to be updated in the base Kustomization, the following command can be executed to generate a YAML file that contains all Kubernetes resources defined in kubernetes/base:

kustomize build kubernetes/base > base.yaml

In our example case, the result of the Deployment generation is

base.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vista
    service: vista
  name: vista
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vista
      service: vista
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: vista
        service: vista
        version: 1.13.2
    spec:
      containers:
      - image: dtr.mapsandbox.net/ckm/vista:1.13.2
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 0
          periodSeconds: 20
          tcpSocket:
            port: 8080
          timeoutSeconds: 5
        name: vista
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: ${basePath}/system/health/readiness
            port: 8080
          initialDelaySeconds: 0
          periodSeconds: 10
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 512Mi
        startupProbe:
          failureThreshold: 300
          initialDelaySeconds: 0
          periodSeconds: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 1
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30

Taking the diff between the existing kubernetes/deployment.yaml and base.yaml (with the rendered Service removed) highlights what still needs to change in the base Kustomization.

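One straightforward way to produce such a diff, assuming the Service has been removed from base.yaml (or rendered to a separate file), is a plain file comparison:

diff kubernetes/deployment.yaml base.yaml
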
Here are a few of the differences and what needs to be done to handle them:

name
  Original value: vista-500-v1
  Kustomized value: vista
  Notes: Since the template used the Maven parent project name, vista, this is what is used for the name, instead of the module names. This needs to be corrected in base/deployment.yaml to vista-500 (we no longer use the v1 suffix for deployments).

namespace
  Original value: $NAMESPACE
  Kustomized value: not present
  Notes: As mentioned above, the namespace to be deployed to is more easily managed (at least in this case) by setting it via the CLI. If a specific environment or use case requires this field directly in the resource, it can easily be added as a patch in the Kustomization.

image
  Original value: $DTR_URL/ckm/vista:$VERSION
  Kustomized value: dtr.mapsandbox.net/ckm/vista:1.13.2
  Notes: This is correct, since the base Kustomization sets the "reasonable default" to the Sandbox DTR and the image version to that of the Maven pom.xml.

env
  Original value:
    - name: SITE
      value: "500"
    - name: CACHE_KEY
      value: ...
    - name: NAMESPACE
      value: $NAMESPACE
  Kustomized value: not present
  Notes: These environment variables can be set via a Kustomize component like components/configmap-env-vars, which populates the env field with ConfigMap-based variables.

ports
  Original value:
    - containerPort: 9430
      protocol: TCP
    - containerPort: 9081
      protocol: TCP
    - containerPort: 8001
      protocol: TCP
  Kustomized value:
    - containerPort: 8080
      protocol: TCP
  Notes: Since the default port for services is 8080 and vista's ports need to be set to something different, this field should be updated to align with vista.

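For example, after correcting the name and ports, the relevant parts of base/deployment.yaml might look roughly like the excerpt below (a sketch only; the exact fields that need updating will vary by service):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vista-500
spec:
  template:
    spec:
      containers:
      - name: vista
        ports:
        - containerPort: 9430
          protocol: TCP
        - containerPort: 9081
          protocol: TCP
        - containerPort: 8001
          protocol: TCP
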
Repeat this exercise for the Service resource (i.e., comparing kubernetes/service.yaml and the result of the Kustomize render of kubernetes/base/service.yaml), and make changes to the Kustomize base resources to make them as close as possible (minus the purposefully-omitted fields described above).

Kustomize "Overlay" Design

The dev/kustomization.yaml defines the Kustomization that is used mainly for local development, and inherits the base Kustomization via the resources field:

dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

Kustomize first collects the resources specified in base/kustomization.yaml, then applies the transformations specified in each inheriting kustomization.yaml, such as dev's. It is therefore possible to have multiple stages or layers of Kustomization, hence the name "overlay". However, the more layers there are, the more complex the model becomes and the harder it is to understand and debug, so the preference is for one or two layers of Kustomization.

Generate and Compare Render of the dev Overlay

Like with the base Kustomization above, run:

kustomize build kubernetes/overlays/dev > dev.yaml

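The base and dev renders can then be compared directly, for example with a plain diff:

diff base.yaml dev.yaml
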
Here are the differences between the base.yaml and dev.yaml, outside of the additional ConfigMaps created for use in setting the environment variables:

The container's env field in dev.yaml is now populated with several environment variables, and an envFrom entry pulls in additional variables from a generated ConfigMap:

dev.yaml
- env:
  - name: USE_ENVCONSUL
    value: "false"
  - name: NAMESPACE
    value: dummy
  envFrom:
  - configMapRef:
      name: vista-env-9t552gdfh4

NAMESPACE is set to dummy because some versions of the image explicitly check that it exists and exit otherwise.

The "hard-coded" variables are a result of applying the disable-consul.yaml patch in dev/patches:

dev/patches/disable-consul.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vista
spec:
  template:
    spec:
      containers:
        - name: vista
          env:
            - name: USE_ENVCONSUL
              value: "false"
            - name: NAMESPACE
              value: dummy # workaround for java-service-base check

and the configMapRef field references this ConfigMap, which is generated from dev/kustomization.yaml and the associated dev/patches/env-vars.env file:

dev/kustomization.yaml
# Additional environment variables if needed
configMapGenerator:
  - name: vista-env
    behavior: merge
    envs:
      - patches/env-vars.env
  - name: consul-vault-urls #TODO This is a workaround due to the hard-coded java-service-base startup script check for consul
    literals:
      - CONSUL_HTTP_ADDR=consul:8500
      - CONSUL_URL=http://consul:8500

dev/patches/env-vars.env
JWT_PUBLIC_KEY=MIIBIjANBg...

Any "development-only" environment variables should be set here. Any other variables needed by the service to run in every environment should be set in the base/deployment.yaml file.
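
As an illustration, a variable required in every environment would be added directly to the container in base/deployment.yaml rather than to a dev patch (SOME_REQUIRED_SETTING is a hypothetical name):

      containers:
      - name: vista
        env:
        - name: SOME_REQUIRED_SETTING
          value: "enabled"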

The other notable difference is that the imagePullSecrets field was added in dev.yaml with the value dtrsecret:

dev.yaml
imagePullSecrets:
- name: dtrsecret

This makes it possible to download images from the DTR when necessary, using the credentials specified in the Kubernetes Cluster Setup.

Testing and Verifying the dev Overlay

To test your changes and verify that the deployment and service resources are correct, first deploy the dev Kustomization:

kubectl apply -k kubernetes/overlays/dev [-n <namespace>]

You should see the list of all Kubernetes resources as they are created or updated on your local cluster.

Then, using a tool like octant or the Kubernetes Dashboard, navigate to the applicable namespace. Monitor the resources as they start up, looking for any errors such as ImagePullBackOff or application errors in the container logs.
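
If a dashboard is not available, the same checks can be made from the CLI (add -n <namespace> as needed):

kubectl get pods -w
kubectl describe pod <pod-name>
kubectl logs -f <pod-name>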

If the deployment becomes "green" (Ready) and the container log(s) indicate that the application is running correctly, then the dev overlay can be considered operational.
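
Once verification is complete, the resources created by the overlay can be removed the same way they were applied:

kubectl delete -k kubernetes/overlays/dev [-n <namespace>]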