Continuous Deployment II

The next step is to update our previously deployed application in ArgoCD, which currently reflects the initial setup. We want the deployed application to reflect the modifications to storage capacities and the persistence configuration that we made in the Kubernetes/Storage section.

Thus, let’s update our subfolder todolist-app to include the corresponding changes.

Once again, you can either use the Web IDE or modify the files locally.

Make sure to replace the contents of postgres.yaml with the following, which has been extended with persistent-storage functionality:

Solution

postgres.yaml should contain the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresdb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgresdb
        tier: database
    spec:
      volumes:
        - name: db-data
          persistentVolumeClaim:
            claimName: postgres-db-data
      containers:
        - image: postgres
          name: postgresdb
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: db-security
                  key: db.user.name
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-security
                  key: db.user.password
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres.db.name
          volumeMounts:
            - name: db-data
              mountPath: /var/lib/postgresql/data
              subPath: postgresdb
        - image: postgres
          name: pg-dump
          command:
            - bash
            - -c
            - while sleep 1h; do pg_dump --host=127.0.0.1 --username=$POSTGRES_USER --dbname=$POSTGRES_DB --file=/pg_dump/$(date +%s).sql; done
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: db-security
                  key: db.user.name
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres.db.name
          volumeMounts:
            - name: db-data
              mountPath: /pg_dump
              subPath: pg-dump
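The second container above is a backup sidecar: once per hour it runs pg_dump against the database over localhost and writes a timestamped SQL file to the shared volume. Here is a standalone sketch of how the dump filename and command are assembled — the user and database values are hypothetical placeholders, not the ones your Secret and ConfigMap actually provide:

```shell
# Hypothetical values for illustration only -- in the Pod these come
# from the db-security Secret and the postgres-config ConfigMap.
POSTGRES_USER="todouser"
POSTGRES_DB="tododb"

# One timestamped dump file per iteration, e.g. /pg_dump/1700000000.sql
dump_file="/pg_dump/$(date +%s).sql"

# The sidecar runs the real pg_dump here; we just print the command.
echo "pg_dump --host=127.0.0.1 --username=${POSTGRES_USER} --dbname=${POSTGRES_DB} --file=${dump_file}"
```

In the real container this call sits inside a `while sleep 1h; do …; done` loop, so a fresh dump lands on the persistent volume every hour.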

Also make sure to add a new file postgres-pvc.yaml containing the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

But let’s carefully consider what this change will trigger: adjusting the database specification will cause the database Pod to restart. Do we have everything in place for our application stack to handle such a database restart?

When examining the Kubernetes Runtime we learned about liveness checks, so let’s make sure our todobackend has such a livenessProbe defined. Check whether the file todobackend.yaml contains the required definition, and add it if it is missing:

livenessProbe:
  httpGet:
    path: /todos/
    port: 8080
  initialDelaySeconds: 30
  timeoutSeconds: 1
  periodSeconds: 10
  failureThreshold: 3

Yes, the correct amount of whitespace really matters, as does the placement of this block: it belongs inside the todobackend container entry under spec.containers, with livenessProbe aligned with the container’s name and image keys.
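To make the placement concrete, here is a sketch of the relevant part of todobackend.yaml (container name assumed, image elided — align this with your actual file):

```yaml
spec:
  containers:
    - name: todobackend    # assumed container name -- check your file
      image: ...           # unchanged
      livenessProbe:       # aligned with name and image
        httpGet:
          path: /todos/
          port: 8080
        initialDelaySeconds: 30
        timeoutSeconds: 1
        periodSeconds: 10
        failureThreshold: 3
```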

UI namespace filters

At this point, quite a few applications have most likely been deployed to the current project, so finding your application can be tedious. Luckily, the sidebar offers an option to filter the visible applications by the namespace they are in.

Go ahead and enter your application’s namespace in the namespace filter.

Now you will see only the applications that you have created and are responsible for.

Auto-Sync vs Manual Pull policy

Taking a look at our application in ArgoCD, you will notice that its status now shows as “out of sync”. (If it doesn’t yet, just manually trigger a Refresh.) Since we have pushed changes to the repository that are not yet present in the deployed version of the application, ArgoCD notices the discrepancy and displays the application as out of sync.

Why does it not continuously deploy the new status of our application?

That is because when we deployed our application, we opted for manual syncing. Thus, ArgoCD will notice and inform you that an application is out of sync with the changes made to the repo, but leaves it up to you to decide when to update it.

When the Auto-Sync option is enabled in the application configuration, ArgoCD will, as the name implies, automatically synchronize and update the deployed application whenever changes are made to the repository.
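Auto-Sync can also be expressed declaratively in the ArgoCD Application manifest via a syncPolicy. Since our application was created through the UI, treat this as the equivalent configuration rather than something you need to apply:

```yaml
# Excerpt of an ArgoCD Application spec with automated syncing enabled
spec:
  syncPolicy:
    automated:
      prune: false     # do not delete resources that were removed from Git
      selfHeal: false  # do not revert manual changes made in the cluster
```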

Let’s test the Auto-Sync option: open your application, go to App Details in the top-left corner, then scroll down and enable Auto-Sync.

Now you should see the application in ArgoCD change its status to Syncing, and after a short time it should change to Healthy as well.

What has changed

Let’s take a closer look at the newly deployed application and the differences compared to the initial setup. Open the details of the deployed storage application to view its components and try to spot which ones are new or different.

You should notice that the storage setup includes a new component: postgres-db-data of type pvc (persistent volume claim).

Add Ingress capabilities

Let’s make a further change to the deployed application. In this section of the exercise, our goal is to extend the application with Ingress networking capabilities, as covered in the corresponding Kubernetes chapter.

As you might have noticed, the todoui-service in the todolist-initial and todolist-storage setups was changed from LoadBalancer to ClusterIP, meaning it is no longer directly exposed and we currently have no way of accessing the UI. To change that, we want to modify the application once more and expose the UI via an Ingress.
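For reference, the service we will point the Ingress at would look roughly like this — the name and port are taken from the Ingress backend used later in this section, so verify them against the actual service definition in your repository:

```yaml
# Assumed shape of the todoui service -- only spec.type is the
# relevant point here; check your repository for the real definition.
apiVersion: v1
kind: Service
metadata:
  name: todoui
spec:
  type: ClusterIP   # previously LoadBalancer, so no external exposure
  selector:
    app: todoui
  ports:
    - port: 8090
```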

In our repository, create a new YAML file todoui-ingress.yaml within the todolist-app directory and try to fill it in yourself to expose the todoui service using an Ingress.

Again, you can either use the Web IDE or the VM’s terminal.

Solution

Here is a possible solution for an Ingress YAML. Make sure to fill in the blanks (----) with the correct name (e.g. studentX-todoui) and host definition. If you don’t have your own domain available for configuring DNS (and most probably you don’t), just make use of nip.io’s wildcard DNS: determine the external IP address of the Ingress Controller pre-deployed to the Kubernetes cluster (or simply ask your instructor), and configure the host as studentX.<ingress_ip>.nip.io (making sure to replace studentX with your student id).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ------
spec:
  ingressClassName: nginx
  rules:
    - host: ---------------
      http:
        paths:
          - backend:
              service:
                name: todoui
                port:
                  number: 8090
            path: /
            pathType: Prefix

Commit and push your changes to your fork. You should soon see the todolist-app application display an “out of sync” status.

If the application is still set to Auto-Sync it should start the sync process right away. Otherwise, sync it manually once more and wait until the sync finishes.

After some time you should see that the application has been extended with the Ingress component.

The hyperlink icon within that component should get you to the UI. It is the very same host that you configured in your Ingress resource, of course.

Congratulations! You’ve successfully deployed your application and made it accessible from the outside world.