
VOLUMES

In this Lab we are deploying a slightly different version of the guestbook application: we add configuration to the app, and we persist its data on a Persistent Volume provided by AKS.

What you will learn:

  • Repetition: Deployment of an application
  • Repetition: Expose Deployment via Load Balancer Service
  • Create Namespaces
  • Add configuration by Environment Variables
  • Mount Volumes into Containers
  • Create Persistent Volume Claims
  • Use Storage Classes
  • Dynamic Provisioning
  • Migrate the Deployment to a Stateful Set


1. Create new Namespace

Create a new namespace volumes.

HINT

$ kubectl create namespace volumes


Check the result by listing all namespaces. Also list all objects within the newly created namespace - is it empty?

HINT

1. List all namespaces:

Solution

$ kubectl get namespaces
$ kubectl get ns


2. Get all objects in the volumes namespace:
Solution

$ kubectl --namespace volumes get all
$ kubectl -n volumes get all



Going forward, pay attention to always specify the correct namespace in your commands. We want to create all following objects within the volumes namespace!

Bonus task: Find a way to set the volumes namespace as default in the kube-config.
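One possible solution for the bonus task, using standard kubectl commands (the second command only verifies the change in your kube-config):

$ kubectl config set-context --current --namespace=volumes
$ kubectl config view --minify | grep namespace: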



2. Add Environment Variables

Again, we will deploy the guestbook application. This time, configure the Deployment by adding the following Environment Variables to the example-application container:

TP_BACKEND = json
TP_CONFIGURATION_JSON_PATH = messages
TP_CONFIGURATION_JSON_FILE = messages.json
HINT

1. Add the entries under spec.template.spec.containers[0].env
2. One entry should look like this:

Solution

env:
  - name: TP_BACKEND
    value: json


- Alternatively, take 7 - Volumes/example-app-deploy.yaml as a reference.
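Putting all three variables together, the complete env section could look like this (a sketch; the surrounding container spec is omitted):

env:
  - name: TP_BACKEND
    value: json
  - name: TP_CONFIGURATION_JSON_PATH
    value: messages
  - name: TP_CONFIGURATION_JSON_FILE
    value: messages.json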


One more time, create the Load Balancer Service within the volumes namespace, in order to access the guestbook application.

HINT

1. If you created a Blueprint in the second chapter, you can re-use it. Make sure to define the volumes namespace this time:

Solution

$ kubectl -n volumes create -f [name-of-blueprint-file]


2. If not, you could also copy the content of the service from the default namespace into a new blueprint file through the output command:
Solution

$ kubectl -n default get service [name-of-service] -o yaml > [name-of-new-blueprint-file]


Now that we have configured the guestbook app to write new entries into a directory inside the container, discuss whether the data is now persisted. What would happen to the messages if we deleted the pod again?
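You can test this yourself by deleting the Pod and letting the Deployment recreate it (a sketch; insert the actual Pod name from the first command):

$ kubectl -n volumes get pods
$ kubectl -n volumes delete pod [name-of-pod]
$ kubectl -n volumes get pods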



3. Bind Volumes

As you could see, the guestbook entries still weren’t persisted. When the pod dies, all data written into the container is lost.

Therefore we will now bind a Persistent Volume to our Deployment. The messages will be written to this volume, which is backed by a storage provider. As our cluster is hosted on AKS, the PVs will be provisioned by Microsoft Azure within the AKS cluster. Take a look at the Microsoft Documentation for storage options.

We choose to provision PVs automatically: as soon as a Pod is created, it claims storage for the application it is hosting. This is handled by a Persistent Volume Claim that is bound to the Pod. The PVC references the defined Storage Class, which then provisions a Volume dynamically.

As a first step, inspect all Storage Classes that are already available in your cluster.

HINT

$ kubectl get storageclasses
$ kubectl get sc


We choose azurefile-csi!

Now, create the Persistent Volume Claim that will connect to the azurefile-csi Storage Class, which is already configured by AKS to provide Persistent Volumes for us as we need them.

name: entries-pvc
storageClassName: azurefile-csi
accessModes: ReadWriteOnce
storage: 100Mi
HINT

- Take the blueprint from the Kubernetes Documentation and edit the specs
- Also check the Microsoft Documentation for dynamic Azure Files storage
- Take 7 - Volumes/entries-pvc.yaml as a reference.
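A matching manifest could look like this (a sketch built from the specs above; apply it with kubectl -n volumes create -f [file-name]):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: entries-pvc
spec:
  storageClassName: azurefile-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi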


Finally, configure the Deployment of our guestbook application to make use of the Persistent Volume Claim. That way a Persistent Volume dynamically gets provisioned and bound to the Persistent Volume Claim. Our application can now make use of the external storage from AKS.

So, let us add the volumes and volumeMounts sections to the Deployment spec at the right positions:

volumes:
  - name: entries-storage
    persistentVolumeClaim:
      claimName: entries-pvc
volumeMounts:
  - name: entries-storage
    mountPath: /app/k8s-example-app/messages/
HINT

1. You may edit the blueprint file and apply the changes.

Solution

$ vi [name-of-deployment-blueprint]
$ kubectl -n volumes create -f [name-of-deployment-blueprint]


2. It's also possible to edit the active Deployment directly:
Solution

$ kubectl -n volumes edit deployment example-app


- See the structure of the Pod definition in this example
- Take 7 - Volumes/example-app-env-deploy.yaml as a reference
3. The volumes specs should look like this:
Solution

spec:
  volumes:
    - name: entries-storage
      persistentVolumeClaim:
        claimName: entries-pvc
  containers:
    - image: ghcr.io/thinkportrepo/k8s-example-app:latest
      name: k8s-example-app
      volumeMounts:
        - mountPath: /app/k8s-example-app/messages/
          name: entries-storage



4. Access Application

Let us now access the guestbook application. Again, create some entries and see whether they are listed correctly.

Discuss the following topics with a partner:

  • What would happen to the messages, when the Pod gets deleted and recreated now?
  • What would happen to the messages, when we scale up the Deployment to 2 or more replicas? What happens, when we create more entries now?
  • What if we change the accessMode to ReadWriteMany instead of RWO?
HINT

- Lookup Stateful Sets in the Kubernetes Documentation
- Lookup Access Modes in the Kubernetes Documentation
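For the second question, you can try it out by scaling imperatively (assuming the Deployment is named example-app, as in the edit command above):

$ kubectl -n volumes scale deployment example-app --replicas=2
$ kubectl -n volumes get pods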



5. Scale Application

We want to scale the guestbook application. As discussed in the last chapter, it makes sense to change our setup from a Deployment to a Stateful Set. That way we avoid the different instances interfering with each other on one single volume.

Replace the application Deployment and instead create a Stateful Set with the same specs as our Deployment before. Except now we want to scale it to 2 Replicas from the start!

HINT 1

- A Stateful Set is very similar to a Deployment. Therefore you can use your blueprint from the Deployment and simply switch the kind from Deployment to StatefulSet


HINT 2

- Remove the spec.strategy key-value pairs that were created automatically by the imperative command


HINT 3

- Another thing you have to add to the Stateful Set definition is the spec.serviceName. It should point to our LoadBalancer Service example-svc-lb


Solution

- Take 7 - Volumes/example-app-env-sts.yaml as a reference
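- The top of the definition could then look like this (a sketch showing only the keys that differ from the Deployment; the metadata name and labels are assumptions, keep your own):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-app
spec:
  serviceName: example-svc-lb
  replicas: 2
  selector:
    matchLabels:
      app: example-app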


Once you have deployed the Stateful Set, find out if everything is still running (check Pods, Services, Endpoints, …).

Also check the application in your browser and make some entries, refresh the page, delete the pods, etc. There still seems to be an issue with the different replicas accessing only one volume.

Look at the Kubernetes Documentation Example of a Stateful Set.

We will replace our PVC with the spec.volumeClaimTemplates section within the Stateful Set definition. That way, the Stateful Set makes sure to create an individual PVC for each replica.

Your task is to move the information of the PVC into the Stateful Set at the right position.

HINT

- Move the metadata and spec sections of the PVC as one list entry into the Stateful Set definition under the spec.volumeClaimTemplates section.

Solution

spec:
  volumeClaimTemplates:
    - metadata:
        name: entries-storage
      spec:
        storageClassName: azurefile-csi
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi


- Take 7 - Volumes/example-app-env-sts.yaml as a reference.


You can delete the PVC and PV now, as the Stateful Set makes use of the Volume Claim Templates section.
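A sketch of the cleanup and verification (the per-replica PVC names follow the pattern [template-name]-[pod-name], e.g. entries-storage-example-app-0):

$ kubectl -n volumes delete pvc entries-pvc
$ kubectl -n volumes get pvc
$ kubectl -n volumes get pv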

Again, check if everything is running as expected.

Now, access the guestbook application and create some entries multiple times (refresh the page or use different tabs).

Can you figure out, how the messages are saved?

Next, delete one of the application's Pods and inspect what happened to the guestbook entries for that instance.

Discuss the results in the group!



6. Extra information about Storage Classes

In chapters 3 and 5 we implemented the concept of Storage Classes in order to claim storage from an external storage provider - in this case Microsoft Azure's AKS.

The Storage Class was defined in the PVC definition or in case of the Stateful Set under the Volume Claim Templates section.

Storage Classes allow us to make use of Dynamic Provisioning. When scaling up, each replica still needs an individual PVC. With a Deployment we would have to manually provide these PVCs to each instance.

The good news is that the Stateful Set will take over that task if we fill out the Volume Claim Templates section correctly. Each instance will be provided with an individual PVC.

  • Discuss whether this setup is the right match for a typical guestbook application.
  • What happens if one Pod fails and comes back to life after a fairly long period of time?
  • Can you think of other applications / containers that would benefit from this setup?
HINT

Think about the data stored in applications. Do we need individual instances of the storage or do we want to synchronize across all instances?
- Should every instance hold different guestbook entries, or all point to the same state?
- Should every player of a game have its own state, or should the score be the same across all players?


We will come to another implementation of Stateful Set in combination with Storage Classes in the next Lab!

