Yet Another Kubernetes Intro - Part 2 - The Pod

In the previous post, I gave a very high level overview of the Kubernetes landscape. Now it is time to go a little deeper, become a bit more practical, and look at some of the building blocks that we use when we build applications that run in Kubernetes.

The first “construct” we will have a look at is the pod. I know I covered pods briefly in the previous post, but this time I want to dive a bit deeper. However, I must admit that I am a bit torn when starting this blog post. A part of me knows that the pod pretty much has to be the first resource to look at. On the other hand, it feels a bit weird to show how one creates pods on their own.

And why would that be odd? Well…because we very rarely create pods ourselves. They are the smallest building blocks we have, and all solutions are built using them, but we generally use “larger” constructs when building applications.

Anyhow…with that caveat out of the way, let’s ignore my apprehension and look at what we can do with pods. Because, even though we often use larger constructs when building our solutions, the pod is always there. Pods will always be integral parts of everything we build, so you need to know how they work!

The Pod

As I mentioned in the previous post, and in the previous paragraphs, the pod is the smallest building block that we have to play with. We do not create individual containers in Kubernetes. Instead, we create pods. A pod, in turn, consists of one or more containers that are all scheduled on the same node and share the same “context”. This allows the containers to work together to solve a common task. But remember, this also means that they are scaled together as a pod as well. So when you ask for 2 instances of a pod, you will get 2 instances of each of the containers defined in the pod. Because of this, we need to make sure that the containers sharing a pod are designed either to be able to share resources, or at least to be scaled together as a unit.

The most basic definition of a pod looks like this

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld

It defines a pod called my-pod, containing a single container called my-container (based on the image zerokoll/helloworld).
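While we rarely need it, it might help to see what a pod with more than one container could look like. Here is a sketch where a hypothetical busybox-based sidecar (my own illustrative addition, not something this post’s image provides) runs alongside the main container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-multi-container-pod
spec:
  containers:
  - name: my-app                 # the "main" container
    image: zerokoll/helloworld
  - name: my-sidecar             # hypothetical helper container
    image: busybox
    command: ["sh", "-c", "while true; do echo alive; sleep 30; done"]
```

Both containers are scheduled on the same node, started together, and scaled as one unit.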

Using kubectl to manage pods

Once you have a YAML file like the one above, you can go ahead and run

kubectl create -f ./my-pod.yml

This will cause a pod resource to be created in etcd. This new resource is then picked up by the scheduler, which finds a node in the cluster that has enough resources to run the containers defined in the pod. It then updates the pod configuration to reflect which node is responsible for hosting it. The worker node picks up this change, and uses Docker to download the image zerokoll/helloworld and start the required container.

Once that has completed, you can have a look at the pods in the cluster by running

kubectl get pods

This should return something similar to this

NAME     READY   STATUS    RESTARTS   AGE 
my-pod   1/1     Running   0          21s

The 1/1 in the READY column tells us that the desired state is to have 1 instance of the pod my-pod running, and that there is currently one doing so. The output also tells us that it hasn’t been restarted a single time since it started 21 seconds ago.

The kubectl client is pretty simple and easy to understand. The main usage is to create/apply/update/delete different types of K8s resources. By calling kubectl get pods we are doing just that, getting all pods in the Kubernetes cluster. And in the same way, running something like kubectl get deployments will get you all deployments in the cluster, and kubectl delete pod my-pod will delete the pod called my-pod.

Note: It is not quite true that we get all pods. We get all pods in the default namespace of the K8s cluster. But ignore that for now. I will talk about namespaces later on…

Tip: By default, kubectl returns a somewhat human-readable table format. However, this view filters out a lot of information. You can change the output in several ways to get more information, or to get a very specific piece of information about a resource. For example, you can try adding a -o yaml or -o json parameter to your kubectl calls. This will return the response in YAML or JSON format, which contains a lot more information. On top of that, you can also use “templates”, for example -o jsonpath, to get specific values. Something that can be very useful when creating scripts etc.

In most cases you create/delete/update your resources (pods etc) using YAML files like we did here, by passing in -f <FILENAME>. This allows us to insert a lot of configuration, as well as multiple resources, in a structured way using YAML, instead of having to pass in a ton of parameters to the kubectl call. And on top of that, it also allows us to store the definitions in source control.
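As a sketch of the “multiple resources” part, a single YAML file can contain several resource definitions separated by ---. The second pod here is just a made-up illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
---
apiVersion: v1
kind: Pod
metadata:
  name: my-second-pod       # hypothetical second resource
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
```

Running kubectl create -f against a file like this creates both pods in one go.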

However, it is possible to start ad-hoc pods in your cluster by passing parameters to kubectl… To start a pod containing a single container based on the zerokoll/helloworld image, you can execute

kubectl create deployment my-deployment --image zerokoll/helloworld

As you can see from the command, this doesn’t actually create a pod. Instead, it creates a Deployment. So it isn’t quite the same thing… And on top of that, it actually ends up creating something called a ReplicaSet as well. It does this because using a deployment and a replicaset is a better way to run a pod than just running the pod on its own. But generally, managing resources imperatively like this should be discouraged, and only be done while experimenting.

If you want to see everything that was created by running the kubectl create command, or rather, everything that is in your cluster right now, you can run

kubectl get all

This returns something similar to this

NAME                                READY   STATUS    RESTARTS   AGE
pod/my-deployment-fbb5c68f7-zmxv7   1/1     Running   0          5s
pod/my-pod                          1/1     Running   0          9m1s 

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5h10m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-deployment   1/1     1            1           5s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/my-deployment-fbb5c68f7   1         1         1       5s

Sorry for the potentially less than awesome formatting… But as you can see, after creating my pod using the my-pod.yml file, and a deployment using the kubectl create command, I end up having 2 pods, a deployment and a replicaset in my cluster. You can ignore the service called service/kubernetes. It’s a default service that is always available in K8s, and not the result of something that we did.

The main thing to understand here, is that when we use kubectl create deployment we get more things in our cluster than just the pod we wanted. More useful things to be honest, but let’s dive deeper into that later on.

To remove the deployment we just created, we can run

kubectl delete deployment my-deployment

If you are quick to run kubectl get all after that command, you might see something like this

NAME                                READY   STATUS        RESTARTS   AGE
pod/my-deployment-fbb5c68f7-zmxv7   0/1     Terminating   0          4m33s
pod/my-pod                          1/1     Running       0          13m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5h15m

As you can see, the deployment and replicaset have been removed already, but the pod is in a Terminating state. This is due to the fact that when containers are removed, K8s allows for graceful shutdown. Because of this, it might take a little while before the pod goes away completely. During that time, the pod will be in a Terminating state. This means that it isn’t available to use, it’s just waiting for the containers in the pod to gracefully shut down. As soon as the termination has finished, it will disappear from the list of pods.

To remove the pod that we created by using the YAML file, we can either call

kubectl delete pod my-pod

Or, you can re-use the YAML file to get it done, by running

kubectl delete -f ./my-pod.yml

For a single pod like this, running the first command is fine, but once your deployments get a bit more complicated, the second one might be useful as it allows us to remove more than one resource at a time.

Configuring pods/containers

There are quite a few settings that you can configure on a pod. I can’t cover them all, but I thought I would at least cover some of the most useful or commonly used ones.

Passing config values through environment variables

Just as when we are running Docker without Kubernetes, you can pass configuration and other data into your container through environment variables. And even though I personally found environment variables a bit old school when I started using Docker, they are actually pretty awesome. Not to mention that they are still the default way to handle a lot of things…

So, how do we set environment variables in our containers when we are running Kubernetes? Well, the simplest way is to just define them in the container spec like this


spec:
  containers:
  - name: hello-world
    image: zerokoll/helloworld
    env:
    - name: MY_ENVIRONMENT_VARIABLE
      value: "My environment variable name"
    - name: ANOTHER_VARIABLE
      value: "This is awesome…ish"  

However, as simple and as useful as that might be, it is a bit limiting. It pretty much means that we are hard-coding the values in our deployment, which is kind of contradictory to the way that environment variables should work. They should be dependent on your environment, not be put into your environment with the deployment. Not only does that mean that you have to re-deploy your application if the configuration changes, but depending on what you are putting in your environment variables, it might also mean that you are adding potentially sensitive data into source control. Neither are great things!

Sure, in a lot of cases, using semi hard-coded values like this, or replacing them during deployment, works fine. But in a lot of cases, we want the values to be picked up from the K8s environment that the pod is deployed to.

The way that you get configuration values from your K8s environment is by using something called a ConfigMap. This is a K8s resource that basically represents a named key-value pair dictionary that is stored inside your K8s environment. Values from this dictionary can then be mapped into your containers as environment variables.

Ignoring the creation of the actual ConfigMap, mapping it to environment variables looks something like this


spec:
  containers:
  - name: hello-world
    image: zerokoll/helloworld
    env:
    - name: MY_ENVIRONMENT_VARIABLE
      valueFrom:
        configMapKeyRef:
          name: my-config-map
          key: CONFIG_MAP_KEY

So, instead of setting the environment value property, you set a more complex valueFrom property. And instead of just defining the value, we define the name of the ConfigMap that we want to use, and the key of the value in that ConfigMap that we want to use to set the environment variable.

In the above case, the environment variable MY_ENVIRONMENT_VARIABLE is set to the value with the key CONFIG_MAP_KEY in the ConfigMap called my-config-map.

Ok, that’s pretty sweet! But how does one create a ConfigMap? Well, there are a couple of ways, but the simplest way is to just create another YAML file like this

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  CONFIG_MAP_KEY: "My config value!!!"

and pushing that to Kubernetes like this

kubectl create -f ./config-map.yml

This will create a ConfigMap resource in the Kubernetes cluster, which we can then use in the pod definition like we did above. So in the sample above, we end up with a pod containing a single container called hello-world, in which there is an environment variable called MY_ENVIRONMENT_VARIABLE set to the value My config value!!!. And the value for the environment variable is defined in the ConfigMap called my-config-map.
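If you want to map all the keys in a ConfigMap as environment variables, instead of referencing them one by one, you can also use envFrom. A minimal sketch, reusing the same ConfigMap as above:

```yaml
spec:
  containers:
  - name: hello-world
    image: zerokoll/helloworld
    envFrom:
    - configMapRef:
        name: my-config-map   # every key in the ConfigMap becomes an environment variable
```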

Note: I will cover ConfigMaps in more depth in a later post. But for now, this should get you going at least…

Second note: For quite obvious reasons, the ConfigMap needs to be defined before you create your pod that uses it…

Last note: You can also map in configuration as a file using volumes which is covered a bit below… More about this when we have a look at the ConfigMap resource in a later post.
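Just to give a rough idea of that file-based option ahead of time, mounting a ConfigMap as a volume can look something like this (the mount path is an arbitrary choice of mine):

```yaml
spec:
  containers:
  - name: hello-world
    image: zerokoll/helloworld
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config   # each key in the ConfigMap shows up as a file in here
  volumes:
  - name: config-volume
    configMap:
      name: my-config-map
```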

Liveness probes

One of the things that makes Kubernetes so interesting, is the ability to create resilient solutions that can fairly easily handle some of the more common problems that might arise in production. For example, in some cases we end up with containers “locking up”, or entering a failed state that’s causing problems to the functionality, but isn’t enough to cause the container to actually fail. In most of these cases, the problem can be sorted out by restarting the container. A task that obviously can be done manually. However, that has some obvious down sides, like having to have 24/7 monitoring and people on stand by to do the restart. And in a lot of situations, having to have someone on stand by 24/7 to restart containers, is just not a feasible solution.

One of the solutions to this problem in Kubernetes, is to use something called a liveness probe. A liveness probe is pretty much what is sounds like. It is a “thing” that “probes” the container to see if it is working as it should.

There are many ways to do this, but the simplest one is to have K8s make an HTTP call to the container to verify its ability to respond. As long as it gets an HTTP status code that is >=200 and <400 back, the container is considered to be in a healthy state. Anything else is considered an error, and a potential reason for the container to be restarted. The probes can be quite a bit more complex than that though, but let’s focus on a “simple” HTTP GET probe.

Note: If you need more complex liveness probes than a simple HTTP call, have a look at the Kubernetes documentation

Besides defining what HTTP call to make, you can also define things like an initial delay and an interval (or period). The initialDelaySeconds defines how long K8s should wait before sending the first probe after starting a container, and the periodSeconds defines how often it should send probe requests. On top of that, you can also set the failureThreshold to define how many failures have to happen in a row before the container is considered to be in a failed state, and is restarted.

Here is an example of how you can configure a liveness probe for a container


spec:
  containers:
  - name: hello-world
    image: zerokoll/helloworld
    livenessProbe:
      httpGet:
        path: /
        port: 80 # httpGet probes require a port; use whatever port your container listens on
      initialDelaySeconds: 10
      periodSeconds: 3
      failureThreshold: 3

In this case, it is telling K8s to make an HTTP GET request to “/” every 3 seconds, after an initial delay of 10 seconds. This gives your application 10 seconds to start up, which is probably a good idea in a lot of cases. It also says that it should not consider the container to be in a failed state until it has sent 3 failed probes.

There are several other ways to do liveness probing, ranging from using TCP instead of HTTP, to running custom scripts.
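For example, a TCP-based probe, or one that runs a command inside the container, could look something like this (the port and file path are made-up illustrations):

```yaml
    livenessProbe:
      tcpSocket:
        port: 80          # healthy if a TCP connection can be established
      periodSeconds: 3
```

```yaml
    livenessProbe:
      exec:
        command:          # healthy if the command exits with code 0
        - cat
        - /tmp/healthy
      periodSeconds: 3
```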

There are 2 other forms of probes available in K8s. They both work in the same way, but target slightly different scenarios.

The first one is called a startup probe. This probe is responsible for “protecting” slow-starting containers. It runs as the container starts up, instead of the liveness probe. As soon as the startup probe finds the container responsive, the liveness probe takes over. This allows us to have a “slower” probe interval while starting up, and then move to a faster one when the container is up and running. This makes the response to a deadlocked container fast when up and running, while still allowing for a slow start-up.
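A startup probe is configured just like a liveness probe. As a sketch, with numbers I picked purely for illustration, the following gives the container up to 30 × 10 = 300 seconds to start before it is considered failed:

```yaml
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30   # allow up to 30 failed probes...
      periodSeconds: 10      # ...10 seconds apart, before the container is restarted
```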

On top of the liveness and startup probes, there is something called a readiness probe. The readiness probe is very similar to the other probes. But instead of being used to determine if a container needs to restart, it’s used by a different K8s construct called a service. And because I intend to cover services in a later post, I have decided to postpone talking about readiness probes until then. But at a high level, it is used to make sure containers are not receiving requests if they can’t handle them at the moment. Basically removing the container from the load-balancing rotation temporarily.

Setting resource limitations

When creating pods, you are able to limit the resources the containers inside the pod are allowed to use. The two main types of resources that can be limited are CPU and memory.

By default a container has no limits. This means that it is allowed to use as many resources as it feels like using. It also means that a resource-heavy container might hog all the available resources on a node, causing other containers to slow down, or even fail. So setting up reasonable resource limits is really important!

There are 2 different ways you can affect the resources a container is allowed to use.

First, you can set a request amount. This is the minimum amount that needs to be available on a node for the scheduler to allow it to run there. Basically, this is the amount you are guaranteed to have at your disposal. And as long as there are more resources available on the node, you can use more. However, if a node runs out of resources, any container that is using more resources than the requested amount is likely to be restarted to free up resources.

Secondly, you can set a limit. This defines the maximum amount of a resource that the container is allowed to use. This means that if the container tries to use more resources than the set limit, it might be terminated/restarted.

Note: This is fairly straightforward when it comes to memory usage. CPU on the other hand is a bit more complicated. Using more CPU than the limit doesn’t necessarily mean that the container is terminated. But K8s won’t be able to schedule a new pod on a node that cannot satisfy the total CPU request of all the containers in the pod, based on what is left over after summing up all of the requested CPU for all the currently running containers.

This is a complicated topic that you can read more about here

Note: If you only set the limit, the request will default to the same value, giving you a fixed amount to work with.

To set the resource requests and limits for a container in a pod, you can add a resources entry in the YAML file like this

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: zerokoll/helloworld
    resources:
      requests:
        memory: "50Mi"
        cpu: "0.5"
      limits:
        memory: "100Mi"
        cpu: "1"

With this definition, the pod will only be scheduled on a node that has at least 50 MiB of memory and half a CPU available. And the container will be restarted if it tries to use more than 100 MiB of memory.

Note: A thing to note here is the definition of a CPU. In Azure for example, a CPU equals a vCore, while when you are running on Intel-based physical hardware, it equals one hyperthread.

Warning: Requesting the scheduling of a pod that requires more resources than is available on any of your nodes will cause the pod scheduling to fail. And also, remember that the resource requests for a pod is the combined resources requested by the containers defined in the pod.

But…what if my image is in a private image registry?

So far I have dodged the question about using private image registries. By using an image that’s located in a public registry like Docker Hub, retrieving the required image is a piece of cake. However, a lot of images can’t be put in a public location for different reasons. Who wants to make all their intellectual property publicly available? Pretty much no one I would say. And that means that images are often held in private registries. And with private registries, you need to provide credentials to access the repos. So how do we solve that in Kubernetes, where the image retrieval is done by the system and not an interactive user?

Well, Kubernetes has a resource called a secret. A secret is very similar to a ConfigMap, except for the fact that the values are supposed to be secret, and are Base64 encoded when stored (no, that doesn’t mean that they are secure, I know…). K8s also has a special form of secret that can be used to store registry credentials. This is called a docker-registry secret.

A secret like this can be created in several ways, but the simplest way is to make a call that looks like this

kubectl create secret docker-registry my-registry-credentials \
              --docker-server=<your-registry-server> --docker-username=<your-name> \
              --docker-password=<your-pword> --docker-email=<your-email>

This will create a new docker-registry credentials secret called my-registry-credentials containing the name of the registry to use, the username, the password and the user’s e-mail address. This information can then be used by K8s to get access to a private registry when deploying new containers. All you need to do, is to add the credentials information to the pod definition like this

...
spec:
  containers:
  - name: hello-world
    image: zerokoll.azurecr.io/zerokoll/helloworld
  imagePullSecrets:
    - name: my-registry-credentials

This tells Kubernetes to sign into my private registry (in this case my Azure Container Registry) using the credentials I defined in the my-registry-credentials secret in the previous step.
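As a side note, a generic secret (one not of the docker-registry type) can be defined in YAML much like a ConfigMap, with the values Base64 encoded. A sketch with a made-up key and value:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  password: c3VwZXJzZWNyZXQ=   # "supersecret" Base64 encoded (encoded, not encrypted!)
```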

What about storage?

Well, storage in Kubernetes is a whole chapter of its own. So I intend to give this a whole lot more coverage in a future post. It is a bit too complicated to just cover quickly right now. But if you know how to use Docker, as I assume you do, I can say that it is based on Docker volumes. And just as when using Docker, you should make sure that you use volumes that are backed by some external storage, and not just store data on the current node.

But as I said, I will get back to this topic in a later post!

Putting multiple containers in a single pod has side-effects

It’s important to know that putting multiple containers in a pod means that they share some things. At the beginning of the post I said that they shared “context”. What does this mean?

Well…first of all, they share a network namespace. This basically means that they share the same IP address and port space, which means that multiple containers in a pod cannot use the same port. It also means that the containers can reach each other using localhost.

As for storage, they have separate storage, but can share data through a shared volume. As mentioned previously, I will look at storage and volumes in a later post. But it is good to know that sharing a pod has “side effects” or “benefits” in this area…
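Just to give an idea ahead of that post, two containers in a pod can share data through an emptyDir volume. The container names, images and paths here are all made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -f /data/log.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}   # scratch volume that lives as long as the pod does
```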

It might also be good to know that having a single container in a pod is a VERY common scenario. It is probably more common than running multiple containers in a single pod to be honest. At least according to my anecdotal evidence… ;)

That’s it for the second part of my introduction to Kubernetes! I hope you learnt something new, and didn’t just spend your time unnecessarily! If nothing else, it forced me to research some things, and learn some new things. So it was useful for me if nothing else…

The third part of this series is available here.

zerokoll

Chris

Developer-Badass-as-a-Service at your service