Getting Started with MicroK8S — Up and Running Kubernetes Locally
Overview
Kubernetes is a container orchestration system that has been growing in popularity for the last five years. It was originally created by Google, but it is now maintained by a huge community from all over the world. That is why it may well become the new standard for container orchestration.
But running a Kubernetes cluster can be a bit expensive. You have to run at least two or three nodes to form a cluster and try a few demo projects on it, which could be overkill for some people. But what if we could run Kubernetes on our local computer? That would be an interesting option.
There are several options to run Kubernetes locally such as:
- Minikube
- MicroK8S
- K3S
- etc.
I chose MicroK8S since I am already using Ubuntu on my laptop. I then followed some tutorials and guides from the official MicroK8S documentation [1], and in this story I will walk through a few hello world projects.
Having Kubernetes locally can help people start learning Kubernetes without having to set up a cluster. It reduces cost and lets us focus first on what we can do locally.
A. Requirements and Installation
This article was produced by using these requirements:
- Notebook with 4 GB of RAM and a 4-core CPU (Intel Core i3)
- Ubuntu with version ≥ 18.04
- Docker Community Edition for Ubuntu
- MicroK8S
You may follow the installation instructions for MicroK8S in the official documentation https://microk8s.io/docs/. For the Docker installation, you may follow the instructions here https://docs.docker.com/engine/install/ubuntu/.
Once you are ready, you may get back to this article to follow the rest of my story.
B. Running Kubernetes Dashboard
MicroK8S itself has clear documentation for running the Kubernetes Dashboard [2]. You only need to enable the dashboard using the command below:
$ microk8s enable dashboard
Then, you have to forward the dashboard port to a host port:
$ microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
Now you can open the dashboard in a web browser via https://127.0.0.1:10443/ and start exploring it. You will be prompted with an authentication form. There are two ways to access the dashboard: first, configure authentication via a configuration file; second, simply use the default token that MicroK8S provides. The easiest way to start is the second option.
So you can use these commands to get the token:
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token
In your console, find that token and enter it when you attempt to access the dashboard.
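To see what the pipeline above actually extracts, here is an offline sketch run against a sample of what `kubectl get secret` output typically looks like; the secret name `default-token-abc12` is made up for illustration:

```shell
# Simulated output of `microk8s kubectl -n kube-system get secret` (sample only;
# the real secret name on your machine will differ).
output="NAME                  TYPE                                  DATA   AGE
default-token-abc12   kubernetes.io/service-account-token   3      5d
dashboard-token-xyz   kubernetes.io/service-account-token   3      5d"

# Same extraction as in the article: keep the line containing "default-token"
# and cut the first space-delimited field, i.e. the secret name.
token=$(echo "$output" | grep default-token | cut -d " " -f1)
echo "$token"   # prints: default-token-abc12
```

The second command in the article (`microk8s kubectl -n kube-system describe secret $token`) then prints the actual token value for that secret.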
C. Deploy Some Docker Images
It’s time to deploy some Docker services to MicroK8S. I have two “Hello World” samples to show you how deployment works on my MicroK8S. The first one is Microbot, which is mentioned in the official MicroK8S tutorial [3]. The second one is the “hello world” service created by Tutum.
Let’s move.
C.1. Deploying Microbot
In order to deploy Microbot, we have to create the deployment first, then expose that deployment via a service definition. Microbot requires port 80 on the container (pod) to run its service, but it can be mapped to any host port via NodePort. We also specify microbot-service as the name of the service.
The commands below will deploy Microbot to our local Kubernetes:
$ microk8s kubectl create deployment microbot --image=dontrebootme/microbot:v1
$ microk8s kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service
Once you have run those commands, you may check the deployment using microk8s kubectl get all --all-namespaces | grep microbot. You will find that microbot-service is mapped to a node port; in my case it was 31895. You may then open the Microbot service in a web browser via http://localhost:31895.
You will get a web page like the picture below:
C.2. Deploying Tutum — Hello World
Now let’s try the second example. We will run the “Hello world” project created by Tutum. The deployment is pretty similar to section C.1; the only differences are the deployment name and the service name.
In this second attempt, we specify hello-world as the deployment name and hello-world-service as the service name. Let’s take a look at the commands below:
$ microk8s kubectl create deployment hello-world --image=tutum/hello-world:latest
$ microk8s kubectl expose deployment hello-world --type=NodePort --port=80 --name=hello-world-service
Once you have run those commands, you may check the deployment using microk8s kubectl get all --all-namespaces | grep hello-world. You will find that hello-world-service is mapped to a node port; in my case it was 31197. You may then open the hello-world service in a web browser via http://localhost:31197.
You will get a web page like the picture below:
The services in sections C.1 and C.2 may get a different port mapping on every exposure attempt, so you need to check them with microk8s kubectl get all --all-namespaces.
That’s all for the hello world examples. Now we are going to deploy our own local image to MicroK8S.
D. Deploy My Own Docker Image
Deploying our own image works a little differently. You have to build your image on your local machine first, then push it to the MicroK8S internal registry. Afterwards, we can deploy our own local image to MicroK8S.
D.1. Register my local Docker image to MicroK8S
Before we push our image to MicroK8S, we have to save the built image as a tarball. Once you have built your image using docker build, you can save it as a tarball using the docker save command. The tarball will be imported into MicroK8S and will be reusable as long as you specify the “never pull” policy in the deployment configuration.
Now you may import the image into MicroK8S by executing the commands below:
$ sudo docker save pokemon-api > pokemon-api.tar
$ microk8s ctr image import pokemon-api.tar
You will see output from MicroK8S like the picture below once the image is imported:
You may check that the image exists in MicroK8S using microk8s ctr images ls | grep pokemon.
It’s time to deploy our local image as a service to our Kubernetes.
D.2. Deploy my own Docker Image to MicroK8S
It has been quite a long journey, but now we can look at the deployment definition that deploys our pokemon-api to Kubernetes. It’s very basic and I adapted it from the “Install a local Kubernetes with MicroK8S” tutorial [3]. More or less, it deploys a Deployment named pokemon-api using the image pokemon-api, with image pull policy never, listening on port 8000. Do you see that? Pokemon are everywhere.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pokemon-api
  labels:
    app: pokemon-api
spec:
  selector:
    matchLabels:
      app: pokemon-api
  template:
    metadata:
      labels:
        app: pokemon-api
    spec:
      containers:
      - name: pokemon-api
        image: pokemon-api:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8000
You may save the configuration above to a file named pokemon-api.yml, then start the deployment using the command below:
$ microk8s kubectl apply -f pokemon-api.yml
Once the deployment succeeds, we will see a success message like in the picture below:
You can see the deployment and pods on dashboard as well in the “Overview” page:
You might also see the replica sets named with “pokemon-api-*”:
That’s all for the deployment. Now let’s go deeper by inspecting the deployment via kubectl.
D.3. Inspect and expose the deployment
From the deployment above, you might want to inspect the result: what was created after the deployment was applied. You can inspect it with kubectl get, specifying pods to see how many pods were created during the deployment, or deployments to see how many deployments have been successfully applied.
$ microk8s kubectl get pods
$ microk8s kubectl get deployments
So far, we have deployed our pods, but they are not ready to receive requests from clients yet. We have to make them accessible from outside and let clients send requests to them. We will use kubectl expose to make the pods accessible. Here is the command:
$ microk8s kubectl expose deployment pokemon-api --type=NodePort --port=8000 --name=pokemon-api-service
In the previous command, we specify the deployment name, which is pokemon-api; we use NodePort so that Kubernetes chooses an available host port for the pods; the pods need port 8000 in their internal environment; and finally, we name the service pokemon-api-service.
Afterwards, we can check with kubectl get all to see whether our pokemon-api is accessible to clients. Here is the command:
$ microk8s kubectl get all --all-namespaces | grep pokemon
You can see the illustration of the command above in this following picture:
Now let’s open our deployment in a web browser; in my case the service was mapped to http://localhost:32487/. You should see something like the picture below:
D.4. Scaling the deployment
Another thing we can do with Kubernetes is scale our pods to a desired capacity. For example, if you have three cluster nodes, Kubernetes can run three replicas for you and distribute them across the nodes for high availability. Unlike a traditional deployment, where you would have to do this manually, with container orchestration it takes a single command.
Let’s try to scale pokemon-api to three replicas with this command:
$ microk8s kubectl scale deployment pokemon-api --replicas=3
Then, you can inspect the scaling using kubectl get pods. You might see a Pending status while the new pods are being created, as illustrated in the picture below:
Once your pods have been scaled, you will see them running in your Kubernetes.
You can also inspect the event messages on the dashboard to see the details of the scaling process.
Finally, you will have three pods running and displayed in our Kubernetes Dashboard.
If you want to revert, just set the replicas back to one, as in this command below:
$ microk8s kubectl scale deployment pokemon-api --replicas=1
If you inspect it with kubectl get pods, you might see the extra pods in Terminating status.
If you watch the scale-down process on the dashboard, you will see new event messages appear during the scale-down.
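As an alternative to kubectl scale, the replica count can also be declared in the Deployment manifest from section D.2 by adding a replicas field and re-applying the file. A minimal sketch of the changed portion of pokemon-api.yml:

```yaml
# Sketch: declaring the desired replica count in the pokemon-api Deployment spec.
spec:
  replicas: 3             # desired number of pods; re-apply the file to scale
  selector:
    matchLabels:
      app: pokemon-api
```

Running microk8s kubectl apply -f pokemon-api.yml again would then have the same effect as the scale command above, with the advantage that the desired state is recorded in the file.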
Conclusion
Having Kubernetes on my local computer with MicroK8S provides a great chance to gain more knowledge about Kubernetes without setting up a cluster. For my purposes, I could quickly start learning several parts of Kubernetes for free.
So anyone can start learning Kubernetes with MicroK8S on their local computer and try Kubernetes faster.
References
[1] https://microk8s.io/docs
[2] https://microk8s.io/docs/addon-dashboard
[3] https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview
[4] https://microk8s.io/docs/registry-images
[5] https://docs.docker.com/engine/install/ubuntu/
Thanks to Charalambos Paschalides for proofreading my article.