Kubernetes Home Lab with K3s
At work, my team and I are evaluating Kubernetes distributions to determine if there is a compelling offering in the marketplace that could serve as the foundation for a new platform for our software development staff. Unfortunately, there is no "Heroku on Kubernetes" offering available today, and that makes me sad. I am wary of working on any task that is not part of the company's key differentiators. As fun as it could be, I'm highly skeptical that building a highly opinionated, cloud-like experience on-premises would be successful or worth the time and effort versus differentiating the business in the market.
Buy versus Build?
The reason any of this is an option is that the organization I work for is hesitant to move any significant workloads to the cloud: it values the cost savings of its existing data centers and is concerned about the existential threat to its brand posed by a loss of customer data. Given those constraints, my current interpretation is that we can either evaluate what is necessary to run a Kubernetes-based platform on-premises, find a managed offering, or reevaluate whether we can automate what we have today until the need arises to embrace Kubernetes or tomorrow's next platform.
Around 2012-2013 I started working in software development full-time, and around that time, at iMatrix, I excitedly shared with my colleagues the news about something called Docker. It did not seem to gather much interest. The stack at the time was a monolithic PHP/MySQL content management system in use by 10,000 businesses, specifically veterinarians and chiropractors. Over time I used Docker for personal projects and learning because it eliminated the pain of dependency management and installation. Now it is 2020 and I am finally on the cusp of enabling containerized application development in my organization. My dream is to offer an opinionated platform, not unlike many cloud application development experiences such as Heroku.
My fear is that accomplishing this lofty goal might be too far outside the core business model. Ultimately, it is best for a business to stick with its strengths and invest mostly in differentiators that benefit its customers. Platforms are commodities by nature, and I want to offer a good experience to the software development teams I serve. Kubernetes out of the box is not an amazing experience, though it is powerful for those willing to learn. The teams I work with do not have a lavish amount of time to spend on the intricacies of CI/CD, build pipelines, and types of storage provisioning. Delivering applications to production is incredibly tricky, and while there are opinionated platforms out there that work with Kubernetes, like Cloud Foundry, the command-line experience is not a great fit for everyone and everything. There might be a middle ground, but if I had the opportunity to buy or assemble a platform, I would take it instead of attempting to navigate the treacherous path of trying to get it right ourselves while enabling the business to scale.
In the past few years, I kept an eye on the Kubernetes education space and took some courses on its use and operation. I took Google's Coursera certification Architecting with Google Kubernetes Engine, and QwikLabs made it an easy experience. I also took IBM's Cognitive Class.AI courses Getting Started with Microservices with Istio and IBM Cloud Kubernetes Service and Beyond the Basics: Istio and IBM Cloud Kubernetes Service. Another course to keep an eye on is Container and Kubernetes Essentials, which is still in beta at this time. A nice feature of IBM's cloud is that it gave me some experience working with the Cloud Foundry API as part of its Kubernetes/cloud platform. Nothing beats implementing it yourself and deploying an application. One area that is still lacking in education and changing quickly is the use of persistent storage, and I wonder how reliable it is these days.
My home lab is not fancy, and luckily Rancher offers a single-node Dockerized version of its container management platform. A fun project I decided to try was to see if I could run Rancher and connect some of the older hardware I had collected over the years as nodes of the cluster:
- Windows PC with WSL2, Docker Desktop, and Kubernetes
- MacBook with Ubuntu 20.04 as the K3s server
- Raspberry Pi with Ubuntu 20.04 as a K3s agent
- Samsung laptop with Ubuntu as a K3s agent
```shell
# Start the server; kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s server &

# Below command to see it is working
sudo k3s kubectl get node

# Grab the token from /var/lib/rancher/k3s/server/node-token
```
```shell
# This is the easy way to attach the agent to the server:
# https://rancher.com/docs/k3s/latest/en/quick-start/
# mynodetoken is from the server at /var/lib/rancher/k3s/server/node-token
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```
Connected to the dashboard with monitoring
I attempted to use the resource requests and limits in the YAML included in the stateless frontend example, and K3s complained that it would not schedule the deployment due to CPU constraints.
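For reference, the stanza in question looks roughly like the following. The specific values here are illustrative, not the example's exact numbers; lowering the `cpu` request is what lets pods schedule on low-powered nodes like mine.

```yaml
# Hypothetical container spec fragment; the cpu request below is the knob
# to turn down when the scheduler reports insufficient CPU.
resources:
  requests:
    cpu: 100m      # reduce this if pods stay Pending on small nodes
    memory: 100Mi
  limits:
    cpu: 250m
    memory: 200Mi
```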
Kubernetes Official Docs Example Apps
1. Create a deployment
2. Create a service to expose the application
3. Scale the application
4. Remove the application
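A minimal sketch of those steps with kubectl, using a hypothetical `hello` deployment and the stock nginx image rather than the exact manifests from the docs:

```shell
# 1. Create a deployment (name and image are illustrative)
kubectl create deployment hello --image=nginx

# 2. Create a service to expose the application
kubectl expose deployment hello --port=80 --type=NodePort

# 3. Scale the application to three replicas
kubectl scale deployment hello --replicas=3

# 4. Remove the application (delete the service and the deployment)
kubectl delete service hello
kubectl delete deployment hello
```

Note that step 4 deletes the deployment itself, not its pods; as I learned later, deleting pods alone just causes the deployment to recreate them.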
Kubernetes Cluster distribution differences
One difference I found in the documentation concerns the LoadBalancer service type, which depends on what features your cluster has available. The K3s documentation explains that you can use the LoadBalancer type, but its built-in service load balancer needs a node where the requested port is available; if it cannot find one, the service remains marked as Pending and does not deploy.
```shell
# If you use the provided frontend-service.yaml, it uses NodePort, not a public IP.
# You will need to uncomment the LoadBalancer type and also verify that your
# cluster supports it. K3s does, except it will look for a node with that port
# available on the cluster.
kubectl get service frontend
```
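For illustration, the relevant part of the service manifest looks roughly like this. The names mirror the guestbook example; the `type` line is the one to toggle between the NodePort default and LoadBalancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  # Uncommented, this asks for a load balancer; on K3s the service stays
  # Pending until a node can claim the port.
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```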
It was nice to see at a high level what my cluster had available. For some reason, I did not see much information related to the agents attached to this control plane.
A mistake I made during the exercise was confusing deployments with pods, and then attempting to delete pods rather than deployments. This resulted in the deployments re-provisioning new pods after I deleted the old ones.
When cleaning up your deployments, deleting pods is not going to help. We need to delete the deployments themselves, using labels:
```shell
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook
```
The next exercise was deploying a WordPress site with a backing MySQL database. I ran into some challenges assembling the YAML, but all in all it managed to provision local storage and deploy, along with a secret. Rancher also makes it easy to enter secrets in its UI and have it create the necessary YAML. The stateful walkthrough exposed me to kustomization.yaml for the use case of generating a secret and specifying the other resource YAML files that tie a set of applications together along with components like secrets.
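As a sketch, the kustomization.yaml from that walkthrough looks roughly like this; the password is a placeholder, and the file names follow the official WordPress example:

```yaml
# kustomization.yaml - ties the generated secret and the resource
# manifests together so they apply as one unit.
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD   # placeholder; substitute a real password
resources:
- mysql-deployment.yaml
- wordpress-deployment.yaml
```

Applying the whole set then becomes a single `kubectl apply -k ./` from the directory containing these files.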
I also came across another interesting post that uses a metaphor to explain the Kubernetes architecture, shown below.
By comparison, the K3s architecture is interesting to contrast with full-fledged Kubernetes.
Single Node K3s
Highly Available K3s