Old Dog. New Kubernetes Cluster.
I've primarily worked as an Application Engineer for the last several years. As such, I've overseen the development of the individual components of each application, such as its databases, web services, and container images. I also usually authored the high-level proposals for each application's deployment topology; however, the implementation details of the deployment have always been handled by a separate DevOps team. Consequently, I've heard a lot of amazing things about Kubernetes, but I've never really had any hands-on experience with it.
Well, that changes today!
(Okay, it actually changed about a week ago, but that sounds slightly less inspiring. Anyway --)
Diving In
I have a subscription to O’Reilly Media, which grants me access to numerous books, training videos, and courses. So, I kicked things off by speed-watching Kubernetes for the Absolute Beginners -- Hands On. This provided me with an excellent overview of the key concepts and terminology. Additionally, it pointed me towards Minikube, which is an amazingly simple way to launch a (single node) Kubernetes cluster on your local machine.
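For anyone following along, the Minikube workflow really is that simple. Roughly (assuming minikube and kubectl are already installed):

```shell
# Launch a single-node Kubernetes cluster on the local machine.
minikube start

# Verify that kubectl can talk to the new cluster.
kubectl get nodes

# Tear everything down when finished experimenting.
minikube delete
```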
From there, I started reading Kubernetes: Up and Running, 3rd Edition
to get a more in-depth introduction. This helped to clarify the
responsibilities and interactions between the various abstractions.
It also indirectly highlighted how quickly Kubernetes is evolving!
Many of the book's examples relied upon the kubectl run sub-command's
Generator capability, which was retired at some point in the last 2 years.
As a result, I've had to adapt some of the examples to use the
kubectl create sub-command instead.
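As a concrete example of that adaptation (using the kuard image from the book; treat this as a sketch of the mapping rather than a complete migration guide):

```shell
# Older editions of the book lean on generators, e.g.:
#   kubectl run kuard --generator=run-pod/v1 --image=gcr.io/kuar-demo/kuard-amd64:blue
# The --generator flag has since been removed. Today, kubectl run
# only creates a bare pod:
kubectl run kuard --image=gcr.io/kuar-demo/kuard-amd64:blue

# To get a managed deployment instead, use kubectl create:
kubectl create deployment kuard --image=gcr.io/kuar-demo/kuard-amd64:blue
```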
First Impressions
I've been extremely impressed with Kubernetes so far!
I already had a big-picture understanding of the problems
that Kubernetes solved. Still, I never knew that kubectl and other
tooling around Kubernetes was so user-friendly! Here are some
of the many things that have impressed me so far:
- kubectl often provides several ways to accomplish a goal. As an end-user, you can choose the one that happens to be the most convenient for your specific circumstance. For instance, you can get a deployment up-and-running quickly via the kubectl create deployment sub-command. Alternatively, you can create a deployment by writing a configuration file in YAML and then applying it via the kubectl apply sub-command. The choice is yours!
- Similarly, kubectl can render its outputs in numerous different formats. One of these formats is YAML, which means that kubectl outputs can be rendered directly into configuration files for the kubectl apply sub-command. Wow! Additionally, output formats such as jsonpath allow you to extract specific pieces of information. Therefore, there's no need to rely upon external commands to parse the outputs. Fancy! 🧐
- Finally, kubectl bakes in several ways to monitor every little detail. For instance, you can tail the logs of a specific pod, or you can watch nodes as they come online. No bash scripting is required!
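Here are a few of the commands behind those observations, roughly as I've been running them (the name my-app is just a placeholder):

```shell
# Two routes to the same deployment: imperatively...
kubectl create deployment my-app --image=nginx

# ...or declaratively: render the YAML without creating anything,
# then apply the resulting configuration file.
kubectl create deployment my-app --image=nginx \
  --dry-run=client -o yaml > my-app.yaml
kubectl apply -f my-app.yaml

# Extract one specific field with jsonpath -- no external parsing needed.
kubectl get deployment my-app -o jsonpath='{.spec.replicas}'

# Tail logs from the deployment's pods, or watch nodes come online.
kubectl logs -f deployment/my-app
kubectl get nodes --watch
```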
Okay, maybe I'm most impressed with the kubectl command so far,
but the rest of the ecosystem seems very well-designed as well!
Next Steps
I feel like I have a good grasp on the basics now, but I still need to apply these learnings towards a larger project. A few things that come to mind:
- I need to move beyond Minikube and deploy some clusters in the cloud. AWS and many other cloud vendors have Kubernetes-as-a-Service (KaaS) offerings, but these appear to handle the node provisioning for you. So, I might try provisioning a few nodes by hand first. (Perhaps using some Raspberry Pis?)
- The examples I've seen so far are suitable for relatively simple topologies. But, what if I want to deploy an application at global scale in a manner that minimizes latency? For example: how do I ensure that my Web Service endpoints connect to the Redis cache within the same Data Center rather than the one on the other side of the globe? 🤔
- Finally, I also want to learn more about autoscaling. More specifically, is this detail handled by Kubernetes itself, or is this more of the cloud provider's responsibility?
Anyway, enough blogging -- it's time for more learning!