Under the name of “Managed Kubernetes for AWS”, or EKS for short, Amazon offers its own dedicated solution for running Kubernetes on its cloud platform. The way this is provided is quite interesting: while the Kubernetes master infrastructure is offered “as a service” (and billed separately), the Kubernetes worker nodes are simply EC2 instances for which Amazon provides a special setup procedure. This also opens up the potential to use well-known AWS features such as Auto Scaling for Kubernetes workloads.

However, manually setting up this infrastructure is still quite a complex, multi-step process. To get an EKS Kubernetes cluster up and running quickly, and to deploy a software project on it, we created a small helper project that builds a “turnkey ready” EKS cluster which can be brought up quickly and torn down again after usage.
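For illustration only (the helper project itself is covered in the full article), creating and deleting a comparable cluster with eksctl, one of this post's tags, looks roughly like this; the cluster name, region, and node sizing are placeholder values:

    # Bring up an EKS control plane plus worker nodes (placeholder values).
    eksctl create cluster \
      --name demo-cluster \
      --region eu-central-1 \
      --nodes 2 \
      --node-type t3.medium

    # Tear the whole cluster down again after usage.
    eksctl delete cluster --name demo-cluster --region eu-central-1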

more...

Author: Oliver Weise
Tags: Kubernetes, aws, eks, eksctl
Categories: development

AWS offers a great service called “Amazon Elastic Container Service for Kubernetes” (AWS EKS).
The setup guide can be found here: Official AWS EKS getting started guide

If you overload such a cluster, it can easily happen that your kubelet runs into “Out of Memory” (OOM) errors and stops working.
Once the kubelet is down, kubectl get nodes shows that the node is in state “NotReady”.
In addition, if you describe the node with kubectl describe node $NODE, you can see the status description “System OOM encountered”.
If you look at your pods with kubectl get pods --all-namespaces, you can see pods in state “Unknown” or “NodeLost”.
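Put together, a quick diagnosis of such a node could look like this (the node name in $NODE is a placeholder for your affected node):

    # An OOM-stricken node shows up as NotReady.
    kubectl get nodes

    # The node's conditions and events contain "System OOM encountered".
    kubectl describe node $NODE

    # Pods that were running on the lost node end up in Unknown or NodeLost.
    kubectl get pods --all-namespaces -o wide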

Kubelet OOM errors should be avoided at all costs.
They stop all pods on that node, and in some cases it is quite complicated for Kubernetes to maintain high availability for the affected applications.
For example, for a StatefulSet with a single replica, Kubernetes cannot immediately move the pod to another node.
The reason is that Kubernetes does not know how long the node with all its pods will stay unavailable.

Therefore I would like to share some best practices to avoid OOM problems in your AWS EKS clusters.
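As a first taste, one common safeguard (sketched here with illustrative values, not taken from the full article) is to give every container memory requests and limits, so that the scheduler cannot over-commit a node's memory in the first place:

    # Illustrative memory requests/limits; tune the values to your workload.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app
    spec:
      containers:
      - name: app
        image: nginx:1.25
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
    EOF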

more...

Author: Johannes Lechner
Tags: AWS, EKS, Cloudwatch, Kubernetes, autoscaling, oom_killer, System-OOM
Categories: devops
Monitoring Workshop 2019, May 27–28, Munich