One of the most challenging questions in cloud environments is: how secure is my application when it is deployed in the public cloud?
It is no secret that security matters even more in a public cloud than it did in classic on-premises environments.
Still, don't be surprised that many applications, even in the public cloud, do not follow best-practice security patterns.
There are several reasons for this; for example, the time and cost required to reach a high security level can be substantial.
In fact, however, AWS and Kubernetes offer many options that let you improve your security level without too much effort.
I would like to share some of the possibilities you have when creating a secure AWS EKS cluster.
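One example of such a low-effort measure (a minimal sketch, assuming a hypothetical namespace `my-app`) is a default-deny NetworkPolicy: it blocks all ingress traffic to pods in that namespace unless another policy explicitly allows it.

```sh
# Apply a default-deny ingress NetworkPolicy to the (hypothetical) namespace "my-app".
# An empty podSelector matches every pod in the namespace; listing "Ingress" in
# policyTypes without any ingress rules denies all incoming traffic by default.
kubectl apply -n my-app -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```

Note that NetworkPolicies are only enforced if the cluster runs a network plugin that supports them; on EKS that typically means installing a policy engine such as Calico or enabling the VPC CNI's network policy support.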
| Author: | Johannes Lechner |
| --- | --- |
| Tags: | AWS, EKS, Cloudwatch, Kubernetes, security, guide, networkpolicy, consulting, prevent, cyberdetection, cyberprevention, cyberattack, München, Consulting |
| Categories: | devops |
Under the name “Managed Kubernetes for AWS”, or EKS for short, Amazon offers its own dedicated solution for running Kubernetes on its cloud platform. The way this is provided is quite interesting: while the Kubernetes master infrastructure is offered “as a service” (and billed separately), the Kubernetes worker nodes are simply EC2 instances for which Amazon provides a special setup procedure. This also opens up the potential to use well-known AWS features like autoscaling for Kubernetes workloads.
However, manually setting up this infrastructure is still quite a complex process with multiple steps. To be able to quickly get an EKS Kubernetes cluster up and running, and to deploy a software project on it, we created a small helper project that offers the creation of a “turnkey ready” EKS cluster that can be quickly pulled up and torn down again after usage.
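For comparison, the eksctl CLI offers the same "quick up, quick down" workflow. A minimal sketch (the cluster name, region, and node sizing here are example placeholders, not values from our helper project):

```sh
# Create a small EKS cluster; eksctl provisions the control plane and a
# worker node group via CloudFormation. Name/region/sizing are example values.
eksctl create cluster \
  --name demo-cluster \
  --region eu-central-1 \
  --nodes 2 \
  --node-type t3.medium

# Verify that the worker nodes registered with the cluster.
kubectl get nodes

# Tear everything down again after usage.
eksctl delete cluster --name demo-cluster --region eu-central-1
```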
| Author: | Oliver Weise |
| --- | --- |
| Tags: | Kubernetes, aws, eks, eksctl |
| Categories: | development |
AWS offers a great service called “Amazon Elastic Container Service for Kubernetes” (AWS EKS).
The setup guide can be found here: Official AWS EKS getting started guide
If you overload such a cluster, it can easily happen that your kubelet runs into “Out of Memory” (OOM) errors and stops working.
Once the kubelet is down, `kubectl get nodes` shows that the node is in state “NotReady”.
In addition, if you describe your node with `kubectl describe node $NODE`, you can see the status description “System OOM encountered”.
If you look at your pods with `kubectl get pods --all-namespaces`, you can see that pods are in state “Unknown” or “NodeLost”.
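Put together, the diagnosis from above looks roughly like this (the node name is a placeholder):

```sh
# List nodes; an OOM-stricken node shows up as "NotReady".
kubectl get nodes

# Inspect the affected node; look for "System OOM encountered" in its events.
kubectl describe node ip-10-0-1-23.eu-central-1.compute.internal | grep -i oom

# Pods that were running on the lost node end up in "Unknown" or "NodeLost".
kubectl get pods --all-namespaces
```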
Kubelet OOM errors should be avoided at all costs.
They stop all pods on that node, and in some cases it is quite complicated for Kubernetes to maintain high availability for the affected applications.
For example, for a StatefulSet with a single replica, Kubernetes cannot immediately move that pod to another node.
The reason is that Kubernetes does not know how long the node with all its pods will stay unavailable.
Therefore I would like to share some best practices to avoid OOM problems in your AWS EKS clusters; one of them is sketched below.
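One such best practice (a minimal sketch; the deployment name, image, and values here are only examples) is to set memory requests and limits on every workload, so the scheduler never packs more memory demand onto a node than it can hold, and an overconsuming container gets killed instead of starving the kubelet:

```sh
# Deploy a container with explicit memory requests/limits. The scheduler only
# places the pod on a node with 256Mi of allocatable memory left, and the
# container is killed (and restarted) if it exceeds 512Mi -- instead of
# taking down the whole node. Names and values are example placeholders.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:stable
          resources:
            requests:
              memory: "256Mi"
            limits:
              memory: "512Mi"
EOF
```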
| Author: | Johannes Lechner |
| --- | --- |
| Tags: | AWS, EKS, Cloudwatch, Kubernetes, autoscaling, oom_killer, System-OOM |
| Categories: | devops |