One of our customers is in the process of decommissioning their OpenShift v3.11 cluster. This cluster is still used for building customer-specific base images. Over time, quite a few elaborate pipeline builds (based on Jenkins) have been developed for that purpose.

The customer wanted me to migrate the existing pipeline builds on their v3.11 cluster to Tekton (a.k.a. OpenShift Pipelines) builds running on their new v4.9 cluster. This task turned out to be quite pesky. Tekton is a beast in many respects.

more...

Author:Markus Hansmair
Tags:CI/CD, Kubernetes, OpenShift
Categories:development, kubernetes, openshift
A look inside Camel K

Today, software often needs to run in cloud environments. Newly developed software, especially microservices, is built with cloud readiness in mind.
But business environments do not only contain microservices; they also contain integration software. This type of software is designed to connect external services to internal ones.

more...

Author:Andy Degenkolbe
Tags:Camel, Camel k, kubernetes, Openshift, knative
Categories:development

Some time ago, I started a project to create a Helm-based operator for an OpenShift application. I used the Operator SDK to create the Helm operator. The Operator SDK documentation describes the parameters pretty well and contains a simple tutorial. It does not, however, describe the complete development cycle. This article aims to cover everything from creating the operator to the point where you can upload it to OperatorHub.io. We start with a basic Helm chart that installs Nginx as a StatefulSet. You can find the source code in my GitHub repo. Before we can start creating an operator, we need to fulfill some prerequisites.
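
As a rough illustration of the starting point, this is how scaffolding a Helm-based operator with a recent Operator SDK generally looks; the domain, chart path, and image name below are placeholders rather than the values used in the article:

```bash
# Scaffold a Helm-based operator project (placeholder names)
mkdir nginx-operator && cd nginx-operator
operator-sdk init --plugins helm --domain example.com --helm-chart ../nginx-chart

# Build and push the operator image using the generated Makefile targets
make docker-build docker-push IMG=quay.io/example/nginx-operator:v0.0.1
```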

more...

Author:Olaf Meyer
Tags:openshift, kubernetes, operator, olm
Categories:development

Last summer, I watched Sébastien Blanc's Red Hat master course about Kafka. The Kafka setup in Kubernetes presented in the course looked pretty easy, and the Kafka client implementation for Java seemed easy as well. Furthermore, I had wanted to use Kafka for a long time, so I got the idea to extend my Istio example: each time a service is called, a message is sent to a topic. The service (implemented in Quarkus), as well as the Kafka cluster, should be in an Istio service mesh and secured with mTLS. I found descriptions by Joel Takvorian showing that Kafka works with Istio, so I knew (or at least hoped) that my plan should work.

This article will describe the overall architecture of the example and which obstacles I encountered during deployment.

more...

Author:Olaf Meyer
Tags:openshift, kubernetes, kafka, istio, kiali
Categories:development

During a discussion with a customer, we talked about which steps are necessary to add an application to a service mesh, which should be no big deal. Unfortunately, there is no simple guideline on how to do that for Red Hat OpenShift Service Mesh. Furthermore, I was not sure what the requests for the application would look like in Jaeger. To clarify these points, I created a small application, deployed it on OpenShift, and added it to a service mesh control plane. This is the documentation of the steps I took.
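
For Red Hat OpenShift Service Mesh, the two central steps usually boil down to adding the project to the ServiceMeshMemberRoll and opting the workload in to sidecar injection. A minimal sketch, assuming a control plane in istio-system and placeholder project/deployment names (my-project, my-app):

```bash
# Add the project to the ServiceMeshMemberRoll of the control plane
# (assumes spec.members already exists as a list; otherwise edit the SMMR directly)
oc -n istio-system patch smmr default --type=json \
  -p '[{"op": "add", "path": "/spec/members/-", "value": "my-project"}]'

# Opt the workload in to automatic sidecar injection
oc -n my-project patch deployment my-app --type=merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'
```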

more...

Author:Olaf Meyer
Tags:openshift, kubernetes, istio, jaeger, kiali
Categories:development

During this year's Red Hat Summit, I had the chance to get a glimpse of the latest version of Kiali. This version has some nice features, such as replaying the traffic flow of the application graph over a time period (graph replay). It also contains wizards to create destination rules and virtual services. The demo piqued my curiosity, and I wanted to get my hands on this Kiali version. One obstacle was that my Kiali runs in Red Hat OpenShift Service Mesh and is controlled by the Kiali operator; currently, it uses version 1.12. The version I wanted to try was the latest release (1.17), which Red Hat OpenShift Service Mesh does not support. This article describes what we need to do to replace the Kiali version of a Red Hat OpenShift Service Mesh installation with the latest version of Kiali.

more...

Author:Olaf Meyer
Tags:openshift, kubernetes, istio, kiali
Categories:development

Some time ago, I did a webinar about Red Hat Service Mesh, which is based on Istio. For this webinar, I prepared a demo application. Among other things, I wanted to show how to do authentication with JWT tokens in general and, more specifically, with Keycloak. This article describes how to configure Keycloak. In the second article, I show you what problems I encountered running the application in Istio and how I figured out what was wrong in my configuration. You can find that article here.

more...

Author:Olaf Meyer
Tags:openshift, kubernetes, istio, keycloak
Categories:development
Debugging Istio

In this article, I'm going to describe what we can do if we have configured our application to use Istio, but it is not working as intended. Originally, I wanted to give a detailed description of the problems I encountered while preparing my webinar and how I fixed them. However, that would have turned into a very long article, so I hope you don't mind that I shortened it and just describe which tools are available to debug an Istio configuration. In my previous article, I described how to configure Keycloak for my webinar. So without further ado, let's start.
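
To give a flavour of what such debugging looks like, here are a few generic commands; this assumes istioctl is available and pointed at the mesh, and the pod/namespace names are placeholders. The article itself may rely on different tooling such as Kiali or Jaeger:

```bash
# Is the Envoy configuration of all sidecars in sync with the control plane?
istioctl proxy-status

# Inspect the listeners of a single sidecar
istioctl proxy-config listeners my-pod-12345 -n my-namespace

# Check the logs of the Envoy sidecar itself
kubectl logs my-pod-12345 -n my-namespace -c istio-proxy
```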

more...

Author:Olaf Meyer
Tags:openshift, kubernetes, istio, keycloak
Categories:development

In this article, I will show you how to install Red Hat OpenShift Container Platform 4.3 (OCP) on VMware vSphere with static IP addresses, using the OpenShift installer in UPI mode and Terraform. In contrast to the official OpenShift 4.3 install documentation, we will not use DHCP for the nodes and will not set up the nodes manually; instead, we will use static IP addresses and Terraform to set up the virtual machines in our vCenter.
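
As a rough sketch of the overall flow (not the exact commands and paths from the article), a UPI installation with Terraform typically alternates between openshift-install and terraform like this:

```bash
# Generate the ignition configs consumed by the Terraform-provisioned VMs
openshift-install create ignition-configs --dir=./install-dir

# Create the vSphere virtual machines with their static IP configuration
terraform init
terraform apply

# Wait for the cluster to bootstrap and finish installing
openshift-install wait-for bootstrap-complete --dir=./install-dir
openshift-install wait-for install-complete --dir=./install-dir
```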

more...

Author:Zisis Lianas
Tags:openshift, redhat, k8s, kubernetes, terraform, vmware, vsphere, ocp
Categories:container, platform, openshift

One of the most challenging questions in cloud environments is how secure my application is when it is deployed in the public cloud.
It's no secret that security aspects are much more important in a public cloud than in classic environments.
But don't be surprised that many applications, even in the public cloud, don't follow best-practice security patterns.
There are several reasons for this; for example, the time and cost of achieving a high security level can be substantial.
In fact, however, AWS and Kubernetes offer many options that let you improve your security level without too much effort.
I would like to share some of the possibilities you have when creating a secure AWS EKS cluster.

more...

Under the name "Managed Kubernetes for AWS", or EKS for short, Amazon offers its own dedicated solution for running Kubernetes on its cloud platform. The way this is provided is quite interesting: while the Kubernetes master infrastructure is offered "as a service" (and also billed separately), the Kubernetes worker nodes are simply EC2 instances for which Amazon provides a special setup procedure. These now also offer the potential to use well-known AWS features like autoscaling for Kubernetes workloads.

However, manually setting up this infrastructure is still quite a complex process with multiple steps. To be able to quickly get an EKS Kubernetes cluster up and running, and also to deploy a software project on it, we created a small helper project for creating a "turnkey-ready" EKS cluster that can be quickly brought up and torn down again after use.
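
The helper project itself is linked in the full post; as a point of reference for what "turnkey ready" means, eksctl (see the tags) can bring up and tear down a throwaway EKS cluster with single commands. Cluster name, region, and node count below are placeholders:

```bash
# Create a small EKS cluster (control plane plus worker node group)
eksctl create cluster --name demo-cluster --region eu-central-1 --nodes 3

# ...and tear it down again after usage
eksctl delete cluster --name demo-cluster --region eu-central-1
```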

more...

Author:Oliver Weise
Tags:Kubernetes, aws, eks, eksctl
Categories:development

AWS offers a great service called “Amazon Elastic Container Service for Kubernetes” (AWS EKS).
The setup guide can be found here: Official AWS EKS getting started guide

If you overload such a cluster, it can easily happen that your Kubelet gets "Out of Memory" (OOM) errors and stops working.
Once the Kubelet is down, kubectl get nodes shows that the node is in state "NotReady".
In addition, if you describe the node with kubectl describe node $NODE, you can see the status description "System OOM encountered".
If you look at your pods with kubectl get pods --all-namespaces, you can see that pods are in state "Unknown" or "NodeLost".
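
Spelled out, the diagnostic commands mentioned above look like this (the node name is a placeholder):

```bash
kubectl get nodes                     # the affected node shows STATUS "NotReady"
kubectl describe node $NODE           # events/conditions contain "System OOM encountered"
kubectl get pods --all-namespaces     # pods on that node show "Unknown" or "NodeLost"
```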

Kubelet OOM errors should be avoided at all costs.
They cause all pods on that node to stop, and in some cases it is quite complicated for Kubernetes to maintain high availability for the affected applications.
For example, for StatefulSets with a single replica, Kubernetes cannot immediately move the pod to another node.
The reason is that Kubernetes does not know how long the node, with all its pods, will stay unavailable.

Therefore, I would like to share some best practices to avoid OOM problems in your AWS EKS clusters.

more...

Author:Johannes Lechner
Tags:AWS, EKS, Cloudwatch, Kubernetes, autoscaling, oom_killer, System-OOM
Categories:devops
oc patch unleashed

Recently, I stumbled upon a situation where I wanted to add a couple of values to an OpenShift deployment configuration. Previously, I had only modified or added a single attribute in a YAML file with oc patch, so I started to wonder whether it is possible to update multiple attributes with oc patch as well. To get right to the result: yes, it is possible. This article will show you which features oc patch, and likewise kubectl patch, really have beyond the simple modification of one attribute.
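
As a teaser, here is a minimal sketch of the kind of multi-attribute patch the article explores, using a JSON patch with several operations; the resource name and paths are made up for illustration:

```bash
oc patch dc/my-app --type=json -p '[
  {"op": "replace", "path": "/spec/replicas", "value": 3},
  {"op": "add", "path": "/spec/template/metadata/labels/tier", "value": "backend"}
]'
```

The same syntax works with kubectl patch.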

more...

Author:Olaf Meyer
Tags:openshift, kubernetes
Categories:development

Kubernetes and OpenShift have a lot in common. Actually, OpenShift is more or less Kubernetes with some additions. But what exactly is the difference?

It's not so easy to tell, as both products are moving targets. The delta changes with every release, be it of Kubernetes or OpenShift. I tried to find out and stumbled across a few blog posts here and there, but they were all based on older versions and thus not really up to date.

So I took the time to compare the most recent versions of Kubernetes and OpenShift, at the time of writing v1.13 of Kubernetes and v3.11 of OpenShift. I plan to update this article as new versions become available.

more...

Author:Markus Hansmair
Tags:openshift, kubernetes
Categories:devops

[Prometheus][prometheus] is a popular monitoring tool based on time series data. One of the strengths of Prometheus is its deep integration with [Kubernetes][kubernetes]. Kubernetes components provide Prometheus metrics out of the box, and Prometheus’s service discovery integrates well with dynamic deployments in Kubernetes.

There are multiple ways to set up Prometheus in a Kubernetes cluster. There’s an official [Prometheus Docker image][promdock], so you could use that and create the Kubernetes YAML files from scratch (which according to Joe Beda is [not totally crazy][crazy]). There is also a [helm chart][helmchart]. And there is the [Prometheus Operator][promop], which is built on top of the CoreOS [operator framework][operator].

This blog post shows how to get the [Prometheus Operator][promop] up and running in a Kubernetes cluster set up with [kubeadm][kubeadm]. We use [Ansible][ansible] to automate the deployment.
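
For orientation, and independent of the Ansible-based deployment described in the article, the Prometheus Operator project also publishes a bundle manifest that can be applied directly (assuming the upstream path is still current):

```bash
# Install the Prometheus Operator CRDs and deployment from the upstream bundle manifest
kubectl create -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/bundle.yaml
```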

more...

Author:Fabian Stäber
Tags:Kubernetes, kubeadm, Prometheus
Categories:kubernetes

[Kubeadm][kubeadm] is a basic toolkit that helps you bootstrap a simple [Kubernetes][kubernetes] cluster. It is intended as a [basis for higher-level deployment tools][kubeadm-scope], like [Ansible][ansible] playbooks. A typical Kubernetes cluster set up with kubeadm consists of a single Kubernetes master, which is the machine coordinating the cluster, and multiple Kubernetes nodes, which are the machines running the actual workload.

Dealing with node failure is simple: When a node fails, the master will detect the failure and re-schedule the workload to other nodes. To get back to the desired number of nodes, you can simply create a new node and add it to the cluster. In order to add a new node to an existing cluster, you first create a token on the master with kubeadm token create, then you use that token on the new node to join the cluster with kubeadm join.
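
In command form (the IP, token, and hash below are placeholders):

```bash
# On the master: create a bootstrap token and print the matching join command
kubeadm token create --print-join-command

# On the new node: run the printed command, which has this shape
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```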

Dealing with master failure is more complicated. The good news is: master failure is not as bad as it sounds. The cluster and all workloads will continue running with exactly the same configuration as before the failure. Applications running in the Kubernetes cluster will still be usable. However, it will not be possible to create new deployments or to recover from node failures without the master.

This post shows how to backup and restore a Kubernetes master in a kubeadm cluster.
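
One common approach, shown here as a sketch and not necessarily the exact procedure from this post, is to snapshot etcd and keep a copy of the master's PKI material:

```bash
# Snapshot etcd (depending on the setup, --endpoints and TLS cert flags may be required)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db

# Keep the certificates kubeadm generated for the control plane
cp -r /etc/kubernetes/pki /backup/

# Later, restore the snapshot into a fresh data directory on the new master
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db --data-dir /var/lib/etcd
```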

more...

Author:Fabian Stäber
Tags:Kubernetes, kubeadm, etcd
Categories:kubernetes

This blog post shows how to use [CIFS][1] (a.k.a. SMB, Samba, Windows Share) network filesystems as [Kubernetes volumes][2].

Docker containers running in Kubernetes have an ephemeral file system: Once a container is terminated, all files are gone. In order to store persistent data in Kubernetes, you need to mount a [Persistent Volume][3] into your container. Kubernetes has built-in support for network filesystems found in the most common cloud providers, like [Amazon’s EBS][4], [Microsoft’s Azure disk][5], etc. However, some cloud hosting services, like the [Hetzner cloud][6], provide network storage using the CIFS (SMB, Samba, Windows Share) protocol, which is not natively supported in Kubernetes.

Fortunately, Kubernetes provides [Flexvolume][7], which is a plugin mechanism enabling users to write their own drivers. There are a few flexvolume drivers for CIFS out there, but for different reasons none of them seemed to work for me. So I wrote my own, which can be found on [github.com/fstab/cifs][8].

This blog post shows how to use the fstab/cifs plugin for mounting CIFS volumes in Kubernetes.
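
Installation-wise, FlexVolume drivers are generally just executables dropped into the kubelet's plugin directory on every node, using a vendor~driver naming convention. The exact steps and mount options for fstab/cifs are documented in the linked repository, so treat the following as the general shape only:

```bash
# Default FlexVolume plugin directory; kubelet may be configured with a different path
PLUGIN_DIR=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/fstab~cifs

mkdir -p "$PLUGIN_DIR"
cp cifs "$PLUGIN_DIR/cifs"      # the driver script from the fstab/cifs repository
chmod +x "$PLUGIN_DIR/cifs"
```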

more...

Author:Fabian Stäber
Tags:Kubernetes
Categories:kubernetes

Docker Headless VNC Container 1.3.0 has been released today. The different Docker images contain a complete VNC-based, headless UI environment for test automation (as done with Sakuli) or simply for web browsing and temporary work in a throw-away UI container. The functionality is pretty close to that of a VM-based image, but the container starts in seconds instead of minutes. Each Docker image therefore has the following components installed:

more...

Author:Tobias Schneck
Tags:docker, openshift, kubernetes, continuous integration, testautomation
Categories:development

Both end-to-end testing and end-to-end monitoring follow the same paradigm: they look at an application from the end user's point of view. It must not matter which UI technology the application is written in or how it interacts with the end user. This is exactly where the open-source tool Sakuli comes in.

more...


Getting started with Kubernetes can be intimidating at first. Installing Kubernetes is not the easiest of tasks and can get quite frustrating.[^1] Luckily, there is an out-of-the-box distribution called Minikube, which makes toying around with Kubernetes a breeze.
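
Assuming minikube and kubectl are installed, getting a local cluster is essentially a one-liner:

```bash
minikube start      # boots a single-node Kubernetes cluster locally
kubectl get nodes   # verify the node is Ready
minikube stop       # shut the cluster down again when done
```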

more...

Author:Christian Guggenmos
Tags:kubernetes, minikube, docker
Categories:development, java, kubernetes
Labskaus

Kiel, 24 degrees, 50 people on board. In unexpectedly beautiful summer weather, the eleventh workshop of the monitoring community took place at the Kiel University of Applied Sciences on September 7 and 8. The ConSol monitoring team contributed eight talks to the success of the event. A short summary:

With the very first talk after the welcome, "E2E-Monitoring mit Sakuli", Simon Meggle provided a worthy and technically demanding start to the event. The option of running Sakuli in Docker containers, and thus parallelizing end-to-end tests almost arbitrarily, provided plenty of food for conversation.

So that everyone can try it at home, Simon then guided the participants through his tutorial "Sakuli-Tests im Docker-Container" in a live demo on the second day.

more...

Author:Matthias Gallinger
Tags:Sakuli, Thruk, OMD, Nagios, Icinga, coshsh, Ansible, Kubernetes
Categories:monitoring