ConSol is an IT consulting company. Enthusiasm for coding and hacking is what unites us. ConSol Labs is a technical playing field where we share our Open Source involvement. We use this site to blog about our areas of personal interest, both from daily business at work and from our spare-time projects.
Our world is full of processes: tracking goods deliveries, trading currencies, monitoring server resources, booking hotels, selling goods or services, and so on. Since these processes occur over time, they can be described by time series data.
Successful businesses take advantage of their data by analyzing it and then making predictions (e.g. predicting the volume of sales for the next month) and business decisions (e.g. if the volume of sales grows, additional goods need to be delivered to a warehouse).
There are a number of technologies for analysing time series data. This article introduces one of them: TimescaleDB, an open source solution for time series analysis built on the battle-tested PostgreSQL DBMS.
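As a small taste of what the article covers, here is a minimal sketch of the TimescaleDB workflow, run through psql. The database name and the `conditions` table schema are hypothetical examples; `create_hypertable` and `time_bucket` are standard TimescaleDB functions.

```sh
# Minimal TimescaleDB sketch, run via psql. The database "metrics" and
# the "conditions" table are hypothetical.
psql -d metrics <<'SQL'
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- An ordinary PostgreSQL table holding time series data
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    device_id   TEXT             NOT NULL,
    temperature DOUBLE PRECISION
);

-- Turn it into a hypertable, partitioned by time
SELECT create_hypertable('conditions', 'time');

-- A typical time series query: 15-minute averages per device
SELECT time_bucket('15 minutes', time) AS bucket,
       device_id,
       avg(temperature)
FROM   conditions
GROUP  BY bucket, device_id
ORDER  BY bucket;
SQL
```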
OMD Labs Edition 2.80 has been released today. The OMD Labs Edition is based on the standard OMD but adds some more useful addons like Grafana and Prometheus, and additional cores like Icinga 2 and Naemon. This release updates many of the shipped components and adds some more useful features.
The Prometheus monitoring tool follows a white-box monitoring approach: Applications actively provide metrics about their internal state, and the Prometheus server pulls these metrics from the applications using HTTP.
If you can modify the application’s source code, it is straightforward to instrument an application with Prometheus metrics: Add the Prometheus client library as a dependency, call that library to maintain the metrics, and use the library to expose the metrics via HTTP.
However, DevOps teams do not always have the option to modify the source code of the applications they are running.
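Whether the application itself is instrumented or a separate process exposes metrics on its behalf, Prometheus always scrapes the same kind of plain-text HTTP endpoint. A minimal sketch, assuming something exposing metrics on localhost:8080 (the metric shown is purely illustrative):

```sh
# Prometheus pulls metrics over HTTP; you can do the same by hand.
# Assumes an application or exporter listening on localhost:8080.
curl http://localhost:8080/metrics

# Typical response in the Prometheus text exposition format
# (illustrative values):
#   # HELP http_requests_total Total number of handled HTTP requests.
#   # TYPE http_requests_total counter
#   http_requests_total{method="get",status="200"} 1027
```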
As the number of microservice-based architectures continues to grow, development teams are facing new challenges when choosing adequate tools for the job. At the technical level, decisions need to take into account the features of both the cloud or container platform that will be used for deployment and the runtime that will be used by the software. The infrastructure needs to be aware of the health and metrics of the software, and the software itself must make the most of the infrastructure by tolerating failures and handling configuration changes. There are numerous solutions to the individual challenges, but it is the lack of an enterprise-level blueprint that paved the way for Eclipse MicroProfile.
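To make the "infrastructure needs to be aware of the health of the software" point concrete: MicroProfile standardizes, among other things, a health endpoint that the platform can probe. A minimal sketch, assuming a MicroProfile runtime on localhost:8080; the check name and response values are illustrative, following the MicroProfile Health 1.0 response shape:

```sh
# MicroProfile Health exposes a well-known endpoint the platform can
# poll, e.g. as an OpenShift/Kubernetes liveness probe.
# Assumes a MicroProfile application on localhost:8080.
curl http://localhost:8080/health

# Illustrative response (MicroProfile Health 1.0 shape):
#   {"outcome":"UP","checks":[{"name":"database","state":"UP","data":{}}]}
```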
Let’s continue this little series about how OpenShift environments may fall short in terms of developer experience.
Today we focus on the role that system tests play in an OpenShift infrastructure and what can go wrong here in terms of test data.
The new release also brings a bunch of enhancements and bug fixes; a detailed changelog is included in this post.
In some OpenShift environments for building and delivering software, we notice that the needs of developers, arguably a group of people who have a great deal of contact with the platform, are not met as well as they could be.
Especially when it comes to software testing, there is often much room for improvement. Container platforms can improve testing techniques a lot, but the provided infrastructure can also be a major blocker. Good testing is hard enough already. Everything that makes it even harder, by forcing your developers into workarounds or compromises on testing quality, results in longer round trips, more testing effort, and less valid testing. In short: wasted time.
So in this mini-series of blog posts we will look at some possible areas of improvement and give recommendations on how to fix each situation.
Today we look at the fact that some CI/CD setups for OpenShift may spoil the simplest kind of testing a developer uses: just running the software locally - in OpenShift.
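For reference, the workflow we mean is roughly the following sketch: spin up a throwaway local cluster and deploy the work-in-progress image into it. The commands assume the OpenShift 3.x oc client; the project and image names are hypothetical:

```sh
# "Just running the software locally - in OpenShift", sketched with the
# OpenShift 3.x client. Project and image names are hypothetical.
oc cluster up                        # start a local single-node cluster
oc login -u developer                # the default developer account
oc new-project scratch               # a throwaway project for this test
oc new-app mycompany/myservice:dev   # deploy the image under test
oc logs -f dc/myservice              # watch it come up
```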
This report is about the experience I’ve had with Arch Linux as the operating system for a developer’s workstation. You’ll be introduced to the concepts of Arch Linux, followed by an introduction to the main tasks such as package installation and OS maintenance. At the end, I’ll discuss why I think Arch Linux is a great OS for developers, and finish with a conclusion.
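To give a flavour of the package installation and OS maintenance part up front: day-to-day work on Arch revolves around pacman. A minimal sketch:

```sh
# Everyday Arch Linux maintenance with pacman.
sudo pacman -Syu       # sync package databases and upgrade the whole
                       # system (Arch is a rolling release)
sudo pacman -S git     # install a package
sudo pacman -Rs git    # remove a package plus its unneeded dependencies
pacman -Qdt            # list orphaned packages worth cleaning up
```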
Prometheus is a popular monitoring tool based on time series data. One of the strengths of Prometheus is its deep integration with Kubernetes. Kubernetes components provide Prometheus metrics out of the box, and Prometheus’s service discovery integrates well with dynamic deployments in Kubernetes.
There are multiple ways to set up Prometheus in a Kubernetes cluster. There’s an official Prometheus Docker image, so you could use that and create the Kubernetes YAML files from scratch (which, according to Joe Beda, is not totally crazy). There is also a Helm chart. And there is the Prometheus Operator, which is built on top of the CoreOS operator framework.
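For the second option, the Helm chart route is essentially a one-liner. A sketch assuming Helm 2 (current at the time of writing) with Tiller already installed; the release and namespace names are arbitrary:

```sh
# The Helm chart route, assuming Helm 2 with Tiller installed in the
# cluster. Release and namespace names are arbitrary.
helm install stable/prometheus --name prometheus --namespace monitoring

# The from-scratch route would instead start from the official image,
# docker.io/prom/prometheus, wrapped in hand-written Deployment,
# Service, and ConfigMap YAML.
```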
Kubeadm is a basic toolkit that helps you bootstrap a simple Kubernetes cluster. It is intended as a basis for higher-level deployment tools, like Ansible playbooks. A typical Kubernetes cluster set up with kubeadm consists of a single Kubernetes master, which is the machine coordinating the cluster, and multiple Kubernetes nodes, which are the machines running the actual workload.
Dealing with node failure is simple: When a node fails, the master will detect the failure and re-schedule the workload to other nodes. To get back to the desired number of nodes, you can simply create a new node and add it to the cluster. In order to add a new node to an existing cluster, you first create a token on the master with kubeadm token create, then you use that token on the new node to join the cluster with kubeadm join.
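Concretely (address, token, and hash below are placeholders; recent kubeadm versions will print the complete join command for you):

```sh
# On the master: create a join token. --print-join-command also prints
# the full command to run on the new node.
kubeadm token create --print-join-command

# On the new node: run the printed command, which looks like this:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```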
Dealing with master failure is more complicated. The good news is: master failure is not as bad as it sounds. The cluster and all workloads will continue running with exactly the same configuration as before the failure, and applications running in the Kubernetes cluster will still be usable. However, without the master it will not be possible to create new deployments or to recover from node failures.
This post shows how to back up and restore a Kubernetes master in a cluster set up with kubeadm.
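The core idea, as a sketch: on a kubeadm master with the default local etcd, the cluster state lives in etcd and the cluster identity lives in the certificates, so backing those up is the essential step. The paths assume a standard kubeadm installation; the restore procedure is covered in the post itself.

```sh
# Sketch of the essential backup step on a kubeadm master with the
# default local etcd. Paths assume a standard kubeadm installation:
#   /etc/kubernetes/pki  - the cluster CA and certificates
#   /var/lib/etcd        - the etcd data, i.e. the cluster state
sudo tar czf kube-master-backup.tar.gz /etc/kubernetes/pki /var/lib/etcd

# Restoring boils down to unpacking this archive onto a fresh machine
# and re-running kubeadm init, which reuses existing certificates found
# in /etc/kubernetes/pki.
```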