A couple of years have passed since we last looked into in-memory caches here at ConSol. In that time a bunch of things have happened:
- Probably the most significant thing that happened is that the oldest Java Specification Request, JSR 107, also known as JCache, finally reached ‘Release’ status. This JSR was a long time in the making, taking a full 13 years since the initial proposal back in 2001 (a short sketch of the standardized API follows after this list).
- GridGain's In-Memory Data Fabric became an open source project and is now available as an Apache Software Foundation project under the name Apache Ignite.
- The existing in-memory cache providers, like Hazelcast, have gained a whole host of new features, including support for distributed transactions, a new MapReduce API, and interceptors for executing business logic when cache entries change, to mention just a few (see the listener sketch below this list).
- Distributed caches have evolved into an independent branch of Big Data solutions: when it comes to fast read and write access, distributed caches are the solution of choice.
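To give an impression of what the now-final JSR 107 API looks like, here is a minimal sketch. It assumes that some JCache-compatible provider (e.g. Hazelcast, Infinispan or Ehcache) is on the classpath; the cache name and the String types are purely illustrative:

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheExample {
    public static void main(String[] args) {
        // Resolve whichever JSR 107 provider is found on the classpath
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager cacheManager = provider.getCacheManager();

        // Configure and create a simple String -> String cache ("greetings" is an arbitrary name)
        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>().setTypes(String.class, String.class);
        Cache<String, String> cache = cacheManager.createCache("greetings", config);

        // Map-like operations defined by the standard API
        cache.put("hello", "world");
        System.out.println(cache.get("hello"));

        cacheManager.close();
    }
}
```

Because the API is standardized, the same code runs against any compliant provider; only the provider jar on the classpath changes.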
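And to illustrate the kind of “run business logic when cache entries change” feature mentioned above, here is a minimal Hazelcast sketch. Note that this uses the simpler entry-listener mechanism rather than the interceptor API, assumes Hazelcast 3.5 or later, and the map name and values are made up:

```java
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class HazelcastListenerExample {
    public static void main(String[] args) {
        // Start an embedded Hazelcast member with the default configuration
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> orders = hz.getMap("orders");

        // React whenever a new entry is added to the distributed map
        orders.addEntryListener((EntryAddedListener<String, String>) event ->
                System.out.println("New order: " + event.getKey() + " -> " + event.getValue()),
                true); // 'true' means the listener also receives the entry value

        orders.put("order-1", "pending");

        Hazelcast.shutdownAll();
    }
}
```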
Dr. Fabian Stäber gave a talk at JayDay 2013 in which he introduced and compared the leading distributed cache implementations:
Based on a simple example application, the basic functionality is presented, and the specific strengths and weaknesses of the different cache architectures are highlighted and compared.
The results of this ‘shootout’ and an executive summary can be found at /java-caches, and the example application is available on GitHub.
Author: Roland Huß

Tags: bigdata, ehcache, hazelcast, infinispan, terracotta

Categories: cache, java, development