On the following page, we show how our example application
can be implemented using Ehcache in client-server mode.


Ehcache’s server side is called the
Terracotta Server Array (TSA).
As Terracotta and Ehcache were originally independent projects, Ehcache’s
client-server implementation differs significantly from the peer-to-peer
implementation shown in part 02.

Similar to a distributed hash table,
the TSA splits the data into stripes, with one server responsible
for each stripe. This design allows the TSA to support atomic
operations in a distributed environment.
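As an illustration only (this is not Terracotta's actual algorithm, and the class and method names are made up for this sketch), hash-based striping can be pictured as:

```java
// Illustrative sketch of hash-based key-to-stripe assignment.
// NOT Terracotta's actual implementation; names are invented.
public class StripeRouter {
    private final int numStripes;

    public StripeRouter(int numStripes) {
        this.numStripes = numStripes;
    }

    /** Maps a key to a stripe index in [0, numStripes). */
    public int stripeFor(String key) {
        // floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(key.hashCode(), numStripes);
    }

    public static void main(String[] args) {
        StripeRouter router = new StripeRouter(4);
        // The same key always lands on the same stripe, so that stripe's
        // server can serialize all operations on the key.
        System.out.println(router.stripeFor("alice") == router.stripeFor("alice")); // prints "true"
    }
}
```

Because every key is owned by exactly one stripe, that stripe's server can act as the single point of serialization for atomic operations on the key.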

However, unlike Hazelcast’s and Infinispan’s distributed hash tables,
Terracotta’s server infrastructure is static: it is not possible to remove
a stripe at run-time without losing the data stored on that stripe. Moreover,
it is not possible to increase the number of stripes dynamically
at run-time.

In order to prevent data loss, Terracotta supports dedicated stand-by instances
for each stripe, serving as a hot stand-by when the master fails. If there
is more than one stand-by for a master, an election algorithm is run to
determine which of the stand-bys will take over.


While Terracotta’s static infrastructure may seem like a drawback at first
sight, it gives the user full control over how the hardware infrastructure
is used.
For example, when there are dedicated hardware units to host the
cache servers, Terracotta allows configuring one stripe on each unit.
In such cases, a self-organizing peer-to-peer cluster shifting data
around might not be desired.

Terracotta provides two consistency models:

  • eventual consistency
  • strong consistency

When eventual consistency is used and two clients modify
the same key at the same time, data loss might occur.
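The lost-update scenario can be replayed deterministically with a plain `java.util.HashMap` standing in for the cache (a local sketch only; no Ehcache API is involved, and the key and values are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class LostUpdateDemo {
    /** Deterministically replays the lost-update interleaving on a plain map. */
    static String simulate() {
        // A plain map stands in for the cache; the last write wins,
        // as it can under eventual consistency.
        Map<String, String> cache = new HashMap<>();
        cache.put("user", "");

        // Both clients read the same old value ...
        String readByClientA = cache.get("user");
        String readByClientB = cache.get("user");

        // ... both append their own message and write back blindly.
        cache.put("user", readByClientA + "[a]");
        cache.put("user", readByClientB + "[b]");

        // Client A's write has been overwritten by client B's.
        return cache.get("user");
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints "[b]", not "[a][b]"
    }
}
```

Strong consistency avoids this by letting the second writer detect the conflict (its compare-and-swap fails) instead of silently overwriting.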

In its free version, Terracotta supports only one stripe, i.e. one
master/stand-by pair serving the entire cache. In order to distribute the data
among multiple stripes, Terracotta’s full version needs to be acquired.

For the demo application below, the free production version of Terracotta
Big Memory Max is used, which requires a free license from Terracotta
to be run.


Initialization is the same as with a local cache in the simple Ehcache example:

public void contextInitialized(ServletContextEvent servletContextEvent) {
    Cache cache = CacheManager.getInstance().getCache("events");
    ServletContext context = servletContextEvent.getServletContext();
    context.setAttribute(CACHE, cache);
}

The configuration file for the CacheManager is extended: a Terracotta
server array is added, and strong consistency is configured:

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <terracottaConfig url="localhost:9510,localhost:9610"/>
    <cache name="events" maxEntriesLocalHeap="1000">
        <terracotta consistency="strong"/>
    </cache>
</ehcache>

In the <terracotta> tag, the consistency model is configured. When using
strong consistency, as in the example above, atomicity and consistency
of compare-and-swap (CAS) operations are guaranteed. When eventual consistency
is used instead, data loss might occur when two clients modify the same key at
the same time. Unlike in the peer-to-peer set-up,
Terracotta in client-server mode does not throw an exception when CAS
operations are used in an eventually consistent environment.

The REST interface loads the Cache from the ServletContext, as in the
local ehcache example:

// ...
private ServletContext context;
private Cache cache;

public void init() {
    cache = (Cache) context.getAttribute(CACHE);
}


public void contextDestroyed(ServletContextEvent servletContextEvent) {
    CacheManager.getInstance().shutdown();
}


In client-server mode, Ehcache supports the putIfAbsent() and
replace() operations just as in local mode.

public void appendEvent(@PathParam("user") String user, String msg) {
    boolean success;
    cache.putIfAbsent(new Element(user, UserEventList.emptyList()));
    do {
        Element oldElement = cache.get(user);
        UserEventList oldList = (UserEventList) oldElement.getObjectValue();
        UserEventList newList = UserEventList.append(oldList, msg);
        Element newElement = new Element(user, newList);
        success = cache.replace(oldElement, newElement);
    } while ( ! success );
}
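The same optimistic retry discipline can be exercised locally, with `java.util.concurrent.atomic.AtomicReference.compareAndSet` standing in for `cache.replace` (a self-contained sketch with made-up names, not Ehcache API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CasRetryDemo {
    /** Appends msg via an optimistic compare-and-swap retry loop. */
    static void append(AtomicReference<List<String>> ref, String msg) {
        List<String> oldList, newList;
        do {
            // Re-read on every attempt, like cache.get() in the loop above.
            oldList = ref.get();
            newList = new ArrayList<>(oldList);
            newList.add(msg);
            // compareAndSet plays the role of cache.replace(oldElement, newElement):
            // it fails if another writer got in between, and we retry.
        } while (!ref.compareAndSet(oldList, newList));
    }

    /** Runs two concurrent writers and returns the final list size. */
    static int concurrentAppends(int perThread) {
        AtomicReference<List<String>> ref = new AtomicReference<>(new ArrayList<>());
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) append(ref, "msg");
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return ref.get().size();
    }

    public static void main(String[] args) {
        // No appends are lost: both writers together produce 2000 entries.
        System.out.println(concurrentAppends(1000)); // prints "2000"
    }
}
```

As with cache.replace(), compareAndSet fails whenever another writer changed the value in between, so each loser re-reads and retries until its update goes through; no append is ever lost.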


public List<String> searchByUser(@PathParam("user") String user) {
    Element result = cache.get(user);
    if ( result == null ) {
        return new ArrayList<>();
    }
    return ((UserEventList) result.getObjectValue()).getMessages();
}




Terracotta is available on the following maven repository:


How to Run

How to run the server:

  • Download and unpack bigmemory-max-4.0.0.tar.gz (requires free registration)
  • Place the e-mailed license file terracotta-license.key into the
    bigmemory-max-4.0.0 directory.
  • Create config tc-config.xml conforming to terracotta-8.xsd. The XML
    Schema terracotta-8.xsd for server configuration is hidden inside
    of a JAR file in the Terracotta distribution. Extract it from
    terracotta-toolkit-runtime-ee-4.0.0.jar. Example configuration:
<?xml version="1.0" encoding="UTF-8" ?>
<tc:tc-config xmlns:tc="http://www.terracotta.org/config"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.terracotta.org/config terracotta-8.xsd">
    <servers>
        <mirror-group group-name="my-group">
            <server host="localhost" name="server1" bind=""/>
            <server host="localhost" name="server2" bind=""/>
        </mirror-group>
    </servers>
</tc:tc-config>

  • Start server master with ./server/bin/start-tc-server.sh -f tc-config.xml -n server1.
  • Start server stand-by with ./server/bin/start-tc-server.sh -f tc-config.xml -n server2.

How to run the clients:

  • Check-out the example application from our GitHub repository.
  • Put the terracotta-license.key into the src/main/resources
    folder of project part04.ehcache.
  • Run instance 1 with maven:
mvn tomcat7:run-war -pl part04.ehcache -am verify -Dmaven.tomcat.port=8080
  • Run instance 2 with maven:
mvn tomcat7:run-war -pl part04.ehcache -am verify -Dmaven.tomcat.port=9090

The Web interfaces are then available via the two Tomcat ports configured
above (8080 and 9090).

The following pages show how the example application can be implemented with
Hazelcast and Infinispan in client-server mode: