
This is the first page of part 3 of our series on software caches
in Java application servers.

In part 2, we showed how the ConcurrentMap in our reference
implementation can be replaced by a Cache in order to limit memory usage.

Part 2 focused on single local instances. In parts 3 and 4, we show how
the implementations from part 2 can be modified to support shared caches
in a clustered environment.

The following introduces peer-to-peer clustering.

Peer-to-Peer for Distributed Caches

From a maintenance point of view, peer-to-peer is the simplest
architecture for distributed caches. Peer-to-peer clusters
are self-organizing: nodes may join and leave the cluster at any
time, and the peer-to-peer algorithms take care of integrating them
into and removing them from the network.

The different Cache implementations offer two fundamentally different
peer-to-peer architectures:

  • full replication
  • distributed hash tables

The following sections introduce these architectures.

Full Peer-to-Peer Replication

As the name indicates, full peer-to-peer replication means that each
peer maintains a full copy of the cache. All updates are broadcast
to all other peers.

This is very useful for WORM applications (write-once-read-many),
as it makes resources available for reading without any network
overhead.

However, fully replicated peer-to-peer clusters do not scale well
for write operations: as each write triggers an update on every other
peer, scalability is limited by the number of peers and the frequency
of write operations.
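The read/write trade-off above can be sketched in a few lines of Java. This is a minimal illustration only, with hypothetical class and method names: real implementations broadcast updates over the network rather than via direct method calls.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch of full peer-to-peer replication.
// All names here are hypothetical, not taken from any cache library.
class ReplicatedCache<K, V> {

    private final Map<K, V> localCopy = new ConcurrentHashMap<>();
    private final List<ReplicatedCache<K, V>> peers = new CopyOnWriteArrayList<>();

    // Peers register with each other when joining the cluster.
    void join(ReplicatedCache<K, V> peer) {
        peers.add(peer);
        peer.peers.add(this);
    }

    // Reads are served entirely from local memory: no network overhead.
    V get(K key) {
        return localCopy.get(key);
    }

    // Each write is applied locally and then pushed to every other peer,
    // which is why write cost grows with the number of peers.
    void put(K key, V value) {
        localCopy.put(key, value);
        for (ReplicatedCache<K, V> peer : peers) {
            peer.localCopy.put(key, value);
        }
    }
}
```

The loop in put makes the scalability limit visible: one logical write becomes N physical updates in a cluster of N peers, while get never leaves the local node.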

Distributed Hash Tables

Distributed hash tables provide an infrastructure where each peer is
responsible for a certain range of keys.

That means that whenever a key/value pair is read or written, there is one
peer (or a defined set of peers) holding the value for that key.

In order to provide resilience to node failure, there are usually
back-up copies stored on other peers, but there is no full replication
of all data among all peers.

Distributed hash tables provide good scalability for write operations, as
only a limited number of peers need to be updated. Moreover, they provide
an infrastructure enabling the implementation of atomicity and consistency
guarantees.

However, read operations in distributed hash tables incur network overhead,
as the responsible peer needs to be involved in each request. This makes read
operations slower than in fully replicated systems, where reads can be
served from local RAM.
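The key-to-peer mapping can be sketched as follows. This is a simplified illustration with hypothetical names: the key's hash selects the primary owner, and the next peer holds the back-up copy. Real DHT implementations use consistent hashing so that only a small key range moves when a peer joins or leaves.

```java
import java.util.List;

// Illustrative sketch of key ownership in a distributed hash table.
class KeyPartitioner {

    private final List<String> peers;

    KeyPartitioner(List<String> peers) {
        this.peers = peers;
    }

    // The peer responsible for reads and writes of this key.
    String primaryOwner(Object key) {
        int index = Math.floorMod(key.hashCode(), peers.size());
        return peers.get(index);
    }

    // The next peer in the ring holds a back-up copy,
    // providing resilience to failure of the primary owner.
    String backupOwner(Object key) {
        int index = Math.floorMod(key.hashCode() + 1, peers.size());
        return peers.get(index);
    }
}
```

A write touches only the primary owner and its back-up, regardless of cluster size; that is the reason for the good write scalability described above.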

Implementations

The following table shows which architecture is implemented by the respective
Cache in peer-to-peer mode:

               Full Replication   Distributed Hash Table
  Ehcache             x
  Hazelcast                                 x
  Infinispan          x                     x

Apart from that, Infinispan also has an “invalidation” mode, but this is
targeted at near caches in a multi-tier cache architecture and is not
covered here.

How Do the Peers Find Each Other?

The network protocol implementations used by the peers to find each other
are as follows:

  • JGroups: Infinispan, Ehcache
  • UDP Multicast: included in JGroups, standalone in Hazelcast
  • RMI: Ehcache
  • JMS: Ehcache, Infinispan Near Cache Invalidation
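As an example of one of these mechanisms, UDP multicast discovery in Hazelcast can be enabled programmatically. This is a configuration sketch, assuming Hazelcast's programmatic Config API; property defaults vary between Hazelcast versions, so treat it as illustrative rather than a complete setup.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MulticastDiscoveryExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Enable UDP multicast so peers on the same network segment
        // discover each other without a fixed member list.
        config.getNetworkConfig().getJoin()
              .getMulticastConfig().setEnabled(true);
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}
```

Starting a second instance with the same configuration on the same network segment lets the two nodes find each other and form a cluster automatically.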

Next

The next pages of part 3 show how our example application can be implemented
using peer-to-peer clustering:
