The local Hazelcast example shown in part 2 implemented a local cache for a single Tomcat instance.

This section shows how the application can be extended to provide a
peer-to-peer-based shared cache among multiple Tomcat instances.

As it turns out, the Hazelcast application can be run in peer-to-peer mode without any configuration or code changes.

Hazelcast automatically uses UDP multicast to discover peers and maintains a
Distributed Hash Table across them. The peer-to-peer architecture supports
atomic operations, which means that the atomic ConcurrentMap operations work
as well in the distributed scenario as in the local one.
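
Multicast discovery is enabled out of the box, which is why the example needs
no explicit network configuration. For illustration, the corresponding join
section in hazelcast.xml would look roughly like the following sketch; the
group and port shown are, to the best of our knowledge, the Hazelcast
defaults:

<network>
    <join>
        <!-- Enabled by default; shown here only for illustration. -->
        <multicast enabled="true">
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
        </multicast>
    </join>
</network>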

As new members join the network, the cluster redistributes the keys so that
eventually every member owns almost the same number of partitions, and almost
the same number of entries. Eventually, every member also knows the owner of
each partition (and thus of each key).
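
Each member can inspect this ownership information at runtime. The following
standalone sketch is not part of the example application; it assumes
Hazelcast 2.x's PartitionService API, and the key "alice" is made up for
this example:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;
import com.hazelcast.partition.Partition;

public class PartitionOwnerDemo {
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(new Config());
        // Any member can resolve the partition of any key, and thus the key's owner.
        Partition partition = instance.getPartitionService().getPartition("alice");
        // Note: the owner may be null for a brief moment while partitions are
        // still being assigned after startup.
        Member owner = partition.getOwner();
        System.out.println("Key 'alice' -> partition " + partition.getPartitionId()
                + ", owned by " + owner);
        Hazelcast.shutdownAll();
    }
}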

Initialization

Initialization is the same as in the local cache example:

@Override
public void contextInitialized(ServletContextEvent servletContextEvent) {
    // Create a Hazelcast member; it discovers its peers via multicast.
    Config cfg = new Config();
    HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
    // The distributed map is handed to the REST layer as a plain ConcurrentMap.
    ConcurrentMap<String, UserEventList> map = instance.getMap("events");
    ServletContext context = servletContextEvent.getServletContext();
    context.setAttribute(CACHE, map);
}

Also, the configuration in hazelcast.xml remains the same:

<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-2.1.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <map name="events">
        <eviction-policy>LRU</eviction-policy>
        <max-size policy="cluster_wide_map_size">1000</max-size>
        <eviction-percentage>25</eviction-percentage>
    </map>
</hazelcast>

As the Hazelcast cache implements ConcurrentMap, the REST interface
initialization does not need to be changed:

// ...
@Context
private ServletContext context;
private ConcurrentMap<String, UserEventList> map;

@PostConstruct
@SuppressWarnings("unchecked")
public void init() {
    map = (ConcurrentMap) context.getAttribute(CACHE);
}

Shutdown

On context shutdown, all Hazelcast instances running in this JVM are stopped, and the member leaves the cluster.

@Override
public void contextDestroyed(ServletContextEvent servletContextEvent) {
    // Shut down every Hazelcast instance running in this JVM.
    Hazelcast.shutdownAll();
}

Write

Hazelcast’s distributed ConcurrentMap provides atomic and consistent updates
in a peer-to-peer environment. The write therefore keeps the optimistic
check-and-set loop from the local example, which reads the current list,
appends the new message, and retries if another node has modified the entry
in the meantime:

@POST
@Path("{user}")
@Consumes(MediaType.APPLICATION_JSON)
public void appendEvent(@PathParam("user") String user, String msg) {
    boolean success;
    // Atomically create an empty list for new users.
    map.putIfAbsent(user, UserEventList.emptyList());
    do {
        UserEventList oldMsgList = map.get(user);
        UserEventList newMsgList = UserEventList.append(oldMsgList, msg);
        // Check-and-set: fails if another node replaced the list in the meantime.
        success = map.replace(user, oldMsgList, newMsgList);
    }
    while ( ! success );
}

Read

@GET
@Path("{user}")
@Produces(MediaType.APPLICATION_JSON)
public List<String> searchByUser(@PathParam("user") String user) {
    UserEventList result = map.get(user);
    // Return an empty list for unknown users instead of null.
    if ( result == null ) {
        return new ArrayList<>();
    }
    return result.getMessages();
}

Dependencies

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>2.5</version>
</dependency>

How To Run

Our example code is hosted on GitHub. The project can be run with Maven:

Instance 1:

mvn tomcat7:run-war -pl part03.hazelcast -am verify -Dmaven.tomcat.port=8080

Instance 2:

mvn tomcat7:run-war -pl part03.hazelcast -am verify -Dmaven.tomcat.port=9090

The Web interface is then accessible via http://localhost:8080 and http://localhost:9090.

Advanced Usages

In Hazelcast’s Distributed Hash Table, each key has a responsible peer: the
owner of the key’s partition. Hazelcast uses this infrastructure to implement
functionality that goes beyond simple caching:

  • Hazelcast provides a publish/subscribe messaging infrastructure (see the
    sketch after this list).
  • Hazelcast provides a distributed computing infrastructure, where each
    peer processes the values it is responsible for.
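
As an illustration of the publish/subscribe point, here is a minimal
standalone sketch of Hazelcast's topic API; the topic name notifications and
the demo class are made up for this example:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class TopicDemo {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(new Config());
        ITopic<String> topic = instance.getTopic("notifications");
        // Every member that registered a listener receives each published message.
        topic.addMessageListener(new MessageListener<String>() {
            public void onMessage(Message<String> message) {
                System.out.println("received: " + message.getMessageObject());
            }
        });
        topic.publish("hello, cluster");
        Thread.sleep(1000); // delivery is asynchronous; give the listener a moment
        Hazelcast.shutdownAll();
    }
}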

See the Hazelcast documentation
for more info.

Next

The final page of part 3 of our series will introduce peer-to-peer clustering with Infinispan.