«Top»

On the following page, we show how our
example application
can be implemented using Infinispan in
client-server mode.

Overview

HotRod
is a binary protocol for exposing an Infinispan cluster as a
caching server to multiple platforms. It supports load balancing and
smart routing. RemoteCacheLoader is a cache loader that knows how to read and
store data in a remote Infinispan cluster. For that, it makes use of the Java
HotRod client.


The server nodes use the same implementation as the peers in the
peer-to-peer deployment,
and provide similar configuration options.

Initialization

Infinispan provides a
RemoteCacheManager
to handle RemoteCache
instances. The example code connects to two hard-coded server instances
on ports 11222 and 11223:

@Override
public void contextInitialized(ServletContextEvent servletContextEvent) {
    RemoteCacheManager cacheManager = new RemoteCacheManager("localhost:11222;localhost:11223");
    RemoteCache<String, UserEventList> cache = cacheManager.getCache();
    ServletContext context = servletContextEvent.getServletContext();
    context.setAttribute(CACHE_MANAGER, cacheManager);
    context.setAttribute(CACHE, cache);
}

Although RemoteCache implements ConcurrentMap, we explicitly declare the field
as RemoteCache in the REST interface, because RemoteCache provides some
additional API for writing data.

// ...
@Context
private ServletContext context;
private RemoteCache<String, UserEventList> map; // <-- RemoteCache, not ConcurrentMap!

@PostConstruct
@SuppressWarnings("unchecked")
public void init() {
    map = (RemoteCache) context.getAttribute("cache");
}

The server-side configuration conforms to the same XML schema as the
configuration in the peer-to-peer example.
For the client-server example, we use a distributed set-up with synchronous
consistency:

<infinispan
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:infinispan:config:5.2 infinispan-config-5.2.xsd"
        xmlns="urn:infinispan:config:5.2">
    <global>
        <transport clusterName="infinispan-cluster" nodeName="Node-A"/>
    </global>
    <default>
        <clustering mode="distribution">
            <sync/>
        </clustering>
    </default>
</infinispan>

If we had used an <async> server configuration, we could have run into
the same consistency issues as with the <async>
peer-to-peer set-up.
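For reference, this is what the asynchronous variant of the server configuration would look like; only the <clustering> element inside <default> changes compared to the configuration above:

```xml
<default>
    <clustering mode="distribution">
        <async/>
    </clustering>
</default>
```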

Shutdown

@Override
public void contextDestroyed(ServletContextEvent servletContextEvent) {
    ServletContext context = servletContextEvent.getServletContext();
    RemoteCacheManager cacheManager = (RemoteCacheManager) context.getAttribute(CACHE_MANAGER);
    cacheManager.stop();
}

Write

As RemoteCache implements ConcurrentMap, the original REST implementation
from the local example could be used. However, it turns out that
map.replace() throws an UnsupportedOperationException:

Apr 25, 2013 5:24:49 PM com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException
SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
java.lang.UnsupportedOperationException
  at org.infinispan.client.hotrod.impl.RemoteCacheSupport.replace(RemoteCacheSupport.java:88)
  at org.infinispan.CacheSupport.replace(CacheSupport.java:148)
  at de.consol.research.cache.part03.infinispan.RestInterface.appendEvent(RestInterface.java:36)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)

Therefore, the write code must be modified to use
replaceWithVersion(K, V, long)
instead:

@POST
@Path("{user}")
@Consumes(MediaType.APPLICATION_JSON)
public void appendEvent(@PathParam("user") String user, String msg) {
    boolean success;
    map.putIfAbsent(user, UserEventList.emptyList());
    do {
        UserEventList oldMsgList = map.get(user);
        UserEventList newMsgList = UserEventList.append(oldMsgList, msg);
//        success = map.replace(user, oldMsgList, newMsgList);
        VersionedValue<UserEventList> valueBinary = map.getVersioned(user);
        success = map.replaceWithVersion(user, newMsgList, valueBinary.getVersion());
    }
    while ( ! success );
}
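The loop implements optimistic concurrency: if another client updates the entry between getVersioned() and replaceWithVersion(), the version no longer matches, replaceWithVersion() returns false, and the loop retries with the fresh value. The same retry pattern, sketched here against a plain ConcurrentHashMap using the compare-and-swap replace() that the local example relied on (class and names are illustrative, not part of the example code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class OptimisticAppend {

    // Append msg to the list stored under user, retrying on concurrent updates.
    static void appendEvent(ConcurrentMap<String, List<String>> map, String user, String msg) {
        map.putIfAbsent(user, new ArrayList<String>());
        boolean success;
        do {
            List<String> oldList = map.get(user);
            List<String> newList = new ArrayList<>(oldList);
            newList.add(msg);
            // replace() succeeds only if the entry still holds oldList --
            // the role replaceWithVersion() plays for RemoteCache.
            success = map.replace(user, oldList, newList);
        } while (!success);
    }

    public static void main(String[] args) {
        ConcurrentMap<String, List<String>> map = new ConcurrentHashMap<>();
        appendEvent(map, "alice", "login");
        appendEvent(map, "alice", "logout");
        System.out.println(map.get("alice")); // prints [login, logout]
    }
}
```

Both variants lose nothing on contention: a failed compare-and-swap simply re-reads the current list and re-applies the append.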

Read

@GET
@Path("{user}")
@Produces(MediaType.APPLICATION_JSON)
public List<String> searchByUser(@PathParam("user") String user) {
    UserEventList result = map.get(user);
    if ( result == null ) {
        return new ArrayList<>();
    }
    return result.getMessages();
}

Dependencies

In addition to Infinispan core, the HotRod client library is needed
to run the example:

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>5.2.5.Final</version>
</dependency>

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-client-hotrod</artifactId>
    <version>5.2.5.Final</version>
</dependency>

How to run

Server

  • Download infinispan-5.2.1.Final-all.zip and unpack it twice, into
    infinispan-5.2.1.Final-all.A and infinispan-5.2.1.Final-all.B.
  • The config files for servers A and B are in the src/main/server-config
    directory in part04.infinispan in our
    example code on GitHub.
  • In infinispan-5.2.1.Final-all.A, run the following command:
./bin/startServer.sh -r hotrod -p 11222 \
    -c .../part04.infinispan/src/main/server-config/server-config-A.xml
  • In infinispan-5.2.1.Final-all.B, run the following command:
./bin/startServer.sh -r hotrod -p 11223 \
    -c .../part04.infinispan/src/main/server-config/server-config-B.xml

Client

The clients are run as follows:

  • Instance 1
mvn tomcat7:run-war -pl part04.infinispan -am verify -Dmaven.tomcat.port=8080
  • Instance 2
mvn tomcat7:run-war -pl part04.infinispan -am verify -Dmaven.tomcat.port=9090

The Web interfaces are then available via
http://localhost:8080
and
http://localhost:9090.

Advanced Topics

Infinispan makes it possible to extend the classical client-server
architecture into a multi-tier cache, where the servers form a cluster in
dist mode, while the clients form a cluster in invalidation mode for caching
the most frequently used entries locally.
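On the client side, such a multi-tier set-up combines an embedded cache in invalidation mode with a remote cache loader (such as the RemoteCacheLoader mentioned in the overview) pointing at the server cluster. A minimal sketch of the clustering part, assuming the same XML schema as the server configuration above (the loader configuration is omitted, as its properties depend on the Infinispan version):

```xml
<default>
    <clustering mode="invalidation">
        <sync/>
    </clustering>
</default>
```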

Next

This was the last page of part 4 of our series. Part 5 concludes the
series with pages on second-level database caches and other advanced features.