Devoxx 2011 - Day 2

On the second day the rest of the ConSol posse joined us: Christoph, Christian and Torsten arrived for the fireside chat. A session format which lacked a bit of tension, but there were some rare highlights like the following joke: Q: “What’s the difference between Ant and Maven?”, A: “The author of Ant apologized”. The other sessions covered the ServiceMix combo, Groovy, Spring in the cloud, Infinispan, JDK 7 and Jenkins for Continuous Delivery.

# Introduction to Apache ActiveMQ, ServiceMix, Camel and CXF by Charles Moulliard and Gert Vanthienen (Jan)

This talk, given by Charles and Gert from FuseSource (a company providing enterprise subscriptions for the tool stack shown), introduced a bunch of Apache integration technologies:

  • ServiceMix (integration container)
  • ActiveMQ (JMS/middleware broker)
  • Camel (integration toolbox)
  • CXF (WS/REST stack)

Each of these projects was introduced using slides and a (very) simple live demo. OK so far, but nothing revolutionary for someone who already has basic knowledge about the scope of the projects. It was interesting to hear the reasons why ServiceMix dropped the idea of being a 100% JBI-compliant container:

  • Everything must be in XML format in JBI
  • Bad encapsulation of routing
  • Little support from major players (IBM/Oracle etc.); JBI never got wide adoption (version 2.0 was never released)

Today, ServiceMix is independent from JBI and based on an OSGi kernel. In 2010, the ServiceMix OSGi runtime became its own Apache top-level project called Karaf. ServiceMix has grown from being “just” an ESB into a full-fledged container where one can deploy web applications and/or service logic components next to the integration logic. OSGi/Karaf glues all these components together.
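
To give a feel for what such integration logic looks like, here is a minimal Camel route in the Java DSL - a sketch of my own, not from the talk. The endpoint names are made up, and it assumes camel-core, the ActiveMQ Camel component and a broker listening on tcp://localhost:61616:

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoute extends RouteBuilder {

    @Override
    public void configure() {
        // pick up order files from an inbox directory and push them onto a JMS queue
        from("file:target/inbox")
            .log("received order ${file:name}")
            .to("activemq:queue:orders");
    }

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // register the ActiveMQ component against a (hypothetical) local broker
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        context.addRoutes(new OrderRoute());
        context.start();        // routes run in background threads
        Thread.sleep(10000);    // let the route do some work, then shut down
        context.stop();
    }
}
```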

High availability (my main interest in this talk) was covered, too. Basically, ActiveMQ is used to achieve this, and one can choose between several modes:

  • Static Master/Slave replication
  • Shared storage (broker instances compete on locks on shared storage)
  • Network of brokers

It was a little bit disappointing that this important topic was handled so quickly (two slides).
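
Regardless of which mode is chosen, from a JMS client’s point of view high availability usually boils down to ActiveMQ’s failover transport, which reconnects transparently to whichever broker is currently available. A small sketch of my own (the broker host names are made up):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // the failover: transport tries the listed brokers and retries transparently
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}
```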

The second part of the talk covered the Fuse IDE, a (commercial) bunch of Eclipse plugins providing tool support for integration projects with Camel, ActiveMQ and ServiceMix.

It provides

  • a graphical editor for Camel XML routes (supporting roundtrip engineering XML<–>graphical view)
  • graphical runtime view and performance statistics on routes
  • tracing of messages
  • drag & drop (test) messages from your project view directly into routes/endpoints in runtime view (quite cool!)

The third and last part first covered the Karaf command line, which can be used to administer bundles and inspect the running Karaf (ServiceMix) instance. On top of Karaf, FuseSource developed Fabric, a tool intended to simplify dependency handling and administration of one or more ServiceMix instances. Furthermore, Fabric provides virtualization/discovery/load balancing of services and endpoints together with a centralized repository to ease administration.

Finally, a project was set up from scratch using Karaf and Fabric in a live demo. The capabilities of Fabric looked nice on the slides, but administration and setup seemed quite complicated, with tons of command line calls. Not suitable stuff for a live demo IMO, and it didn’t look very user friendly.

To be honest, I was expecting a little bit more from this session - basic stuff I expected to see live (cluster configuration!), or at least explained in detail, was not shown, while wrestling with the Fabric shell took up about 30 minutes of the talk. There’s some room for improvement here.

# “Spring into the Cloud” by Josh Long and Chris Richardson (Alvin)

This three-hour talk explained nicely what the cloud means for Spring applications. According to Richardson there are lots of new inventions in the areas of hardware and software, but the one thing which still hasn’t changed much is delivery and rollout. There are also issues like the explosion in the amount of data which we have to handle and the limitations of RDBMSs in handling this amount of information. There was a nice introduction to what public and private clouds are.

Cloud Foundry is an open Platform as a Service which integrates seamlessly with Spring and many NoSQL databases. There is a version called Micro Cloud Foundry which can be downloaded and installed on developer machines.

The Cloud Foundry services API can be called through the VMC CLI or through the SpringSource Tool Suite (STS), and deploying and testing your applications looks fairly simple.

The Cloud Foundry architecture is broken down into five main components plus a message bus:

  • Cloud Controller
  • Health Manager
  • Routers
  • DEAs (Droplet Execution Agents)
  • A set of services for MongoDB, Redis, RabbitMQ

The cloud namespace can be used to ease the definition of data sources and connections to other resources on Cloud Foundry. The talk also covered the different Spring template classes for Redis, MongoDB, RabbitMQ and AMQP (Advanced Message Queuing Protocol). Another good thing is Spring profiles, introduced in the new 3.1 version, which are used to deploy the application in different environments like “local”, “cloud” etc.
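
Since Spring profiles were only touched on briefly, here is a minimal sketch of how they can switch a DataSource between a “local” and a “cloud” environment. This is my own illustration, not code from the talk; the URLs and credentials are invented, and on Cloud Foundry the cloud namespace (or the bound services) would normally supply the cloud-side values.

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataConfig {

    @Configuration
    @Profile("local")
    static class LocalConfig {
        @Bean
        public DataSource dataSource() {
            // developer machine: a simple in-memory database
            return new DriverManagerDataSource("jdbc:h2:mem:orders");
        }
    }

    @Configuration
    @Profile("cloud")
    static class CloudConfig {
        @Bean
        public DataSource dataSource() {
            // in reality these values would come from the bound Cloud Foundry service
            return new DriverManagerDataSource("jdbc:mysql://cloud-db:3306/orders", "user", "secret");
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
        ctx.getEnvironment().setActiveProfiles("local");   // or "cloud"
        ctx.register(DataConfig.class);
        ctx.refresh();
        System.out.println(ctx.getBean(DataSource.class));
        ctx.close();
    }
}
```

The active profile is then selected at deployment time, e.g. via -Dspring.profiles.active=cloud.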

# “Groovy Ecosystem” by Andres Almiray (Roland)

First Andres introduced the Groovy flagship, Grails. Well done, but nothing spectacular. Domain objects, controllers, scaffolding - the standard stuff that every ‘web application framework’ does as well. A little bit more unique are tag libraries; however, the demo failed spectacularly at first, but Andres finally got it working. Phew, one could really feel the tension in the well-filled room. Then plugins were demoed (like the REST plugin). Side note: Grails borrowed a new feature from Gradle for smart parsing of CLI commands.

Griffon, the next Groovy tool presented, is similar to Grails but for desktop applications. It provides a Groovy DSL for Swing, JavaFX and SWT, and knows how to run standalone, as an applet or via Web Start. It has binding annotations for models and processing annotations for thread handling. It knows about archetypes and comes with tons of plugins (what else). Very cool is the packaging support, including IzPack and Debian packages. All nice stuff which I would give a try if I’m ever going to write a fat client again.

Both topics ate up about two-thirds (105 minutes) of the talk, IMO by far too much for an overview.

Next came Gaelyk (“Gaelyk is the PHP for Google App Engine”, 3 minutes), followed by Gradle (20 minutes). If Gradle is supposed to be so much easier than Maven, the demo was no evidence for it. However, inter-project dependencies are a real advantage over Maven’s artifact dependency resolution (via the local Maven repository). Gant (2 minutes) is yet another build tool, branded as ‘Ant without the ugly XML’. Gant is predicted to vanish in favour of Gradle. Easyb (5 minutes) is there for Behaviour-Driven Development (like JBehave for Java). The Spock framework (5 minutes) is a testing DSL framework which seems to have some nice features. CodeNarc (7 minutes) for static code analysis, GPars (1 minute) for concurrency. Ratpack, GContracts and yet some other stuff within the last minute.

Conclusion: The Groovy ecosystem is two-thirds Grails and Griffon, and everything else has to share the last third. Really? 150 minutes is plenty of time to give the other projects more space. Nevertheless a nice wrap-up from which I take away a lot of starting pointers for further exploration.

# “Real world deep dive into Infinispan” by Sanne Grinovero, Pete Muir and Mircea Markus (Jan)

Sanne, Pete and Mircea from JBoss introduced Infinispan, a JBoss project providing a highly available in-memory store/data grid with elasticity concepts.

Basically, Infinispan is a cache, enriched with several extra capabilities. It can run next to a business application in the same JVM (embedded mode) or in an extra JVM (client/server mode), providing its data access services via REST or other protocols. Its main use cases are:

  • Local cache (like Hibernate 2nd level cache)
  • Cluster of caches (cache nodes which keep in sync with each other)
  • Data Grid (dedicated cluster of servers)

After covering these basics, the rest of the session consisted of short concept introductions, each followed by a direct live demonstration. A basic ticket order system based on Java EE 6 was extended with a simple Infinispan data access layer and more advanced concepts afterwards. Obviously, Infinispan integrates very nicely with Java EE 6 and CDI.
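
To give a feel for the basic API: the talk used CDI injection, but the following plain-API sketch of mine (embedded mode, default configuration, invented cache and key names) shows the essentials, including an entry-level expiration:

```java
import java.util.concurrent.TimeUnit;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class TicketCacheDemo {
    public static void main(String[] args) {
        // embedded cache manager with the default configuration
        EmbeddedCacheManager manager = new DefaultCacheManager();
        Cache<String, String> tickets = manager.getCache("tickets");

        // put with an entry-level lifespan: the reservation expires after 10 minutes
        tickets.put("A-42", "reserved", 10, TimeUnit.MINUTES);
        System.out.println("A-42 -> " + tickets.get("A-42"));

        manager.stop();
    }
}
```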

The advanced concepts shown were:

  • Cache node replication using JGroups for inter-node communication
  • Several replication modes (complete/partly replication, invalidation)
  • Expiration handling (on entry and cache level)
  • Built-in cache event handling and listener mechanism (see the small listener sketch after this list)
  • Distributed transactions support
  • Deadlock detection and handling
  • Persistent cache stores
  • Indexing cache data with Lucene
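
As an example of the listener mechanism mentioned above, here is a small sketch of my own (not from the talk) using Infinispan’s annotation-based cache listeners:

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;

public class TicketListenerDemo {

    @Listener
    public static class CreationLogger {
        @CacheEntryCreated
        public void onCreate(CacheEntryCreatedEvent<String, String> event) {
            // events are fired pre- and post-operation; only log the post-event
            if (!event.isPre()) {
                System.out.println("new entry: " + event.getKey());
            }
        }
    }

    public static void main(String[] args) {
        DefaultCacheManager manager = new DefaultCacheManager();
        Cache<String, String> cache = manager.getCache();
        cache.addListener(new CreationLogger());
        cache.put("A-43", "reserved");   // triggers the @CacheEntryCreated callback
        manager.stop();
    }
}
```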

Later on, they showed Infinispan running as a standalone data grid (a small client sketch follows the list below), which should be applied when:

  • You need to heavily tune the JVM Infinispan is running in (which could harm “normal” apps)
  • Multiple apps share the same data grid
  • Non-JVM clients are involved
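
In client/server mode, a JVM client typically talks to such a standalone grid via the Hot Rod protocol. A minimal sketch of my own, assuming a Hot Rod server is running on the default localhost:11222 endpoint:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodClientDemo {
    public static void main(String[] args) {
        // by default connects to a Hot Rod server on localhost:11222
        RemoteCacheManager manager = new RemoteCacheManager();
        RemoteCache<String, String> cache = manager.getCache();
        cache.put("A-44", "reserved");
        System.out.println(cache.get("A-44"));
        manager.stop();
    }
}
```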

Some general lessons I learned during the talk:

  • Infinispan can run in every cloud environment (also on EC2 where multicast is blocked –> so it should work for CM6, too, since our JBossCache uses JGroups as well)
  • Don’t guess your best cache expiration policy - measure, analyze and adapt it with load tests. Good point!
  • Infinispan can also be used to store a Lucene index (which resides on the file system by default) inside its cache. This should give a good boost to full text search queries.

I enjoyed this talk very much - not much glamour or show, but solid and interesting stuff, backed by lots of live coding and demos. Up to now this was my favorite session and I recommend it strongly to everyone interested in basic as well as advanced topics on distributed data grids and caching.

# “What’s In Your Toolbox for JDK 7?” by Geertjan Wielenga (Alvin)

In this talk, tooling support for Java 7 was demonstrated in all three main IDEs, namely IntelliJ IDEA, NetBeans and Eclipse.

There are a number of new features coming in JDK 7, mainly:

  • JSR-292 InvokeDynamic
  • JSR-334 Project Coin
  • JSR-203 new I/O API (NIO.2)
  • JSR-166 fork/join framework
  • JDBC 4.1
  • an updated XML stack

The change which drove the tooling enhancements was JSR-334 (small language enhancements). JSR-334 helps to improve the readability, reliability and productivity of Java code. This JSR bundles the following changes (illustrated in the small example after the list):

  • Strings in switch statements
  • try-with-resources statements
  • improved type inference for generic instance creation (“diamond”)
  • simplified varargs method invocation
  • better integral literals
  • improved exception handling (multi-catch)
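
Here is a small combined example of my own (not from the talk) showing most of the Project Coin features at once - try-with-resources, the diamond operator, multi-catch, strings in switch and the improved integral literals; the file name is made up:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class CoinDemo {
    public static void main(String[] args) {
        Path file = Paths.get("greetings.txt");            // NIO.2 (JSR-203)
        List<String> lines = new ArrayList<>();            // diamond operator

        // try-with-resources closes the reader automatically
        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        } catch (IOException | RuntimeException e) {       // multi-catch
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }

        String lang = "java";
        switch (lang) {                                     // strings in switch
            case "java":
                System.out.println("read " + lines.size() + " lines");
                break;
            default:
                System.out.println("unknown language");
        }

        int million = 1_000_000;                            // underscores in literals
        int mask = 0b1010_1010;                             // binary literals
        System.out.println(million + " / " + mask);
    }
}
```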

All three IDEs are already equipped to adapt code to the new style. Once you change the JDK version of the project to 1.7, the IDE suggests the possible changes in each file. IntelliJ even has the option to go back to the JDK 1.6 style at a later time. NetBeans has the option to migrate the entire project to the new format, which is very handy.

The talk was informative but otherwise not very special.

# “Jenkins - From Continuous Integration to Continuous Delivery” by John Smart (Jan)

After having attended (and blogged about) yesterday’s Continuous Delivery (CD) talk by David Farley, visiting this 30-minute session was a logical consequence. John is the author of the book “Jenkins - The Definitive Guide” and started with some quick background info on Continuous Delivery.

Surprisingly, in contrast to David Farley’s statement yesterday, DON’T BRANCH, John made clear that (feature) branching is a good thing and should not be condemned at all. To make sure that things don’t get out of control when there are many feature branches in parallel, John introduced the concept of a central integration build, which pulls the changes from all branches, merges them automatically (easily possible with Git and Jenkins) and kicks off a build and test cycle which will give an early hint if changes on a branch break something or lead to problems/conflicts.

This concept was shown in a live demo, where a change inside a feature branch kicked off the integration build, which ran automatically, merged all branches and started the build and test execution. In this context, a new Jenkins plugin providing an overall view of a build pipeline (see my blog post about CD from day 1) was shown.

A production release should follow these steps:

  • Merge branches to head
  • The default build will be triggered and produce a release candidate
  • The final activity (release to production) should be its own build, offered as a manual(!) target/button
  • The business(!) department decides which RC (remember: every successful build produces an RC) to release to production

For a 30-minute talk, there was really a lot of interesting stuff in here. I recommend it to everyone interested in CD and/or DevOps since it gives a good impression of a possible concrete approach to CD.

Author: Roland Huß
Categories: devoxx, development