Devoxx 2011 - Day 4

The last full day was again packed with high-end tech stuff before we entered the 10th anniversary party of Devoxx. So I guess the fifth day will get a bit less blog coverage than the previous days. The ConSol posse has reviews of Akka, JavaFX, HTML5 and Android again, Play, JMS 2.0 and Clojure for you.

# “Above the Clouds: Introducing Akka” by Jonas Bonér (Torsten)

Typesafe is a Scala company, but fortunately (for me as a Java developer) Akka can be used with both Scala and Java.

The open source Akka framework is two and a half years old. It sits just above Scala in the Scala runtime stack. Akka solves the problem of writing concurrent, scalable and fault-tolerant systems by providing a unified programming model and a managed runtime. Basically, Akka is there to manage system overload. It addresses both scaling up (big machines) and scaling out (many machines). Akka was born in the finance sector and is mainly used for all sorts of event-driven apps in the fields of betting & gaming, telecom, simulation and e-commerce.

The Akka architecture consists of a concurrency layer at the bottom, with scalability and fault-tolerance layers on top. There are also a lot of modules and add-ons around Akka covering topics such as management consoles, monitoring and provisioning.

The concept of Actors is the most important tool in the Akka toolbox. So what is an Actor? It’s an object with state and behaviour and strong encapsulation, even at runtime. Communicating with an Actor is done by putting a message into the Actor’s mailbox (“inbox”). Everything is event-driven; there are no blocking threads involved.

The Akka runtime has a scheduler which puts actors on a thread and forwards messages to them. Through this separation of actors and threads it becomes very cheap to create, process and shut down actors. This makes it possible, for example, to have an actor for each of the millions of users of a multiplayer online game without any problem handling them.
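The separation of actors from threads can be sketched framework-free; this is not Akka's implementation, just the idea of a mailbox drained by a shared thread pool (all class and method names below are made up):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the actor idea: state + behaviour behind a mailbox,
// processed by a shared thread pool instead of one thread per actor.
public class MailboxSketch {

    static class CounterActor {
        private final Queue<String> mailbox = new ConcurrentLinkedQueue<>();
        private int counter = 0; // only ever touched by the processing task

        void tell(String message) { mailbox.offer(message); }

        // Drain the mailbox; the runtime guarantees this runs single-threaded.
        void process() {
            String msg;
            while ((msg = mailbox.poll()) != null) {
                if ("Tick".equals(msg)) counter++;
            }
        }

        int count() { return counter; }
    }

    public static int run() throws InterruptedException {
        CounterActor counter = new CounterActor();
        for (int i = 0; i < 1000; i++) counter.tell("Tick");

        ExecutorService scheduler = Executors.newFixedThreadPool(2);
        scheduler.submit(counter::process); // the scheduler puts the actor on a thread
        scheduler.shutdown();
        scheduler.awaitTermination(5, TimeUnit.SECONDS);
        return counter.count();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 1000
    }
}
```

Because the mailbox decouples senders from the processing thread, creating an actor costs no more than allocating a small object and a queue.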

class Counter extends Actor {
  var counter = 0
  def receive = {
    case Tick => counter += 1
  }
}

Note that the receive method pattern-matches on the type of message it receives, which makes Akka almost feel like a dynamic language. Although there is a counter variable defined in the Actor, the code is thread safe because concurrency is handled by the Akka runtime.

Actors are created through the AkkaApplication - it’s possible to have many Akka applications per JVM - by simply calling the actorOf method on it.

val app = AkkaApplication()
val counter = app.actorOf[Counter]

The counter variable is not a reference to an Actor but an ActorRef, which is a pointer to a running actor. So actors can live everywhere: on different machines, in different data centers, etc. They can be killed by the runtime and recreated without the ActorRef becoming invalid. This gives Akka a lot of runtime flexibility.
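The indirection an ActorRef provides can be sketched in plain Java; the ActorHandle below is a made-up stand-in, not Akka API:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

// Sketch of the ActorRef idea: a stable handle whose target can be swapped
// at runtime, so callers keep a valid reference across "restarts".
public class HandleSketch {

    static class ActorHandle {
        private final AtomicReference<Function<String, String>> target =
                new AtomicReference<>();

        ActorHandle(Function<String, String> initial) { target.set(initial); }

        String ask(String message) { return target.get().apply(message); }

        // The runtime may kill and recreate the actor; callers never notice.
        void replaceTarget(Function<String, String> fresh) { target.set(fresh); }
    }

    public static String demo() {
        ActorHandle ref = new ActorHandle(msg -> "v1:" + msg);
        String before = ref.ask("ping");       // answered by the first incarnation
        ref.replaceTarget(msg -> "v2:" + msg); // "restart" with a fresh instance
        return before + "," + ref.ask("ping"); // same ref, new actor behind it
    }

    public static void main(String[] args) {
        System.out.println(demo()); // v1:ping,v2:ping
    }
}
```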

To tell an actor to do something in a fire-and-forget manner, the “!” operator is used.

counter ! Tick

To send an actor a message and consume a result, the “?” operator is used, which immediately returns a “future”.

val future = actor ? Message

future onComplete { f =>
  // process the completed future here
}

The future API includes methods such as await, onComplete, onResult, onException, onTimeout, foreach, map, flatMap and filter. Futures can easily be composed, for example to wait for results from multiple actors. In Java, futures are blocking; in Akka there is a non-blocking futures API.
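Akka's future API predates Java 8, but the non-blocking composition style described here can be sketched with the JDK's own CompletableFuture:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of non-blocking future composition in the style described above,
// using the JDK's CompletableFuture rather than Akka's Future.
public class FutureSketch {

    public static int composed() {
        // Two "actors" answering asynchronously
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 21);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 2);

        // map / flatMap style composition without blocking any thread
        CompletableFuture<Integer> product =
                a.thenCompose(x -> b.thenApply(y -> x * y));

        return product.join(); // block only at the very end, for the demo
    }

    public static void main(String[] args) {
        System.out.println(composed()); // 42
    }
}
```

The point is that the combined result is itself a future, so nothing blocks until someone finally asks for the value.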

An actor can reply to a message by using the magic sender variable and sending to it with “!”.

sender ! "Hi"

It’s possible to change the behaviour of actors on the fly using the become and unbecome methods. With this it’s very easy to implement state machines.

become {
  case NewMessage => // handle messages in the new state
}
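As a rough illustration of the become/unbecome idea in plain Java (the class below is a made-up sketch, not Akka API), behaviour is just a value on a stack that can be swapped and restored:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Sketch of become/unbecome: the current message handler is just a value
// on a stack, so behaviour can be swapped and restored at runtime.
public class BecomeSketch {

    private final Deque<Consumer<String>> behaviours = new ArrayDeque<>();
    private final StringBuilder log = new StringBuilder();

    BecomeSketch() {
        // initial behaviour: the "open" state
        behaviours.push(msg -> log.append("open:").append(msg).append(";"));
    }

    void become(Consumer<String> behaviour) { behaviours.push(behaviour); }

    void unbecome() { if (behaviours.size() > 1) behaviours.pop(); }

    void receive(String msg) { behaviours.peek().accept(msg); }

    public static String demo() {
        BecomeSketch actor = new BecomeSketch();
        actor.receive("a");                                            // "open" state
        actor.become(m -> actor.log.append("closed:").append(m).append(";"));
        actor.receive("b");                                            // "closed" state
        actor.unbecome();                                              // back to "open"
        actor.receive("c");
        return actor.log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // open:a;closed:b;open:c;
    }
}
```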

By binding actors to names it’s easy to implement remote actors, because an actor name is virtual and decoupled from how the actor is deployed.

akka {
  actor {
    deployment {
      /path/to/my-service {
        router = "round-robin"
        nr-of-instances = 3
        remote {
          nodes = ["wallace:2552", "grommit:2556"]
        }
      }
    }
  }
}

In the example the service is deployed on two nodes with three instances. If two actors interact, they do not know where the other actor is deployed. The ActorRef points to a “remote” actor in this case and serves as a router to it.

Fault tolerance in Akka is inspired by the Erlang model. The classic way is to have fault tolerance implemented all over the code in try/catch blocks. The actor way is onion-layered: there is an error kernel containing all critical stuff such as database access, external system access etc. Actors delegate “scary” stuff to other actors, supervise these delegates and monitor for failure. Actors can also be grouped together; in case of failure such a group of actors can be killed altogether. It’s also common for supervisors to escalate a failure to another actor.
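The supervision idea can be sketched without Akka; here a made-up supervise helper delegates risky work, monitors failure and restarts the child a bounded number of times before escalating:

```java
import java.util.concurrent.Callable;

// Sketch of the supervision idea: the "scary" work is delegated to a child,
// and the supervisor decides what to do when the child fails (here: restart).
public class SupervisorSketch {

    static <T> T supervise(Callable<T> child, int maxRestarts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            try {
                return child.call();        // delegate the risky work
            } catch (Exception e) {
                last = e;                   // monitor failure, then "restart"
            }
        }
        throw last;                         // escalate to our own supervisor
    }

    public static int demo() throws Exception {
        final int[] tries = {0};
        return supervise(() -> {
            if (++tries[0] < 3) throw new IllegalStateException("db down");
            return tries[0]; // succeeds on the third attempt
        }, 5);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // 3
    }
}
```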

That was the Akka introduction talk: very technical but very well presented. I would really like to use Akka in a real-world project, but I’m afraid that we will not have any projects at ConSol in the near future which fit Akka, or vice versa.

# “Moving to the client - JavaFX and HTML5” by Stephen Chin and Kevin Nilson (Christoph)

First of all a funny note on the speaker’s clothing: it’s November in Antwerp and Kevin Nilson wears shorts! Anyway, the talk about JavaFX and HTML5 was not cold at all. So, HTML5 is more than just HTML, CSS and JavaScript. To get an impression of HTML5’s capabilities, check out this small example.

As browsers behave differently, JavaScript frameworks (e.g. jQuery, Dojo, YUI) try to solve the issues in order to provide a consistent API. If you want to check your favorite browser’s compliance with HTML5, visit Can your browser pass the test?

According to the speakers, 51% of the top 10,000 web sites use jQuery, so the talk strongly recommends getting to know jQuery as a web developer. Even with JavaScript frameworks on board to smooth over cross-browser differences, we might still need to address older browsers, especially IE6, IE7 and IE8. The answer could be Chrome Frame, which runs Chrome inside these browsers.

The concept of Modernizr is very important for new-era web development, too: instead of user agent sniffing we use feature detection. Modernizr tells you which features are available in the browser and which of them are ready for you to use. So after this short introduction to the HTML5 web era, what does JavaFX 2.0 ship in this regard?

First of all, how do you display HTML in JavaFX? We wanna see live coding! And we got live coding in this talk! Here is the small example of loading a URL in JavaFX and displaying the website’s contents in a JavaFX application.

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.web.WebView;
import javafx.stage.Stage;

public class WebViewTest extends Application {
    public static void main(String[] args) { launch(args); }

    @Override
    public void start(Stage primaryStage) {
        WebView webView = new WebView();
        webView.getEngine().load("http://www.devoxx.com");
        Scene scene = new Scene(webView);
        primaryStage.setTitle("Hello Devoxx!!!");
        primaryStage.setScene(scene);
        primaryStage.show();
    }
}

We can do this in 12 lines and about 300 characters! In addition to the plain Java code we can use GroovyFX or ScalaFX to code the same example. I would have typed all the examples for you, but I was not able to write fast enough, sorry. The important thing to notice is that we can now code the example in roughly 10 lines and 110 characters using Groovy or Scala. The Visage language does it in 8 lines and 67 characters. Pretty cool!

Now let’s move on to calling JavaScript from JavaFX.

String script = "alert('Hello World!');";
webView.getEngine().executeScript(script);

We might also have to respond to browser events in JavaFX. We are able to get callbacks for these browser events:

  • Alert/confirm/prompt
  • Resize
  • Status
  • Visibility
  • Popup

Last but not least there was a nice idea on how to interact with JavaScript in JavaFX. So what can we do when JavaScript code needs to interact with JavaFX code? We can use the HTML page status as an event bus in order to send data from JavaScript to the JavaFX application. The JavaScript code simply sets the status with some character data, and JavaFX can react to that in an event callback.
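The status-as-event-bus trick boils down to a listener pattern; here is a framework-free sketch (all names are made up, no JavaFX involved):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Framework-free sketch of the "status as event bus" trick: one side sets a
// status string, registered listeners on the other side react to the change.
public class StatusBusSketch {

    private final List<Consumer<String>> listeners = new ArrayList<>();

    void onStatusChanged(Consumer<String> listener) { listeners.add(listener); }

    // In the talk, window.status = "..." on the JavaScript side plays this role.
    void setStatus(String data) {
        for (Consumer<String> l : listeners) l.accept(data);
    }

    public static String demo() {
        StatusBusSketch bus = new StatusBusSketch();
        StringBuilder received = new StringBuilder();
        bus.onStatusChanged(received::append); // the "JavaFX" side subscribes
        bus.setStatus("hello-from-js");        // the "JavaScript" side publishes
        return received.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // hello-from-js
    }
}
```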

That’s it for this talk, which was quite impressive on HTML5 websites running in and interacting with a JavaFX application. Need to find out more about that, definitely!

# “What’s new and important in Android” by Nick Butcher (Christian)

This talk started right after the central Android keynote, which mostly left out the new features supported by the new Android version 4.0 (API version 14) aka “Ice Cream Sandwich”. This 17.11.2011 is also somewhat remarkable: with the Galaxy Nexus going on sale in the UK, Android 4.0 is from today on also available to customers in Europe!

As you can get all the android-4.0-highlights much better from the official website, I will just mention some highlights of the talk itself.

First of all, we experienced a very vivid talk. Nick often held his mobile right in front of the camera and demonstrated the features listed in the presentation. Some examples: taking pictures with face detection, recording live face video with manipulation effects like popping eyes or a shrinking face, and beaming contact details from one phone to another.

Additionally we saw a whole bunch of new features, among them the new Roboto font, the beam technology, VPN support and lots of new APIs to ease life for developers. In case you want to migrate your application code from an older Android API version, Nick made you aware of testing your layouts and the new navigation bar. Also be aware that hardware acceleration is now enabled by default!

In my personal opinion, the new Android version’s biggest achievement is providing a smooth, converged platform for both mobiles and tablets. Layout is automatically managed by the platform with respect to display resolution and screen orientation. Developers don’t have to care about that anymore, well done!

# “WWW: World wide wait” by Stijn Van den Enden, Guy Veraghtert and Ward Vijfeijken (Christian)

Performance issues seem to be a very important topic this year at Devoxx, so I decided to visit at least this one talk, a comparison of the performance of the following Java web application frameworks:

  • JSF (Mojarra implementation)
  • Wicket
  • GWT
  • Spring MVC
  • MyFaces

Using these frameworks, an example app was written containing an overview and a detail screen, with input fields providing autocompletion and Ajax validation. The application was deployed to an Amazon cloud instance with 7.5 GB RAM, 4 EC2 compute units and 1 GB Xmx/Xms. The database connection was simulated to avoid bottlenecks caused by the backend.

Using JMeter, each application was tested according to different parameters such as throughput and the maximum number of supported users. For response time, they measured the time to get the HTML page or the REST/JSON data.
As a result, they collected more than 300 million test samples with more than 16 GB of data and over 700 hours of test runs.

Here is the ranking according to throughput:

  1. GWT
  2. Spring MVC

… and with a big distance …

  3. JSF (Mojarra)
  4. Wicket
  5. MyFaces

Right at the beginning the presenters pointed out the two major classes into which these frameworks can be grouped:

  • Client-side RIA: the server only delivers model data in a REST style; examples are GWT and Spring MVC
  • Server-side RIA: the server renders the representation from model, view and controller; examples are Wicket, JSF and MyFaces

Unfortunately they never(!) mentioned that the server-side frameworks’ slowness is completely logical and already determined by their architecture: they have more work to do and a bigger memory footprint per session! For that reason the following hosting cost calculation is not really fair in my eyes. Anyway, I don’t want to leave it out:

  • GWT: $7,000
  • Spring MVC: $10,000
  • JSF: $50,000
  • Wicket: $60,000
  • MyFaces: $100,000

These are hosting costs for a web application with 10,000 concurrent users (is this realistic?) with 5 s think time and a limit of 200 ms response time (the time unit for the costs was not mentioned).
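Assuming the stated numbers, Little's Law gives a quick sanity check of the load the servers must sustain:

```java
// Back-of-the-envelope check of the scenario above: with 10,000 users each
// cycling through 5 s of think time plus at most 0.2 s of response time,
// Little's Law gives the request rate the servers must sustain.
public class ThroughputSketch {

    public static double requiredThroughput(int users, double thinkTime, double responseTime) {
        return users / (thinkTime + responseTime); // requests per second
    }

    public static void main(String[] args) {
        double rps = requiredThroughput(10_000, 5.0, 0.2);
        System.out.printf("%.0f requests/s%n", rps); // roughly 1923 requests/s
    }
}
```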

In summary they showed that client-side RIAs are much better (roughly a factor of 10) concerning performance and scalability. But in many cases the database will be the first bottleneck, so it is still a valid option to use server-side component frameworks such as JSF and Wicket. At the end let me just highlight the unique presentation style: they used one big mind-map poster into which they zoomed one or two levels. Very nice fonts too, so don’t miss this if you are interested in alternative presentation styles!

# “Deploying Java & Play Framework Apps to the Cloud” by James Ward (Roland)

First he fired up a terminal, created a Spring sample application with Roo and used the Heroku plugin to prepare a deployment to the Heroku PaaS offering. Heroku uses Git for deploying. As soon as Heroku receives a push, it starts a Maven build on the Heroku side, which has the advantage that the whole world of dependencies gets downloaded within the cloud. But this is only an option; binaries can be pushed, too.

Heroku can be summarized as polyglot + PaaS + cloud components. It is a cloud application platform with support for HTTP routing and load balancing.

Instances are deployed on so-called dynos; there are 750 free dyno hours per app per month. The application server itself needs to be deployed as well. The command to start it is specified in a Procfile.
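A Procfile is a plain text file with one process type per line; for a Java app it might look like this (the jar path is made up):

```
web: java $JAVA_OPTS -jar target/myapp.jar
```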

Heroku uses Erlang-based HTTP routing. Regarding load balancing, Heroku does not support sticky sessions; session state must live in an external system (like MongoDB) for scalability reasons. Autoscaling is provided by third-party tools, but not within the platform, because Heroku has no semantic idea of when to scale.

For Scala, sbt is the build tool of choice, on which the stage task is called on deployment. Lift is not that well supported on Heroku because it has quite some state requirements; “take Play” is the message. Heroku knows how to roll back a deployment to a previous release. Very nice: one can log onto a dyno and get a bash shell! It is also very flexible when it comes to databases. Postgres is available out of the box, but any external database can be used as well.

Interestingly, no custom APIs are required to use Heroku. That’s different from Cloud Foundry (which, on the other hand, has the advantage that its PaaS stack is open source).

It was a very well structured talk with clear and concise demos, which gave me, not knowing anything specific about Heroku yet, a very clear impression of what Heroku offers (and what not). By the way, it was a quite nice introduction to the Play Framework, too ;-). Very impressive, I’ll give it a try for sure.

# “What’s (probably) coming in JMS 2.0” by Nigel Deakin (Christoph)

So let’s talk about the Java Message Service specification, which is part of Java EE but also exists as a standalone specification. The last maintenance update of version 1.1 was back in 2003. In March 2011 JSR 343 was launched in order to develop JMS 2.0. The target for this specification is to be part of Java EE 7, so the timeline ends in Q3 2012.

The initial goals for this JMS version are:

  • Simplification
  • Java EE 7 support (PaaS, Multi-tenancy)
  • Standardizing the interface with application servers
  • New messaging features (standardize some vendor extensions)

What’s wrong with the JMS API? It’s not bad, but it could be easier to use, so one big intention is to simplify API usage. If we look at typical code for sending and receiving messages with JMS, we can certainly see room for improvement. To receive a JMS message in Java EE you usually have the @MessageDriven annotation along with the onMessage(Message message) method, and first of all you need to cast the message object, for instance to a TextMessage, in order to get at the message payload.

When sending a message we inject some resources with the @Resource annotation (connection factory and destination). The sendMessage() method then needs to manually create a connection, a session and a message producer. That’s three objects to be created, and even worse, the connection needs to be closed with boilerplate code in a finally block. The createSession() method takes two arguments: whether the session is transacted (true or false) and the acknowledgement mode. In a Java EE container these parameters actually have no effect! They are only relevant for standalone Java applications managing transactions themselves.

A possible new API would also inject the resources via CDI, but code can be much easier. See following example on how it could look like:

ContextFactory contextFactory;

Queue outboundQueue;

try (MessagingContext mCtx = contextFactory.createContext()) {
  TextMessage textMessage = mCtx.createTextMessage(payload);
  mCtx.send(outboundQueue, textMessage);
}

The ContextFactory combines connection and session creation, and the AutoCloseable API takes care of closing the resource in an implicit finally block. You can make it even simpler with CDI annotations.

@Inject
Queue outboundQueue;

@Inject
MessagingContext mCtx;

TextMessage textMessage;

public void sendMessage(String payload) {
  mCtx.send(outboundQueue, textMessage);
}
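The try-with-resources mechanics that make the finally block disappear can be shown framework-free; FakeContext below is a made-up stand-in for the messaging context:

```java
// The AutoCloseable mechanics the new API relies on, shown with a plain
// stand-in resource instead of a real MessagingContext.
public class AutoCloseSketch {

    static final StringBuilder log = new StringBuilder();

    static class FakeContext implements AutoCloseable {
        void send(String msg) { log.append("sent:").append(msg).append(";"); }
        @Override public void close() { log.append("closed;"); }
    }

    public static String demo() {
        log.setLength(0);
        // No finally block needed: close() runs automatically on exit
        try (FakeContext ctx = new FakeContext()) {
            ctx.send("payload");
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // sent:payload;closed;
    }
}
```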

That’s definitely an improvement, but to me it still doesn’t feel as smooth as modern Java APIs. Looking at the Play Java API, which was also shown at Devoxx, I feel even more comfortable. But that’s just my opinion.

Regarding durable subscriptions, the JMS 2.0 API no longer makes a client id mandatory. For MDBs the container will generate a default subscription name. We can also expect some new resource annotations for JMS connection factories and destinations, so we can define JMS resources in the Java EE container inside our code with annotations, like we do with data sources.

The new features in JMS 2.0 are:

  • Delivery delay: Sets a delay on delivery for message producer so message gets delivered at a later time
  • Async sending: Sender is not blocking until acknowledgement from server has been received. Instead an asynchronous callback is offered for the acknowledgment
  • JMS delivery count: JMS 2.0 will make this mandatory! The reason for that is to handle poisonous messages better on the server side
  • Topic hierarchies: Most vendors support this already, now it is in the spec
  • Multiple consumers: New API for non-durable subscribers
  • Batch delivery: Better performance plus acknowledgments are also sent in a batch

So JMS 2.0 promises to be a version upgrade that basically simplifies the API and introduces some new features. As already said, the timeline for this is Q3 2012, so stay tuned for the new version.

# “Cracking Clojure” by Alex Miller (Alvin)

Alex Miller (a functional language specialist) gave a short introduction to Clojure. Clojure is a Lisp kind of language (dynamic, functional). Everything in Clojure is immutable. Clojure separates state and identity. In Alex’s words, an identity can be in different states at different times, but the state itself doesn’t change.

Clojure is a compiled language: it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime.

Clojure has almost every primitive type and all major types of collections like List, Vector, Set and Map. Clojure treats code as data.

Defining a function:

(defn square [x]
    (* x x))

You can also make use of anonymous functions, as below:

((fn [x] (* x x)) 5)   ; => 25

In Clojure we can write code more compactly than in Java. As an example, with the following method we can count the number of lines in a file:

(defn line-count [file]
  (count (line-seq (reader file))))

Pretty short, isn’t it?

And with the following code we can do the same for all files in a directory, recursively:

(defn file-count [dir]
  (map line-count
       (filter #(.isFile %)
               (file-seq (file dir)))))

How do you define a bean class for a beer? Like this:

{:name "Tremens"
 :brewery "Delirium"
 :alcohol 8.5
 :ibu 26}

Just imagine how many lines we have to write for this in Java. Alex explained much more, like sequences, macros, multimethods and agents.
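To make the contrast concrete, here is roughly what the same beer record looks like as a classic Java bean (a sketch, with values taken from the map above):

```java
// The Java bean equivalent of the four-line Clojure map above:
// fields, constructor and accessors included.
public class Beer {
    private final String name;
    private final String brewery;
    private final double alcohol;
    private final int ibu;

    public Beer(String name, String brewery, double alcohol, int ibu) {
        this.name = name;
        this.brewery = brewery;
        this.alcohol = alcohol;
        this.ibu = ibu;
    }

    public String getName()    { return name; }
    public String getBrewery() { return brewery; }
    public double getAlcohol() { return alcohol; }
    public int getIbu()        { return ibu; }

    public static void main(String[] args) {
        Beer tremens = new Beer("Tremens", "Delirium", 8.5, 26);
        System.out.println(tremens.getName()); // Tremens
    }
}
```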

A good talk on Clojure. Clojure is not the easiest language, but a very powerful one.

Author: Roland Huß
Categories: devoxx, development