Let’s go directly into today’s talks and sessions.
Day four at Devoxx 2012 started with a keynote from Google - but not before Ray Ploski and Mark Little from JBoss invited the developer community to vote on the name of the next JBoss application server. And the nominees are:
Vote at jboss.org between 15 and 30 November. The result will be presented in early 2013!
The Google keynote was held by Tim Bray, developer advocate for Android at Google. As Tim stated, Google developer advocates are there to listen to developers and can be reached at code.google.com/team
Following tradition, his presentation started with a variation of the “Hello World” theme: a visual collage of clips from Google Maps, sending us around the globe and even to the moon.
Next came a little demo visualizing 257,000 geo locations, listing all trips made by sailing ships since 1715. The data, encoded as JSON, was rendered in the Chrome browser using JavaScript in real time. A slider allowed jumping back and forth in history, which looked pretty impressive.
But with all that cool technology coming from Google: what does the company want, why do they do it? Tim’s answer: Google wants you to live online, the longer, the better for Google as this is where their business is.
Next on stage were Romain Guy and Chet Haase talking about the Android ecosystem: there are currently about 700,000 apps available, 1.5 billion apps are installed per month, and 25 billion have been installed in total. 560 million Android devices have been activated so far, and 1.3 million are activated daily (which means: if every person activating Android climbed onto the shoulders of the previous person, they would reach the moon in 172 days - whoever cares about that :-).
What is new with Android:
Back on the stage, Tim pointed out that there are other things going on at Google apart from Android:
There is the Google App Engine “platform as a service” framework, available since 2011 and updated about monthly. 500,000 active App Engine apps are hosted as of today. App Engine now features Maven and Jenkins integration: you can, for example, deploy different versions of an application directly from Jenkins, then define a certain percentage of users to be directed to that new version (a feature called “traffic splitting”). App statistics are also available.
Another web-related theme: security and identity management. Tim strongly urges developers to use HTTPS everywhere and two-factor authentication where possible. However, he feels that the current sign-in experience on the internet is broken. As a result, people tend to use the same short, easy-to-guess passwords all over the place. Even worse: passwords get stolen from servers, so protect your users’ passwords on the backend systems.
Most central authentication frameworks have more or less failed so far, in Tim’s view. OAuth2 and OpenID Connect are what Google bets heavily on. OAuth2 is integrated into Android, but only useful where you can use Google accounts, so not for Facebook or Amazon.
If you want to identify users at the backend, have a look at OpenID Connect, which is about to be rolled out by Google.
Another problem: the sheer amount of choice that has to be presented when it comes to online identification (sign in via Facebook? Google? Twitter? etc.). OpenID Connect will offer to store the identification services preferred by a user in the browser so that only these need to be presented.
Regarding Chrome and HTML5: 1 billion users have HTML5-enabled browsers. That is 73% of all users.
Tim suggests checking out “Chrome WebLab” and “Jam with Chrome” for some fun demos.
New features coming to the browser: webcam access and live processing of audio input in JavaScript, as well as applying styles with CSS filters (for effects like blur) and CSS shaders for any kind of DOM element (for 3D transformations).
This talk was very nicely moderated by the Diabolical Developer (Martijn Verburg) and the Voice of Reason (Ben Evans). Unfortunately, it was cut short because the previous talk ran way over time (that speaker was cut off at the end and still had more than 20 slides left - that’s what I call really bad time management).
Means: presentations and talks are poorly prepared and delivered; no effort is put into slides.
What can you do?
Means: no documentation is written and no communication with the other developers takes place, in order to become irreplaceable.
What can you do?
Use a common language for developers and non-developers, talk to people and also communicate with non-developers, because developers who communicate are the most successful. Ideas presented in a poor manner will never go to production.
This is the well-known habit of using the newest bit of technology, of including alpha-build libraries in your project for the illusion of being ahead of the times. Also, no one really questions whether these new technologies are necessary and, worst case, no one admits it if they suck.
What can you do?
First, avoid boredom among developers :-) Give developers room to explore new technologies, and a place to present their conclusions (for example during brown bag sessions). Then decide as a team which technology will really bring value to your application.
I think this is self-explanatory.
What to do?
Don’t design just for the sake of design. Don’t overdo abstraction layers, don’t overdesign. Do not use UML code generators. Design only for what you need now, because you don’t know what the future brings or how your code will be used in times to come. Also try to reduce your source code - the less source code the better, but make sure it’s still readable.
Means: trying to use every design pattern ever written down in a book in your application.
What to do?
Do not use design patterns blindly; be aware that sometimes design patterns are already part of your language or your framework. In the end it all comes down to communication, so that every member of the team shares the same idea of the architecture.
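A classic case of a pattern that is already part of the language: the Iterator pattern is built into Java itself - any class implementing Iterable works with the for-each loop, so there is no need to hand-roll it. A minimal sketch (class and method names are my own, not from the talk):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// The GoF Iterator pattern is already part of Java: implementing
// Iterable is all it takes to support the for-each loop.
public class Playlist implements Iterable<String> {
    private final List<String> songs = Arrays.asList("Intro", "Verse", "Outro");

    @Override
    public Iterator<String> iterator() {
        return songs.iterator(); // delegate to the built-in iterator
    }

    public static String joinTitles(Playlist p) {
        StringBuilder sb = new StringBuilder();
        for (String song : p) { // for-each works because of Iterable
            if (sb.length() > 0) sb.append(", ");
            sb.append(song);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinTitles(new Playlist()));
    }
}
```

The same holds for Observer (listeners in Swing/Android) or Proxy (java.lang.reflect.Proxy) - check the platform before writing the pattern yourself.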
Means: unnecessary performance tuning and the resulting overuse of tuning technologies and increase of complexity.
What to do?
Analyse and measure your application. Decide where performance is needed and where not. Add tuning only where necessary.
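Measuring before tuning can start as simply as timing the code paths in question (a profiler gives better data, but a stopwatch beats guessing). A minimal sketch, with names of my own choosing:

```java
public class Timing {
    // Measure a single run of a task in milliseconds.
    static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(new Runnable() {
            public void run() {
                long sum = 0;
                for (int i = 0; i < 1_000_000; i++) sum += i;
            }
        });
        // Only tune the paths that measurement shows to be slow.
        System.out.println("loop took " + elapsed + " ms");
    }
}
```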
Means: putting too much code in one Java class, overuse of inner classes and anonymous methods.
What to do?
This is my favorite tip, by the way: try to read your code, boozed, at 3 in the morning - if you can still fix it, you’re good. :-) Also, a probably more achievable method: put the junior member of your team on application support; if he can’t do it, you might have a problem.
Continuous delivery is a business enabler, but you have to put thought into your integration environment and build process. You can’t ship it just because it compiles.
Lots of developers try to add as many programming languages to their CV as possible. This way you end up with lots of half-specialists in your team. It’s better to be good at principles, not at syntax. Software developers are not programmers; it’s not about hacking code, but about architecture, design, build environments and so on.
Do not jump into the cloud without testing and thinking. Evaluate and prototype. Also related to:
Lots of companies jump into the mobile business without proper preparation. HTML5 is seen as the holy grail, but it is still not established as a standard and still not fully supported by every browser, and it might take some more years. (And this was really interesting, considering HTML5 was praised so much in the opening speech as being exclusively used for parleys.com.)
(Unfortunately here the talk was pretty much rushed through because of the already mentioned delay)
If you have large data, you have to know how it is used and also pay attention to non-functional requirements. Otherwise you might end up telling marketing that they can’t do their quarterly report anymore because their data is distributed over four different systems.
In the end it all comes down to:
So we talk about testing frameworks. For me, a bit of duty listening, but I do it with passion! So let’s get to the talk content.
There are different types of tests
The first choice should be unit testing, as we can reach the most test coverage and quality assurance there. You should concentrate on unit tests because they are:
Still, we need feedback on the wiring of classes, which is the job of integration tests. And we need a few system tests putting it all together. So we have to write lots of unit tests from the very beginning. This might slow you down at first, but it will speed you up in the long run, as you have well-tested applications and refactoring gets easier with tested code. Generally speaking, the Unitils guys recommend a unit test coverage of 75%.
Unitils aims to avoid the explosion of test frameworks in use. Think of Unitils as glue between various test frameworks. Unitils integrates with JUnit or TestNG. It offers several modules that can be added as Maven dependencies to your project; these modules relate to dependency injection or mocking frameworks like EasyMock.
In your JUnit class you can use special Unitils Java annotations like @TestedObject and @Mock. Unitils then takes care of injecting tested objects and mocked objects into your test. There are some simplifications in this Unitils way of testing, like automatically calling EasyMock.replay and EasyMock.verify with all your mock objects.
The simple bean integration is nice: you can test your Java POJO beans with all getters and setters with one single line of Unitils testing code. Now I know how to reach 80%+ test coverage ;-)
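I have not verified the exact Unitils call for this, but the underlying idea - exercising every getter/setter pair automatically - can be sketched with plain reflection (helper and bean names are my own):

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BeanCheck {
    // Round-trip every readable+writable String property through its
    // setter and getter and report whether the value survives.
    static boolean gettersAndSettersWork(Object bean) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getReadMethod() == null || pd.getWriteMethod() == null) continue;
            if (pd.getPropertyType() != String.class) continue;
            pd.getWriteMethod().invoke(bean, "probe");
            if (!"probe".equals(pd.getReadMethod().invoke(bean))) return false;
        }
        return true;
    }

    // A trivial POJO to exercise the check against.
    public static class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(gettersAndSettersWork(new Person()));
    }
}
```

Whether such coverage is worth much is another question - it proves the accessors work, not the business logic.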
What about integration tests? This is a totally different story because integration tests tend to be:
Unitils provides some “glue” modules for Selenium integration and some transport modules (e.g. for mail communication). The talk included a small demo invoking a web frontend GUI with Selenium, checking the database in between, and finally expecting an email sent to the mail server. The test ran fully automated as JUnit tests. Unfortunately, the verification steps in this demo were a bit weak - and test verification is a central goal. Just checking the number of emails sent out is not enough; verifying the email content would be a significant check to add in this use case. Maybe I missed whether Unitils is able to do that, but the demo did not show it.
Unitils provides its own transport modules (e.g. for mail communication). I personally would rather integrate with existing adapters provided by Camel or Spring Integration for sending and receiving messages over HTTP, FTP, mail and so on. Also, I do not see the point in Unitils being the full-stack testing tool for enterprise applications. Enterprise applications nowadays deal with SOAP web services, JMS and REST interfaces, with XML or JSON data going over the wire. I do not see support for these technologies in Unitils right now. There were some questions from the audience about this, and unfortunately this is not supported at the moment.
By the way, the Unitils guys want to reach 90% (maybe 100%) test coverage of the Unitils code with the next release. I personally would rather address the mentioned lack of messaging support than aim for 100% test coverage.
This morning I decided to listen to this talk and write a post about it. Unfortunately, I overlooked that the skill level of the talk is “senior”. Bad luck. I didn’t really understand most of the things Jake was talking about.
He started with an overview of the libraries Square uses. As I said, mostly I didn’t understand what they do, but anyway, here’s the list:
For testing, Square uses:
Finally, Jake made a statement about open source: Square is built on open source, so Square wants to contribute back to the community. Everybody using open source should contribute.
Jake really rushed through his talk and finished it in a little more than 30 minutes. That was fast but left plenty of time for Q&A. The talk was good but too “senior” for me.
Dynamic languages like JavaScript need even more tests. Reaching good testability is a major task in JavaScript in the first place. Use modules and take care to write testable code. You could, for instance, separate UI-related code from service- or model-related code to reach better testability.
Selenium does not fit high-volume unit testing, as it has to start a browser, which takes a lot of time. Jasmine provides a better solution and is the JUnit of the JavaScript world. Jasmine provides behaviour-driven unit tests with mocks and expectations, and supports spies, fixtures and assertion statements. A simple Jasmine runner ships as an HTML file that you open in your browser; all Jasmine specs (tests) get executed and the test results are displayed in the browser.
So how do you execute the Jasmine tests in a more automated way? How do you integrate those tests into your application build lifecycle with Maven, Ant, Gradle or whatever you use?
Jasmine provides a JUnit runner implementation which does the magic. It uses Rhino to run JavaScript on the JVM, so you can execute those tests from your IDE, for instance. Pretty nice and fast! Debugging is also possible with this test setup, which is very nice.
But there is also another option to reach even snappier JavaScript testing. PhantomJS is a “real” headless browser which acts and behaves like a real browser. This is a big advantage over the option presented before: when your test is green in PhantomJS, it is also green in Chrome, for instance. There is good integration with Maven and JUnit reporting, too, so you can integrate this with your Jenkins build. But unfortunately you lose debugging with this option.
And as a matter of fact, Jasmine is a fine way to unit test your JavaScript code, but this unit testing does not solve the ugly browser-related differences in behaviour. So at the very end you still have to use Selenium for those edge cases.
Great talk!
This talk was supposed to be about the G1 garbage collector, but
unfortunately not much material about it was presented at all.
After a brief history of memory management and garbage collection, the
requirements on garbage collectors are laid out:
The heap is split up into several generations, for young and old
objects, respectively.
Garbage collectors in the JVM can have one or more of these
characteristics:
In the JVM there exist several GCs:
Both are not concurrent and “stop the world”. The bigger the heap
space, the longer the pauses will be.
These GCs are concurrent. In CMS, there is a “young GC”. Eden space
is cleaned up and surviving objects are copied into one of the 2
survivor spaces. After a young GC, Eden space is empty as well as one
of the two survivor spaces. Young GCs are not concurrent. For the old
generation, CMS is ‘mostly’ concurrent. It is done in several phases:
CMS is not compacting, so the heap might get fragmented. This can be
quite an issue. CMS works well most of the time, but when it fails
(i.e. with big fragmentation) it fails terribly.
Finally, now for G1. It is
The goals of G1 are low latency, better predictability, and ease of
use and tuning.
The heap is divided into roughly 2000 regions of equal size. There is
no physical separation between young and old generation. Each region
can be either young or old. Objects are moved between regions during
collections. There is also a humongous region type for large objects,
where collection is as expensive as before.
In a young generation GC, all regions marked as young are GC’ed;
surviving objects are moved into a new region and the original
regions remain empty. The young GC is also stop-the-world.
The old generation GC is a combination of CMS and parallel compacting
GC:
Regions with no live objects can be reclaimed immediately. The
regions with the smallest ratio of live objects are chosen to be
evacuated during the next GC.
G1 is not very well suited when there are many references between
objects in different regions and when there are many large objects.
And then, the talk had to stop because Jaromir ran out of time.
This presentation talked too much about GC basics, which were
probably already well understood by most of the audience. Only the
last third talked about G1, and then he ran out of time and missed
quite a bit of his talk - probably half of it altogether, and
probably the most interesting stuff. For now, this was by far the
worst talk, not because of the content, but because we missed the
interesting parts completely. Sorry, lost time.
Android still has room for improvement when it comes to smooth animations in the user interface. In their talk, Romain Guy and Chet Haase described how Google identified and addressed issues in the Android framework that sometimes prevented good, consistent framerates in the UI. They also gave a few tips for developers.
“Project Butter” is Google’s code name for addressing what is commonly called “jank”: choppy UI performance in Android. The desired smooth user experience means low latency (no more than five frames of lag behind user actions) and consistent framerates.
Lag behind user input occurs because user events are queued and periodically dequeued for bulk (“batch”) processing. During processing, new user events queue up. To smoothly process user events, Jelly Bean syncs dequeuing to screen refreshes and no longer handles user events in batches, thereby processing the most recent user events possible for the next screen redraw. This process is known as “event streaming”.
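The difference can be sketched in a few lines: instead of working through a whole batch of stale events, drain the queue at each vsync and act on the most recent one. This is a simplified model of my own, not actual Android framework code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class EventStreaming {
    // Simplified model: at each vsync, drain all touch positions queued
    // since the last frame and keep only the most recent one.
    static Integer latestForFrame(Deque<Integer> pendingTouchX) {
        Integer latest = null;
        while (!pendingTouchX.isEmpty()) {
            latest = pendingTouchX.poll(); // older events are superseded
        }
        return latest;
    }

    public static void main(String[] args) {
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(10); queue.add(14); queue.add(21); // events since last frame
        System.out.println(latestForFrame(queue)); // the frame uses x = 21
    }
}
```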
The other area for improvement was the drawing speed of the user interface: a user interaction typically updates and then marks for redrawing (“invalidates”) a UI element. All marked elements lead to updates in the “DisplayList” which is then drawn to the display buffer by the GPU. The display buffer is finally displayed on the screen just when the display has ended its current refresh cycle (Vsync).
Even though GPU performance is high, having to wait for the next display refresh may cause the screen to stutter in cases where the new display buffer happens to be ready just after a display refresh cycle has started: in these cases, switching to the updated display buffer has to wait nearly a full display refresh cycle to avoid screen tearing. In Jelly Bean, CPU and GPU processing of the DisplayList starts synchronized to the end of display refresh cycles, thereby giving the system as much time as possible to finish updating the display buffer before the next display refresh occurs.
Another improvement was the introduction of triple display buffering: in cases where CPU and GPU have a display buffer ready for display but have to wait for the next display refresh cycle, they may now start preparing yet another (third) display buffer instead of just idling.
In another area, performance was improved by allowing direct updates of DisplayList properties, thereby skipping the invalidation of UI elements.
To identify and resolve display performance issue, Romain and Chet advised developers to get familiar with a few tools:
In the “platform-tools” directory, “adb shell dumpsys gfxinfo” will create profiling data that can comfortably be analysed and displayed in any spreadsheet application. Profiling needs to be enabled in Android’s developer settings first.
To draw windows to the display buffer, two hardware units are involved: the GPU operating on the frame buffer, and overlay hardware. Overlay hardware is very limited in number, typically to two or three. To analyse overlay vs. GPU usage, use the “adb shell dumpsys” command: in the output, look for a table with the headings “type” and “name”. If “FB” appears under “type” (indicating GPU usage instead of the desired “OVERLAY”), try reducing the number of windows displayed at the same time in your application.
Also recommended is the use of “systrace.py” under “tools/systrace” to profile an application (you need to first enable tracing in Android’s developer options). Open the generated “trace.html” file and look especially for sleeping processes that block delivery of user touch events.
Generally, developers should keep in mind that creating objects (“new …”) is a costly operation and should not be done in time-critical animation routines; it is better done earlier. Also remember to avoid drawing invisible areas, for example by using clipping in invalidate calls, and have a look at the “Choreographer” API for more control over animations.
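The allocation advice boils down to hoisting object creation out of the per-frame path. A sketch of the reuse pattern (names are mine, not Android API):

```java
public class FrameLoop {
    // Bad: "new float[2]" on every frame pressures the garbage collector
    // and can cause visible hitches. Good: allocate once, reuse per frame.
    private final float[] scratch = new float[2]; // preallocated, reused

    float advance(float x, float y) {
        scratch[0] = x + 1f; // reuse the buffer instead of allocating
        scratch[1] = y + 1f;
        return scratch[0] + scratch[1];
    }

    public static void main(String[] args) {
        FrameLoop loop = new FrameLoop();
        float total = 0f;
        for (int frame = 0; frame < 3; frame++) {
            total = loop.advance(frame, frame); // no allocation per frame
        }
        System.out.println(total);
    }
}
```

The same reasoning applies to temporary Paint, Rect or Matrix objects in a view’s drawing code: create them in the constructor, not in the draw path.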