Java 7: The Top 8 Features

It’s been a while since the last major Java release, and expectations were naturally high for this one. The Java 7 release initially included many JSRs with exciting features, like support for closures, which were later deferred to Java 8 in order to ship the JSRs that were already done. This effectively diluted what is now offered in Java 7 and has left some disappointed.

The Java language has undergone major changes since I started using it in 1998. Most of the changes were driven by the Java Community Process (JCP), which was established in 1998 as a formal and transparent process to let interested individuals and organisations participate in, and influence, how the language should evolve. This is done through the submission of a change request, known as a Java Specification Request (JSR), followed by a review and a voting process. Changes or enhancements made to the language can usually be traced back to the JSR where they were originally put forward for review. For example, the addition of Generics in Java 5 was done via JSR 14.

Java Releases

Here’s a quick snapshot of the past Java release dates (table 1). There are several small new features and enhancements in Java 7. Out of the 28 features that I looked at, here are the ones that I found useful.

New features and enhancements

#1 Strings in switch

In programming, we often encounter situations where we need to do different things based on the value of a variable. For Boolean variables, an if-then-else statement is the natural way of branching code. For primitive types, we use the switch statement. For String variables, however, we tend to resort to multiple if-then-else branches, as follows.

Java 6 and Before
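A minimal sketch of this pattern (the variable name and values are illustrative):

```java
public class LanguageDispatcher {
    // Pre-Java 7: branching on a String requires chained equals() checks
    public static String greet(String language) {
        if ("English".equals(language)) {
            return "Hello";
        } else if ("French".equals(language)) {
            return "Bonjour";
        } else {
            return "Unknown language";
        }
    }
}
```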

One workaround for this is to convert the String into an enum and then switch on the enum.
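A sketch of that workaround, using the same illustrative values (the enum and its constants are assumptions for the example):

```java
public class EnumSwitchDemo {
    enum Language { ENGLISH, FRENCH }

    // Workaround: map the String onto an enum constant, then switch on it
    public static String greet(String language) {
        Language lang;
        try {
            lang = Language.valueOf(language.toUpperCase());
        } catch (IllegalArgumentException e) {
            return "Unknown language"; // no matching enum constant
        }
        switch (lang) {
            case ENGLISH:
                return "Hello";
            case FRENCH:
                return "Bonjour";
            default:
                return "Unknown language";
        }
    }
}
```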

Java 7

Java 7, however, has added language-level support for Strings in switch. Now you can rewrite the same code more elegantly:
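A minimal sketch (variable and values illustrative, matching the if-then-else example above):

```java
public class StringSwitchDemo {
    // Java 7: switch directly on the String value
    public static String greet(String language) {
        switch (language) {
            case "English":
                return "Hello";
            case "French":
                return "Bonjour";
            default:
                return "Unknown language";
        }
    }
}
```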

Not only does this help us write more readable code, but it also helps the compiler generate more efficient bytecode than the if-then-else version, by switching on the hashCode() and then doing an equals() comparison. Please note that you will get a NullPointerException if the variable language in the above example resolves to null. I like this feature, but unlike some of the other enhancements in the past (like Generics in Java 5), I don’t anticipate using it a lot. In practice, I find myself using if-then-else for one or two values and resorting to an enum when the number of values is higher.

#2 try-with-resources statement

One of the most useful additions in Java 7 is the auto-closing of resources like InputStream, which helps us reduce boilerplate code in our programs. Suppose we were writing a program which reads a file and closes the FileInputStream when it’s done; here is how you would write it:

With Java 6 and Before
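A sketch of the pre-Java 7 pattern under discussion (file path and reading logic are illustrative):

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class FileReadOld {
    // Pre-Java 7 pattern: the stream is declared outside the try block
    // (so the finally block can see it) and initialized to null
    // (so it is guaranteed to be initialized when finally runs)
    public static String readFirstLine(String path) throws IOException {
        FileInputStream is = null;
        try {
            is = new FileInputStream(path);
            BufferedReader reader = new BufferedReader(new InputStreamReader(is));
            return reader.readLine();
        } finally {
            if (is != null) {
                is.close(); // may itself throw, hiding the original exception
            }
        }
    }
}
```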

I want to point out a couple of things in this code. Firstly, notice that we declare the FileInputStream outside the try block just so that it can be accessed in the finally block. The second observation is that we need to initialize the InputStream to null, so that it is guaranteed to be initialized when we access it in the finally block. Last but not least, the is.close() in the finally block may throw an Exception as well, thereby hiding the original Exception thrown in the try block from the caller. What we probably want is to handle the Exception thrown from is.close() and rethrow the original IOException.

The above code still has a shortcoming: the Exception thrown from finally is suppressed and not accessible to the calling code. I’m not sure how often we want to get both the original Exception and the Exception thrown from the finally block, but if we did want it, we could always do something like this:
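A sketch of such a workaround, assuming a hand-written SuppressedException helper (the class and its ThreadLocal accessor are not part of the JDK; they follow the description in the next paragraph):

```java
import java.io.FileInputStream;
import java.io.IOException;

// Illustrative helper: parks the exception thrown while closing a resource
// in a ThreadLocal so the caller can retrieve it after catching the original
class SuppressedException {
    private static final ThreadLocal<SuppressedException> THREAD_LOCAL =
            new ThreadLocal<SuppressedException>() {
                @Override
                protected SuppressedException initialValue() {
                    return new SuppressedException();
                }
            };

    private Exception suppressed;

    static SuppressedException getThreadLocal() {
        return THREAD_LOCAL.get();
    }

    Exception getException() {
        return suppressed;
    }

    void setException(Exception e) {
        suppressed = e;
    }
}

public class ReadWithSuppressed {
    public static void read(String path) throws IOException {
        FileInputStream is = null;
        try {
            is = new FileInputStream(path);
            // ... read from the stream ...
        } finally {
            if (is != null) {
                try {
                    is.close();
                } catch (IOException e) {
                    // record the close() failure without hiding the original one
                    SuppressedException.getThreadLocal().setException(e);
                }
            }
        }
    }
}
```

The caller can then inspect SuppressedException.getThreadLocal().getException() after the call.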

SuppressedException above is a user-written Java bean with a field named suppressed of type Exception. The calling code can then call SuppressedException.getThreadLocal().getException() to get the Exception that was suppressed in the finally clause. Great, we solved all the problems associated with try-catch-finally! Now we must remember to repeat this exact sequence with each use of try-catch-finally when handling files or other resources which need to be closed. Enter Java 7, and we can do the above without the boilerplate code.

With Java 7

try can now have multiple statements in its parentheses, and each statement should create an object which implements the new java.lang.AutoCloseable interface. The AutoCloseable interface consists of just one method.
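A minimal sketch: java.lang.AutoCloseable declares a single method, void close() throws Exception, and every resource declared in the try parentheses is closed automatically, in reverse order of creation (the Resource class below is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static final List<String> EVENTS = new ArrayList<String>();

    // A trivial resource; java.lang.AutoCloseable declares just one method:
    // void close() throws Exception
    static class Resource implements AutoCloseable {
        private final String name;

        Resource(String name) {
            this.name = name;
            EVENTS.add("open " + name);
        }

        @Override
        public void close() {
            EVENTS.add("close " + name);
        }
    }

    public static void demo() {
        // Both resources are closed automatically, in reverse order,
        // even if the body throws
        try (Resource a = new Resource("a"); Resource b = new Resource("b")) {
            EVENTS.add("work");
        }
    }
}
```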

Each AutoCloseable resource created in the try statement will be automatically closed! If an exception is thrown in the try block and another Exception is thrown while closing the resource, the first Exception is the one eventually thrown to the caller. The second Exception is available to the caller via the ex.getSuppressed() method. Throwable.getSuppressed() is a new method added to Throwable in Java 7 just for this purpose.

GlassFish reduced to “toy product” as commercial offering axed

In a GlassFish and Java EE roadmap update yesterday, it was revealed that going forward there would be no more commercial major releases of the popular Oracle GlassFish Server. 
Oracle is ushering those looking for an alternative towards the WebLogic Server - however, it insists that GlassFish is in no way dead.
In terms of practicality, having two commercial servers in one company never made sense. Ultimately one was going to have to go, and unfortunately for GlassFish’s community, their product bit the dust.
For now, the plan is for GlassFish Server to continue to underpin Java EE reference implementations in future releases - however, it remains to be seen how the server will thrive now that it’s been cast out of the Oracle commercial fold.
The community around GlassFish certainly isn’t convinced. Java EE specialist Markus Eisele lamented in his blog that “GlassFish Server as we know it today is deprecated from a full blown product to a toy product”. He added that, with the lack of commercial support, overall quality and reliability was bound to suffer. Not to mention the fact that development will hugely slow, and users can expect far less frequent updates from now on.
There had been rumblings that something was afoot in Larry Ellison's empire. Two weeks ago, loyal Java EE and GlassFish evangelist Arun Gupta made an abrupt departure to open-source stalwarts Red Hat.
Whilst he couldn’t tell JAXenter whether he was aware that this move was coming, Gupta made it very clear that, with no commercial backing, GlassFish is simply not the viable option for Java EE that it once was.
He told  us that “now that Oracle is not planning any commercial support, WildFly definitely emerges as the leader in this space. So while all the rapid innovation will continue in WildFly, developers / customers can be assured that their deployments will be commercial supported with JBoss EAP”.
In recent years, Red Hat has set a bit of a precedent for picking up the slack when the Java stewards have left their dependents high and dry. This spring, the trilby crew stepped up with OpenJDK 6 when Oracle washed its hands of Java 6, and with Arun and his huge band of followers on board, it seems likely that it’ll be gaining a host of new WildFly adherents in the months to come.

Oracle evangelist: “GlassFish Open Source Edition is not dead”

Following the fallout from Oracle’s decision to kill off the commercial version of GlassFish, Oracle evangelist Bruno Borges has hit back, insisting that GlassFish is very much still in good health.

On Monday, Oracle quietly announced the decision to end support for the commercial branch of GlassFish (known as Oracle GlassFish Server). Prominent Java EE blogger Markus Eisele quickly picked up on the news and didn’t mince his words. In a blog post titled “R.I.P. GlassFish - Thanks for all the fish”, he said the application server was being “deprecated from a full blown product to a toy product”. Judging by the reaction on Twitter, many commentators appear to agree.
However, Borges, cross-posting to both his personal and Oracle-hosted blogs, had a different spin on the news. GlassFish’s commercial twin might be gone, he said, but the open source edition is far from dead. In fact, he argued, this might be a positive thing for GlassFish, allowing it to be “free from any ties with commercial decisions”.
He also set out to combat other “FUD”, such as confusion over the price of WebLogic – the closed-source server Oracle is pushing current GlassFish customers towards – and emphasised that support will continue for old builds of Oracle GlassFish Server.
Many have suggested that GlassFish’s natural successor is JBoss WildFly, the open-source foundations for JBoss EAP – including ex-Oracle evangelist Arun Gupta. Gupta, who evangelised GlassFish before moving to Red Hat last month, told JAXenter that WildFly “definitely emerges as the leader in this space”.
In response to such claims, Borges pointed out that Red Hat does not provide commercial support for WildFly – only JBoss EAP, which will not share identical builds. It’s a similar case with IBM and WebSphere, he said: Oracle is far from alone in its policies.

Tomitribe steps up

In fact, only one open-source application server now has direct commercial support: Apache TomEE, courtesy of Tomitribe. The young company, recently founded by David Blevins, took the opportunity to reiterate its commitment to open source in a blog post of its own.
However, Blevins – who appears to be the author of the unsigned post – said selfish companies using GlassFish were as much to blame for the move as Oracle themselves.
“If you are a GlassFish user, how would you compare Oracle’s contribution to GlassFish to your own contribution to GlassFish?” he wrote. “For as much as we throw money around in our companies, it’s shocking we do not think to direct some of it at the communities of Open Source software we use.”
Tomitribe was launched at JavaOne this year with the explicit purpose of supporting the TomEE community and its contributors. Its strategy is to provide paid consulting, training and support services in order to fund TomEE’s development – and, presumably, build a profitable business too.
This need for a commercial base to open source software was echoed by Blevins in yesterday’s blog post. He concluded: “Not even IBM or Oracle can pick up the bill for Open Source forever. All Open Source communities need your support.”

From database to RESTful web service to HTML5 in FIVE minutes

In "From database to RESTful web service to HTML5 in 10 minutes", published near the beginning of this year, I described a set of tools that lets you expose data from a database via RESTful Web Services to an HTML5/Backbone.js/CSS front-end. And all this was done in 10 minutes.
You then had the basis of a complex application that already had your data integrated, together with a basic user interface to get started with. The templates used were configurable, meaning that you could include your own company styles and logos, and the application contained a set of best practices for organizations creating hybrid Java/HTML5 applications, which is still a relatively new combination of technologies.
You started by creating a Maven-based Java EE 7 application containing JPA entity classes and RESTful Web Services, all of which were generated via wizards and templates in NetBeans IDE. Then you created an HTML5 application in which you used tools in NetBeans IDE to generate a Backbone.js front-end from the Java EE backend. That was all well and good, but left several people wondering why they couldn't create their HTML5 front-end directly within their Java EE 7 application. In this article, I'll show how this can be achieved in NetBeans IDE 7.4, thus shortening the development cycle from 10 minutes to about 5.
Using the Java EE Platform
As before, start in the Services window, where you can register your database server, connect to your databases, and modify data as needed.
Next, create a new Maven Java Web Application, of course, without needing to install a Maven plugin into NetBeans IDE, because Maven support in NetBeans IDE is just there out of the box as soon as you start the IDE.

Click Next and provide the Maven properties relevant to your project, e.g., the Group ID and Version information.

When you complete the wizard, you have a brand new Maven Java Web project, ready to roll with Java EE 7 registered in the POM and visualized in the logical view in the IDE.

As you look through the generated code, note that you can tweak the SQL calls directly within the @NamedQueries, as shown in the screenshot below.
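As a sketch, a generated entity with tweakable named queries might look like this (the entity, table and query names are illustrative assumptions, not the wizard's exact output):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQueries;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

@Entity
@Table(name = "CUSTOMER")
@NamedQueries({
    // The JPQL in these generated queries can be edited in place
    @NamedQuery(name = "Customer.findAll",
                query = "SELECT c FROM Customer c"),
    @NamedQuery(name = "Customer.findByName",
                query = "SELECT c FROM Customer c WHERE c.name = :name")
})
public class Customer implements java.io.Serializable {
    @Id
    private Integer customerId;
    private String name;
    // getters and setters omitted
}
```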

Next, let's generate our RESTful Web Services! The nice thing about convention over configuration is that tool providers know exactly where all the folders and files need to be created and what they need to contain, simply by following the conventions. We have our database, so we can point to it and let all the code be generated from there, while being able to tweak the templates used where needed.

Click Next and notice all the tables in the selected database are shown.

Select the table of interest, and the tables with foreign key relationships on the selected table will be automatically selected too:

You can then define the package where the classes will be created. Notice that when the database changes, you can come back into this wizard and regenerate all the code because you'll be able to select "Update" instead of "New", which will add new properties to the generated JPA entity classes, without removing anything you already added there.
On the final page of the wizard, specify where the RESTful Web Services will be created:

Completing the wizard, you now have a rather neat Maven Java Web application, a firm basis for whatever user interface you need to create:

Notice that the POM shows a graph of your Maven dependencies, which can be exported to an image to impress your boss with:

Combining the @ApplicationPath("webresources") in the "ApplicationConfig" class with the @Path("com.mycompany.customermanager.customer") in the "CustomerFacadeREST" class, you can browse to your payload in the browser after deploying the application:
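The two halves of that URL come from annotations like these (a sketch; the generated classes may differ in detail, and the second class normally lives in its own file):

```java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.Path;
import javax.ws.rs.core.Application;

// Contributes the "webresources" segment of the URL
@ApplicationPath("webresources")
public class ApplicationConfig extends Application { }

// Contributes the resource segment of the URL
@Path("com.mycompany.customermanager.customer")
class CustomerFacadeREST {
    // generated CRUD methods omitted
}
```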

The "findAll" method is automatically invoked via our RESTful call, which we can tweak to the following to return JSON, as shown above, instead of XML:
@GET
@Override
@Produces({"application/json"})
public List<Customer> findAll() {
        return super.findAll();
}

Incorporating the HTML5 Platform
Now we're ready to create our HTML5 front-end. Use the "RESTful JavaScript Client" wizard to do so, as shown below, in the same application as where you created all the files above:

After specifying that you want a Backbone.js front-end, together with a jQuery Tablesorter UI and the related JavaScript libraries, you're good to go:

Notice that when you click the second Browse button, the available RESTful Web Services will be shown for you to select:

Click Next and specify the name of the HTML file that will reference your JavaScript client:

Now that your Backbone.js front-end is complete, browse through it, read the code, and understand what's been created as the basis of your application.

For example, notice that the URL to your RESTful Web Service is referenced in your Backbone.js front-end code.

In the toolbar, select the browser where you'd like to deploy the application:

If you deploy the application to a Chrome browser that has the NetBeans plugin installed, you'll see a NetBeans icon that lets you switch between different form factors to check how your user interface will look on different platforms:

When you choose "Inspect in NetBeans Mode", you can see where items defined in the browser are defined back in the IDE:

Similarly, you can browse the live DOM view in the IDE and see where matching items are defined back in the browser.

Aside from the above, you can use the Java debugger and the JavaScript debugger at the same time to debug the front-end and back-end simultaneously. Also be aware that changes you make in Chrome Developer Tools will automatically be saved back to your files in the IDE. Yes, that's round-tripping between NetBeans IDE and Chrome Developer Tools.
As you can see from the above, the time taken to get to the end of this article is significantly reduced from the previous version. Why? Simply because you can now integrate HTML5 front-ends into your Maven Java Web applications. Now you can choose whether you'd like a JSF front-end (via PrimeFaces or Vaadin, for example) or a JavaScript front-end via Backbone.js or some other JavaScript library. The choice is yours and depends on your skill sets and business needs. Whichever approach you choose, NetBeans IDE is there to help you from start to finish!

51 holes plugged in latest Java security update

51 vulnerabilities have been patched in the latest Java security update, the first of a new quarterly cycle. Until now, Java’s official patch release schedule has been three times a year – although the emergence of dangerous zero-day exploits has forced Oracle to issue two out-of-cycle emergency patches over the past twelve months.
Of the 51 Java vulnerabilities patched in this update, all but one are remotely exploitable without the need for a username and password, and 12 were given the maximum possible CVSS score of 10/10.
As with the majority of high-profile Java vulnerabilities, almost all target browser Java applets, and as such security advisors continue to recommend that users disable the browser plugin (or, if possible, remove it altogether). However, security firm Qualys notes that two of the 51 vulnerabilities, both “highly critical”, can also apply to server installations.
The new schedule is in line with Oracle’s quarterly Critical Patch Update (CPU) bulletin, which also covers the company’s other software. VirtualBox, MySQL Server and GlassFish are among the many other applications that have received security updates this week.
Last month, Trend Micro highlighted a new wave of attackers, who are taking advantage of weaknesses in Java’s native layer. Though difficult to pull off, it appears knowledge of such exploits has become widespread, with highly dangerous results – infiltration of the native layer allows for execution of arbitrary code.
On the Sophos Naked Security blog, researcher Chester Wisniewski praised the move to a more regular cycle, but said it still wasn’t regular enough – especially since Microsoft and Adobe provide monthly patches for their browser plugins.
“Put the award on the shelf in your lobby, sell the ten million dollar boat and hire the engineers needed to update the Java patch cycle to monthly with the spare cash,” concluded Wisniewski, referring to Oracle’s recent America’s Cup win. “3+ billion devices will thank you.”

Tutorial: Integrating with Apache Camel

Running the rule over the open source integration framework


Since its creation by the Apache community in 2007, the open source integration framework Apache Camel has become a developer favourite. It is recognised as a key technology for designing SOA/integration projects and addressing complex enterprise integration use cases. This article, the first part of a series, will show how the framework builds routes from its Domain Specific Language, how exchanges are carried along those routes and processed according to the patterns chosen, and finally how integration occurs.


From a general point of view, designing an integration architecture is not such an obvious task, even if the technology and the frameworks you want to use are relatively easy to understand and implement. The difficulties lie in the volume of messages, the transformations to apply, the synchronicity or asynchronicity of exchanges, processes running sequentially or in parallel, and of course the monitoring of such projects running in multiple JVMs.
In traditional Java applications, we call methods from classes, while objects are passed and/or returned. A service (such as payment or billing) is a collection of classes. Called methods are chained, and objects transport information, sometimes enlisted in transactions but always deployed within the same Java container (Web, JEE, standalone). Unless we have to call external systems or integrate legacy applications, RDBMSs and so on, most of the calls are made locally and synchronously.
If a service is to be reusable, it needs to be packaged, versioned in a library and communicated to the projects which will use it. This approach is fine for projects maintained by in-house development teams, where costs can be borne by IT departments, but it suffers from various issues and usually requires the same programming language, or a specific technology (RPC, IIOP, …), to interconnect processes, as well as a common container where the code is deployed.

Figure 1: SOA
To allow applications to be developed independently, without such constraints, decoupling must be promoted between the issuer of a request/message and the service in charge of consuming it. This architecture paradigm is called Service Oriented Architecture and uses a transport layer to exchange information between systems. One of the immediate benefits of SOA is to promote a contract-based approach for defining the services which are exposed between applications and managing them according to ‘governance rules’.
The SOA approach has been able to federate different teams and tackle the problems surrounding the development of more complex projects. This IT transformation is required as companies need to be more agile to adapt to market needs: information must be provided in real time, and business adaptations need to be supported by existing legacy and back-end systems.
While the SOA philosophy has been widely adopted, the learning curve to master XML, XSD schemas, Web Services and business process engines, the creation and management of transversal teams, the governance needed to manage services and the skills to be acquired have certainly been factors in explaining why SOA still struggles to be adopted by large companies. Moreover, IT departments are concerned not only with promoting and managing Web Services and registries but also with interconnecting, exchanging, transforming and validating information between disparate systems. This integration aspect of IT work was completely “underestimated” when the SOA principles were elaborated.

Enterprise Integration Patterns

In 2003, Gregor Hohpe and Bobby Woolf published a book called 'Enterprise Integration Patterns', in which they not only describe complex use cases but also define a vocabulary, a grammar and design icons to express the complex integration patterns that IT departments have to address. This book changed the way development teams (business/functional analysts, data modelers and developers) collaborate to design integration/SOA projects. The discussions were no longer focused just on how services and XML should be structured and business processes imagined, but also on how patterns should be used to solve integration use cases (aggregation, splitting, filtering, content-based routing, dynamic routing). The book has moved practitioners towards a more agile programming approach. To support the EIPs described in this book and help developers solve integration use cases, the Apache Camel integration framework was created 5 years ago.

EIP design icons

Discover Apache Camel

Representing EIP patterns such as aggregation or routing requires that we ‘express’ them using a language. This is not a new programming language but rather a language specific to a domain, which describes problems adequately for the chosen domain (integration). Apache Camel is a Java integration framework that supports such a Domain Specific Language (aka DSL; for further information, see the Camel documentation) using object-oriented languages like Java, Scala or Groovy. No parser, compiler or interpreter is required, just a list of instructions which are sequenced:
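A minimal sketch of such a sequence of DSL instructions (the endpoints are illustrative):

```java
import org.apache.camel.builder.RouteBuilder;

// A route is a sequence of DSL instructions, read from top to bottom
public class TravelRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file:///brussels")       // consume arriving "passengers"
            .log("Passenger received") // an intermediate processing step
            .to("file:///paris");      // deliver them at the destination
    }
}
```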
Apache Camel is also defined as a “mediation and routing” engine. Let’s think of the global road network: we can transport vehicles of different types and sizes, with passengers of different origin, age and sex, between cities and capitals. According to traffic conditions, the trip can be adapted and alternative roads used. Likewise, Apache Camel transports ‘messages’ along routes:

 from("Brussels")  // Pick up passengers in Brussels Capital
     .to("Paris"); // Transport them to Paris
Each Camel route starts with a from instruction, which is particularly important as it acts as the consumer and plays a specific role depending on whether it is triggered (‘event-driven architecture’) or reads data at regular intervals (‘polling architecture’). The consumer is a factory: whenever data is received, ‘Messages’ are created and transported along the Apache Camel route.

Of course, Apache Camel does not transport ‘passengers’ in a route but ‘messages’. These messages pass through a collection of steps, aka processors, to transform, validate, format and enrich the content of the information received. The framework provides specialized processors (Bean, Log, …) to simplify the manipulations we would like to apply, like the code below:

 from("file:///travelers")
     .log("Passport has been controlled")
     .to("log:travel?level=INFO"); // Ticket has been controlled
Each processor placed after the ‘from’ passes the information on, forming a chain like the wagons of a train, as below:

 from("file:///travelers")
     .to("log:travel?level=INFO")                         // Ticket has been controlled
     .to("file:///outputDirectoryWhereFileWillbeCreated")
     .to("")                                              // Call external HTTP server
     .to("jms:queue:outputQueue");                        // Response received is published in a queue
Nevertheless, certain processors produce a message that Camel will send to a server (SMTP, FTP), an application (RDBMS), a broker (JMS) or another Camel route (DIRECT, SEDA, VM), and in some cases will wait until they get a response (HTTP, TCP/IP, WS, REST, WebSocket).
One of the key benefits of Camel is that it offers the possibility to take decisions according to the information it carries, using a Message structure. Such a Message corresponds to an Exchange object and contains the information carried in a Body, along with metadata in Headers.
The metadata allows you to document the objects transported, but also to know where they come from (File, FTP, WebService, SMTP, JDBC, JPA, JMS, …) and where they should go. To support the decisions, Camel uses the EIP patterns Content Based Router, Filter, Aggregator, Splitter, … with a specific Expression Language (Simple, Constant, XPath, XQuery, SQL, Bean, Header, Body, OGNL, Mvel, EL, ...).

Message structure
Decisions are taken by predicates, which we can compare to if/then/else or while/for statements. The routing engine determines what to do with the ‘Message(s)’ and where they should go.

The choice/when used by the Content Based Router calculates (using the predicate and expression language) whether the condition is met. If it is, the exchange moves on to the processors defined in that branch; otherwise it moves into another pipeline. All this is demonstrated below:

 from("file:///travelers")
     .choice()
         .when().simple("${header.isValid} == true") // Simple language checks if the header is true
             .log("Passenger has been controlled")
             .log("We can now control their ticket")
         .otherwise()
             .log("You are not authorized to continue your trip");
For some of the components used, a response is expected by the receiver called (HTTP, WebService, REST, JMS request/reply, TCP/IP, …) or by the sender issuing the message. In this case, Camel adapts the pattern used internally to transport the message. This pattern is normally of type InOnly, but when a response is required the pattern used is InOut. To transport the information without mixing the incoming message with the outgoing message, Apache Camel uses two different objects for this purpose: in and out. When no response is required, as is the case when we use, for example, the File component, the out object is always null.
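As a sketch, a processor can check the exchange pattern before populating the out message (the class name is illustrative; getPattern().isOutCapable() is Camel's way of distinguishing InOnly from InOut):

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class ReplyProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        String request = exchange.getIn().getBody(String.class);
        if (exchange.getPattern().isOutCapable()) {
            // InOut: a response is expected, so populate the out message
            exchange.getOut().setBody("Reply to: " + request);
        }
        // InOnly: leave the out message alone (it stays null)
    }
}
```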

One step further

As traffic is controlled by operators, Apache Camel provides an environment to manage routes (start/stop/suspend/resume the traffic in routes). This environment is called a container, or more precisely a CamelContext.

The container is not only the runtime where the routes are deployed; it also acts as a complex ecosystem. It can trace the exchanges, expose management information over JMX, handle the thread pools, discover routes, shut routes down gracefully, and generate the unique identifiers used when an exchange is created.
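A sketch of those operations, assuming a route registered with the id "travel" (the id is illustrative; the lifecycle methods shown are the Camel 2.x CamelContext API):

```java
import org.apache.camel.impl.DefaultCamelContext;

public class RouteOperations {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // ... routes registered elsewhere with an explicit id, e.g. .routeId("travel") ...
        context.start();

        context.suspendRoute("travel"); // pause the traffic on the route
        context.resumeRoute("travel");  // let it flow again
        context.stopRoute("travel");    // stop the route completely
        context.startRoute("travel");   // and start it once more

        context.stop();
    }
}
```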
The CamelContext also registers the components that we need to consume or produce information. According to the scheme name contained in the URI, Apache Camel scans the classes loaded by the classloader to find the component it needs:

 scheme://context-path?key1=value1&key2=value2  // general endpoint URI format
 file:///home/user/integration                  // example: a file endpoint
The Component class is a factory which creates an Endpoint object based on the parameters collected from the URI (?key1=value1&key2=value2 ...). This object contains the methods required to create a Producer or a Consumer, according to the role played by the component.
Typically, a polling consumer regularly scans a directory of a file system, or listens on a JMS destination for messages, creates an Exchange and propagates it to the next processor, as shown below:
protected int poll() throws Exception {
    Exchange exchange = endpoint.createExchange();
    // create a message body
    Date now = new Date();
    exchange.getIn().setBody("Hello World! The time is " + now);
    try {
        // send message to next processor in the route
        getProcessor().process(exchange);
        return 1; // number of messages polled
    } finally {
        // report any exception that occurred during processing
        if (exchange.getException() != null) {
            getExceptionHandler().handleException("Error processing exchange",
                    exchange, exchange.getException());
        }
    }
}
At the opposite end, the Producer waits until it gets a Camel Exchange from a processor, then manipulates the “Message”, enriches it and changes the ‘metadata’:

 public void process(Exchange exchange) throws Exception {
     // Add a new property to the exchange ("controlled" is an illustrative name)
     exchange.setProperty("controlled", Boolean.TRUE);
 }
A Camel project typically consists of a Java main class where we create a DefaultCamelContext, register the Camel routes and start the container. As described in the following example, a RouteBuilder class is required, and its configure() method calls the static methods (= instructions) that design a Camel route (= a collection of processors). A RouteBuilder allows you to create one or more Camel routes.
  public class MainApp {
    public static void main(String[] args) throws Exception {
        // CamelContext = container where we will register the routes
        DefaultCamelContext camelContext = new DefaultCamelContext();
        // RouteBuilder = where we design the routes, here using the Java DSL
        RouteBuilder routeBuilder = new RouteBuilder() {
            public void configure() throws Exception {
                from("file:///travelers").log("Passenger received"); // illustrative route
            }
        };
        // Add the routes to the container
        camelContext.addRoutes(routeBuilder);
        // Start the container
        camelContext.start();
        // When the work is done we shut it down
        camelContext.stop();
    }
  }
Compared to other integration frameworks, Apache Camel is unique in that it can handle Java objects and automatically convert an object to the type expected by the Processor or Predicate.

During the creation of the CamelContext, all the classes in charge of type conversion (File to String, Reader, String to DOM, …) are loaded into an internal registry, which is queried by the Camel processors during exchange processing. These converter classes come from the different jars on the Java classpath. While this process is performed by default by Apache Camel, you can also use a specific instruction to tell Camel which converter should be applied to the object type received. See below:
from("file:///travelers") // the File endpoint polls the "travelers" directory
                          // every second for new files
.convertBodyTo(String.class) // convert the generic Camel File object to a String,
                             // as required by the XPath expression language
.choice() // Content Based Router
    .when(xpath("/traveler/@controlled = 'true'")) // predicate: expression on the
                                                   // left, expected value on the right
        .log("Passenger has been controlled")
    .otherwise()
        .log("You are not authorized to continue your trip");

Next Time

During this first part of the Apache Camel article, we have introduced some of the basic functionality of this Java integration framework, which implements the Enterprise Integration Patterns and uses a Domain Specific Language to design routes that transport Messages between systems and applications.
The DSL allows us to define instructions which are read sequentially. When information is received or consumed by a Camel Component, an Exchange is created and moved to a collection of processors that transform the content of the message, held in a Body object. The framework is able to make decisions (Content Based Router, Filter, …) using one of the supported Expression Languages (XPath, Simple, SQL, Header, Body, Constant, EL, JavaScript, …) together with a Predicate which defines the condition. During the transportation of the Exchange, Camel automatically converts objects from and/or to a specific type using an internal registry of conversion strategies. A Camel project typically uses a container called a CamelContext to register the Camel Routes and Endpoints. In the second part of the series, we will cover more advanced features of Camel (transformation of complex data formats, multithreading, asynchronous exchanges, ...).

Exploring the future of the JVM at JavaOne


When Sun Microsystems was acquired by Oracle three years ago, to say that the Java Virtual Machine (JVM) ecosystem was overgrown would be to put it mildly. As Java Platform Group JVM architect, Mikael Vidstedt found himself faced with the unenviable task of pruning seven JVMs down into a more manageable entity.
He’s been coordinating Oracle’s technical vision for the JVM ever since, and after a year that’s been eventful for mostly all the wrong reasons, faced up to a JavaOne audience with plenty of questions for where the virtual machine can go next.
This time last year, Vidstedt was content with Oracle’s progress. As far as he was concerned, JVM convergence was a mission accomplished – with Java Flight Recorder and Mission Control incorporated, Permgen purged, and everything else ticking over nicely. Then wave after wave of security issues came to light – most of which Oracle would rather not mention in their triumphant keynote speech, thank you very much.
Even though the issue wasn’t addressed by the big guns, and swiftly batted down in the Java media panel, Vidstedt readily acknowledged the many problems that Oracle has faced over the past year due to vulnerabilities in Java – pointing out that the sum of these problems was very much demonstrable by the fact that this year’s event has its own dedicated security track.  
With this in mind, security will apparently remain a key focus area for future JVM projects. But that’s not the only big issue. As Vidstedt noted, from now on, with cloud computing here for good, situations where many, many JVMs are running (almost) the same application will become the norm. The focus will be on how best to share resources across machines, and sound distribution management will be critical.
Understandably, lambdas are a hot topic at JavaOne this year, and Vidstedt was keen to emphasise their benefit to future JVM developments. Oracle has invested literally centuries of man hours grappling with the issue of how to make non-Java languages run efficiently on the JVM. With the addition of the invokedynamic instruction in Java 7, real progress has been made, but there is still considerable work to do. Going forward, lambdas will be key pivots for a language-blind JVM.
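For readers who haven’t yet seen the syntax the JVM will need to support, here is a minimal sketch of a Java 8 lambda and method reference; the class and variable names are illustrative, not from any JavaOne demo:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class LambdaSketch {
    public static void main(String[] args) {
        // A lambda expression implementing the Predicate functional interface
        Predicate<String> longName = name -> name.length() > 4;
        System.out.println(longName.test("Scala")); // prints "true"

        // A method reference passed where a functional interface is expected
        List<String> languages = Arrays.asList("Java", "Scala", "Groovy");
        languages.forEach(System.out::println);
    }
}
```

Under the hood, the compiler translates both forms into invokedynamic call sites rather than anonymous inner classes, which is why the Java 7 groundwork matters here.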
In terms of serviceability, Java Flight Recorder (re-released earlier this month) continues to be a work in progress. The most exciting development in this area is the addition of automated data analysis in Java Mission Control, drawing analysis from events, and coming to high level conclusions.


Spring creator Rod Johnson joins Hazelcast board


 Rod Johnson, creator of the Spring Framework, is to join Hazelcast, the company behind the in-memory data grid technology of the same name. The move is part of an investment round, which saw Johnson and Bain Capital Ventures together inject a total of $2.5 million into the startup.
Hazelcast was founded in Istanbul, Turkey in 2008 off the back of an open source project. The company has since opened offices in Palo Alto, and Bain Capital MD Salil Deshpande claims that Hazelcast is “positioned to clean up in a market that Gartner says will be worth a billion dollars by 2016”.
The investment is particularly notable, however, due to the involvement of Johnson. The Australian developer and entrepreneur created the highly influential Spring Framework for Java in 2003, and cofounded SpringSource – later purchased by VMware – off the back of its success.
However, in July 2012 he announced his departure from VMware, and in effect Spring, to “pursue other interests”. He then set tongues wagging with a position on the board of Scala creators Typesafe in September (in which he continues to be “actively involved”).
Alongside Typesafe and now Hazelcast, Johnson has also invested in – and serves on the boards of – the companies behind graph database Neo4j, real-time JavaScript platform Meteor and search engine Elasticsearch. InfoQ reports that news of a further two board positions will be announced “in the next few weeks”. [Corrected Sep 20]
It appears that, a year on from his departure, Johnson’s post-Spring plan isn’t just to help kill Java with Scala, but to spread his open-source business knowledge throughout the industry. And perhaps make some cash on the side while he’s at it.

Should Oracle be doing more for Java 6 users?

It may not be Halloween for another month or so, but a grim blog post from security expert Christopher Budd will send a shiver down the spine of users with desktop Java still installed. As we reported last week, Java’s security issues have become even more complex this year, with a new raft of super-skilled hackers capable of targeting its native layer and exploiting system vulnerabilities on an unprecedented level. Unfortunately for Oracle, the bad news has continued, with Budd delivering the grim prediction on September 10 that there’s every reason to believe that the worsened situation is “here to stay”, and likely to get even worse before it gets better.
In a doom-laden post on Trend Micro, Budd identified the native layer exploits as emblematic of an increasing sophistication in attacks, and just one sign that things had changed for the worse. The coalescence of this issue with a new wave of attacks targeting unpatched vulnerabilities in Java 6, a widely-deployed but, as of February 2013, no-longer-supported version of Java, has led the analyst to conclude that the overall ‘threat environment’ for Java has increased significantly.
More than 50% of Java users are still actively running Java 6, despite the huge risks of using a version without security support, creating an unprecedented situation for Oracle. Java 6 users are effectively now a sitting target, and Budd is in no doubt that new waves of attacks are inevitable as malware developers get busy reverse engineering Java 7 fixes to have their wicked way with the old, unsupported version.
Of course, the simple solution would be to just uninstall Java 6 and upgrade to Java 7 – but, as we’ve seen, that’s not a realistic scenario or a feasible solution for every user, and while there is a premium option where users can pay for extended Java 6 support, that’s simply not an answer for everyone.
Information security consultant Michael Horowitz points out on his Java version testing site that there seems to be a communication failure between Java browser plug-ins and browsers, meaning that it can be difficult to find and catalogue all the versions of Java on a PC. The platform is so ubiquitous that it would be virtually impossible to completely eradicate vulnerable versions - and so means that the line of defence must shift from individual devices to the network as a whole. As Budd reflects, this gives a new and sinister connotation to Sun Microsystems’ marketing slogan “The Network is the computer.”
When support for Windows XP is withdrawn by Microsoft next spring, Budd frets that “a perfect storm of permanently vulnerable systems” will be created, leading him to hypothesise that summer 2014 could be a veritable spree for cyber criminals.
For those unable to jump ship from Java 6, the best they can do is try to mitigate the security issue. Since March, Red Hat has assumed leadership of the OpenJDK 6 community, and Apple has actively updated OS X to automatically disable Java if it hasn’t been used for 35 days. Oracle is highly aware of the issue, and has been enforcing a Microsoft-style ‘security push’, but perhaps it would be better served by re-examining its “End of Life” date policy and its abandonment of non-premium customers, not only as a goodwill gesture towards the millions of users still dependent on Java 6, but to bolster the integrity of Java as a whole.

Developers now able to get their hands on Java 8 preview



Well, it’s near one deadline


Oracle has now released the preview test build of Java SE 8, intended for broad testing by developers – although the much delayed general-release development kit will be percolating for at least another few months.

Yesterday, Oracle’s Mark Reinhold, chief architect of the Java Platform Group, urged developers to test out the developer preview for JDK (Java Development Kit) 8. So what are the key features to look out for in this release? Well, of course there’s the much anticipated Project Lambda (JSR 335) – cited in a blog post from Reinhold as one of the main reasons for pushing back the release earlier this year – which “adds lambda expressions, default methods, and method references to the Java programming language and extends the libraries to support parallelizable operations upon streamed data.”
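As a rough sketch of the “parallelizable operations upon streamed data” that quote refers to, the following assumes only the preview’s java.util.stream package; the numbers are arbitrary:

```java
import java.util.stream.IntStream;

public class ParallelSum {
    public static void main(String[] args) {
        // Sum the integers 1..100 using a parallel stream pipeline
        int sum = IntStream.rangeClosed(1, 100).parallel().sum();
        System.out.println(sum); // prints "5050"
    }
}
```

The same pipeline runs sequentially if you simply drop the parallel() call, which is the point: the library, not the developer, manages the fork/join plumbing.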
Other noteworthy developments in JDK 8 include a new date and time API, compact profiles, and the Nashorn JavaScript engine. There are also some “anti-features”, such as the removal of the permanent generation from the HotSpot virtual machine.
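For a taste of the new date and time API (JSR 310), here is a minimal, self-contained sketch; the dates chosen are arbitrary examples, not release dates:

```java
import java.time.LocalDate;
import java.time.Period;

public class DateTimeSketch {
    public static void main(String[] args) {
        // LocalDate is an immutable date type from the new java.time package
        LocalDate start = LocalDate.of(2013, 9, 10);
        LocalDate later = start.plusMonths(6);
        System.out.println(later); // prints "2014-03-10"
        // Period models a date-based amount of time between two dates
        System.out.println(Period.between(start, later).getMonths()); // prints "6"
    }
}
```

Unlike the old java.util.Date and Calendar classes, these types are immutable and thread-safe, so arithmetic like plusMonths returns a new object rather than mutating the original.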
This month was originally slated for the release of JDK 8, but due to the numerous security concerns that have dogged Java recently, Oracle wisely postponed availability until Q1 2014 - at the earliest. Though, based on the track record for this release, we’re hedging our bets.
Although Oracle has been putting its energies into resolving outstanding security glitches and restoring confidence, recent incidents show that there’s still considerable work to be done. With exploits like watering hole attacks and zero-days still fresh in the minds of many, Oracle won’t want to risk the final release becoming yet another excuse for Java baiting.
If you’re eager to get stuck into the release, you can download it here.