mercoledì 29 luglio 2009

Unit Testing Best Practices

With the agile shift in software development, unit testing has become more and more important.
I like to say that a good programmer is one who knows how to write good unit tests.
A good unit test ensures that the tested code is correct, testable, readable, reasonable and even elegant!

So how to write good unit tests?

This is my personal best practices list:
  • always use TDD: it guarantees that the interfaces of your classes (i.e. your API) are designed against a real-life scenario; testability improves code readability and inner structure, giving it a simple design
  • use data builders: tests require creating many objects; use data builders to build them, and style your test building code by defining your own DSL for data building (see the sketch after this list)
  • don't use DbUnit, use db builders: DbUnit context files are external to the tests, making them less readable and self-contained; these files duplicate code and are hard to refactor and maintain (does Eclipse automatically refactor an XML or Excel file?); so apply the builder pattern to create db objects in Java too
  • use light-weight mocks: don't try to verify the internal behaviour of a mock, just mock its external behaviour; digging into the internals of a mock exposes too much information, making the test fragile; most of the time you can check the correctness of the class under test just by checking it externally (by the way, my favourite mocking library is Mockito)
  • write readable asserts: use a matcher library such as Hamcrest or a DSL like FEST-Assert
  • use a DSL when possible: there are nice libraries out there (time4tea, fest-reflect) that can make your test (or production) code more fluent, and you can easily create your own (I did a small one for date manipulation, maybe I will post it later on)
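To give an idea of how these practices fit together, here is a minimal sketch of a test that combines a data builder, a light-weight Mockito mock and a Hamcrest assert (OrderBuilder, Order, PaymentGateway and OrderService are made-up names, not a real library):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void paysTheOrderThroughTheGateway() {
        // data builder DSL keeps fixture creation readable
        Order order = new OrderBuilder().withItem("book", 2).withItem("pen", 1).build();

        // light-weight mock: stub only the external behaviour we need
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge(order.total())).thenReturn(true);

        // exercise the class under test and check it externally
        OrderService service = new OrderService(gateway);
        assertThat(service.pay(order), is(true));
    }
}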
That's it.

So go unit testing right now :)

martedì 28 luglio 2009

Who is gonna be the next Java?

When Java came out in 1995, it was hard to imagine it would reach such widespread adoption.

There were several other OO programming languages around (C++ had been out since 1983), but the keys to Java's success were a few language improvements, the focus on enterprise (i.e. web) development, and a strong marketing and commercial strategy.

Now many claim Java is old and stale.
So who's gonna take its place?

Leaving commercial considerations aside, we have a plethora of potential alternatives: Ruby, JRuby, Groovy, Scala...

We can see that none of them imposes a big shift in programming paradigm: they basically combine object orientation with the functional paradigm.

But Scala is the only one that is, at the same time:
  • statically typed: it checks types at compile time, but avoids redundant type declarations through a local type inference mechanism
  • fully Java compatible: it runs smoothly on a JVM, leveraging existing libraries and code bases
  • Java targeted: Martin Odersky created Scala with the intent of making a better Java, so it looks more natural to Java programmers than the other languages
So at the moment my bet would be on Scala...

domenica 31 maggio 2009

Web Beans - Part I: Redefining Dependency Injection

Dependency Injection has been around for quite a while, initially spread by the popular Spring Framework, but it has recently been revamped with new ideas coming from the open-source community, especially JBoss Seam and Google Guice.

In the last year the various contenders (or at least some of them =) have joined their efforts to produce a Java specification, JSR-299, named Web Beans.

Let's see why Web Beans (or simply WB) promises to be the new Java revolution.

WB expands the Dependency Injection pattern offering a DI that is:
  • strongly typed
  • deployment targeted
  • contextual

The purpose of DI is to enhance loose coupling of client (the one that gets the bean injected) and server (the injected bean).

With classic Dependency Injection, the client is not required to manage the construction and wiring of the server.

With Web Beans Dependency Injection, the client is not required to manage the lifecycle of the server object, nor to take care of the actual deployment environment.

Or, quoting the WB manual, "This approach lets stateful objects interact as if they were services".

Let's see WB in action.

STRONGLY TYPED
WB enforces strong typing using binding annotations.

To inject a generic PaymentProcessor we use @Current:
@Current
PaymentProcessor paymentProcessor;
If we need to pay by check or credit card, we exploit a user-defined annotation (called a binding annotation):
@PayByCheck
PaymentProcessor chequePaymentProcessor;

@PayByCreditCard
PaymentProcessor creditCardPaymentProcessor;
The feature is the same as Guice binding annotations or Spring custom qualifiers.
What is different from Spring is that no dependency resolution by name is allowed, since using a string would sacrifice the benefits of typing (compiler checks, IDE refactoring, etc.).
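For completeness, the binding annotation itself is just a plain user-defined annotation. Here is a minimal sketch, assuming the JSR-299 draft API where binding annotations are marked with a @BindingType meta-annotation (the package and meta-annotation names follow the draft and may change in later revisions):

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.webbeans.BindingType;   // per the Web Beans draft; may differ in the final spec

@BindingType
@Retention(RUNTIME)
@Target({ FIELD, METHOD, PARAMETER, TYPE })
public @interface PayByCheck {
}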

But the most powerful feature here is the ability to combine Binding Annotations to produce elegant results:
@Asynchronous
@PayByCheck
PaymentProcessor processor;
Can't you see how nice this is? No more stupid bean names like asyncByCheckPaymentProcessor!!
And it enforces the correct semantics too: in this case we are sure that we will pay by check, and asynchronously.

DEPLOYMENT TARGETED
What if our web beans should be deployed differently in certain environments?

For example, we would like to mock our payment processor in an integration testing environment.
We can specify this through a user-defined annotation (called a deployment type):
@Integration
public class MockPaymentProcessor implements PaymentProcessor {
...
}
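The @Integration annotation itself is just a user-defined deployment type. A rough sketch, assuming the draft API where deployment types carry a @DeploymentType meta-annotation (again, names are per the Web Beans draft and may differ later):

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.webbeans.DeploymentType;   // per the Web Beans draft

// classes annotated with @Integration belong to the "integration" deployment,
// which is enabled through web-beans.xml as shown below
@DeploymentType
@Retention(RUNTIME)
@Target(TYPE)
public @interface Integration {
}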
Then through the web-beans.xml we say we are in the integration environment (i.e. we enable the deployment type):
<webbeans>
  <deploy>
    ...
    <integration />
  </deploy>
</webbeans>
And magically, every time a client asks for a PaymentProcessor, it will receive a mocked instance!

CONTEXTUAL
This is probably the most innovative feature, directly inspired by JBoss Seam.

More to come...

lunedì 25 maggio 2009

Hibernate Search: he who searches shall find

This time I will explore full text search in Java, and in particular Hibernate Search.

Hibernate Search (or HS) builds on top of Apache Lucene, the famous Java search engine used by many open-source and commercial projects.

HS enhances Lucene providing:
  • transparent indexing capabilities for persistent entities
  • a flexible architecture
What do transparent indexing capabilities mean?

Instead of forcing the developer to use the Lucene API for indexing data, hence a programmatic approach, HS fosters a declarative approach: through annotations you can specify that an entity should be indexed, which fields should be indexed and how (which analyzer to use, whether to store them or not, etc.).
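For example, a persistent entity can be mapped for indexing more or less like this (a minimal sketch based on the Hibernate Search 3.x annotations; the Book entity and its fields are made up):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.search.annotations.DocumentId;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Index;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.Store;

@Entity
@Indexed                 // tells HS to maintain a Lucene index for this entity
public class Book {

    @Id
    @GeneratedValue
    @DocumentId          // used as the Lucene document identifier
    private Long id;

    @Field(index = Index.TOKENIZED, store = Store.NO)   // analyzed, not stored in the index
    private String title;

    @Field(index = Index.UN_TOKENIZED)                  // indexed as a single, untokenized term
    private String isbn;

    // constructors, getters and setters omitted
}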

HS will keep an eye on the persistence context and transparently index any entity that is added, removed or modified through it.
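In practice, with a mapping like the one above, a plain save through the Session is all it takes; no Lucene API in sight (a sketch, assuming an org.hibernate.SessionFactory named sessionFactory is available and Book has a matching constructor):

// saving through the normal persistence API is enough for HS to index the entity
Session session = sessionFactory.openSession();
session.beginTransaction();
session.save(new Book("Lucene in Action", "9781932394283"));
session.getTransaction().commit();   // the Lucene index is updated here, transparently
session.close();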

Since I am a great fan of the declarative approach and annotations, this is great news for me!

Besides this, HS has a flexible architecture, allowing you to customize indexing strategies in order to balance consistency and performance.

BACK-END MODE
For dealing with clustered environments, HS offers the following alternatives:
  • Lucene Mode: the index is on a shared directory (i.e. NFS) that is accessed by every cluster node
  • JMS Mode: any modification is sent to a JMS queue that updates a master index; any search is done on a local copy of the index; this copy is periodically refreshed from the master index
The Lucene Mode is the default and is also used in non-clustered environments, usually with a local directory (no need for the index to be on NFS if there is no cluster).

With Lucene Mode, index updates happen in real time, granting maximum consistency.
With JMS Mode, index updates are delayed, but the application is never blocked, granting maximum performance.
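As a rough idea, the back-end mode is selected through configuration properties, along these lines (property names are from the Hibernate Search 3.x documentation as I recall them, so double-check them before use):

# default Lucene Mode: index kept on the local (or shared) file system
hibernate.search.default.directory_provider = org.hibernate.search.store.FSDirectoryProvider
hibernate.search.default.indexBase = /var/lucene/indexes

# JMS Mode, on a slave node: index work is sent to a queue instead
# hibernate.search.worker.backend = jms
# hibernate.search.worker.jms.connection_factory = java:/ConnectionFactory
# hibernate.search.worker.jms.queue = queue/hibernatesearch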

More to come....

venerdì 22 maggio 2009

JSON / XML serialization of Hibernate POJOs

I recently had to serialize Java objects to JSON format.
There are two libraries to do it: Gson and XStream.
Both have similar features, but I used the latter since it also serializes to XML.

Their use is pretty straightforward if you serialize standard objects, but things get a bit more complicated if your POJOs are also persistent (I use Hibernate for this).

There are two main problems:
  • transient fields do not get serialized: both XStream and Gson use field inspection instead of getter/setter inspection; transient fields are usually derived from existing fields through a getter, and do not have a real field in the POJO
  • collection fields generate circular references: Hibernate collection implementations (i.e. PersistentSet, etc.) have hidden fields that shouldn't be serialized; besides polluting the serialized JSON/XML, they may lead to circular references
To avoid these problems, the POJO needs to be prepared for XStream serialization.
This means copying the value of each transient field into a real field, and substituting any Hibernate collection with the corresponding collection from the Java Collections Framework.

Let's see how to prepare a tree structure for serialization:
public void prepareForXStream()
{
    // convert transient fields to fake persistent fields
    this.leaf = isLeaf();

    if (children != null)
    {
        // convert Hibernate's AbstractPersistentCollection to a plain HashSet:
        // this avoids serializing Hibernate internals and circular references
        this.children = new HashSet<Node>(children);

        // prepare every child recursively
        for (Node child : children)
        {
            child.prepareForXStream();
        }
    }
}
As you can see we store the calculated and transient leaf value into a real field, so it will get serialized.

We also change the set implementation of the children collection to a plain HashSet.
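Once the tree has been prepared, the serialization itself is a one-liner with either library (a sketch; loadTree() is a made-up helper, however you obtain the root node is up to you):

Node root = loadTree();                     // hypothetical: fetch the root node somehow
root.prepareForXStream();

String xml  = new com.thoughtworks.xstream.XStream().toXML(root);
String json = new com.google.gson.Gson().toJson(root);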

A New Beginning

I opened this blog a few months ago but never used it.
Now it's time for me to revamp it.

Stay tuned for new posts from naaka...

giovedì 14 maggio 2009

Agile and Underplanning

Agile isn't about underplanning, it's about planning at the right time, i.e. when you have an idea of what you are talking about.

Waterfalling is really like asking a child what a dog is, when he has never seen one.

Agile lets the child see a bunch of dogs first, and then asks him to describe one.