Tuesday, June 21, 2016

Hibernate OGM + Apache DeltaSpike Data = Pure Love for NoSQL

If you haven't looked at it before, Hibernate OGM is a JPA implementation designed around NoSQL databases.  Since it's a JPA implementation, it plugs in perfectly to Apache DeltaSpike's Data module.  The added support for NoSQL databases makes a strong case for cross-platform support.

To start, we'll create a persistence.xml to represent our connection.

<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd" version="2.1">
  <persistence-unit name="MyPU" transaction-type="RESOURCE_LOCAL">
    <!-- Use the Hibernate OGM provider: configuration will be transparent -->
    <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
    <properties>
      <property name="hibernate.ogm.datastore.provider" value="mongodb"/>
      <property name="hibernate.ogm.datastore.database" value="swarmic"/>
      <property name="hibernate.ogm.datastore.create_database" value="true"/>
    </properties>
  </persistence-unit>
</persistence>

This is copied pretty much verbatim from the user guide.  I'm using MongoDB, not for any particular reason other than that I've used it before and it was already installed on my machine.  Next we'll create an entity and a repository:

@Entity
public class Employee {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    @Column(length = 40)
    private String name;

    // constructors, getters and setters omitted for brevity
}

@Repository(forEntity = Employee.class)
public interface EmployeeRepository extends EntityPersistenceRepository<Employee, String> {

    @Query("select e from Employee e order by e.name")
    Stream<Employee> list();
}


Now we have a database layer that can save and retrieve entities.  The last thing we need is a REST endpoint to expose this.  To do that, we want to make sure it's transactional, since we're using resource-local transactions, and to provide some sane operations (create, list).  That endpoint probably looks like this:

@Path("/employee")
@Transactional
public class EmployeeRest {
    @Inject
    private EmployeeRepository repository;
    @GET
    public Response list(@QueryParam("name") String name) {
        return Response.ok(repository.list().collect(toList())).build();
    }
    @GET @Path("/{id}")
    public Employee get(@PathParam("id") String id) {
        return repository.findBy(id);
    }
    @POST
    public Response create(String name) {
        Employee employee = new Employee(name);
        Employee result = repository.save(employee);
        return Response.ok(result.getId()).build();
    }
}


So just like that, with somewhere in the ballpark of 75 lines of code, we have a CDI-based application that uses Hibernate OGM and Apache DeltaSpike Data and can list, get and create entities end to end.  That was super simple!

Sunday, May 29, 2016

Hammock 0.0.3 is Out!

Hammock 0.0.3 is out!

After more than 2 years, the next version of Hammock is finally out.  Some of the key changes here:

- Upgraded to the latest libraries of pretty much everything.
- Use a more modular build structure
- Introduction of a new security component
- Serving file system assets

If you're not familiar with Hammock, it's a lightweight, CDI-based integration of RestEasy and Undertow.  It's a quick and easy-to-use development framework for spinning up small applications.

Full details on how to get started with Hammock can be found in the README.

So what's next?  Well, a few integrations are still in the works:

- JPA support, probably via Hibernate, likely to also provide migration support via Flyway
- Camel support based on the Camel CDI project
- Metrics support based on the Metrics CDI project

Sunday, December 20, 2015

Picking the right Scope for your CDI Beans

I sometimes find people struggling with how to choose the right scope for their beans.  While it can be confusing, I hope this article helps you figure out which scope best applies to your beans.

What's available?

As of Java EE 7, your normal scopes are RequestScoped, SessionScoped, ApplicationScoped, ConversationScoped and TransactionScoped.  The first two are closely tied to the HTTP life cycle: typically, when an HttpSession is started you get the beginning of a Session Context available for use, and when that session is destroyed the Session Context ends.  Sessions are meant to be passivated and potentially serializable, so any SessionScoped objects you have must be serialization friendly, including their dependencies.  A Request Context exists in many places, but the most straightforward use case is around the duration of a single HTTP request.  The various specs define a few other places where a Request Context is active - PostConstruct methods of Singleton EJBs, MDB invocations, etc.

A Conversation Context is started manually by a developer and is usually bounded within a shorter period than a containing context.  For example, you could create a conversation as a child to a request, and close it before the request is done.  The typical use case is around a single session, a set of requests that are all meant to be done together in a coordinated fashion.
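
As a quick sketch (the class and method names here are made up for illustration), a conversation is typically controlled by injecting the built-in Conversation bean and calling begin() and end() around the requests you want to coordinate:

import java.io.Serializable;
import javax.enterprise.context.Conversation;
import javax.enterprise.context.ConversationScoped;
import javax.inject.Inject;

@ConversationScoped
public class CheckoutWizard implements Serializable {

    @Inject
    private Conversation conversation;

    public void start() {
        if (conversation.isTransient()) {
            conversation.begin();   // promote the transient conversation to long-running
        }
    }

    public void finish() {
        if (!conversation.isTransient()) {
            conversation.end();     // the conversation context ends when this request completes
        }
    }
}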

ApplicationScoped beans are created once, the first time the application calls upon them, and are not created again.  Note that the scope of an application is somewhat ambiguous.  If you have just a WAR, even if it has some libraries without EJBs, it will share a context.  Anytime a JAR provides its own entry point (e.g. a JAX-RS endpoint, a SOAP endpoint), that may be considered a separate application.  EARs are known for introducing this kind of problem, as multiple WARs and EJB JARs will likely create their own unique contexts, resulting in multiple Application Contexts being available.  When in doubt, if you can run a single WAR without an EAR, do it.

Transaction Scope was introduced in Java EE 7 as a part of JTA 1.2.  This context is activated within the bounds of a single transaction.
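
As a minimal sketch (the bean names are illustrative), a TransactionScoped bean injected into a service lives for exactly the duration of the JTA transaction its @Transactional method runs in:

import java.io.Serializable;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.transaction.TransactionScoped;
import javax.transaction.Transactional;

@TransactionScoped
class AuditTrail implements Serializable {
    // one instance exists per active JTA transaction
}

@ApplicationScoped
class OrderProcessor {

    @Inject
    private AuditTrail auditTrail;

    @Transactional
    public void process() {
        // auditTrail here refers to the instance bound to this method's transaction;
        // it is destroyed when the transaction completes
    }
}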

There are two other scopes - pseudo scopes, you could say - Dependent and Singleton.  When working with CDI, Singleton is basically the same as ApplicationScoped, except that Singleton beans aren't eligible for interceptors or decorators.  Dependent has similar restrictions, but with an interesting caveat: the injected bean shares its context with its injection point.  If you inject it into a RequestScoped bean, the injected Dependent bean shares the Request Context.
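
A small illustration (both class names are made up): a Dependent bean injected into a RequestScoped bean is created just for that injection point and goes away with the request scoped instance.

import javax.enterprise.context.Dependent;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;

@Dependent
class AuditLogger {
    // a fresh instance is created for every injection point; there is no client proxy
}

@RequestScoped
class CheckoutController {

    @Inject
    private AuditLogger auditLogger;   // lives and dies with this request scoped bean
}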

State & Data vs Services and Operations

As an application is doing work, it needs to maintain some amount of state.  This could represent a user's session, entities being manipulated, or a local cache of static resources.  This is different from a service that performs operations.

Domain-Driven Design considerations

Suppose that you are designing a shopping cart, one of the timeless classics in software development.  How would you model the ShoppingCart based on scopes, while also building a rich model that supports the operations relevant to its use?  Consider this interface:

public interface ShoppingCart {
    ShoppingCartItem addItem(Item item);
    Order checkout(BillingInformation billingInformation);
}

At a very high level, consider these behaviors as well:
- A shopping cart is persistent.  If I change it, and then leave the site, what I added should be available to me at a later point.
- Adding an item to my shopping cart is an atomic, synchronized, idempotent and request based transaction.  I'm only able to add one item at a time in a single request but I could open multiple browser tabs and make changes in tandem.
- Since these operations are atomic and state is persistent, the changes I make should be automatically written to a data store as a part of these operations.

Based on this information, as well as the tips above, how would you scope these domain objects?

Here's one approach; a sketch of the corresponding bean declarations follows the list.

- Your ShoppingCart is session scoped.  It's associated to a user, persistent but generally tied to a single user session.
- Your ShoppingCart is dependent on some kind of ShoppingCartService that is responsible for the persistence of the state of a shopping cart, and likely other services as needed (e.g. an OrderService for handling checkout w/ credit card information).  These services are ApplicationScoped and operate on these classes.
- Item is a RequestScoped bean that represents what you're trying to add to your cart.  It is built up in a single request and pushed into your cart.
- BillingInformation is a ConversationScoped bean, the data is built up in a few requests, and then calls checkout when all of the information is present.
- Neither Order nor ShoppingCartItem are managed beans.  They are created within the scope of their respective methods.
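
Put together as code, those declarations might look something like this (a minimal sketch; ShoppingCartBean is an illustrative name for whatever class backs the ShoppingCart, and the class bodies are elided):

import java.io.Serializable;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.ConversationScoped;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.context.SessionScoped;

@SessionScoped            // passivating scope, so keep it serialization friendly
class ShoppingCartBean implements Serializable { /* cart state */ }

@ApplicationScoped        // stateless service that persists the cart
class ShoppingCartService { /* persistence operations */ }

@RequestScoped            // built up within a single request
class Item { /* item details */ }

@ConversationScoped       // built up across a few coordinated requests
class BillingInformation implements Serializable { /* billing details */ }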

These domain classes together represent the state of your application, specifically how things are changing over time.  So how do you interact with them?


If a controller is aware of the view and the backing model, what scope does it get?  If you're dealing with JAX-RS, the answer is pretty easy - RequestScoped resource classes are mandated in a JAX-RS/CDI environment.  You have a bit more leeway with frameworks like JSF, but realistically a request scoped controller makes the most sense.
- It's bound to the HTTP lifecycle (assuming you're using HTTP)
- It represents the start of a request.
- It's the user's action of interacting with the system on a per request basis.

This doesn't mean you can't use a SessionScoped or TransactionScoped controller, but semantically it is clearer for it to be request scoped.  Another thing to point out: your request scoped controller can still interact with your session scoped model.  Typically requests are made within a session, which gives you access to the session's contents as well as the request's contents.  That means this controller is valid:

@RequestScoped
public class ShoppingCartController {
    @Inject
    private ShoppingCart shoppingCart;
}

This is a perfectly valid thing to do.

Consider this alternate approach.  What do you think is going to happen?

@RequestScoped
public class Item { /* ... */ }
@SessionScoped
public class ShoppingCartController {
    @Inject
    private Item item;
}

Here, as mentioned above, Item is bound to a request; it represents the item being selected by the user.  I marked my controller as session scoped since the lifecycle of the shopping cart is tied to a session.  This is a legal injection and is thread safe.  What happens is that the Item's context is bound to the HTTP request, so the same session scoped controller will operate on two different Items across two different requests.  Obviously, be careful about things like mutating the controller's own state (since it's a controller, that shouldn't be an issue).


Services, from my point of view, have two valid scopes.  First, they can be application scoped (or singleton, if you don't care about proxies/AOP) since they maintain no state.  Second, they can be dependent since they also maintain no state, and can be reused in a wider variety of cases.  I'm generally wary of recommending dependent scoped beans, just because of some ways they can be misused (e.g. not cleaning up, lookup/non-injection cases).  As long as a service isn't maintaining any state, it can be used over and over again.
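
As a minimal sketch (the class name and the tax rate are made up for illustration), a stateless, application scoped service is just a bean that derives everything from its arguments:

import java.math.BigDecimal;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class PricingService {

    // No fields holding per-request or per-user state, so one shared instance
    // can safely be used by every caller.
    public BigDecimal addTax(BigDecimal amount) {
        return amount.multiply(new BigDecimal("1.07"));
    }
}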


There you have it.  I hope you found this article useful.  Please feel free to add comments below.

Sunday, March 23, 2014

Review of Java EE and HTML5 Enterprise Application Development

I recently had the pleasure of reading through the book Java EE and HTML5 Enterprise Application Development by John Brock, Arun Gupta and Geertjan Wielenga.

First, a comment on file formats.  I use Linux at home, and I had the most trouble trying to get set up to actually read the book.  The format used is only compatible with Adobe Digital Editions, which only works on Mac and Windows.  I ended up getting a VM to do the reading on.  It's a little bit of a pain if you're like me and use a tablet for a decent amount of reading.

The book takes the approach of building HTML5 applications that leverage the Java server side as an API-style server, with both REST APIs and WebSockets in use, and a front end based on Knockout and low-level jQuery to process API calls.  The backend uses your stereotypical Java EE stack of JPA, EJB and JAX-RS, plus WebSocket support.  The approach taken in the book serves as an entry point for someone new to these technologies and how to use them; it doesn't focus on changes over time in the specs or some of the newer features.  It's a bottom-up approach to exposing your database over an API.

Probably the most confusing part of the book, and it may be because of the different authors, is the switching between example types.  In the JPA and JAX-RS chapters, we use a book and author example.  In the WebSocket chapter, the authors use a tic-tac-toe board example.  I believe the consistent use of a single running example is the best thing to do in a technical book.  A good example of that is Continuous Enterprise Development by Andrew Lee Rubinger and Aslak Knutsen.

The chapter on application security is probably the best of the book, in my opinion.  It goes through what you need to do not just server side but also client side to secure your web applications.  For those new to this programming paradigm, it's some good information on some of the key differences versus traditional server side rendered web applications (JSF, Struts, etc).

The content of the book is presented in an introductory manner.  If you're new to these technologies, it's a good read to get up to speed on how they work.  The JPA spec has only changed a little bit in 2.0 and 2.1, so if you're already familiar with how things worked previously, it's not a huge change.  JAX-RS is a newer spec, already in its 2.0 release, and shows how declarative it can be.  Hidden in the REST chapter you'll find some interesting pieces on CDI, Transactions and Bean Validation.  These other technologies really help build the bridge across the whole stack.

Wednesday, February 26, 2014

Announcing Hammock

I'd like to introduce the world to Hammock

Hammock is based on my last blog post, creating a lightweight service to run JAX-RS resources over a minimalistic configuration.

Binaries are currently up on Sonatype OSS (I hope they sync to MVN central shortly): https://oss.sonatype.org/index.html#nexus-search;quick~ws.ament.hammock

Github: https://github.com/johnament/hammock

What is Hammock?

Hammock is a lightweight integration between JBoss RestEasy, Undertow and Weld.  Leveraging Weld SE, it provides automatic resource scanning and minimal binding code to launch a web container (Undertow) running the basic services for a full JAX-RS application.

Getting Started

Getting started with Hammock is simple.  Add a reference to the project in your pom.xml:


Add your REST Resource class:

@Path("/")
public class EchoResource {
    @GET
    public String greet() {
        return "hello";
    }
}

Implement the configuration for your application (via @ApplicationConfig):

@ApplicationConfig
public class ApplicationConfigBean implements WebServerConfiguration {
    public int getPort() {
        return 8080;
    }
    public String getContextRoot() {
        return "/api";
    }
    public Collection getProviderClasses() {
        return Collections.EMPTY_LIST;
    }
    public Collection getResourceClasses() {
        return Collections.singleton(EchoResource.class);
    }
    public String getBindAddress() {
        return "";
    }
}

You can optionally also do this for a management interface as well (via @ManagementConfig).  The resources tied to each of these configurations would then be launched when you start your application.

Starting your app can be done manually via Weld SE, or by using its built-in class, org.jboss.weld.environment.se.StartMain.
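
For example, a small (and hypothetical) main class that just delegates to Weld SE's entry point could look like this:

import org.jboss.weld.environment.se.StartMain;

public class Main {

    public static void main(String[] args) {
        // Boots Weld SE; Hammock's extensions then read the @ApplicationConfig
        // bean and start Undertow with the configured resources.
        StartMain.main(args);
    }
}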

Sunday, January 19, 2014

Bridging Netty, RestEasy and Weld

As you likely know, RestEasy already supports an embedded container for Netty.  RestEasy also supports CDI injection, but only for enterprise use cases (e.g. part of the Java EE spec or using Weld Servlet).  In the case of Netty, it's almost possible, except that the lack of a ServletContext seems to throw it off.

In addition, in many use cases you may want to translate each incoming request into a CDI RequestScope.  This requires some custom handling of each request, before passing it down to RestEasy for processing.  This allows you to properly scope all of your objects, though you cannot use a session scoped object (since there would be no active session).

The code is pretty simple to do this.  You can find details on my github repository: https://github.com/johnament/resteasy-netty-cdi

First, define your endpoint.  In my test case, I added a very simple one:

@Path("/")
public class TestEndpoint {
    @GET
    public String echo() {
        return "pong";
    }
}

Next, we need some code to initialize the server.  I added this directly in my test, but I would imagine most people would want to initialize it elsewhere.

CDINettyJaxrsServer netty = new CDINettyJaxrsServer();
ResteasyDeployment rd = new ResteasyDeployment();

As you can see in the test, I am using a custom CDINettyJaxrsServer, which is what enables the CDI integration.  The only thing different about mine versus the normal one is which RequestDispatcher I use.  The RequestDispatcher is what RestEasy provides to handle the incoming requests and shape the responses.  It's very low level.  I decided this was the exact point where I wanted to start the CDI request scope.  So my RequestDispatcher looks like this:

public class CDIRequestDispatcher extends RequestDispatcher {
    public CDIRequestDispatcher(SynchronousDispatcher dispatcher, ResteasyProviderFactory providerFactory,
                                SecurityDomain domain) { super(dispatcher, providerFactory, domain); }
    public void service(HttpRequest request, HttpResponse response, boolean handleNotFound) throws IOException {
        BoundRequestContext requestContext = CDI.current().select(BoundRequestContext.class).get();
        Map<String,Object> requestMap = new HashMap<String,Object>();
        requestContext.associate(requestMap);
        requestContext.activate();
        try {
            super.service(request, response, handleNotFound);
        } finally {
            requestContext.invalidate();
            requestContext.deactivate();
        }
    }
}

So whenever a request comes in, I start the context (using Weld's BoundRequestContext) and on completion I end it.  I also created a custom CdiInjectorFactory for Netty.  This works around a bug in the base one, which depends on a ServletContext being available (and throws a NullPointerException without one).  It's just a simplified version of the injector factory:

    protected BeanManager lookupBeanManager() {
        BeanManager beanManager = lookupBeanManagerCDIUtil();
        if (beanManager != null) {
            log.debug("Found BeanManager via CDI Util");
            return beanManager;
        }
        throw new RuntimeException("Unable to lookup BeanManager.");
    }

You'll also notice in my test code I'm using a CDI Extension - LoadPathsExtension.  This simply sits on the classpath and listens as Weld initializes.

LoadPathsExtension paths = CDI.current().select(LoadPathsExtension.class).get();

For each ProcessAnnotatedType it observes, it checks whether @Path is present.  If it is, it adds the type to a local list of resources.

public void checkForPath(@Observes ProcessAnnotatedType pat) {
    if (pat.getAnnotatedType().isAnnotationPresent(Path.class)) {
        logger.info("Discovered resource " + pat.getAnnotatedType().getJavaClass());
        resources.add(pat.getAnnotatedType().getJavaClass()); // the local list of resources
    }
}

This makes scanning for @Path resources possible, something the container would normally do for RestEasy.  In the Netty deployment, you need to maintain your list of resources yourself.

LoadPathsExtension paths = CDI.current().select(LoadPathsExtension.class).get();
CDINettyJaxrsServer netty = new CDINettyJaxrsServer();
ResteasyDeployment rd = new ResteasyDeployment();
// the resource classes discovered by the extension get registered on rd before the server starts

Finally, we start the actual test which uses the JAX-RS client API to make a request to a specific resource.

        Client c = ClientBuilder.newClient();
        String result = c.target("http://localhost:8087").path("/").request("text/plain").accept("text/plain").get(String.class);
        Assert.assertEquals("pong", result);

Saturday, October 26, 2013

Announcing injectahvent - a lightweight integration between CDI Events & Apache Camel exchanges

Howdy all!

I am proud to announce a new open source library I am working on, injectahvent.  Injectahvent is meant to be a lightweight integration between Camel messages and CDI events.  It allows you to define a processor that will fire CDI events from Camel exchanges, and in turn it allows you to register new ObserverMethods in CDI that listen for events and move them over to Apache Camel exchanges.

So what does it do so far?

Well, it's a little generic right now, but essentially you can register a new CDIEventProcessor into your RouteBuilder and let CDI fire an Event with the body and exchange objects.
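
As a rough sketch (I'm assuming a no-arg CDIEventProcessor constructor purely for illustration; check the project for its actual API), registering it in a route looks like registering any other Camel Processor:

import org.apache.camel.builder.RouteBuilder;

public class OrderEventsRoute extends RouteBuilder {

    @Override
    public void configure() {
        // every exchange that reaches this step is handed to the processor,
        // which fires a CDI event carrying the body and the exchange
        from("direct:orders")
            .process(new CDIEventProcessor());
    }
}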

The second part is to allow an application developer to create a simple CDI extension that registers new ObserverMethods for CDI events to look for, and uses a Camel ProducerTemplate to fire the corresponding message.

Conceptually, this is a follow-on project to the JBoss Seam JMS module.  While the new fluent API for sending messages was inspired by some of the builder pattern used in Seam JMS and is now incorporated in JMS 2.0, the firing of events was not carried over.  Firing the events is more of an EIP/EAI type of tool.  As a result, I decided to leverage Apache Camel to make it more generic.

The library leverages Apache DeltaSpike, with its CDI Container Control API to start a new RequestContext for a fired message.  It's currently assumed that the basic bootstrap of a CDI container is done outside of this context.

Source Code: https://github.com/johnament/injectahvent
Issues: https://github.com/johnament/injectahvent/issues

I hope to start putting together a wiki for it soon, as well as doing some nice code cleanup.