Nicolas DUMINIL's Blog

Lasciate ogni speranza, voi ch'entrate qui!

Published on 15/12/2018

It is harder to know who the Gi...

Published on 10/12/2018


JPA (Java Persistence API) is a major part of the Java EE specifications. Its use on Java EE servers, like Wildfly, JBoss EAP or Red Hat Fuse on JBoss EAP, has become one of the most common persistence solutions. However, on OSGi platforms, like Apache Karaf or Red Hat Fuse on Apache Karaf, it was until recently quite difficult to use JPA providers like OpenJPA or Hibernate. With Red Hat Fuse 7.1 and Karaf 4.2, there is no longer any reason not to choose JPA as your application's persistence solution. This article shows how, and gives you a consistent approach, as well as a reliable template, for developing OSGi applications using Hibernate as a JPA provider on Red Hat Fuse on Apache Karaf platforms. For illustration purposes, I'll be using a sample application named customer-manager which, as its name implies, manages customers and implements the associated CRUD (Create, Read, Update, Delete) API. The code may be found here:
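As a rough idea of what wiring Hibernate into Karaf looks like, a JTA persistence unit can be declared in the bundle's META-INF/persistence.xml. The sketch below is illustrative only; the persistence-unit name, data source name and entity class are hypothetical, not taken from the customer-manager project:

```xml
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <!-- Hypothetical persistence unit for an OSGi bundle on Karaf -->
  <persistence-unit name="customer-manager-pu" transaction-type="JTA">
    <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
    <!-- The data source is looked up as an OSGi service by its JNDI name -->
    <jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=customer-manager-ds)</jta-data-source>
    <class>com.example.customers.Customer</class>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
    </properties>
  </persistence-unit>
</persistence>
```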



Published on 05/10/2018

Welcome to the 6th part of the Microservices article series. This 6th part shows how to use Netflix Zuul filters in order to secure microservices.

The microservices used until now were publicly accessible resources. In this example we will secure them using the OAuth 2.0 protocol. Please note that this is not an OAuth 2.0 tutorial and it assumes that the reader is familiar with the protocol. For more information please see

There are several possible approaches to using the OAuth 2.0 protocol. Here we choose the Keycloak implementation. Please note that this is not a Keycloak tutorial either, and we assume the reader is familiar with it. For more information please see

In this new part, we add two new Docker containers to our infrastructure, as follows:

  • a container named "keycloak" running Keycloak 3.4.2
  • a container named "ms-keycloak" running a Spring Boot service which exposes Keycloak as a micro-service.
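The two containers above could be wired together with a docker-compose file along these lines (a sketch only; the image tag, build context and port mappings are assumptions, not the article's actual setup):

```yaml
version: "2"
services:
  keycloak:
    # Hypothetical image tag for Keycloak 3.4.2
    image: jboss/keycloak:3.4.2.Final
    ports:
      - "8080:8080"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
  ms-keycloak:
    # Hypothetical build context holding the Spring Boot service
    build: ./ms-keycloak
    ports:
      - "8084:8080"
    depends_on:
      - keycloak
```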

The following listing ...

Published on 11/08/2018

In Part 3 of this article series, we demonstrated how to load-balance between microservice instances using the Netflix Eureka service. This way we achieved one of the most important goals of Service-Oriented Architecture: service virtualization and location transparency. Part 4 showed how to use the Netflix Hystrix service in order to improve microservice resilience.

Netflix Zuul is a so-called API gateway, i.e. an intermediary component sitting between microservices and their consumers.

Until now, while testing our microservices, we have either called them directly or via the Eureka discovery service. 
A microservices gateway is a mediator between the microservices and their consumers. This way we have a single URL that the consume...
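As an illustration of such a single entry point, Zuul routes can be declared in the gateway's configuration. The fragment below is a sketch; the route paths and service IDs are hypothetical, not the series' actual services:

```yaml
# Hypothetical Zuul gateway configuration: one public URL per backing service
zuul:
  routes:
    customers:
      path: /customers/**
      serviceId: customer-service   # resolved via Eureka, not a fixed host
    orders:
      path: /orders/**
      serviceId: order-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka:8761/eureka/
```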

Published on 11/08/2018

The core microservice presented in parts 1 to 3 assumed that everything happens as expected and that all events and conditions are on the happy path. This is not always the case. For example, our ActiveMQ broker may experience issues and be only partially available. Or it may simply be stopped. Or a network failure might prevent connections from being established, etc.

In all these cases the consumer calls the service, which will eventually time out but, before doing so, might waste resources without ever being able to fulfill its role. Enter the Netflix Hystrix service.

The Netflix Hystrix service can be used to provide the following patterns:

  • Failing fast. This pattern is also called "circuit breaker". It monitors endpoints and, when it detects a non-responding one, it fails fast without calling it any more, thus avoiding wasting resources. It is also able to detect when the endpoint becomes responsive again.
  • Fallback. This pattern is a variant of the previous o...
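To make the fail-fast idea concrete, here is a minimal, plain-Java sketch of a circuit breaker. This is not Hystrix itself, just an illustration of the pattern: after a number of consecutive failures the circuit "opens" and calls are rejected immediately, until a cool-down period elapses and a trial call is allowed again.

```java
// Minimal illustration of the circuit breaker pattern (not Netflix Hystrix).
public class CircuitBreaker {
    private final int failureThreshold;
    private final long coolDownMillis;
    private int consecutiveFailures = 0;
    private long openedAt = -1;   // -1 means the circuit is closed

    public CircuitBreaker(int failureThreshold, long coolDownMillis) {
        this.failureThreshold = failureThreshold;
        this.coolDownMillis = coolDownMillis;
    }

    /** True if a call may be attempted (circuit closed, or cool-down elapsed). */
    public boolean allowRequest() {
        if (openedAt < 0) return true;
        return System.currentTimeMillis() - openedAt >= coolDownMillis;
    }

    /** Record a successful call: close the circuit again. */
    public void recordSuccess() {
        consecutiveFailures = 0;
        openedAt = -1;
    }

    /** Record a failed call: open the circuit once the threshold is reached. */
    public void recordFailure() {
        consecutiveFailures++;
        if (consecutiveFailures >= failureThreshold) {
            openedAt = System.currentTimeMillis();
        }
    }
}
```

A consumer would check allowRequest() before calling the endpoint and, when it returns false, fail immediately or serve a fallback value instead of waiting for a timeout.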

Published on 11/08/2018

Our sample microservice presented in the first and second parts was a single-instance one, deployed in a Docker container, with a known IP address and TCP port number. This is not typical of a microservice-based architecture, where a microservice might have several instances, each running in its own Docker container and served by its own servlet engine. In these cases a consumer of these microservices cannot call them through a well-defined IP address and TCP port number, not only because this information isn't known to the consumer, but also because using a dedicated HTTP connection would result in calling a pinned instance of the microservice, which is not what one expects when dealing with a cloud-based cluster of microservices. What one expects in this situation is that the consumer simply mentions the name of the service and the infrastructure load-balances, choosing the instance in the cluster which is the most suitable at that moment to serve the request. Enter Netflix Eureka...
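Calling a service by name rather than by address can be illustrated with a minimal round-robin picker. This is a plain-Java sketch of the idea, not the actual Eureka/Ribbon code, and the service name and instance URLs are hypothetical:

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal illustration of client-side load balancing by service name:
// the consumer only knows a logical name; a registry maps that name to
// the currently available instances and picks one round-robin.
public class SimpleLoadBalancer {
    private final Map<String, List<String>> registry = new HashMap<>();
    private final Map<String, AtomicInteger> counters = new HashMap<>();

    /** Register an instance (e.g. "http://host:port") under a service name. */
    public void register(String serviceName, String instanceUrl) {
        registry.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(instanceUrl);
        counters.computeIfAbsent(serviceName, k -> new AtomicInteger());
    }

    /** Round-robin choice among the instances of the named service. */
    public String choose(String serviceName) {
        List<String> instances = registry.get(serviceName);
        if (instances == null || instances.isEmpty()) {
            throw new IllegalStateException("no instance of " + serviceName);
        }
        int i = counters.get(serviceName).getAndIncrement();
        return instances.get(i % instances.size());
    }
}
```

In the real infrastructure, registration happens automatically when each instance starts and announces itself to the Eureka server.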

Published on 11/08/2018

Our sample microservice presented in the first part used a property file to define its configuration. This property file belongs to the JAR or WAR archive hosting the microservice itself and, hence, in order to modify properties, one needs to rebuild the archive. Another point is that, while the majority of a microservice's properties can be defined in property files, there are always properties which cannot, because they aren't known at design/development time. Take for example a password. A developer cannot know it in order to define it as a property in a property file. It is only known at deployment time, and it is the deployer's responsibility to define it. But the build process is not the responsibility of the deployer, who doesn't even have the required tools. Hence Spring Cloud provides a configuration server, in the form of a standalone microservice, which defines configuration properties.
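The server/client split described above can be sketched with two configuration fragments. These are illustrative only; the port, Git repository URL and application name are hypothetical:

```yaml
# Config server side (application.yml) – serves properties from a Git repo
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # hypothetical repo

# Client side (bootstrap.yml) – the client fetches its properties by name
spring:
  application:
    name: customer-service
  cloud:
    config:
      uri: http://config-server:8888
```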

The Spring Cloud Config Server supports a large variety of repositories, including file sy...

Published on 11/08/2018

The example presented here is taken from a very real use case. It consists of a Spring Boot based microservice, deployed in an embedded Tomcat container. This microservice exposes a REST API which encapsulates a JMS topic. The API allows its clients to publish and subscribe to JMS messages on this topic. Building this project results in the creation of two Docker containers, one running an ActiveMQ broker, the other running a Tomcat container with the microservice deployed in it. The two Docker containers communicate with each other via the OpenWire protocol. Let’s look at the code now.

package fr.simplex_software.micro_services.core.controllers;

import fr.simplex_software.micro_services.core.domain.*;
import org.slf4j.*;
import org.springframework.beans.factory.annotation.*;
import org.springframework.http.*;
import org.springframework.jms.core.*;
import org.springframework.web.bind.annotation.*;
import javax.jms.*;
import java.util.*;

public class HmlRestCon...

Published on 11/08/2018

JBoss Drools is a BRE (Business Rules Engine) which aims at facilitating the implementation and the integration of business rules into Java code. It has a community release, as well as a commercial one, provided by Red Hat and known as JBoss BRMS (Business Rules Management System). In fact, JBoss BRMS is a full-fledged platform including an EAP (Enterprise Application Platform) server together with Drools, a BPM (Business Process Management) engine and other business intelligence components. But in this article we’ll be using the community release of JBoss Drools.

BREs are software systems that allow defining and executing business rules in a production system. A business rule is a statement that describes a business procedure or policy. A BRE typically supports rules, facts, priorities (scores), mutual exclusion, preconditions, and other functions.
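To give a feel for what such a rule looks like in Drools, here is a hypothetical DRL fragment; the Customer fact class, its fields and the discount policy are invented for illustration, not taken from the article:

```
// Hypothetical Drools rule: applies a discount to "GOLD" customers.
// "salience" expresses the rule's priority relative to other rules.
rule "Gold customer discount"
    salience 10
when
    $c : Customer( category == "GOLD" )
then
    $c.setDiscount( 0.10 );
    update( $c );
end
```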

In the past, business rules used to be implemented directly in the code, using the program...

Published on 20/01/2018

JBoss Fuse Integration is the legitimate heir of Apache ServiceMix and Progress Software Fuse. It was acquired by Red Hat in 2012 and, initially, it was available in two architectures: JBoss Fuse, which ran on OSGi platforms, and JBoss Fuse Service Works (FSW), running on Java EE platforms. The two releases dedicated to these two architectures have since been unified into a single one: JBoss Fuse Integration. It can run on OSGi platforms, like Apache Karaf, or on Java EE platforms, like JBoss EAP. This article shows how easy it is to develop and deploy Apache Camel route-based services using JBoss Fuse Integration on JBoss EAP. You may find here the project which illustrates the article.
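As a taste of what such a service looks like, a Camel route can be declared in a Spring XML file packaged in the WAR. The sketch below is illustrative; the route id and endpoint URIs are hypothetical, not the companion project's actual route:

```xml
<!-- Hypothetical Camel route: picks up files and forwards them to a JMS queue -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route id="file-to-jms">
    <from uri="file:/tmp/orders?noop=true"/>
    <log message="Processing ${file:name}"/>
    <to uri="jms:queue:orders"/>
  </route>
</camelContext>
```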

The first step to perform, after having downloaded and installed JBoss EAP 6.4, is to add JBoss Fuse Integration 6.3 to the existing installation, as described here. This process goes through downloading JBoss Fuse Integration 6.3 from the Red Hat Customer Portal, unzipping it into the existe...

Published on 09/01/2018

This blog entry demonstrates the use of the JSR 352 specification in Java EE 7. JSR 352 defines the implementation and management of batch processing. Historically, Java batch processing was the domain of the Spring Batch framework. Now, with JSR 352, Java batch processing has become part of Java EE, meaning that it is standard and implemented by any compliant application server, without any add-ons or complementary libraries.

Our demo uses Wildfly 10.1.0, the community release of the famous Red Hat JBoss EAP, but things should also work in a similar manner with any other Java EE 7 compliant application server. In that case, some slight modifications to the associated Maven POM files are of course required.

Batch processing has the particularity of being able to process large quantities of data. Batch jobs should be seen as long-running processes, comparable with business processes, but without the inherent heaviness of the latter. Like business ...
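A JSR 352 job is described by an XML descriptor packaged under META-INF/batch-jobs. The sketch below shows the typical chunk-oriented shape; the job id and artifact names are hypothetical CDI bean names, not the demo's actual ones:

```xml
<!-- Hypothetical JSR 352 job descriptor (META-INF/batch-jobs/customer-import.xml) -->
<job id="customerImportJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
  <step id="importStep">
    <!-- A chunk reads, processes and writes items, committing every 10 items -->
    <chunk item-count="10">
      <reader ref="customerItemReader"/>
      <processor ref="customerItemProcessor"/>
      <writer ref="customerItemWriter"/>
    </chunk>
  </step>
</job>
```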

Published on 04/01/2018

This blog entry aims at demonstrating some of the most modern techniques in the world of REST APIs, as follows:

  • Using JAX-RS 2.0 to develop REST APIs. The JAX-RS 2.0 implementation used here is RESTeasy 3.0.19, provided by the Wildfly 10.1.0 application server. Wildfly is the community release of the famous JBoss, one of the best known and most used Java EE application servers, currently provided by Red Hat under the name of JBoss EAP (Enterprise Application Platform). In its 10.1.0 release, Wildfly supports the Java EE 7 specification level.
  • Using the Keycloak IAM (Identity and Access Management) server in order to secure our REST API. Keycloak is the community release of the Red Hat Single Sign-On product. It encompasses many technology stacks like OAuth 2.0, OpenID Connect, SAML, Kerberos and many others. Here we’ll be using the latest release of the Keycloak server, which is 3.4.2.
  • Using Docker containers to deploy the full solution.

So, let’s start coding.
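Before the code, it is worth noting that securing a JAX-RS application with Keycloak typically involves a keycloak.json adapter configuration packaged in the WAR. The fragment below is a sketch; the realm, client name and server URL are hypothetical, not the article's actual values:

```json
{
  "realm": "customer-realm",
  "auth-server-url": "http://keycloak:8080/auth",
  "resource": "customer-manager",
  "bearer-only": true,
  "ssl-required": "external"
}
```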

The Customer Man...