<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://nicolasduminil.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://nicolasduminil.github.io/" rel="alternate" type="text/html" /><updated>2026-02-11T12:12:23+00:00</updated><id>https://nicolasduminil.github.io/feed.xml</id><title type="html">Simplex Software</title><subtitle>Senior Silver Software Architect</subtitle><author><name>Nicolas DUMINIL</name></author><entry><title type="html">Rethinking Java Web UIs with Jakarta Faces and Quarkus</title><link href="https://nicolasduminil.github.io/posts-archive/quarkuspf/" rel="alternate" type="text/html" title="Rethinking Java Web UIs with Jakarta Faces and Quarkus" /><published>2026-02-11T00:00:00+00:00</published><updated>2026-02-11T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/quarkuspf</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/quarkuspf/"><![CDATA[<p>Nowadays, Java enterprise applications often default to Angular, React, or Vue for
the frontend. But for this kind of application, the most natural UI framework
already exists in the Java ecosystem: Jakarta Faces.</p>

<p>Do enterprise-grade Java applications really need heavy JavaScript libraries?
This is the question we’ll try to answer here.</p>

<p>Modern Java enterprise applications tend to follow a familiar pattern: a Java
backend exposing REST APIs and a JavaScript/TypeScript frontend built with some
library like Angular, React, or Vue. This architecture has become so standard
that we rarely question it.</p>

<p>But is this always the most natural choice? Certainly not, given that the Java
ecosystem already provides a mature, productive, and integrated web UI framework:
Jakarta Faces.</p>

<p>In former times, when dinosaurs still populated the Earth, developing enterprise-grade
Java applications only required the knowledge of a single technology:
Java, possibly with its enterprise extensions, known successively as J2EE,
Java EE, and finally Jakarta EE. Unless they used Spring, applications and
services were deployed on Jakarta EE-compliant application servers, like
GlassFish, Payara, WildFly, JBoss, WebLogic, WebSphere, etc. These application
servers provided out-of-the-box all the required implementations of the
above-mentioned specifications. Among these specifications, Jakarta Faces (formerly
called JSF: <em>JavaServer Faces</em>) was meant to offer a framework that facilitates
and standardizes the development of web applications in Java.</p>

<p>The Jakarta Faces history goes back to 2001 and its initial JSR (<em>Java Specification
Request</em>) 127. At that time, another web framework, known under the name of Struts
and available under an Apache open-source license, was widely popular. As it
sometimes happens in the web frameworks space, the advent of Jakarta Faces was
perceived by the Apache community as being in conflict with Struts, and this
alleged conflict was resolved through a long and heavy negotiation process, spanning
several years, between Sun Microsystems and the Apache community. Finally, Sun
agreed to lift the restrictions preventing JSRs from being independently implemented
under an open-source license, and the first implementation (the RI, <em>Reference Implementation</em>)
was provided in 2003.</p>

<p>Jakarta Faces was generally well received despite a market crowded with competitors.
Its RI was followed by other implementations over the years, starting with Apache
MyFaces in early 2004 and continuing with RedHat RichFaces in 2005, PrimeTek
PrimeFaces in 2008, ICEsoft ICEfaces and Oracle ADF Faces in 2009, OmniFaces in
2012, etc. The specifications have evolved as well, from 1.0, released in 2004,
to 4.1, released in 2024. Hence, more than 20 years of history leading to the
latest Jakarta Faces release, 4.1, part of the Jakarta EE 11 specifications,
whose reference implementation is named Mojarra.</p>

<p>The software history is sometimes convoluted. In 2010, Oracle acquired Sun Microsystems
and became the owner of the Java trademark. Throughout the period they spent under
Oracle’s stewardship, the Java EE specifications remained in a kind of
status quo before becoming Eclipse Jakarta EE. The company didn’t really manage
to set up a dialogue with users, communities, work groups, and all those involved
in the recognition and promotion of the Java enterprise-grade services. Their
evolution requests and expectations were ignored by the editor, who didn’t know
how to deal with their new responsibility as the Java/Jakarta EE owner. Little by
little, this led to a guarded reaction from software
architects and developers, who began to prefer and adopt alternative technological
solutions.</p>

<p>While trying to find alternative solutions to Jakarta EE and to remedy issues like
the apparent heaviness and the high cost of application servers, many
software professionals have adopted Spring Boot as a development platform. Other
solutions, closer to real Jakarta EE alternatives, have emerged as well and, among
them, Netty, Quarkus, Micronaut, and Helidon are the best-known and most popular. All these
solutions were based on a couple of software design principles, like single concern,
discrete boundaries, transportability across runtimes, auto-discovery, etc.,
which had been known since the dawn of time. But because the software industry
continuously needs new names, the name that was found for these alternative
solutions was “microservices.”</p>

<p>More and more microservice architecture-based applications appeared over the
following years, to such an extent that the word “microservice” became one of the most
common buzzwords in the software industry. In order to optimize and standardize the
microservices technology, the Eclipse Foundation decided to apply to microservices
the same process that was used to design the Jakarta EE specifications. Eclipse
MicroProfile was born.</p>

<p>But all these convolutions have definitely impacted the web framework technologies.
While the vast majority of Java enterprise-grade applications were using
Jakarta Faces for their web tier, switching from a software architecture based on
Jakarta EE-compliant application servers to microservices resulted in a phasing-out
of these architectures in favor of more lightweight ones, often based on the
Eclipse MicroProfile specifications. And since Jakarta Faces components needed
an application server to be deployed on, or at least a servlet engine, other,
lighter alternatives, based on JavaScript or TypeScript libraries like Angular,
Vue, ExtJS, jQuery, and others, were adopted to make up for its absence and
became the preferred front-end stack.</p>

<p>Such applications generally require two development teams:</p>
<ul>
  <li>A front-end team specialized in JavaScript / TypeScript, Angular, CSS, and HTML development, using Node.js as a runtime platform, NPM as a package manager, Bower as a dependency manager, Gulp as a streaming build system, Karma and Jasmine for testing, webpack as a code bundler, and probably many others.</li>
  <li>A back-end team specialized in Java development with Jakarta EE / Eclipse Microprofile specifications, including but not limited to MP Config, MP REST Client, MP OpenAPI, MP Health, etc. or Jakarta REST, Jakarta Persistence, Jakarta Messaging, Jakarta Security, Jakarta JSON Binding, etc.</li>
</ul>

<p>Building enterprise-grade project teams became difficult, as it
required at least two categories of profiles and, given the technologies’ complexity,
those profiles had better be senior ones. Hence, the software
industry has been facing a shortage of qualified developers, which led several organizations
to favor full-stack JavaScript / TypeScript enterprise applications. This has
led to somewhat unnatural and convoluted architectures, where back-ends were
written in a browser-dedicated programming language, with all the issues that
this choice implies, like performance, security, maintainability, etc.</p>

<p>This situation sharply contrasts with what happened in former times, when the
front-end could be implemented using Jakarta Faces and, hence, a single
Java development team was able to take charge of such an enterprise-grade project.
Jakarta Faces is a great web framework whose implementations offer hundreds of
ready-to-use widgets and other visual controls. Compared with Angular, where the
visual components are part of external libraries, like Material, NG-Bootstrap,
Clarity, Kendo, Nebular, and many others, Jakarta Faces implementations not only
provide far more widgets and features, but are also part of the official JSR 372
specifications and, in this respect, they are standard, as opposed to the mentioned
libraries, which evolve with their authors’ prevailing moods, without any guarantee
of consistency and stability.</p>

<p>The figure below shows two typical architectures:</p>

<ul>
  <li>an SPA (<em>Single Page Application</em>) architecture based on a JavaScript front-end, and a Java backend exposing REST APIs. As you can see, it requires two codebases and, probably, two separate development teams, one for the front-end and one for the back-end, and it also requires a lot of different technologies and tools.</li>
  <li>a classical Java enterprise-grade application built with Jakarta Faces and Quarkus. The frontend is implemented using Jakarta Faces, which provides a rich set of components and features for building complex web applications. The backend is implemented using Quarkus, which provides excellent support for Jakarta Faces, via its <a href="https://quarkus.io/extensions/io.quarkiverse.primefaces/quarkus-primefaces/">PrimeFaces extension</a>, and allows for fast development and efficient performance.</li>
</ul>

<p><img src="/assets/images/fig1.png" alt="Modern Java Web UI options" title="Modern Java Web UI options" /></p>

<p>One of the criteria on which many organizations based their decision to switch
from Jakarta Faces web applications to JavaScript/TypeScript frameworks was
client-side rendering. Server-side rendering, which is how Jakarta Faces
works, was considered less performant than the client-side
rendering provided by browser-based applications. This argument has to be
taken with a grain of salt:</p>

<ul>
  <li>Client-side rendering means rendering pages directly in the browser with JavaScript. All logic, data fetching, templating, and routing are handled by the client. The primary downside of this rendering type is that the amount of JavaScript required tends to grow as an application grows, which can have negative effects on a page’s capacity to consistently respond to user inputs. This becomes especially difficult with the addition of new JavaScript libraries, polyfills, and third-party code, which compete for processing power and must often be processed before a page’s content can be rendered.</li>
  <li>Server-side rendering generates the full HTML for a page on the server in response to navigation. This avoids additional round-trips for data fetching and templating on the client since it’s handled before the browser gets a response.</li>
  <li>Server-side rendering generally reduces the time required for the page content to become visible. It makes it possible to avoid sending lots of JavaScript to the client. This helps to reduce a page’s TBT (<em>Total Blocking Time</em>), which can also lead to a lower average response time as the main thread is not blocked as often during page load. When the main thread is blocked less often, user interactions will have more opportunities to run sooner.</li>
  <li>With server-side rendering, users are less likely to be left waiting for CPU-bound JavaScript to run before they can access a page.</li>
  <li>Server‑side rendering often has a better Time to First Byte (TTFB) and avoids large JavaScript bundles, which can improve perceived performance for many business use‑cases.</li>
</ul>

<p>Accordingly, the claim that server-side rendering is bad while client-side
rendering would be better is just a myth.</p>

<p>Consequently, it appears clearly from this analysis that developing Java web
applications using server-side rendering frameworks, like Jakarta Faces, not
only leads to more performant applications, but is also much simpler and less
expensive. This approach doesn’t require as many different technology stacks as
its JavaScript/TypeScript-based alternatives. The development teams don’t need
several categories of profiles, and the same developer can directly contribute
to both the front end and the back end without having to operate any paradigm
switch. This last argument is all the more important as Java developers, concerned
with things like multi-threading, transaction management, security, etc., aren’t
comfortable with programming languages that have been designed
to run in a browser.</p>

<p>The following table summarizes the main differences between the two approaches:</p>

<table>
  <thead>
    <tr>
      <th>Criteria</th>
      <th>JavaScript/TypeScript-based front-end</th>
      <th>Jakarta Faces-based front-end</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Technology stack</td>
      <td>Multiple (Angular, React, Vue, etc.)</td>
      <td>Single (Jakarta Faces)</td>
    </tr>
    <tr>
      <td>Development teams</td>
      <td>Multiple (front-end and back-end)</td>
      <td>Single (full-stack Java)</td>
    </tr>
    <tr>
      <td>Rendering</td>
      <td>Client-side rendering</td>
      <td>Server-side rendering</td>
    </tr>
    <tr>
      <td>Teams expertise</td>
      <td>Requires expertise in multiple technologies</td>
      <td>Requires expertise in a single technology</td>
    </tr>
    <tr>
      <td>Performance</td>
      <td>Potentially lower due to client-side rendering</td>
      <td>Potentially higher due to server-side rendering</td>
    </tr>
    <tr>
      <td>Complexity</td>
      <td>Higher due to multiple technologies and teams</td>
      <td>Lower due to single technology and team</td>
    </tr>
    <tr>
      <td>Maintainability</td>
      <td>Potentially lower due to multiple codebases and technologies</td>
      <td>Potentially higher due to single codebase and technology</td>
    </tr>
    <tr>
      <td>Security</td>
      <td>Potentially lower due to client-side vulnerabilities</td>
      <td>Potentially higher due to server-side control</td>
    </tr>
    <tr>
      <td>User experience</td>
      <td>Potentially lower due to slower initial load and client-side rendering</td>
      <td>Potentially higher due to faster initial load and server-side rendering</td>
    </tr>
    <tr>
      <td>Cost</td>
      <td>Potentially higher due to multiple teams and technologies</td>
      <td>Potentially lower due to single team and technology</td>
    </tr>
    <tr>
      <td>Scalability</td>
      <td>Potentially higher due to client-side rendering</td>
      <td>Potentially lower due to server-side rendering</td>
    </tr>
    <tr>
      <td>Development speed</td>
      <td>Potentially lower due to multiple technologies and teams</td>
      <td>Potentially higher due to single technology and team</td>
    </tr>
  </tbody>
</table>

<p>So the good news here is that, if, like me, you’re nostalgic for Jakarta Faces,
from now on you can start implementing your front-ends with it, without the need
for any Jakarta EE-compliant application server. That’s because Quarkus, our
famous Supersonic Subatomic Java platform, provides a Jakarta Faces extension,
allowing you to write beautiful front-ends like in the good old times.</p>
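
<p>To give a concrete idea of what this looks like, here is a minimal sketch of a Jakarta Faces backing bean in a Quarkus application. This example is purely illustrative and not taken from the showcase discussed below; the class and view names are hypothetical:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.io.*;
import jakarta.faces.view.*;
import jakarta.inject.*;

@Named("customerBean")   // exposed to Facelets pages as #{customerBean}
@ViewScoped              // one instance per Faces view
public class CustomerBean implements Serializable
{
  private String name;

  // Bound to an input component, e.g. &lt;p:inputText value="#{customerBean.name}"/&gt;
  public String getName() { return name; }
  public void setName(String name) { this.name = name; }

  // Invoked by a command component, e.g. &lt;p:commandButton action="#{customerBean.save}"/&gt;
  public String save()
  {
    // Persist the customer here, then navigate to a confirmation view
    return "confirmation?faces-redirect=true";
  }
}
</code></pre></div></div>

<p>The matching view is a plain Facelets XHTML page, served directly by the Quarkus extension, without any application server.</p>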

<p>Quarkus offers a fast development mode, cloud‑native performance, and optional
native compilation, making Jakarta Faces viable even in microservices‑centric
environments. Melloware Inc. provides a PrimeFaces extension for Quarkus, as described
<a href="https://github.com/quarkiverse/quarkus-primefaces">here</a>. You’ll find in the mentioned Git repository a <a href="https://github.com/melloware/quarkus-faces">showcase</a> application that
demonstrates, with consistent code examples, how to use every single PrimeFaces
widget. Please follow the guide in the README.md file to build and run the showcase
both on an application server, like WildFly, and in Quarkus.</p>

<p>I’ve tested it recently. Those of you who used Jakarta Faces in the past will
certainly remember the book <a href="https://www.amazon.fr/PrimeFaces-Cookbook-Second-Mert-Caliskan/dp/1784393428">“PrimeFaces Cookbook”</a>,
by Mert Caliskan and Oleg Varaksin, published in 2013, with a 2nd edition in 2015.
This book is one of the most comprehensive and detailed resources about
Jakarta Faces and PrimeFaces. So, I refactored all the code examples of this
book to make them work with the latest versions of Quarkus and PrimeFaces. If
you’re interested, you’ll find the project <a href="https://github.com/nicolasduminil/primefaces-showcase.git">here</a>.</p>

<p>If you want to give it a try, proceed as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/nicolasduminil/primefaces-showcase.git
$ cd primefaces-showcase
$ mvn package
$ java -jar target/quarkus-app/quarkus-run.jar
</code></pre></div></div>

<p>Then, go to http://localhost:8080 and the following screen will be displayed in your browser:</p>

<p><img src="/assets/images/fig2.png" alt="Live samples of the PrimeFaces with Quarkus" title="Live samples of the PrimeFaces with Quarkus" /></p>

<p>Here, you are given the chance to exercise most of Jakarta Faces, together with its
implementations Mojarra and PrimeFaces, and to see how they work in a modern Java
web application built with Quarkus. Just unfold the different nodes in the left-hand
tree and click on the different samples to see them in action. You may change
the current theme as well by clicking on the “Change Theme” button at the top
right of the page.</p>

<p>You’ll tell me how it feels!</p>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Quarkus" /><category term="Jakarta Faces" /><category term="PrimeFaces" /><category term="DZone" /><summary type="html"><![CDATA[Nowadays, Java enterprise applications often default to Angular, React, or Vue for the frontend. But for this kind of application, the most natural UI framework already exists in the Java ecosystem: Jakarta Faces.]]></summary></entry><entry><title type="html">Building a Containerized Quarkus API and a CI/CD Pipeline on AWS EKS/Fargate with CDK</title><link href="https://nicolasduminil.github.io/posts-archive/customer-service-eks/" rel="alternate" type="text/html" title="Building a Containerized Quarkus API and a CI/CD Pipeline on AWS EKS/Fargate with CDK" /><published>2025-12-20T00:00:00+00:00</published><updated>2025-12-20T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/customer-service-eks</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/customer-service-eks/"><![CDATA[<p>In a recent <a href="http://www.simplex-software.fr/posts-archive/customer-service-ecs/">post</a>, I demonstrated the benefits
of using AWS ECS (<em>Elastic Container Service</em>), with Quarkus and the CDK (<em>Cloud Development Kit</em>), to implement
an API for customer management.</p>

<p>Continuing that previous post, the current one goes a bit further and replaces ECS with EKS (<em>Elastic
Kubernetes Service</em>) as the environment for running containerized workloads. Additionally, an automated CI/CD pipeline,
using AWS CodePipeline and AWS CodeBuild, is provided.</p>

<h2 id="architecture-overview">Architecture Overview</h2>

<p>The solution that you’re about to look at implements a complete production-ready architecture consisting of:</p>

<ul>
  <li><strong>Presentation Layer</strong>: A Quarkus REST API with OpenAPI/Swagger implementing the customer management solution. This implementation is exactly the same as the one used in the previous project, which leverages ECS.</li>
  <li><strong>Application Layer</strong>: Business logic with Quarkus Panache for data access</li>
  <li><strong>Data Layer</strong>: PostgreSQL (RDS) for persistence, Redis (ElastiCache) for caching</li>
  <li><strong>Container Orchestration</strong>: AWS EKS with Fargate for serverless container execution</li>
  <li><strong>Infrastructure as Code</strong>: AWS CDK implemented in Quarkus</li>
  <li><strong>CI/CD</strong>: Automated pipeline with AWS CodePipeline, CodeBuild, and GitHub integration</li>
</ul>

<p>Before starting, a couple of explanations are probably required. As you probably know, EKS can be used with two compute
engines: EC2 or Fargate. In this example we’ve chosen to use Fargate, as was also the case for our previous, ECS-based
project.</p>

<p>Fargate is a serverless compute engine for containers that provisions and manages the underlying infrastructure and
provides automatic scaling. It is designed to make it easy to run containers without having to manage servers or
clusters. It’s a great fit for workloads that don’t have long-running connections or require frequent scaling. This project
uses Fargate because it needs a continuously running containerized application. Fargate provides the serverless operational
model (no server management) while maintaining the traditional container execution model our Quarkus API requires.</p>

<p>The figure below shows the project’s architecture diagram:</p>

<p><img src="/assets/images/architecture-diagram.png" alt="Architecture Diagram" /></p>

<p>Please notice that, as mentioned above, several layers, like presentation, application, and data, are the same ones used in
the previous ECS-based example. Hence, we created a new module, called <code class="language-plaintext highlighter-rouge">customer-service-eks</code>, in the current Maven multi-module
project. This module is similar to the <code class="language-plaintext highlighter-rouge">customer-service-ecs</code> one, and they both share the same presentation, application,
and data layers, which have been moved into a shared Maven module, called <code class="language-plaintext highlighter-rouge">customer-service-cdk-common</code>.</p>

<h2 id="prerequisites">Prerequisites</h2>

<p>The following prerequisites are required to run this project:</p>

<ul>
  <li>Java 21+</li>
  <li>Maven 3.9+</li>
  <li>Docker</li>
  <li>AWS CLI installed and configured with appropriate credentials</li>
  <li>kubectl installed</li>
  <li>AWS CDK CLI installed</li>
  <li>GitHub account with OAuth token stored in AWS Secrets Manager</li>
</ul>

<h2 id="project-structure">Project Structure</h2>

<p>The project is structured as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>customer-service-eks/
├── src/main/java/
│   └── fr/simplex_software/workshop/customer_service_eks/
│       ├── config/
│       │   └── CiCdConfig.java                # CI/CD pipeline configuration
│       ├── CiCdPipelineStack.java             # CDK Quarkus CI/CD pipeline infrastructure
│       ├── CustomerManagementEksApp.java      # Quarkus CDK application
│       ├── CustomerManagementEksMain.java     # Quarkus main application
│       ├── CustomerManagementEksProducer.java # Quarkus CDI producer
│       ├── EksClusterStack.java               # Quarkus CDK EKS cluster infrastructure
│       ├── MonitoringStack.java               # Quarkus CDK monitoring stack infrastructure
│       └── VpcStack.java                      # Quarkus CDK VPC stack infrastructure
├── src/main/resources/
│   ├── buildspecs/
│   │   ├── build-spec.yaml                    # CodeBuild build specification
│   │   └── deploy-spec.yaml                   # CodeBuild deploy specification
│   ├── k8s/
│   │   └── customer-service.yaml              # Kubernetes manifests
│   ├── scripts/                               # several shell scripts
│   │   ...
│   └── application.properties                 # Configuration
└── src/test/java/
    └── fr/simplex_software/workshop/customer_service_eks/tests/
        └── CustomerServiceE2EIT.java          # End-to-end integration tests
</code></pre></div></div>

<h2 id="configuration">Configuration</h2>

<p>The project’s configuration is stored in two files:</p>

<ul>
  <li>the <code class="language-plaintext highlighter-rouge">env.properties</code> file</li>
  <li>the <code class="language-plaintext highlighter-rouge">src/main/resources/application.properties</code> file.</li>
</ul>

<p>The <code class="language-plaintext highlighter-rouge">env.properties</code> file contains environment variables that are used by the Maven build process. Its structure is
reproduced below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CONTAINER_IMAGE_GROUP=nicolasduminil
CONTAINER_IMAGE_NAME=customers-api
CDK_DEFAULT_ACCOUNT=...
CDK_DEFAULT_REGION=eu-west-3
CDK_DEFAULT_USER=nicolas
</code></pre></div></div>

<p>The properties <code class="language-plaintext highlighter-rouge">CONTAINER_IMAGE_GROUP</code> and <code class="language-plaintext highlighter-rouge">CONTAINER_IMAGE_NAME</code> are used by the JIB Quarkus extension to build the
container image and push it to the ECR repository. The other properties are
used by the CDK application to deploy the infrastructure, and their meanings don’t require any explicit explanation.</p>

<p>The project uses AWS Secrets Manager to store sensitive data like the GitHub OAuth token used by the CI/CD pipeline.
In order to create the secret, you can use the script <code class="language-plaintext highlighter-rouge">setup-github-token.sh</code>, reproduced below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
set -e
echo "=== GitHub Token Setup for CI/CD Pipeline ==="

# Read token from stdin or argument
if [ $# -eq 0 ]; then
  if [ -t 0 ]; then
    # No arguments and no piped input
    echo "Usage:"
    echo "  $0 &lt;github-personal-access-token&gt;"
    ...
    exit 1
  else
    # Read from stdin
    GITHUB_TOKEN=$(cat | tr -d '\n\r')
  fi
else
  # Read from argument
  GITHUB_TOKEN=$1
fi

SECRET_NAME="github-oauth-token"

echo "Creating secret in AWS Secrets Manager..."

# Check if secret already exists
if aws secretsmanager describe-secret --secret-id "$SECRET_NAME" &gt;/dev/null 2&gt;&amp;1; then
  echo "Secret already exists. Updating..."
  aws secretsmanager update-secret \
    --secret-id "$SECRET_NAME" \
    --secret-string "$GITHUB_TOKEN"
else
  echo "Creating new secret..."
  aws secretsmanager create-secret \
    --name "$SECRET_NAME" \
    --description "GitHub OAuth token for CI/CD pipeline" \
    --secret-string "$GITHUB_TOKEN"
fi

echo "✅ GitHub token stored successfully!"
echo "You can now run: cdk deploy --all"
</code></pre></div></div>

<p>This script takes the token either as a command-line argument or as piped input. The GitHub OAuth token should already
have been acquired from GitHub. In order to do that, proceed as follows:</p>

<ol>
  <li>Go to https://github.com/settings/tokens.</li>
  <li>Click <code class="language-plaintext highlighter-rouge">Generate new token (classic)</code></li>
  <li>Select the <code class="language-plaintext highlighter-rouge">repo</code> scope</li>
  <li>Copy the generated token.</li>
</ol>

<p>The other configuration file, <code class="language-plaintext highlighter-rouge">src/main/resources/application.properties</code>, contains the following key properties:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># CI/CD Configuration
cdk.cicd.repository.name=${CONTAINER_IMAGE_GROUP}/${CONTAINER_IMAGE_NAME}
cdk.cicd.github.owner=${CONTAINER_IMAGE_GROUP}
cdk.cicd.github.repo=aws-cdk-quarkus
cdk.cicd.github.token-secret=github-oauth-token

# EKS Configuration
cdk.infrastructure.eks.namespace=customer-service
cdk.infrastructure.eks.cluster-name=customer-service-cluster
cdk.infrastructure.eks.service-account-name=customer-service-account
...
</code></pre></div></div>

<p>In addition to these configuration files and scripts, the <code class="language-plaintext highlighter-rouge">CiCdConfig</code> interface uses the MP Config API to define properties
relative to the different services and stages of the CI/CD pipeline.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ConfigMapping(prefix = "cdk.cicd")
public interface CiCdConfig
{
  RepositoryConfig repository();
  GitHubConfig github();
  BuildConfig build();
  PipelineConfig pipeline();
  ...
}
</code></pre></div></div>

<p>As we can see, <code class="language-plaintext highlighter-rouge">CiCdConfig</code> is an interface which contains several sub-interfaces, one for each service or stage. Each
sub-interface defines a set of properties that are used to configure the corresponding service or stage, for example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
interface RepositoryConfig
{
  @WithDefault("customer-service")
  String name();
}

interface GitHubConfig
{
  @WithDefault("your-github-user")
  String owner();
  @WithDefault("customer-service")
  String repo();
  @WithDefault("github-token")
  String tokenSecret();
}
...
</code></pre></div></div>
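
<p>With Quarkus, a <code class="language-plaintext highlighter-rouge">@ConfigMapping</code> interface is materialized by the runtime and can be injected like any other CDI bean. The consumer class below is a hypothetical sketch, shown only to illustrate the usage:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import jakarta.enterprise.context.*;
import jakarta.inject.*;

@ApplicationScoped
public class CiCdConfigConsumer
{
  // Quarkus provides the @ConfigMapping implementation as an injectable bean
  @Inject
  CiCdConfig cicdConfig;

  public String repositoryName()
  {
    // Resolves cdk.cicd.repository.name, falling back to the @WithDefault value
    return cicdConfig.repository().name();
  }
}
</code></pre></div></div>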

<h2 id="the-cdk-stacks">The CDK Stacks</h2>

<p>The IaC code is organized into several CDK stacks, each responsible for a specific aspect of the infrastructure.</p>

<h3 id="the-vpcstack">The <code class="language-plaintext highlighter-rouge">VpcStack</code></h3>

<p>This stack creates the foundational networking infrastructure for the entire solution. It provisions a VPC (<em>Virtual Private
Cloud</em>) with multi-AZ (<em>Availability Zone</em>) support for high availability. The VPC is configured with both public and
private subnets across multiple availability zones, as specified by the <code class="language-plaintext highlighter-rouge">maxAzs</code> configuration property (default: 2). The
stack also creates NAT Gateways to enable outbound internet access for resources in private subnets, with the number
controlled by the <code class="language-plaintext highlighter-rouge">natGateways</code> property (default: 1). The implementation is minimal, as shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vpc = Vpc.Builder.create(this, "EksVpc")
  .maxAzs(config.vpc().maxAzs())           // Default: 2 AZs
  .natGateways(config.vpc().natGateways()) // Default: 1 NAT Gateway
  .build();
</code></pre></div></div>

<p>The code above uses the <code class="language-plaintext highlighter-rouge">software.amazon.awscdk.services.ec2.Vpc</code> CDK construct, which automatically creates six subnets across
two AZs:</p>

<ul>
  <li>
    <p>2 public subnets (one per AZ) connected to an IGW (<em>Internet Gateway</em>). An IGW is a horizontally scaled, redundant AWS-managed component that allows bidirectional communication between resources in the VPC and the internet. It enables resources with public IP addresses to receive inbound traffic from the internet and send outbound traffic to the internet. In our case, the public subnets host an NLB (<em>Network Load Balancer</em>) which receives external traffic.</p>
  </li>
  <li>
    <p>2 private subnets with “egress” (one per AZ) connected to a NAT Gateway. A NAT Gateway is a managed service that enables resources in private subnets to initiate outbound connections to the internet (for software updates, API calls, etc.) while preventing unsolicited inbound connections from the internet. In this context, “egress” means outbound-only traffic flow. These 2 private subnets are used for the EKS Fargate pods, the RDS database, and ElastiCache Redis, which all require outbound internet access but should not be directly accessible from the internet.</p>
  </li>
  <li>
    <p>2 isolated subnets (one per AZ). These subnets have neither an IGW nor a NAT Gateway and, hence, they don’t have internet connectivity. They are created by default by the <code class="language-plaintext highlighter-rouge">Vpc</code> construct, but they aren’t used in this project, as they are typically dedicated to highly sensitive resources that should never communicate with the internet.</p>
  </li>
</ul>

<p>The <code class="language-plaintext highlighter-rouge">maxAzs</code> property (default: 2) determines how many availability zones to span for high availability. The <code class="language-plaintext highlighter-rouge">natGateways</code>
property (default: 1) controls the number of NAT Gateways: using 1 instead of 2 reduces costs but creates a single point
of failure for outbound internet connectivity.</p>

<p>This VPC serves as the network foundation for all other stacks, including the EKS cluster, RDS database, and ElastiCache
Redis instances. We need to mention that any AWS account has a default VPC and that we could have used it here, instead
of creating another one. While this alternative would have been much simpler with no additional network cost, having a
dedicated VPC is a more “production-ready” solution, as it provides better isolation, customized CIDR blocks, and more
subnets.</p>
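
<p>For the record, here is what such a customization could look like with the <code class="language-plaintext highlighter-rouge">Vpc</code> construct. This is just a sketch, not code from the project, and the CIDR ranges and subnet names are arbitrary:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Vpc customVpc = Vpc.Builder.create(this, "CustomEksVpc")
  .ipAddresses(IpAddresses.cidr("10.0.0.0/16"))  // customized CIDR block
  .maxAzs(2)
  .natGateways(1)
  .subnetConfiguration(List.of(                  // explicit subnet layout
    SubnetConfiguration.builder()
      .name("public").subnetType(SubnetType.PUBLIC).cidrMask(24).build(),
    SubnetConfiguration.builder()
      .name("private").subnetType(SubnetType.PRIVATE_WITH_EGRESS).cidrMask(24).build(),
    SubnetConfiguration.builder()
      .name("isolated").subnetType(SubnetType.PRIVATE_ISOLATED).cidrMask(28).build()))
  .build();
</code></pre></div></div>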

<h3 id="the-eksclusterstack">The <code class="language-plaintext highlighter-rouge">EksClusterStack</code></h3>

<p>This is the core infrastructure stack that creates and configures the EKS cluster with a Fargate compute profile. The stack
performs the following critical operations:</p>

<ol>
  <li>
    <p>creates an EKS cluster (version 1.34) with API authentication mode and public endpoint access. The cluster is deployed in the private subnets of the VPC for enhanced security.</p>
  </li>
  <li>
    <p>adds to the previously created cluster a Fargate profile that targets the <code class="language-plaintext highlighter-rouge">customer-service</code> namespace, ensuring all pods in this namespace run on Fargate serverless compute. The profile’s pod execution role is granted CloudWatch Logs permissions for centralized logging.</p>
  </li>
  <li>
    <p>sets up a Kubernetes ServiceAccount with IRSA (<em>IAM Roles for Service Accounts</em>), granting the pods secure access to AWS services without embedding credentials. The service account is granted permissions to connect to the RDS database and read secrets from AWS Secrets Manager.</p>
  </li>
  <li>
    <p>programmatically creates Kubernetes manifests including a namespace for workload isolation, a <code class="language-plaintext highlighter-rouge">ConfigMap</code> containing database and Redis connection strings, a deployment and a service resource, loaded from the YAML file in the <code class="language-plaintext highlighter-rouge">resources/k8s</code> directory.</p>
  </li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public void initStack() throws IOException
{
  createCluster();
  KubernetesManifest namespace = createNamespace();
  addFargateProfile();
  ServiceAccount serviceAccount = setupServiceAccountWithIAM();
  serviceAccount.getNode().addDependency(namespace);
  KubernetesManifest configMap = addConfigMap();
  configMap.getNode().addDependency(serviceAccount);
  addDeploymentAndService(configMap);
}
</code></pre></div></div>
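
<p>The <code class="language-plaintext highlighter-rouge">addFargateProfile()</code> method itself isn’t reproduced in this post. A minimal sketch of what step 2 above might look like with the CDK EKS construct library follows; the profile ID is illustrative:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>private void addFargateProfile()
{
  // Run every pod of the customer-service namespace on Fargate
  cluster.addFargateProfile("CustomerServiceFargateProfile",
    FargateProfileOptions.builder()
      .selectors(List.of(Selector.builder()
        .namespace("customer-service")
        .build()))
      .build());
}
</code></pre></div></div>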

<p>The stack establishes dependencies to ensure resources are created in the correct order, with the <code class="language-plaintext highlighter-rouge">ConfigMap</code> depending on
the <code class="language-plaintext highlighter-rouge">ServiceAccount</code>, and the <code class="language-plaintext highlighter-rouge">Deployment</code> depending on the <code class="language-plaintext highlighter-rouge">ConfigMap</code>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@SuppressWarnings("unchecked")
private void addDeploymentAndService(KubernetesManifest configMap) throws IOException
{
  List&lt;Map&lt;String, Object&gt;&gt; manifests = loadYamlManifests("k8s/customer-service.yaml");
  KubernetesManifest previous = configMap;
  for (int i = 0; i &lt; manifests.size(); i++)
  {
    KubernetesManifest current =
      cluster.addManifest("CustomerService-%d".formatted(i), manifests.get(i));
    current.getNode().addDependency(previous);
    previous = current;
  }
}
</code></pre></div></div>

<p>The code above shows how the file <code class="language-plaintext highlighter-rouge">customer-service.yaml</code>, containing the <code class="language-plaintext highlighter-rouge">ServiceAccount</code> and the <code class="language-plaintext highlighter-rouge">Deployment</code> manifests,
is parsed and the manifests added to the cluster, each one depending on the previous one, in order to prevent possible
cyclic dependencies.</p>
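
<p>The <code class="language-plaintext highlighter-rouge">loadYamlManifests()</code> helper isn’t shown here either. A plausible implementation, assuming SnakeYAML is on the classpath, could look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@SuppressWarnings("unchecked")
private List&lt;Map&lt;String, Object&gt;&gt; loadYamlManifests(String resourcePath) throws IOException
{
  try (InputStream in = getClass().getClassLoader().getResourceAsStream(resourcePath))
  {
    if (in == null)
      throw new FileNotFoundException(resourcePath);
    List&lt;Map&lt;String, Object&gt;&gt; manifests = new ArrayList&lt;&gt;();
    // A multi-document YAML file yields one map per Kubernetes manifest
    for (Object document : new org.yaml.snakeyaml.Yaml().loadAll(in))
      if (document != null)
        manifests.add((Map&lt;String, Object&gt;) document);
    return manifests;
  }
}
</code></pre></div></div>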

<h3 id="the-cicdpipelinestack">The <code class="language-plaintext highlighter-rouge">CiCdPipelineStack</code></h3>

<p>This stack implements a complete CI/CD pipeline using AWS native services to automate the build and deployment process.
It consists of three stages:</p>

<ol>
  <li>
    <p>Source Stage: integrates with GitHub using a webhook trigger. When code is pushed to the repository, the pipeline automatically retrieves the source code using a GitHub OAuth token stored in AWS Secrets Manager.</p>
  </li>
  <li>
    <p>Build Stage: Uses AWS CodeBuild with a Standard 7.0 Linux image to build the Quarkus application, create a Docker image using the JIB Maven plugin and push the image to Amazon ECR (<em>Elastic Container Registry</em>). The build project has privileged mode enabled for Docker operations and is granted necessary IAM permissions for ECR operations.</p>
  </li>
  <li>
    <p>Deploy Stage: Uses a separate CodeBuild project to update the kubeconfig to access the EKS cluster and apply the updated Kubernetes manifests with the new container image. The deploy project is granted EKS cluster access through IAM role assumption.</p>
  </li>
</ol>

<p>The pipeline uses build specifications defined in <code class="language-plaintext highlighter-rouge">buildspecs/build-spec.yaml</code> and <code class="language-plaintext highlighter-rouge">buildspecs/deploy-spec.yaml</code>, and
stores artifacts in S3 between stages. All configuration is externalized through the <code class="language-plaintext highlighter-rouge">CiCdConfig</code> interface using
MicroProfile Config.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  public void initStack()
  {
    IRepository ecrRepo = Repository.fromRepositoryName(this,
      "CustomerServiceRepo", cicdConfig.repository().name());

    Project buildProject = Project.Builder.create(this, "CustomerServiceBuild")
      .source(Source.gitHub(GitHubSourceProps.builder()
      ...
      .build();
    ecrRepo.grantPullPush(buildProject);
    buildProject.addToRolePolicy(PolicyStatement.Builder.create()
      .actions(List.of("ecr:GetAuthorizationToken"))
      .resources(List.of("*"))
      .build());
    buildProject.addToRolePolicy(PolicyStatement.Builder.create()
      .actions(List.of("secretsmanager:GetSecretValue"))
      .resources(List.of("arn:aws:secretsmanager:eu-west-3:" + this.getAccount() + ":secret:redhat-registry-credentials-*"))
      .build());

    Project deployProject = Project.Builder.create(this, "CustomerServiceDeploy")
      ...
      .build();
    deployProject.getRole().addManagedPolicy(
      ManagedPolicy.fromAwsManagedPolicyName("AmazonEKSClusterPolicy"));
    eksStack.getCluster().getRole().grantAssumeRole(deployProject.getRole());
    deployProject.addToRolePolicy(PolicyStatement.Builder.create()
      .actions(List.of("eks:DescribeCluster"))
      .resources(List.of(eksStack.getCluster().getClusterArn()))
      .build());

    GitHubSourceAction sourceAction = GitHubSourceAction.Builder.create()
      .actionName(cicdConfig.pipeline().actions().source())
      ...
     .build();

    CodeBuildAction buildAction = CodeBuildAction.Builder.create()
      .actionName(cicdConfig.pipeline().actions().build())
      ...
      .build();

    CodeBuildAction deployAction = CodeBuildAction.Builder.create()
      .actionName(cicdConfig.pipeline().actions().deploy())
      ...
      .build();

    Pipeline pipeline = Pipeline.Builder.create(this, cicdConfig.pipeline().name())
      .build();
    pipeline.addStage(StageOptions.builder()
      .stageName(cicdConfig.pipeline().stages().source())
      .actions(List.of(sourceAction))
      .build());
    pipeline.addStage(StageOptions.builder()
      .stageName(cicdConfig.pipeline().stages().build())
      .actions(List.of(buildAction))
      .build());
    pipeline.addStage(StageOptions.builder()
      .stageName(cicdConfig.pipeline().stages().deploy())
      .actions(List.of(deployAction))
      .build());
  }
</code></pre></div></div>

<p>The code above creates two CodeBuild projects: a build one and a deploy one. It assigns them the required security policies,
like <code class="language-plaintext highlighter-rouge">AmazonEKSClusterPolicy</code>, and then creates three actions: one <code class="language-plaintext highlighter-rouge">GitHubSourceAction</code> and two <code class="language-plaintext highlighter-rouge">CodeBuildAction</code>s, one
for the build and the other one for the deploy operation. Last but not least, a <code class="language-plaintext highlighter-rouge">Pipeline</code> is created and the three mentioned
actions are added as its stages.</p>

<h3 id="the-monitoringstack">The <code class="language-plaintext highlighter-rouge">MonitoringStack</code></h3>

<p>This stack provides observability and monitoring capabilities for the EKS cluster and running applications. It creates
a dedicated CloudWatch log group named <code class="language-plaintext highlighter-rouge">/aws/eks/customer-service</code> with a one-week retention policy to collect and store
logs from the EKS pods and cluster components and a CloudWatch dashboard named <code class="language-plaintext highlighter-rouge">customer-service-eks</code> that visualizes key
metrics including but not limited to pod CPU utilization from the EKS namespace.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public void initStack()
{
  LogGroup.Builder.create(this, "EksLogGroup")
    .logGroupName("/aws/eks/customer-service")
    .retention(RetentionDays.ONE_WEEK)
    .build();
  Dashboard dashboard = Dashboard.Builder.create(this, "CustomerServiceDashboard")
  .dashboardName("customer-service-eks")
  .build();
  dashboard.addWidgets(
    GraphWidget.Builder.create()
      .title("Pod CPU Utilization")
      .left(List.of(Metric.Builder.create()
        .namespace("AWS/EKS")
        .metricName("pod_cpu_utilization")
        .build()))
      .build()
  );
}
</code></pre></div></div>

<p>This stack depends on the <code class="language-plaintext highlighter-rouge">EksClusterStack</code> to ensure the cluster exists before monitoring resources are created. The
monitoring infrastructure enables real-time visibility into cluster health, performance metrics, and troubleshooting
capabilities through centralized log aggregation.</p>

<h2 id="building-deploying-and-testing">Building, deploying and testing</h2>

<p>The API to be built and deployed on EKS with Fargate is the same as the one we used previously for the ECS project (see the
<code class="language-plaintext highlighter-rouge">customer-service-api</code> module). Other shared artifacts are provided by the <code class="language-plaintext highlighter-rouge">customer-service-cdk-common</code> module. Here is
their list:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">DatabaseConstruct</code>: implements a CDK construct for RDS (<em>Relational Database Service</em>) with PostgreSQL;</li>
  <li><code class="language-plaintext highlighter-rouge">RedisCluster</code>: implements a CDK construct for ElastiCache with Redis;</li>
  <li><code class="language-plaintext highlighter-rouge">RedisClusterProps</code>: groups together, in one record, several common Redis properties like the cluster ID, the number of nodes, their types, etc.;</li>
  <li><code class="language-plaintext highlighter-rouge">DatabaseStack</code>: implements a database CDK stack which includes the previously mentioned PostgreSQL and Redis constructs.</li>
</ul>

<p>Since these common artifacts are all required in order to build and deploy our stack, they need to be installed in the
local Maven repository:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd aws-cdk-quarkus/customer-service-api
$ mvn clean install
$ cd aws-cdk-quarkus/customer-service-cdk-common
$ mvn clean install
</code></pre></div></div>

<p>The Maven build process of the <code class="language-plaintext highlighter-rouge">customer-service-api</code> module will run two integration tests, one using REST Assured against the
Quarkus embedded web service, the other against a full local containerized infrastructure described by a <code class="language-plaintext highlighter-rouge">docker-compose.yaml</code>
file. This has been fully documented and explained in the first part of this series.</p>
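
<p>For readers who haven’t read the first part, the REST Assured test is, in essence, similar to the hypothetical sketch below (class name, endpoint, and assertions are illustrative):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import static io.restassured.RestAssured.*;

import io.quarkus.test.junit.*;
import org.junit.jupiter.api.*;

@QuarkusTest
class CustomerApiIT
{
  @Test
  void shouldListCustomers()
  {
    // @QuarkusTest starts the embedded web service before the test runs
    given()
      .when().get("/customers")
      .then()
      .statusCode(200)
      .contentType("application/json");
  }
}
</code></pre></div></div>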

<p>Now, we can build, deploy and test our new stack. The <code class="language-plaintext highlighter-rouge">customer-service-eks</code> module provides two ways to do it:</p>

<ul>
  <li>in development mode, using minikube;</li>
  <li>in production mode, using AWS infrastructure;</li>
</ul>

<p>Please notice that <code class="language-plaintext highlighter-rouge">localstack</code>, which is a very practical way to test AWS-based IaC code without the cloud’s heaviness and
costs, isn’t an option here, as it doesn’t support EKS, VPC, ECR, etc.</p>

<h3 id="building-deploying-and-testing-in-dev-mode">Building, deploying and testing in dev mode</h3>

<p>As mentioned, in dev mode all our stacks are deployed locally, on minikube. So, this mode requires minikube to be
installed and running.</p>

<p>The <code class="language-plaintext highlighter-rouge">pom.xml</code> file defines two profiles:</p>

<ul>
  <li>a dev mode one named <code class="language-plaintext highlighter-rouge">local</code>;</li>
  <li>a prod mode one named <code class="language-plaintext highlighter-rouge">e2e</code>;</li>
</ul>

<p>Here is the dev mode one, which is also the default one:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;profile&gt;
  &lt;id&gt;local&lt;/id&gt;
  &lt;activation&gt;
    &lt;activeByDefault&gt;true&lt;/activeByDefault&gt;
  &lt;/activation&gt;
  &lt;build&gt;
    &lt;plugins&gt;
      &lt;plugin&gt;
        &lt;groupId&gt;org.codehaus.mojo&lt;/groupId&gt;
        &lt;artifactId&gt;exec-maven-plugin&lt;/artifactId&gt;
        &lt;executions&gt;
          &lt;execution&gt;
            &lt;id&gt;start-minikube&lt;/id&gt;
            &lt;phase&gt;pre-integration-test&lt;/phase&gt;
            &lt;goals&gt;&lt;goal&gt;exec&lt;/goal&gt;&lt;/goals&gt;
            &lt;configuration&gt;
              &lt;executable&gt;minikube&lt;/executable&gt;
              &lt;arguments&gt;
                &lt;argument&gt;start&lt;/argument&gt;
                &lt;argument&gt;--driver=docker&lt;/argument&gt;
              &lt;/arguments&gt;
            &lt;/configuration&gt;
          &lt;/execution&gt;
          &lt;execution&gt;
            &lt;id&gt;deploy-to-minikube&lt;/id&gt;
            &lt;phase&gt;pre-integration-test&lt;/phase&gt;
            &lt;goals&gt;&lt;goal&gt;exec&lt;/goal&gt;&lt;/goals&gt;
            &lt;configuration&gt;
              &lt;executable&gt;bash&lt;/executable&gt;
              &lt;arguments&gt;
                &lt;argument&gt;src/main/resources/scripts/deploy-to-minikube.sh&lt;/argument&gt;
              &lt;/arguments&gt;
            &lt;/configuration&gt;
          &lt;/execution&gt;
          &lt;execution&gt;
            &lt;id&gt;stop-minikube&lt;/id&gt;
            &lt;phase&gt;clean&lt;/phase&gt;
            &lt;goals&gt;&lt;goal&gt;exec&lt;/goal&gt;&lt;/goals&gt;
            &lt;configuration&gt;
              &lt;executable&gt;minikube&lt;/executable&gt;
              &lt;arguments&gt;
                &lt;argument&gt;delete&lt;/argument&gt;
              &lt;/arguments&gt;
            &lt;/configuration&gt;
          &lt;/execution&gt;
        &lt;/executions&gt;
      &lt;/plugin&gt;
    &lt;/plugins&gt;
  &lt;/build&gt;
&lt;/profile&gt;
</code></pre></div></div>

<p>As you can see, here we’re using the <code class="language-plaintext highlighter-rouge">exec-maven-plugin</code> with three executions that start minikube, deploy to minikube and,
respectively, stop minikube. As already mentioned, minikube should be installed for the <code class="language-plaintext highlighter-rouge">local</code> profile to be
effective, and the execution with ID <code class="language-plaintext highlighter-rouge">start-minikube</code> simply executes the <code class="language-plaintext highlighter-rouge">start</code> command.</p>

<p>Once minikube has started, the execution with ID <code class="language-plaintext highlighter-rouge">deploy-to-minikube</code> runs the <code class="language-plaintext highlighter-rouge">deploy-to-minikube.sh</code> script, shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
set -e

echo "&gt;&gt;&gt; Loading image..."
docker save nicolasduminil/customers-api:1.0-SNAPSHOT | minikube image load -

echo "&gt;&gt;&gt; Creating namespace..."
kubectl create namespace customer-service --dry-run=client -o yaml | kubectl apply -f -

echo "&gt;&gt;&gt; Deploying PostgreSQL and Redis..."
kubectl apply -f src/test/resources/k8s/postgres-redis.yaml

echo "&gt;&gt;&gt; Waiting for database..."
for i in {1..5}; do
  if kubectl get pod -l app=postgres -n customer-service 2&gt;/dev/null | grep -q postgres; then
    break
  fi
  echo "Waiting for postgres pod to be created... ($i/5)"
  sleep 5
done

kubectl wait --for=condition=ready pod -l app=postgres -n customer-service --timeout=60s

echo "&gt;&gt;&gt; Deploying application..."
kubectl apply -f target/kubernetes/minikube.yml

echo "&gt;&gt;&gt; Waiting for application..."
for i in {1..5}; do
  if kubectl get pod -l app.kubernetes.io/name=customer-service-api -n customer-service 2&gt;/dev/null | grep -q customer-service; then
    break
  fi
  echo "Waiting for app pod to be created... ($i/5)"
  sleep 5
done
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=customer-service-api -n customer-service --timeout=120s

echo "&gt;&gt;&gt; Final status:"
kubectl get all -n customer-service

echo "&gt;&gt;&gt; Starting port-forward..."
kubectl port-forward -n customer-service service/customer-service-api 9090:80 &gt; /dev/null 2&gt;&amp;1 &amp;
echo "Port-forward started (PID: $!)"
sleep 2
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">deploy-to-minikube.sh</code> script above performs several operations. First, the Docker image
<code class="language-plaintext highlighter-rouge">nicolasduminil/customers-api:1.0-SNAPSHOT</code>, built during the previous step (the shared components), is loaded into minikube
via the <code class="language-plaintext highlighter-rouge">image load</code> command. Then, the <code class="language-plaintext highlighter-rouge">kubectl</code> tool, which is another prerequisite, is used to create the customized
namespace <code class="language-plaintext highlighter-rouge">customer-service</code> and to apply the two manifests: <code class="language-plaintext highlighter-rouge">postgres-redis.yaml</code> and <code class="language-plaintext highlighter-rouge">minikube.yml</code>. Last but not
least, after waiting for all the services to be up, the same <code class="language-plaintext highlighter-rouge">kubectl</code> is used to start the port-forward process.</p>

<p>At that point, we’re able to test our API, locally deployed on minikube, using the Swagger UI. Point your preferred browser
at http://localhost:9090/q/swagger-ui to take advantage of the 80-to-9090 port-forward. You’re ready to test the API.</p>

<p>Please notice that the <code class="language-plaintext highlighter-rouge">minikube.yml</code> manifest file mentioned above is automatically generated by the JIB extension for
Quarkus, while the <code class="language-plaintext highlighter-rouge">postgres-redis.yaml</code> was written on purpose, to define the Kubernetes deployment and service
controllers associated with the PostgreSQL database and the Redis cache. Don’t hesitate to have a look at this file and make sure
you understand what everything is about there.</p>

<h3 id="building-deploying-and-testing-in-prod-mode">Building, deploying and testing in prod mode</h3>

<p>While the Maven build process is the same and consists of running</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mvn -Pe2e -DskipTests clean install
</code></pre></div></div>

<p>deploying is, this time, a much longer and heavier operation as it targets real AWS infrastructure. Look at the <code class="language-plaintext highlighter-rouge">e2e</code>
Maven profile below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;profile&gt;
  &lt;id&gt;e2e&lt;/id&gt;
  &lt;build&gt;
    &lt;plugins&gt;
      &lt;plugin&gt;
        &lt;groupId&gt;org.codehaus.mojo&lt;/groupId&gt;
        &lt;artifactId&gt;exec-maven-plugin&lt;/artifactId&gt;
        &lt;executions&gt;
          &lt;execution&gt;
            &lt;id&gt;deploy-to-aws&lt;/id&gt;
            &lt;phase&gt;pre-integration-test&lt;/phase&gt;
            &lt;goals&gt;
              &lt;goal&gt;exec&lt;/goal&gt;
            &lt;/goals&gt;
            &lt;configuration&gt;
              &lt;executable&gt;bash&lt;/executable&gt;
              &lt;arguments&gt;
                &lt;argument&gt;./src/main/resources/scripts/deploy-to-aws.sh&lt;/argument&gt;
              &lt;/arguments&gt;
              &lt;workingDirectory&gt;${project.basedir}&lt;/workingDirectory&gt;
              &lt;environmentVariables&gt;
                &lt;CDK_DEFAULT_ACCOUNT&gt;${CDK_DEFAULT_ACCOUNT}&lt;/CDK_DEFAULT_ACCOUNT&gt;
                &lt;CDK_DEFAULT_REGION&gt;${CDK_DEFAULT_REGION}&lt;/CDK_DEFAULT_REGION&gt;
                &lt;CDK_DEFAULT_USER&gt;${CDK_DEFAULT_USER}&lt;/CDK_DEFAULT_USER&gt;
                &lt;CONTAINER_IMAGE_GROUP&gt;${CONTAINER_IMAGE_GROUP}&lt;/CONTAINER_IMAGE_GROUP&gt;
                &lt;CONTAINER_IMAGE_NAME&gt;${CONTAINER_IMAGE_NAME}&lt;/CONTAINER_IMAGE_NAME&gt;
                &lt;CONTAINER_PORT&gt;${CONTAINER_PORT}&lt;/CONTAINER_PORT&gt;
              &lt;/environmentVariables&gt;
            &lt;/configuration&gt;
          &lt;/execution&gt;
        &lt;/executions&gt;
      &lt;/plugin&gt;
    &lt;/plugins&gt;
  &lt;/build&gt;
&lt;/profile&gt;
</code></pre></div></div>

<p>What this profile is doing is simply running the <code class="language-plaintext highlighter-rouge">deploy-to-aws.sh</code> script via the <code class="language-plaintext highlighter-rouge">exec-maven-plugin</code>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
set -e
...
../customer-service-cdk-common/src/main/resources/scripts/deploy-ecr.sh

echo "&gt;&gt;&gt; Updating kubeconfig..."
aws eks update-kubeconfig --region $CDK_DEFAULT_REGION --name customer-service-cluster

echo "&gt;&gt;&gt; Checking EKS access..."
USER_ARN=$(aws sts get-caller-identity --query 'Arn' --output text)
grant_eks_access "$USER_ARN" "current user"

echo "&gt;&gt;&gt; Granting EKS access to CodeBuild deploy role..."
DEPLOY_ROLE_ARN=$(aws iam list-roles --query 'Roles[?contains(RoleName, `CustomerServiceDeployRole`)].Arn' --output text --region $CDK_DEFAULT_REGION)
if [ -n "$DEPLOY_ROLE_ARN" ]; then
  grant_eks_access "$DEPLOY_ROLE_ARN" "deploy role"
else
  echo "&gt;&gt;&gt; Deploy role not found (pipeline not deployed yet)"
fi

echo "&gt;&gt;&gt; Retrieving database password from Secrets Manager..."
SECRET_ARN=$(jq -r '.DatabaseStack.DatabaseSecretArn' cdk-outputs.json)
DB_PASSWORD=$(aws secretsmanager get-secret-value --secret-id $SECRET_ARN --region $CDK_DEFAULT_REGION --query SecretString --output text | jq -r .password)

echo "&gt;&gt;&gt; Creating Kubernetes secret with database password..."
kubectl create secret generic db-credentials \
  --from-literal=QUARKUS_DATASOURCE_PASSWORD="$DB_PASSWORD" \
  -n customer-service --dry-run=client -o yaml | kubectl apply -f -

echo "&gt;&gt;&gt; Waiting for pods to be ready..."
kubectl wait --for=condition=ready pod -l app=customer-service-api -n customer-service --timeout=300s || true

echo "&gt;&gt;&gt; Deployment complete!"
echo "&gt;&gt;&gt; To access the API locally, run:"
echo "&gt;&gt;&gt;   ./src/main/resources/scripts/test-api.sh"
echo "&gt;&gt;&gt; Then test with:"
echo "&gt;&gt;&gt;   curl http://localhost:8080/q/health"
</code></pre></div></div>

<p>The script above contains several distinct sections. First, it runs the shared script <code class="language-plaintext highlighter-rouge">deploy-ecr.sh</code>, present in the
<code class="language-plaintext highlighter-rouge">customer-service-cdk-common</code> module, which deploys to ECR (<em>Elastic Container Registry</em>) the image
<code class="language-plaintext highlighter-rouge">nicoladuminil/customer-service-api:1.0-SNAPSHOT</code>, built previously, before running the <code class="language-plaintext highlighter-rouge">cdk deploy</code> command, which
deploys all the CloudFormation stacks to AWS. This process is long and complex and, depending on your network speed,
it may take 15 to 20 minutes.</p>

<p>Then the script updates the <code class="language-plaintext highlighter-rouge">.kube/config</code> file with the required EKS cluster parameters, so that the cluster can be
managed further with <code class="language-plaintext highlighter-rouge">kubectl</code>. Next, it grants the <code class="language-plaintext highlighter-rouge">AmazonEKSClusterAdminPolicy</code> to the current user and to the deployer user,
identified by the <code class="language-plaintext highlighter-rouge">CustomerServiceDeployRole</code>. Then it gets the AWS secret containing the PostgreSQL database user
password and creates a Kubernetes secret to be used by the associated pod. Once all the pods have started and are healthy,
the script displays instructions on how to proceed further for testing purposes.</p>

<p>Several tests are available once the deployment process has succeeded. First, an e2e test named <code class="language-plaintext highlighter-rouge">CustomerServiceE2EIT</code>
can be run as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mvn -Pe2e failsafe:integration-test
</code></pre></div></div>

<p>Here is the listing:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class CustomerServiceE2EIT extends AbstractCustomerServiceE2E
{
  private static Process portForwardProcess;

  @BeforeAll
  static void setup() throws Exception
  {
    startPortForward();
    configureEndpoint("localhost:8080");
    waitForServiceReady();
  }

  @AfterAll
  static void teardown()
  {
    if (portForwardProcess != null &amp;&amp; portForwardProcess.isAlive())
    {
      portForwardProcess.destroy();
      System.out.println("&gt;&gt;&gt; Port-forward stopped");
    }
  }

  private static void startPortForward() throws Exception
  {
    System.out.println("&gt;&gt;&gt; Waiting for deployment to be ready...");
    Process waitProcess = new ProcessBuilder(
      "kubectl", "wait", "--for=condition=Available",
      "deployment/customer-service-api-deployment",
      "-n", "customer-service",
      "--timeout=300s"
      ).start();

    if (waitProcess.waitFor() != 0)
      throw new RuntimeException("### Deployment not available");

    System.out.println("&gt;&gt;&gt; Starting port-forward...");
    portForwardProcess = new ProcessBuilder(
      "kubectl", "port-forward",
      "deployment/customer-service-api-deployment",
      "8080:8080",
      "-n", "customer-service"
    ).start();

    Thread.sleep(3000);
    System.out.println("&gt;&gt;&gt; Port-forward established on localhost:8080");
  }
}
</code></pre></div></div>

<p>As you can see, the test extends the <code class="language-plaintext highlighter-rouge">AbstractCustomerServiceE2E</code> class, present in the shared module <code class="language-plaintext highlighter-rouge">customer-service-cdk-common</code>.
This abstract class defines the test cases to be run, as they are the same whatever the cloud runtime, be it ECS or EKS.
The only operation specific to the cloud runtime is starting the port-forward process, implemented by the method
<code class="language-plaintext highlighter-rouge">startPortForward()</code>.</p>

<p>Of course, you can test your API using the Swagger UI, as you did before in dev mode. The only thing you need to do is
to start the port-forward and, for this purpose, the script <code class="language-plaintext highlighter-rouge">test-api.sh</code>, shown below, comes in very handy:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
echo "&gt;&gt;&gt; Starting port-forward to access API locally..."
echo "&gt;&gt;&gt; API will be available at http://localhost:8080"

nohup kubectl port-forward svc/customer-service-api-service -n customer-service 8080:80 2&gt;/dev/null &amp;
</code></pre></div></div>

<p>Then fire your preferred browser, as usual, at http://localhost:8080/q/swagger-ui. Other test scripts, like
<code class="language-plaintext highlighter-rouge">load-distribution-demo.sh</code>, <code class="language-plaintext highlighter-rouge">perf-demo.sh</code>, <code class="language-plaintext highlighter-rouge">pods-monitoring.sh</code>, and <code class="language-plaintext highlighter-rouge">scaling-demo.sh</code>, are available as well;
just run them.</p>

<p>Once you have finished testing, stop the port-forwarding by running:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pkill -f "kubectl port-forward"
</code></pre></div></div>

<p>And don’t forget to clean up your cloud by running:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd aws-cdk-quarkus/customer-service/eks
$ cdk destroy --all --force
</code></pre></div></div>

<p>A script named <code class="language-plaintext highlighter-rouge">destroy-all.sh</code> is also available to perform the whole teardown in a single step.</p>

<h4 id="troubleshooting-in-prod-mode">Troubleshooting in prod mode</h4>

<p>Working in prod mode, i.e. running the API and the associated tests against real AWS infrastructure, is challenging.
The environment is very complex and, at any step, dozens of issues might prevent things from happening as expected. Hence
the need to be able to visualize the current status of the cloud infrastructure and its most recent events.</p>

<p>While the AWS Console is a very useful tool, designed to optimize the visualization of the cloud infrastructure status,
using the <code class="language-plaintext highlighter-rouge">kubectl</code> utility is the most traditional Kubernetes way to check the cluster
health. The AWS CLI can also perform all the AWS Console functions in a less intuitive way, which requires deeper
knowledge but which, being scriptable, might be more practical, less repetitive, and less error-prone. Accordingly, a breviary of <code class="language-plaintext highlighter-rouge">kubectl</code>
and AWS CLI commands can be helpful in order to fix issues.</p>

<h5 id="verifying-the-eks-cluster-deployment">Verifying the EKS cluster deployment</h5>

<p>The following <code class="language-plaintext highlighter-rouge">kubectl</code> commands can be used to verify the EKS cluster deployment:</p>

<ol>
  <li>Check the cluster nodes:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl get nodes -n customer-service
</code></pre></div></div>

<ol start="2">
  <li>Check the pods:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl get pods -n customer-service
</code></pre></div></div>

<ol start="3">
  <li>Check the services:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get services -n customer-service
</code></pre></div></div>

<ol start="4">
  <li>View all logs from pods:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl logs -f -l app=customer-service-api -n customer-service
</code></pre></div></div>

<ol start="5">
  <li>Check rollout status:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl rollout status deployment/customer-service-api-deployment -n customer-service
</code></pre></div></div>

<ol start="6">
  <li>View deployment details:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl describe deployment customer-service-api-deployment -n customer-service
</code></pre></div></div>

<ol start="7">
  <li>Verify ECR repository exists:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws ecr describe-repositories --region eu-west-3
</code></pre></div></div>

<ol start="8">
  <li>Check IAM permissions:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws iam get-role-policy --role-name &lt;build-role-name&gt; --policy-name &lt;policy-name&gt;
</code></pre></div></div>

<ol start="9">
  <li>Verify EKS cluster access:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws eks describe-cluster --name customer-service-cluster --region eu-west-3
</code></pre></div></div>

<ol start="10">
  <li>Check pod events:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kubectl describe pod &lt;pod-name&gt; -n customer-service
</code></pre></div></div>

<ol start="11">
  <li>Verify RDS endpoint:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws rds describe-db-instances --region eu-west-3
</code></pre></div></div>

<h5 id="verifying-th-cicd-pipeline-deployment">Verifying th CI/CD Pipeline deployment</h5>

<ol>
  <li>Get the webhook URL:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>aws codepipeline list-webhooks --region eu-west-3
</code></pre></div></div>

<ol start="2">
  <li>Check pipeline execution status:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ PIPELINE=$(aws codepipeline list-pipelines --region eu-west-3 --query 'pipelines[?starts_with(name, `CiCdPipelineStack`)].name' --output text)
$ aws codepipeline get-pipeline-state --name $PIPELINE --region eu-west-3
</code></pre></div></div>

<ol start="3">
  <li>List recent pipeline executions:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws codepipeline list-pipeline-executions --pipeline-name $PIPELINE --region eu-west-3 --max-items 5
</code></pre></div></div>

<ol start="4">
  <li>Get CodeBuild project names:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws codebuild list-projects --region eu-west-3 --query 'projects[?contains(@, `CustomerService`)]'
</code></pre></div></div>

<ol start="5">
  <li>Check build project status:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ BUILD_PROJECT=$(aws codebuild list-projects --region eu-west-3 --query 'projects[?contains(@, `CustomerServiceBuild`)]' --output text)
$ aws codebuild batch-get-projects --names $BUILD_PROJECT --region eu-west-3
</code></pre></div></div>

<ol start="6">
  <li>List recent builds:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws codebuild list-builds-for-project --project-name $BUILD_PROJECT --region eu-west-3 --max-items 5
</code></pre></div></div>

<ol start="7">
  <li>Get detailed build information:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ BUILD_ID=$(aws codebuild list-builds-for-project --project-name $BUILD_PROJECT --region eu-west-3 --query 'ids[0]' --output text)
$ aws codebuild batch-get-builds --ids $BUILD_ID --region eu-west-3
</code></pre></div></div>

<ol start="8">
  <li>View build logs:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws logs tail /aws/codebuild/$BUILD_PROJECT --since 30m --follow --region eu-west-3
</code></pre></div></div>

<ol start="9">
  <li>Check deploy project status:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ DEPLOY_PROJECT=$(aws codebuild list-projects --region eu-west-3 --query 'projects[?contains(@, `CustomerServiceDeploy`)]' --output text)
$ aws codebuild list-builds-for-project --project-name $DEPLOY_PROJECT --region eu-west-3 --max-items 5
</code></pre></div></div>

<ol start="10">
  <li>View deploy logs:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws logs tail /aws/codebuild/$DEPLOY_PROJECT --since 30m --follow --region eu-west-3
</code></pre></div></div>

<ol start="11">
  <li>Verify GitHub OAuth token secret:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws secretsmanager describe-secret --secret-id github-oauth-token --region eu-west-3
</code></pre></div></div>

<ol start="12">
  <li>Check CodeBuild service role permissions:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ BUILD_ROLE=$(aws iam list-roles --query 'Roles[?contains(RoleName, `CustomerServiceBuildRole`)].RoleName' --output text)
$ aws iam list-attached-role-policies --role-name $BUILD_ROLE
$ aws iam list-role-policies --role-name $BUILD_ROLE
</code></pre></div></div>

<ol start="13">
  <li>Check deploy role EKS access:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ DEPLOY_ROLE_ARN=$(aws iam list-roles --query 'Roles[?contains(RoleName, `CustomerServiceDeployRole`)].Arn' --output text)
$ aws eks list-access-entries --cluster-name customer-service-cluster --region eu-west-3
</code></pre></div></div>

<h5 id="verifying-the-monitor-pipeline">Verifying the Monitor Pipeline</h5>

<ol>
  <li>List the existing pipelines:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws codepipeline list-pipelines --region eu-west-3 --query 'pipelines[?starts_with(name, `CiCdPipelineStack`)].name' --output text
CiCdPipelineStack-CustomerServicePipelineB3195C39-t9UMJeMAQlDN
</code></pre></div></div>

<ol start="2">
  <li>Get the name of the build project used by the pipeline:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws codepipeline get-pipeline --name $PIPELINE --region eu-west-3 --query 'pipeline.stages[?name==`Build`].actions[0].configuration.ProjectName' --output text
CustomerServiceBuild0A9B7C3-YIk2RDA0JP1B
</code></pre></div></div>

<ol start="3">
  <li>View the build project logs:</li>
</ol>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>aws logs tail /aws/codebuild/CustomerServiceBuild0A9B7C3-YIk2RDA0JP1B --since 30m --follow --region eu-west-3
</code></pre></div></div>

<h2 id="conclusions">Conclusions</h2>

<p>This project demonstrates a production-ready approach to deploying containerized Quarkus applications on AWS EKS with Fargate, leveraging Infrastructure as Code through the AWS CDK. By combining Kubernetes orchestration with serverless compute, we achieve operational simplicity without sacrificing the flexibility and portability that Kubernetes provides.</p>

<p>The automated CI/CD pipeline ensures consistent deployments from code commit to production, while the comprehensive monitoring and troubleshooting capabilities enable reliable operations at scale. Whether you’re migrating from ECS to EKS or building cloud-native applications from scratch, this architecture provides a solid foundation for modern microservices deployment on AWS.</p>

<p><a href="https://github.com/nicolasduminil/aws-cdk-quarkus.git">Source code</a></p>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Quarkus" /><category term="AWS" /><category term="EKS" /><category term="CodeBuild" /><category term="CodeDeploy" /><category term="ECR" /><category term="Fargate" /><category term="DZone" /><summary type="html"><![CDATA[In a recent post, I have demonstrated the benefits of using AWS ECS (Elastic Container Service), with Quarkus and the CDK (Cloud Development Kit), in order to implement an API for the customer management.]]></summary></entry><entry><title type="html">Building a Containerized Quarkus API on AWS ECS/Fargate with CDK</title><link href="https://nicolasduminil.github.io/posts-archive/customer-service-ecs/" rel="alternate" type="text/html" title="Building a Containerized Quarkus API on AWS ECS/Fargate with CDK" /><published>2025-11-16T00:00:00+00:00</published><updated>2025-11-16T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/customer-service-ecs</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/customer-service-ecs/"><![CDATA[<p>In a three articles series published recently on this site (<a href="https://dzone.com/articles/aws-cdk-infrastructure-as-abstract-data-types">Part 1</a>,
<a href="https://dzone.com/articles/aws-cdk-infrastructure-as-abstract-data-types-pt-2">Part 2</a>, <a href="https://dzone.com/articles/aws-cdk-infrastructure-as-abstract-data-types-3">Part 3</a>),
I’ve been demonstrating the power of the AWS Cloud Development Kit (CDK) in the
Infrastructure as Code (IaC) area, especially when coupled with the ubiquitous
Java and its supersonic / subatomic cloud-native stack: Quarkus.</p>

<p>While focusing on the CDK fundamentals in Java, like <code class="language-plaintext highlighter-rouge">Stack</code> and <code class="language-plaintext highlighter-rouge">Construct</code>,
together with their Quarkus implementations, this series was a bit frugal as far
as the infrastructure elements were concerned. Indeed, for the sake of clarity
and simplicity, the infrastructure used to illustrate how to use the CDK with
Java and Quarkus was deliberately minimal. Hence the idea of a new series, of
which this article is the first, less concerned with CDK internals and more
dedicated to the infrastructure itself.</p>

<p>This first article demonstrates how to build and deploy a modern, cloud-native
customer management system using Quarkus, the AWS CDK, and ECS/Fargate. It covers
the complete journey from application development to infrastructure as code,
containerization, and comprehensive testing strategies. Once again, it doesn’t
emphasize the exposed API and its possible business value, but rather the
infrastructure elements required to provide the global solution in practice.</p>

<h2 id="architecture-overview">Architecture Overview</h2>

<p>The diagram below shows an overview of the project’s architecture:</p>

<p><img src="/assets/images/architecture.png" alt="Architecture Diagram" /></p>

<p>The presented solution implements the following architecture layers:</p>
<ul>
  <li>Presentation Layer: a Quarkus REST API exposing, as an example, a couple of simple customer management endpoints;</li>
  <li>Application Layer: a Quarkus main application running on ECS Fargate;</li>
  <li>Data Layer: PostgreSQL (RDS) for persistence and Redis (ElastiCache) for caching;</li>
  <li>Infrastructure Layer (IaC): the AWS CDK-managed cloud infrastructure implemented in Quarkus.</li>
</ul>

<p>Let’s now look at these layers in more detail.</p>

<h3 id="the-presentation-layer">The Presentation Layer</h3>

<p>This layer is a Quarkus REST API which exposes a couple of simple endpoints to
CRUD customers. Rather than a real business API, this one is an example illustrating
how containerized applications can be deployed and hosted in AWS ECS (Elastic
Container Service).</p>

<p>In order to separate concerns, our Maven project is structured in two modules:</p>

<ul>
  <li>the <code class="language-plaintext highlighter-rouge">customer-service-ecs-api</code> module which implements the Quarkus REST API to be deployed and executed as a Docker image in the AWS ECS service;</li>
  <li>the <code class="language-plaintext highlighter-rouge">customer-service-ecs-cdk</code> module which bootstraps the CDK and creates the required elements in order to implement the cloud infrastructure presented in the figure above.</li>
</ul>

<p>The Presentation Layer is contained in the <code class="language-plaintext highlighter-rouge">customer-service-ecs-api</code> module.
The exposed REST API is simple and consists of the following endpoints to CRUD
<code class="language-plaintext highlighter-rouge">Customer</code> entities:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">GET /customers</code>: returns a response containing the list of the currently existing customers;</li>
  <li><code class="language-plaintext highlighter-rouge">POST /customers</code>: creates a new customer by persisting the entity passed in the request’s body;</li>
  <li><code class="language-plaintext highlighter-rouge">PUT /customers/{id}</code>: updates the existing customer having the ID equal to the one passed as the <code class="language-plaintext highlighter-rouge">id</code> parameter. If such a customer doesn’t exist, then HTTP 404 is returned;</li>
  <li><code class="language-plaintext highlighter-rouge">GET /customers/{id}</code>: returns a response containing the customer having the ID equal to the one passed as the <code class="language-plaintext highlighter-rouge">id</code> parameter. If such a customer doesn’t exist, then HTTP 404 is returned;</li>
  <li><code class="language-plaintext highlighter-rouge">DELETE /customers/{id}</code>: deletes the customer having the ID equal to the one passed as the <code class="language-plaintext highlighter-rouge">id</code> parameter. If such a customer doesn’t exist, then HTTP 404 is returned.</li>
</ul>

<p>The endpoints above are implemented in the class <code class="language-plaintext highlighter-rouge">CustomerResource</code>, which is a
CDI (<em>Contexts and Dependency Injection</em>) bean annotated with <code class="language-plaintext highlighter-rouge">@ApplicationScoped</code>. This
is a very realistic example of using CDI in AWS-deployed infrastructure elements.</p>
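
<p>As an illustration, here is a minimal, abridged sketch of what such a resource class could look like (only the
collection endpoint is shown; the real <code class="language-plaintext highlighter-rouge">CustomerResource</code> implements all five endpoints):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A minimal sketch (abridged): the CustomerService CDI bean is injected and
// performs the actual work.
@ApplicationScoped
@Path("/customers")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class CustomerResource
{
  @Inject
  CustomerService customerService;

  @GET
  public Response getCustomers()
  {
    return Response.ok(customerService.findAll()).build();
  }
}
</code></pre></div></div>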

<h3 id="the-application-layer">The Application Layer</h3>

<p>This layer is the “brain” of the system, the place where the actual customer
management business logic resides, separated from how it is exposed by the
presentation layer. In our project it is included in the module
<code class="language-plaintext highlighter-rouge">customer-service-ecs-api</code> as well and it consists of:</p>

<ul>
  <li>the <code class="language-plaintext highlighter-rouge">Customer</code> entity which is the domain model representing the business object;</li>
  <li>the <code class="language-plaintext highlighter-rouge">CustomerService</code> class containing the core business logic to CRUD operations;</li>
  <li>the caching strategies using Redis;</li>
  <li>the transaction management;</li>
  <li>the business rules and validation logic.</li>
</ul>

<p>We mentioned previously that the <code class="language-plaintext highlighter-rouge">CustomerResource</code> class, as the pillar of the
presentation layer, is a CDI bean and, as such, it injects another CDI bean, the
<code class="language-plaintext highlighter-rouge">CustomerService</code> class, which performs the effective CRUD operations on <code class="language-plaintext highlighter-rouge">Customer</code>
business objects, using Quarkus Panache. The listing below shows the <code class="language-plaintext highlighter-rouge">Customer</code>
entity:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Entity
@Table(name = "customers")
public class Customer extends PanacheEntity
{
  @NotBlank
  public String firstName;
  @NotBlank
  public String lastName;
  @Email
  @NotBlank
  public String email;
  public String phone;
  public String address;

  public Customer(){}
  ...
}
</code></pre></div></div>

<p>As you can see, the validation rules are expressed using Jakarta Validation
constraints.</p>
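
<p>For example (a sketch in the style of the integration tests shown later, relying on the <code class="language-plaintext highlighter-rouge">@Valid</code> annotation that the
endpoints use), posting a customer with a blank first name and a malformed email is rejected with HTTP 400 before ever
reaching the service layer:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A sketch: the Jakarta Validation constraints are enforced at the REST
// boundary, so an invalid payload never reaches the service layer.
@Test
void testCreateInvalidCustomer()
{
  given()
    .contentType(ContentType.JSON)
    .body("""
      {
        "firstName": "",
        "lastName": "Doe",
        "email": "not-an-email"
      }
    """)
    .when()
    .post("/customers")
    .then()
    .statusCode(400);
}
</code></pre></div></div>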

<p>Given this very simplified representation of a customer, the <code class="language-plaintext highlighter-rouge">CustomerService</code>
class uses the <code class="language-plaintext highlighter-rouge">PanacheEntity</code> methods to CRUD customers, as shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
public class CustomerService
{
  @Inject
  RedisDataSource redisDS;

  @Transactional
  public Customer create(Customer customer)
  {
    customer.persist();
    invalidateCache("customers:all");
    return customer;
  }

  public List&lt;Customer&gt; findAll()
  {
    return Customer.listAll();
  }

  public Customer findById(Long id)
  {
    ValueCommands&lt;String, Customer&gt; cache = redisDS.value(Customer.class);
    Customer cached = cache.get("customer:" + id);
    return Optional.ofNullable(cached).orElseGet(() -&gt; {
      Customer customer = Customer.findById(id);
      if (customer != null)
        cache.setex("customer:" + id, 300, customer);
      return customer;
    });
  }

  @Transactional
  public Customer update(Long id, Customer updates)
  {
    return Optional.ofNullable((Customer) Customer.findById(id))
      .map(customer -&gt;
      {
        customer.updateFrom(updates);
        invalidateCache("customer:" + id);
        invalidateCache("customers:all");
        return customer;
      })
      .orElse(null);
  }

  @Transactional
  public boolean delete(Long id)
  {
    boolean deleted = Customer.deleteById(id);
    if (deleted)
    {
      invalidateCache("customer:" + id);
      invalidateCache("customers:all");
    }
    return deleted;
  }

  private void invalidateCache(String key)
  {
    redisDS.key().del(key);
  }
}
</code></pre></div></div>

<p>Nothing very spectacular here, just a usual Quarkus Panache service to CRUD
customers. As you can see, the transaction management that we mentioned previously
is implemented by means of the <code class="language-plaintext highlighter-rouge">@Transactional</code> annotation, provided by
the Jakarta Transactions specification and implemented by Quarkus.</p>

<p>The application layer isn’t invoked directly but through the API endpoints, in the
<code class="language-plaintext highlighter-rouge">CustomerResource</code> class, for example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  ...
  @POST
  public Response create(@Valid Customer customer)
  {
    return Response.status(Response.Status.CREATED)
      .entity(customerService.create(customer)).build();
  }
  ...
</code></pre></div></div>

<p>The endpoint above is invoked through HTTP by a REST client and, in turn, it calls
<code class="language-plaintext highlighter-rouge">CustomerService</code>. And talking about REST clients, we also provide a MicroProfile
(MP) REST Client, which aims to facilitate the integration by giving the API
consumers an easy and practical way to invoke it. Look at the interface
<code class="language-plaintext highlighter-rouge">CustomerClient</code> below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@RegisterRestClient(configKey = "customers-api")
@Path("/customers")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public interface CustomerClient
{
  @POST
  Response createCustomer(Customer customer);
  @GET
  @Path("/{id}")
  Response getCustomer(@PathParam("id") Long id);
  @GET
  Response getCustomers();
  @PUT
  @Path("/{id}")
  Response updateCustomer(@PathParam("id") Long id, @Valid Customer customer);
  @DELETE
  @Path("/{id}")
  Response delete(@PathParam("id") Long id);
}
</code></pre></div></div>

<p>For those not yet familiar with the MP REST Client specification and its Quarkus
implementation, this interface is all you need in order to probe your API. I’ll
come back to it later when we discuss testing.</p>
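
<p>To give an idea of how a consumer would use it, here is a minimal sketch (the consumer class name is hypothetical; the
base URL is bound to the <code class="language-plaintext highlighter-rouge">customers-api</code> config key, e.g. via
<code class="language-plaintext highlighter-rouge">quarkus.rest-client.customers-api.url</code>):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A usage sketch (hypothetical consumer class): the MP REST Client is
// injected like any other CDI bean and invoked as a plain Java interface.
@ApplicationScoped
public class CustomerClientConsumer
{
  @Inject
  @RestClient
  CustomerClient customerClient;

  public Customer fetchCustomer(Long id)
  {
    // Performs an HTTP GET /customers/{id} under the hood
    return customerClient.getCustomer(id).readEntity(Customer.class);
  }
}
</code></pre></div></div>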

<p>Let’s look now at the infrastructure layer.</p>

<h3 id="the-infrastructure-layer">The Infrastructure Layer</h3>

<p>This layer is the subject of the project’s second module: <code class="language-plaintext highlighter-rouge">customer-service-ecs-cdk</code>.
It consists of a Quarkus main class, named <code class="language-plaintext highlighter-rouge">CustomerManagementMain</code>, shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@QuarkusMain
public class CustomerManagementMain
{
  public static void main(String... args)
  {
    Quarkus.run(CustomerManagementApp.class, args);
  }
}
</code></pre></div></div>

<p>This class is the entry point that bootstraps the Quarkus CDK application.
It uses <code class="language-plaintext highlighter-rouge">@QuarkusMain</code> to define the main method and delegates to the Quarkus runtime
to run the <code class="language-plaintext highlighter-rouge">CustomerManagementApp</code> class, shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
public class CustomerManagementApp implements QuarkusApplication
{
  private CustomerManagementStack customerManagementStack;
  private App app;

  @Inject
  public CustomerManagementApp (App app, CustomerManagementStack customerManagementStack)
  {
    this.app = app;
    this.customerManagementStack = customerManagementStack;
  }

  @Override
  public int run(String... args) throws Exception
  {
    Tags.of(app).add("project", "Containerized Customer Management Application on ECS/Fargate");
    Tags.of(app).add("environment", "development");
    Tags.of(app).add("application", "CustomerManagementApp");
    customerManagementStack.initStack();
    app.synth();
    return 0;
  }
}
</code></pre></div></div>

<p>This class is the main application class, as opposed to the Quarkus main class,
and it implements <code class="language-plaintext highlighter-rouge">QuarkusApplication</code>. It orchestrates the CDK stack creation by:</p>

<ul>
  <li>injecting the CDK App and CustomerManagementStack via CDI;</li>
  <li>adding global tags to the CDK app for project identification;</li>
  <li>initializing the stack infrastructure;</li>
  <li>synthesizing the CloudFormation templates.</li>
</ul>

<p>The class <code class="language-plaintext highlighter-rouge">CustomerManagementStack</code>, too long to be reproduced here, defines the
CDK stack to be deployed. This stack consists of the following AWS infrastructure:</p>

<ul>
  <li>a VPC (<em>Virtual Private Cloud</em>) with a public and a private subnet across multiple AZs (<em>Availability Zones</em>);</li>
  <li>a NAT (<em>Network Address Translation</em>) gateway providing outbound internet access for private resources;</li>
  <li>an RDS (<em>Relational Database Service</em>) PostgreSQL database with automated backups and secrets management;</li>
  <li>a Redis cluster using AWS ElastiCache for in-memory caching and performance optimization;</li>
  <li>an ECS (<em>Elastic Container Service</em>) Fargate serverless container hosting platform;</li>
  <li>an ALB (<em>Application Load Balancer</em>) for traffic distribution and health checking;</li>
  <li>a Secrets Manager for secure credential storage and rotation;</li>
  <li>all the required security groups and network-level access controls;</li>
  <li>a CloudWatch log group for monitoring;</li>
  <li>the required IAM (<em>Identity and Access Management</em>) roles for fine-grained permission management.</li>
</ul>

<p>The Java CDK provides the familiar Builder pattern, which makes it easy to instantiate
complex structures and class hierarchies. The code excerpt below provides an example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    ApplicationLoadBalancedFargateService fargateService =
      ApplicationLoadBalancedFargateService.Builder.create(this, "CustomerService")
        .cluster(cluster)
        .cpu(config.ecs().cpu())
        .memoryLimitMiB(config.ecs().memoryLimitMiB())
        .desiredCount(config.ecs().desiredCount())
        .taskImageOptions(ApplicationLoadBalancedTaskImageOptions.builder()
          .image(ContainerImage.fromRegistry(imageName))
          .containerPort(containerPort)
          .logDriver(LogDriver.awsLogs(AwsLogDriverProps.builder()
            .logGroup(logGroup)
            .streamPrefix(config.logging().streamPrefix())
            .build()))
          .environment(Map.of(
            "QUARKUS_DATASOURCE_JDBC_URL",
              "jdbc:postgresql://" + database.getInstanceEndpoint().getHostname() +
              ":5432/" + config.database().databaseName(),
            "QUARKUS_REDIS_HOSTS", "redis://" + redis.getPrimaryEndpoint() + ":6379"
          ))
          .secrets(Map.of(
            "QUARKUS_DATASOURCE_USERNAME",
              Secret.fromSecretsManager(database.getSecret(), "username"),
            "QUARKUS_DATASOURCE_PASSWORD",
              Secret.fromSecretsManager(database.getSecret(), "password")
          ))
          .build())
        .publicLoadBalancer(true)
        .healthCheckGracePeriod(Duration.seconds(config.ecs().healthCheckGracePeriodSeconds()))
        .serviceName(config.ecs().serviceName())
        .minHealthyPercent(100)
        .build();
</code></pre></div></div>

<p>This code sequence uses different builders in order to instantiate a full ECS
Fargate serverless hosting platform. Given the high number of parameters that
this process requires, the <code class="language-plaintext highlighter-rouge">InfrastructureConfig</code> interface, shown below, provides
a type-safe Quarkus <code class="language-plaintext highlighter-rouge">@ConfigMapping</code>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ConfigMapping(prefix = "cdk.infrastructure")
public interface InfrastructureConfig
{
  VpcConfig vpc();
  EcsConfig ecs();
  DatabaseConfig database();
  RedisConfig redis();
  LoggingConfig logging();
  interface VpcConfig
  {
    @WithDefault("2")
    int maxAzs();
    @WithDefault("1")
    int natGateways();
  }
  interface EcsConfig
  {
    @WithDefault("256")
    int cpu();
    @WithDefault("512")
    int memoryLimitMiB();
    @WithDefault("2")
    int desiredCount();
    @WithDefault("60")
    int healthCheckGracePeriodSeconds();
    @WithDefault("customer-service")
    String serviceName();
  }
  interface DatabaseConfig
  {
    @WithDefault("BURSTABLE3")
    String instanceClass();
    @WithDefault("MICRO")
    String instanceSize();
    @WithDefault("customers")
    String databaseName();
    @WithDefault("postgres")
    String secretUsername();
    @WithDefault("false")
    boolean deletionProtection();
  }
  interface RedisConfig
  {
    @WithDefault("cache.t3.micro")
    String nodeType();
    @WithDefault("1")
    int numNodes();
    @WithDefault("customer-cache")
    String clusterId();
    @WithDefault("Redis cache for customer service")
    String description();
  }
  interface LoggingConfig
  {
    @WithDefault("/ecs/customer-service")
    String logGroupName();
    @WithDefault("ONE_WEEK")
    String retentionDays();
    @WithDefault("ecs")
    String streamPrefix();
  }
}
</code></pre></div></div>

<p>This <code class="language-plaintext highlighter-rouge">@ConfigMapping</code> defines nested configuration structures for the different
infrastructure components, uses <code class="language-plaintext highlighter-rouge">@WithDefault</code> annotations for default values, and
provides compile-time configuration validation while organizing settings into
logical groups like VPC, ECS, database, Redis, and logging.</p>
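
<p>As a usage illustration, here is a minimal sketch (the consumer class name is hypothetical) showing how this mapping is
consumed: the interface is injected like any other CDI bean and, with no overrides in <code class="language-plaintext highlighter-rouge">application.properties</code>, the
<code class="language-plaintext highlighter-rouge">@WithDefault</code> values apply:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A minimal sketch (hypothetical class): reading the type-safe configuration
@ApplicationScoped
public class InfrastructureConfigConsumer
{
  @Inject
  InfrastructureConfig config;

  public void logEffectiveConfig()
  {
    System.out.println("ECS cpu: " + config.ecs().cpu());                  // 256 by default
    System.out.println("Database: " + config.database().databaseName());  // "customers" by default
  }
}
</code></pre></div></div>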

<h2 id="cdk-configuration-and-deployment">CDK Configuration and Deployment</h2>

<p>AWS CDK uses the <code class="language-plaintext highlighter-rouge">cdk.json</code> file as its primary configuration mechanism to define
how the CDK application should be executed and deployed. This file serves as the
entry point that tells the CDK toolkit how to run the infrastructure application.</p>

<p>Below is the <code class="language-plaintext highlighter-rouge">cdk.json</code> file used for this project:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "app": "java -jar target/quarkus-app/quarkus-run.jar",
  "context": {
    "aws-cdk:enableDiffNoFail": true,
    "@aws-cdk/aws-ecs:minHealthyPercent": true
  },
  "requireApproval": "never",
  "output": "cdk.out",
  "progress": "bar",
  "ci": true,
  "verbose": false,
  "acknowledgements": {
    "@aws-cdk/aws-ecs:ecrImageRequiresPolicy": true,
    "@aws-cdk/aws-ecs:minHealthyPercent": true,
    "34892": true
  },
  "notices": false
}
</code></pre></div></div>

<p>Looking at this file, several key aspects deserve attention:</p>

<ul>
  <li>The <code class="language-plaintext highlighter-rouge">app</code> element: defines the command which executes the application. Our application being a Quarkus one, the <code class="language-plaintext highlighter-rouge">app</code> element reflects that by defining the standard way to run a Quarkus JVM application.</li>
  <li>The <code class="language-plaintext highlighter-rouge">context</code> element: stores environment-specific settings. In our case:
    <ul>
      <li><code class="language-plaintext highlighter-rouge">"aws-cdk:enableDiffNoFail": true</code> controls the behavior of the <code class="language-plaintext highlighter-rouge">cdk diff</code> command such that to continue the execution even if it encounters errors, for example missing permissions to describe resources, etc.</li>
      <li><code class="language-plaintext highlighter-rouge">"@aws-cdk/aws-ecs:minHealthyPercent": true</code> is an ECS specific flag that enables the <code class="language-plaintext highlighter-rouge">minHealthyPercent</code> property for ECS services. Here, it allows setting the minimum percentage of healthy tasks during deployments (e.g., 50% for rolling updates).</li>
    </ul>
  </li>
  <li>The feature flags: control CDK behavior and enables/disables specific features. In our case:
    <ul>
      <li><code class="language-plaintext highlighter-rouge">"requireApproval": "never"</code> says that the CDK will never prompt for manual approval during <code class="language-plaintext highlighter-rouge">cdk deploy</code> operations.</li>
      <li><code class="language-plaintext highlighter-rouge">"output": "cdk.out"</code> sets the directory where the CloudFormation templates, generated by the <code class="language-plaintext highlighter-rouge">cdk synth</code> command, will be stored.</li>
      <li><code class="language-plaintext highlighter-rouge">"progress": "bar"</code> shows progress bar during CDK operations instead of detailed logs.</li>
      <li><code class="language-plaintext highlighter-rouge">"ci": true</code> optimizes output for CI/CD environments (less interactive, more structured).</li>
      <li><code class="language-plaintext highlighter-rouge">"verbose": false</code> suppresses detailed debug information during execution.</li>
      <li><code class="language-plaintext highlighter-rouge">"notices": false</code> disables CDK notices about new features or deprecations.</li>
    </ul>
  </li>
  <li>The acknowledgements:
    <ul>
      <li><code class="language-plaintext highlighter-rouge">"@aws-cdk/aws-ecs:ecrImageRequiresPolicy": true</code> acknowledges that ECR images require IAM policies for access;</li>
      <li><code class="language-plaintext highlighter-rouge">"@aws-cdk/aws-ecs:minHealthyPercent": true</code> confirms understanding of ECS health check behavior</li>
      <li><code class="language-plaintext highlighter-rouge">"34892": true</code> acknowledges specific CDK issue/warning (likely related to a GitHub issue number)</li>
    </ul>
  </li>
</ul>

<p>This <code class="language-plaintext highlighter-rouge">cdk.json</code> file is used by the CDK toolkit for:</p>

<ul>
  <li>Synthesis: executing the app command to generate CloudFormation templates in <code class="language-plaintext highlighter-rouge">cdk.out</code>;</li>
  <li>Deployment: using the synthesized templates to deploy the infrastructure to AWS;</li>
  <li>Context management: caching AWS account/region specific information for consistent deployments.</li>
</ul>

<h2 id="running-and-testing">Running and testing</h2>

<p>There are several test categories that come with the project, as follows:</p>

<ul>
  <li>integration tests;</li>
  <li>system integration tests;</li>
  <li>Open API / Swagger tests;</li>
  <li>end-to-end tests.</li>
</ul>

<p>As you can see, we don’t provide unit tests because we consider this category
of tests completely useless. But this is another topic, which is beyond the
scope of this article.</p>

<h3 id="the-integration-tests">The Integration tests</h3>

<p>These tests aim at testing the complete REST API layer with the Quarkus runtime.
They use the test infrastructure automatically provided by the Quarkus Dev Services,
with an in-memory H2 database, in order to validate the API contracts, the
requests/responses, and the business logic integration.</p>

<p>The class <code class="language-plaintext highlighter-rouge">CustomerResourceTest</code> is one test in this category. It is executed
by the <code class="language-plaintext highlighter-rouge">maven-surefire-plugin</code> in the Maven test phase, hence its naming
convention: <code class="language-plaintext highlighter-rouge">*Test</code>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@QuarkusTest
public class CustomerResourceTest
{
  @Test
  void testCreateCustomer()
  {
    given()
      .contentType(ContentType.JSON)
      .body("""
        {
          "firstName": "John",
          "lastName": "Doe",
          "email": "john@example.com"
        }
      """)
      .when()
      .post("/customers")
      .then()
      .statusCode(201)
      .body("firstName", equalTo("John"));
  }
  ...
}
</code></pre></div></div>

<p>We reproduced here only one test method, the one creating new customers. Feel
free to look extensively at this class, which uses the RESTassured library as a
REST client.</p>

<p>Another integration test is the class <code class="language-plaintext highlighter-rouge">CloudFormationTemplateIT</code>. As opposed to
the previous one, this class is executed by the <code class="language-plaintext highlighter-rouge">maven-failsafe-plugin</code> in the
Maven <code class="language-plaintext highlighter-rouge">verify</code> phase. The reason is that it needs to run after the <code class="language-plaintext highlighter-rouge">cdk synth</code>
command, executed by the <code class="language-plaintext highlighter-rouge">exec-maven-plugin</code>. This command synthesizes the required
AWS infrastructure in the form of a CloudFormation template, stored in the directory
<code class="language-plaintext highlighter-rouge">cdk.out</code>. The test class then checks the files in this directory for the presence
and the validity of these AWS infrastructure elements.</p>
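
<p>Without reproducing the whole class, a minimal sketch of such a check (the assertions are hypothetical; the real
<code class="language-plaintext highlighter-rouge">CloudFormationTemplateIT</code> may verify more) could look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A sketch: read the synthesized template from cdk.out and assert that the
// expected CloudFormation resource types are present.
public class CloudFormationTemplateSketchIT
{
  @Test
  void templateContainsCoreResources() throws Exception
  {
    String template = Files.readString(
      Path.of("cdk.out/QuarkusCustomerManagementStack.template.json"));
    assertThat(template).contains("AWS::EC2::VPC");
    assertThat(template).contains("AWS::RDS::DBInstance");
    assertThat(template).contains("AWS::ECS::Cluster");
  }
}
</code></pre></div></div>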

<p>In order to run the integration tests:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd aws-cdk-quarkus
$ mvn clean verify
</code></pre></div></div>

<p>The Maven command above will execute, in addition to the integration tests, the
system integration tests.</p>

<h3 id="the-system-integration-tests">The System Integration Tests</h3>

<p>These tests are a more realistic version of the integration ones. As opposed to
the former, which rely on the Quarkus runtime’s test infrastructure, these tests
are performed against a local, yet production-like, containerized infrastructure.
During the Maven <code class="language-plaintext highlighter-rouge">verify</code> phase, the <code class="language-plaintext highlighter-rouge">exec-maven-plugin</code> executes the <code class="language-plaintext highlighter-rouge">docker-compose.yaml</code> file against
the currently running Docker daemon and starts all the required services, as follows:</p>

<ul>
  <li>a PostgreSQL database;</li>
  <li>the <code class="language-plaintext highlighter-rouge">adminer</code> tool to administrate the database;</li>
  <li>a Redis node for in-memory caching purposes;</li>
  <li>the <code class="language-plaintext highlighter-rouge">redis-insight</code> tool to administrate the local Redis instance;</li>
  <li>the customer management API as a Quarkus application.</li>
</ul>

<p>Once all this infrastructure is started, the <code class="language-plaintext highlighter-rouge">CustomerResourceIT</code> class uses the
<code class="language-plaintext highlighter-rouge">CustomerClient</code> to test the API running locally. Using the MP REST Client isn’t
mandatory, of course; other REST clients, like RESTassured or simply the Jakarta
REST Client, can be used. However, the MP REST Client is, in my opinion, the
simplest and most effective solution.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@QuarkusTest
@TestProfile(IntegrationTestProfile.class)
public class CustomerResourceIT
{
  @Inject
  @RestClient
  CustomerClient customerClient;

  @Test
  void testCreateCustomer()
  {
    Customer customer = new Customer("John", "Doe", "john@example.com",
      "000000000000", "123 Main St");
    Response response = customerClient.createCustomer(customer);
    assertThat(response.getStatus()).isEqualTo(201);
    customer = response.readEntity(Customer.class);
    assertThat(customer.firstName).isEqualTo("John");
    assertThat(customer.lastName).isEqualTo("Doe");
    assertThat(customer.email).isEqualTo("john@example.com");
  }
  ...
}
</code></pre></div></div>

<p>As you can see, the test uses a custom Quarkus test profile, named
<code class="language-plaintext highlighter-rouge">IntegrationTestProfile</code>, shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class IntegrationTestProfile implements QuarkusTestProfile
{
  @Override
  public Map&lt;String, String&gt; getConfigOverrides()
  {
    return Map.of(
      "quarkus.datasource.db-kind", "postgresql",
      "quarkus.datasource.username", "nicolas",
      "quarkus.datasource.password", "dev123",
      "quarkus.datasource.jdbc.url", "jdbc:postgresql://localhost:5432/customers",
      "quarkus.datasource.devservices.enabled", "false",
      "quarkus.redis.hosts", "redis://localhost:6379",
      "quarkus.redis.devservices.enabled", "false"
    );
  }
}
</code></pre></div></div>

<h3 id="the-swagger-tests">The Swagger tests</h3>

<p>The module <code class="language-plaintext highlighter-rouge">customer-service-ecs-api</code> exposes a Swagger interface that you can
use to manually test the API. Just fire your preferred browser at
http://localhost:8080/q/swagger-ui and you’ll be presented with this:</p>

<p><img src="/assets/images/swagger.png" alt="swagger" /></p>

<p>This will allow you to probe your API.</p>

<h3 id="the-e2e-tests">The E2E Tests</h3>

<p>The last test category is the end-to-end one. These tests have the particularity
of being performed against real AWS services (ECS, RDS, ElastiCache, etc.).
The class <code class="language-plaintext highlighter-rouge">CustomerServiceE2EIT</code> is such a test. It’s similar to the integration
test <code class="language-plaintext highlighter-rouge">CustomerResourceTest</code> in the sense that it uses RESTassured to probe the
API but, instead of invoking local endpoints, it invokes the endpoints of the real
API deployed on the AWS Fargate platform and running as a Docker container.</p>

<p>Everything happens in the code sequence below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  @BeforeAll
  static void setup()
  {
    cfClient = CloudFormationClient.builder()
     .region(Region.EU_WEST_3)
     .build();
    String loadBalancerUrl = getStackOutput("QuarkusCustomerManagementStack", "CustomerServiceLoadBalancerDNS");
    RestAssured.baseURI = "http://" + loadBalancerUrl;
    RestAssured.port = 80;
    System.out.println("&gt;&gt;&gt; Connecting to: " + RestAssured.baseURI + ":" + RestAssured.port);
    waitForServiceReady();
  }
</code></pre></div></div>

<p>This method orchestrates the connection to the AWS environment. It creates a
CloudFormation client configured for the EU-WEST-3 region, where the infrastructure
is deployed. Then it queries the deployed CloudFormation stack to retrieve the
ALB DNS name, eliminating hardcoded URLs and ensuring that the tests always connect to
the correct deployed instance. The RESTassured client is configured with the
dynamically discovered ALB URL and the standard HTTP port (80). The rest
is very similar to what we did in <code class="language-plaintext highlighter-rouge">CustomerResourceTest</code> and <code class="language-plaintext highlighter-rouge">CustomerResourceIT</code>.
The method <code class="language-plaintext highlighter-rouge">waitForServiceReady()</code> is, however, new and ensures that the ECS service
is fully operational before running the tests, thus preventing false failures
due to deployment timing.</p>
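
<p>The helper <code class="language-plaintext highlighter-rouge">getStackOutput()</code> isn’t shown above; a possible implementation (a sketch, assuming the AWS SDK v2
CloudFormation model classes and the <code class="language-plaintext highlighter-rouge">cfClient</code> created in <code class="language-plaintext highlighter-rouge">setup()</code>) is:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// A sketch: look up a named output of the deployed CloudFormation stack
private static String getStackOutput(String stackName, String outputKey)
{
  DescribeStacksResponse response = cfClient.describeStacks(
    DescribeStacksRequest.builder().stackName(stackName).build());
  return response.stacks().get(0).outputs().stream()
    .filter(output -&gt; output.outputKey().equals(outputKey))
    .map(Output::outputValue)
    .findFirst()
    .orElseThrow(() -&gt; new IllegalStateException("### Output not found: " + outputKey));
}
</code></pre></div></div>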

<p>To run the E2E tests execute the following Maven command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd aws-cdk-quarkus
$ mvn -Pe2e clean verify
</code></pre></div></div>

<p>The workflow is as follows:</p>

<ol>
  <li>The module’s <code class="language-plaintext highlighter-rouge">pom.xml</code> file uses the <code class="language-plaintext highlighter-rouge">exec-maven-plugin</code> to run, in the Maven’s <code class="language-plaintext highlighter-rouge">pre-integration-test</code> phase, the command <code class="language-plaintext highlighter-rouge">cdk synth</code>. This command creates the associated CloudFormation template in the directory <code class="language-plaintext highlighter-rouge">cdk.out</code>.</li>
  <li>Then the same <code class="language-plaintext highlighter-rouge">exec-maven-plugin</code> runs, in the same Maven’s <code class="language-plaintext highlighter-rouge">pre-integration-test</code> phase, the script <code class="language-plaintext highlighter-rouge">deploy-ecr.sh</code> which automates the complete container deployment workflow.</li>
  <li>This script creates the ECR (<em>Elastic Container Registry</em>) repository if it doesn’t exist, using environment variables for dynamic naming and region configuration.</li>
  <li>Then it authenticates with ECR, retags the local Docker image with the ECR registry URL, and pushes it to the remote repository.</li>
  <li>If the stack already exists, due to a previous execution, then it is updated, otherwise the complete infrastructure is created from scratch.</li>
</ol>

<p>Here is the script:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
registry=$CDK_DEFAULT_ACCOUNT.dkr.ecr.$CDK_DEFAULT_REGION.amazonaws.com
echo "&gt;&gt;&gt; Creating ECR registry $registry"
aws ecr create-repository --repository-name $CONTAINER_IMAGE_GROUP/$CONTAINER_IMAGE_NAME  --region $CDK_DEFAULT_REGION 2&gt;/dev/null || echo "### Repository already exists"
echo "&gt;&gt;&gt; Logging into ECR..."
aws ecr get-login-password --region $CDK_DEFAULT_REGION | docker login --username AWS --password-stdin $registry
echo "&gt;&gt;&gt; Tagging and pushing existing image..."
docker tag $CONTAINER_IMAGE_GROUP/$CONTAINER_IMAGE_NAME:1.0-SNAPSHOT $registry/$CONTAINER_IMAGE_GROUP/$CONTAINER_IMAGE_NAME:latest
docker push $registry/$CONTAINER_IMAGE_GROUP/$CONTAINER_IMAGE_NAME:latest
echo "&gt;&gt;&gt; Checking if stack exists..."
if aws cloudformation describe-stacks --stack-name QuarkusCustomerManagementStack --region $CDK_DEFAULT_REGION &gt;/dev/null 2&gt;&amp;1; then
  echo "&gt;&gt;&gt; Stack exists - updating ECS service ..."
  CLUSTER_NAME=$(aws cloudformation describe-stack-resources --stack-name QuarkusCustomerManagementStack \
    --query 'StackResources[?ResourceType==`AWS::ECS::Cluster`].PhysicalResourceId' \
    --output text --region $CDK_DEFAULT_REGION)
  if [ -n "$CLUSTER_NAME" ]; then
    echo "&gt;&gt;&gt; Found cluster: $CLUSTER_NAME - updating ECS service..."
    aws ecs update-service \
      --cluster $CLUSTER_NAME \
      --service customer-service \
      --force-new-deployment \
      --region $CDK_DEFAULT_REGION
    echo "&gt;&gt;&gt; Waiting for service update to complete..."
    aws ecs wait services-stable \
      --cluster $CLUSTER_NAME \
      --services customer-service \
      --region $CDK_DEFAULT_REGION
    echo "&gt;&gt;&gt; Service update complete!"
    exit 0
  fi
fi
echo "&gt;&gt;&gt; Deploying full infrastructure..."
cdk deploy --all --require-approval never
echo "&gt;&gt;&gt; Deployment finished !"
</code></pre></div></div>

<p>Beware that the deployment operation is a long-running process which can take
more than 15 minutes. Also, once deployed and running, you’ll be invoiced for
the cost of the associated infrastructure.</p>

<p>During the script execution, you can check the progress by running scripts
like <code class="language-plaintext highlighter-rouge">describe-services.sh</code>, <code class="language-plaintext highlighter-rouge">describe-events.sh</code>, <code class="language-plaintext highlighter-rouge">describe-stacks.sh</code>, etc.,
or simply by using the AWS Console to look for possible error messages in the
CloudWatch log groups. In order to run these scripts you need to:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd aws-cdk-quarkus/customer-service-ecs
$ ./customer-service-ecs-cdk/src/main/resources/scripts/&lt;script-name&gt;
</code></pre></div></div>

<p>Once the deployment has succeeded and after having exercised your infrastructure,
don’t forget to run the <code class="language-plaintext highlighter-rouge">delete-stack.sh</code> script, which will remove everything
you deployed, thus avoiding further AWS invoices.</p>

<p>Enjoy!</p>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Quarkus" /><category term="AWS" /><category term="ECS" /><category term="Fargate" /><category term="DZone" /><summary type="html"><![CDATA[In a three-article series published recently on this site (Part 1, Part 2, Part 3), I’ve been demonstrating the power of the AWS Cloud Development Kit (CDK) in the Infrastructure as Code (IaC) area, especially when coupled with the ubiquitous Java and its supersonic / subatomic cloud-native stack: Quarkus.]]></summary></entry><entry><title type="html">EIP: Back to Fundamentals - The Content Enricher</title><link href="https://nicolasduminil.github.io/posts-archive/content-enricher/" rel="alternate" type="text/html" title="EIP: Back to Fundamentals - The Content Enricher" /><published>2025-08-17T00:00:00+00:00</published><updated>2025-08-17T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/content-enricher</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/content-enricher/"><![CDATA[<h2 id="the-content-enricher">The Content Enricher</h2>

<p>Let’s continue with the next integration pattern in alphabetical order. We skip the Channel Adapter and the Content-Based
Router, which we have already seen in the two previous modules, <code class="language-plaintext highlighter-rouge">aggregator</code> and <code class="language-plaintext highlighter-rouge">canonical-data-model</code>, and go to the next
relevant one, which is the Content Enricher. The name of the module is, unsurprisingly, <code class="language-plaintext highlighter-rouge">content-enricher</code>.</p>

<h3 id="scenario">Scenario</h3>

<p>The business scenario chosen to illustrate this pattern is presented below:</p>

<p><img src="/assets/images/content-enricher.png" alt="Content enricher diagram" /></p>

<p>Here we’re coming back to our business scenario previously used to illustrate the Aggregator pattern. The same order
generator is reused here to generate several random orders. Once generated, these orders are submitted to an enrichment
process. A Camel enricher is implemented by the <code class="language-plaintext highlighter-rouge">enrich</code> DSL statement, which uses the following two components:</p>

<ul>
  <li>an enrichment source, responsible for providing the enrichment data;</li>
  <li>an enrichment aggregator which, by means of its aggregation strategy, describes the enrichment logic.</li>
</ul>

<p>Our orders enrichment process happens in two stages:</p>

<ul>
  <li>in the 1st stage, the order item enrichment source is called to provide the data required for the order items enrichment. Then, the order item aggregator effectively performs the enrichment operations, by adding the enrichment data to the existing one;</li>
  <li>in the 2nd stage, it is the turn of the order itself to be enriched. In a similar way, the order enrichment source is called to provide the enrichment data, after which the order enrichment aggregator performs the enrichment.</li>
</ul>

<p>The beauty of the Content Enricher pattern lies in its ability to progressively add data from external sources, while
maintaining a clean separation between the enricher itself, its source and its aggregation strategy.</p>

<h3 id="architecture">Architecture</h3>

<p>The diagram below shows the software architecture of the implementation:</p>

<p><img src="/assets/images/content-enricher-sd.png" alt="Content enricher sequence diagram" /></p>

<p>As you can see, the two stages of the enrichment process are distinctly represented here. The <code class="language-plaintext highlighter-rouge">EcommerceRoute</code> class is
our Camel <code class="language-plaintext highlighter-rouge">RouteBuilder</code>. It uses our old friend <code class="language-plaintext highlighter-rouge">OrderGeneratorProcessor</code> to generate orders and the
<code class="language-plaintext highlighter-rouge">OrderItemEnrichmentService</code>, together with <code class="language-plaintext highlighter-rouge">OrderItemEnrichmentStrategy</code>, to construct orders having their <code class="language-plaintext highlighter-rouge">enrichedItems</code>
properties enriched with the product details. Then, by means of <code class="language-plaintext highlighter-rouge">OrderEnrichmentService</code> and <code class="language-plaintext highlighter-rouge">OrderEnrichmentStrategy</code>,
it enriches the orders themselves, by adding the customer details to them.</p>
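
<p>To make this flow concrete, here is a minimal sketch of what such a route could look like. This is not the project’s actual listing: apart from the <code class="language-plaintext highlighter-rouge">direct:enrichOrder</code> endpoint, which also appears in the integration test later in this post, the endpoint URIs and the exact wiring of the strategies are assumptions.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
public class EcommerceRoute extends RouteBuilder
{
  @Override
  public void configure() throws Exception
  {
    from("direct:enrichOrder")
      .routeId("content-enricher")
      // 1st stage: call the enrichment source, then let the strategy
      // merge the product details into the order items
      .enrich("direct:orderItemEnrichmentSource", new OrderItemEnrichmentStrategy())
      // 2nd stage: same mechanism for the customer details
      .enrich("direct:orderEnrichmentSource", new OrderEnrichmentStrategy())
      .to("log:doEnrichment?showBody=true");

    from("direct:orderItemEnrichmentSource")
      .process("orderItemEnrichmentService");

    from("direct:orderEnrichmentSource")
      .process("orderEnrichmentService");
  }
}
</code></pre></div></div>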

<h3 id="key-components">Key components</h3>

<p>There are two categories of key components in this implementation: the enrichment
source services and the enrichment strategies. The enrichment source services are simulated in our case. For example, the <code class="language-plaintext highlighter-rouge">OrderEnrichmentService</code> is as
simple as this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
@Named("orderEnrichmentService")
public class OrderEnrichmentService implements Processor
{
  @Override
  public void process(Exchange exchange) throws Exception
  {
    CustomerDetails customerDetails = new CustomerDetails(
      "John Doe",
      "john@example.com",
      "GOLD"
    );
    exchange.getIn().setBody(customerDetails);
  }
}
</code></pre></div></div>

<p>In a real application, these enrichment data would probably be extracted from a data store or provided by invoking some
API endpoints. In our simple test case, we just hard-code them. A point to notice is the fact that
a Camel enrichment source only provides the enrichment data and, accordingly, it isn’t responsible for effectively performing
the enrichment. This is the role of the aggregation strategies which, in some cases, may be quite simple, as shown
below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
@Named("orderEnrichmentStrategy")
public class OrderEnrichmentStrategy implements AggregationStrategy
{
  @Override
  public Exchange aggregate(Exchange original, Exchange enrichment)
  {
    EnrichedOrder enrichedOrder = original.getIn().getBody(EnrichedOrder.class);
    CustomerDetails customerDetails = enrichment.getIn().getBody(CustomerDetails.class);
    original.getIn().setBody(enrichedOrder.withCustomerDetails(customerDetails));
    return original;
  }
}
</code></pre></div></div>

<p>Here the aggregation strategy is really straightforward, as it consists in simply enriching the order with the customer
details. But at other times the strategy is much more complex, as for example when enriching the enriched order
items of the enriched orders, by adding the product details to them.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
@Named("orderItemEnrichmentStrategy")
public class OrderItemEnrichmentStrategy implements AggregationStrategy
{
  @Override
  public Exchange aggregate(Exchange original, Exchange enrichment)
  {
    Order order = original.getIn().getBody(Order.class);
    Map&lt;String, ProductDetails&gt; productMap = enrichment.getIn().getBody(Map.class);
    //
    // Transform order items to enriched order items:
    //   find matching product details for each item,
    //   create EnrichedOrderItem if match found,
    //   filter out items without matches
    //
    List&lt;EnrichedOrderItem&gt; enrichedItems = order.items().stream()
      .map(item -&gt; findProductDetails(productMap, item.productId())
      .map(pd -&gt; new EnrichedOrderItem(item, pd)))
      .filter(Optional::isPresent)
      .map(Optional::get)
      .toList();
    EnrichedOrder fullyEnriched = new EnrichedOrder(
      order.orderId(),
      order.customerId(),
      order.shippingAddress(),
      order.orderDate(),
      null,
      enrichedItems
    );
    original.getIn().setBody(fullyEnriched);
    return original;
  }

  private Optional&lt;ProductDetails&gt; findProductDetails(Map&lt;String, ProductDetails&gt; productMap, String productId)
  {
    //
    // Extract the product ID prefix
    //
    String productPrefix = productId.split("-")[0];
    //
    // Return the `ProductDetails` instance whose key name starts
    // with the product ID prefix.
    //
    return productMap.entrySet().stream()
      .filter(entry -&gt; entry.getKey().startsWith(productPrefix))
      .map(Map.Entry::getValue)
      .findFirst();
  }
}
</code></pre></div></div>

<p>As you can see, the difficulty here lies in the fact that we need to find the product details that match the order
items that we want to enrich, hence these rather convoluted filter and map statements. This might not be necessary in
a real case where the enrichment source is a data store and, hence, provides a query language, be it SQL or NoSQL.</p>

<h3 id="testing">Testing</h3>

<p>Camel routes are easy to test using the Hawtio console, as we’ve seen previously. Quarkus provides a test framework
covering the whole spectrum, from unit to E2E tests, passing through integration tests.
For example, look at the following integration test:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@QuarkusTest
public class TestEcommerceRoute
{
  @Inject
  CamelContext camelContext;

  @Inject
  ProducerTemplate producerTemplate;

  @Test
  public void testContentEnricherDemo() throws Exception {
    Order testOrder = new Order(
      "BOOK-1820",
      "CUST-123",
      "123 Test St",
      LocalDateTime.now(),
      List.of(new OrderItem("BOOK-1", "Computer book",
        "SUPPLIER_BOOKS", 1, new BigDecimal("41.75")))
    );
    Exchange result = producerTemplate.request("direct:enrichOrder",
      exchange -&gt; exchange.getIn().setBody(testOrder));
    EnrichedOrder enrichedOrder = result.getIn().getBody(EnrichedOrder.class);
    assertNotNull(enrichedOrder, "Enriched order should not be null");
    assertEquals("BOOK-1820", enrichedOrder.orderId());
    assertNotNull(enrichedOrder.customerDetails(), "Customer details should be enriched");
    assertFalse(enrichedOrder.enrichedItems().isEmpty(), "Items should be present");
    EnrichedOrderItem enrichedItem = enrichedOrder.enrichedItems().get(0);
    assertNotNull(enrichedItem.productDetails(), "Product details should be enriched");
  }
}
</code></pre></div></div>

<p>As you can see, instead of using mocks, we’re using real Camel routes and processors here. This is a special kind of
integration test, specific to Quarkus, which runs in the same JVM as the test runner and which, by the way, allows us to
inject the <code class="language-plaintext highlighter-rouge">CamelContext</code>, as well as the <code class="language-plaintext highlighter-rouge">ProducerTemplate</code>.</p>

<p>Quarkus also provides the <code class="language-plaintext highlighter-rouge">@QuarkusIntegrationTest</code> annotation which, contrary to what its name implies,
doesn’t annotate integration tests, but E2E ones. The Quarkus naming is sometimes indeed confusing and counterintuitive;
this is a common complaint in the community.</p>
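
<p>For the sake of illustration, here is what a minimal E2E test skeleton could look like. The class name is hypothetical and, in this mode, the application under test runs packaged, in a separate process, so the CDI injections used above aren’t available:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@QuarkusIntegrationTest
public class EcommerceRouteIT
{
  @Test
  public void testApplication()
  {
    // The packaged application runs in its own process, so this test
    // can only observe it from the outside, for example through HTTP
    // endpoints, messaging endpoints or log output.
  }
}
</code></pre></div></div>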

<h3 id="sample-output">Sample output</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2025-08-16 00:07:04,144 INFO  [contentEnricherDemo] (Camel (camel-1) thread #1 - timer://orderGenerator) === ORDER ===
2025-08-16 00:07:04,216 INFO  [contentEnricherDemo] (Camel (camel-1) thread #1 - timer://orderGenerator) Order {orderId = 'ORD-1755295624138', customerId = 'CUST-801', items = 5}
2025-08-16 00:07:04,216 INFO  [content-enricher] (Camel (camel-1) thread #1 - timer://orderGenerator) Exchange[ExchangePattern: InOnly, BodyType: fr.simplex_software.ecommerce.model.Order, Body: Order {orderId = 'ORD-1755295624138', customerId = 'CUST-801', items = 5}]
2025-08-16 00:07:04,226 INFO  [orderItemEnrichment] (Camel (camel-1) thread #1 - timer://orderGenerator)         &gt;&gt;&gt; Product details retrieved: {BOOK-1=ProductDetails[name=Java Guide, price=45.99, category=Books, stockLevel=100], FASHION-1=ProductDetails[name=T-shirt, price=19.99, category=Fashion, stockLevel=200], LAPTOP-1=ProductDetails[name=Gaming Laptop, price=1299.99, category=Electronics, stockLevel=25]}
2025-08-16 00:07:04,230 INFO  [orderEnrichment] (Camel (camel-1) thread #1 - timer://orderGenerator)     &gt;&gt;&gt; Customer details retrieved: CustomerDetails[name=John Doe, email=john@example.com, loyaltyTier=GOLD]
2025-08-16 00:07:04,231 INFO  [doEnrichment] (Camel (camel-1) thread #1 - timer://orderGenerator) === ENRICHED ORDER ===
2025-08-16 00:07:04,238 INFO  [doEnrichment] (Camel (camel-1) thread #1 - timer://orderGenerator) EnrichedOrder[orderId=ORD-1755295624138, customerId=CUST-801, shippingAddress=789 Pine Rd, Marseille, orderDate=2025-08-16T00:07:04.141387943, customerDetails=CustomerDetails[name=John Doe, email=john@example.com, loyaltyTier=GOLD], enrichedItems=[EnrichedOrderItem[orderItem=OrderItem {productId = 'LAPTOP-87', supplierId = 'SUPPLIER_ELECTRONICS', quantity = 2}, productDetails=ProductDetails[name=Gaming Laptop, price=1299.99, category=Electronics, stockLevel=25]]]]
2025-08-16 00:07:04,239 INFO  [content-enricher] (Camel (camel-1) thread #1 - timer://orderGenerator) Exchange[ExchangePattern: InOnly, BodyType: fr.simplex_software.ecommerce.model.EnrichedOrder, Body: EnrichedOrder[orderId=ORD-1755295624138, customerId=CUST-801, shippingAddress=789 Pine Rd, Marseille, orderDate=2025-08-16T00:07:04.141387943, customerDetails=CustomerDetails[name=John Doe, email=john@example.com, loyaltyTier=GOLD], enrichedItems=[EnrichedOrderItem[orderItem=OrderItem {productId = 'LAPTOP-87', supplierId = 'SUPPLIER_ELECTRONICS', quantity = 2}, productDetails=ProductDetails[name=Gaming Laptop, price=1299.99, category=Electronics, stockLevel=25]]]]]
</code></pre></div></div>

<h3 id="key-patterns-demonstrated">Key Patterns Demonstrated</h3>

<ul>
  <li>Enrichment source services</li>
  <li>Aggregation strategies (merge original + enrichment data)</li>
  <li>Route orchestration (coordinate the enrichment flow)</li>
</ul>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Quarkus" /><category term="Apache Camel" /><category term="EIP" /><category term="Blog" /><summary type="html"><![CDATA[The Content Enricher]]></summary></entry><entry><title type="html">Java: From Imperative to Functional - A Complete Use Case</title><link href="https://nicolasduminil.github.io/posts-archive/fpjava/" rel="alternate" type="text/html" title="Java: From Imperative to Functional - A Complete Use Case" /><published>2025-08-09T00:00:00+00:00</published><updated>2025-08-09T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/fpjava</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/fpjava/"><![CDATA[<p>Java, as anybody knows, isn’t a functional language. It doesn’t allow for <em>functional
programming</em>. But having said that, it’s important to mention that there isn’t
any generally agreed definition of what <em>functional programming</em> is.</p>

<p>In simple terms, <em>functional programming</em> is a programming paradigm which
consists in programming <em>with functions</em>. In the real world, <em>functions</em> are
primarily mathematical concepts defining relations between a <em>domain</em> and a <em>codomain</em>.
But in traditional Java, they are methods.</p>

<p>Well, when I say that, in Java, functions are methods, what I mean is that
functions may be represented by methods, provided that they satisfy the following
conditions:</p>

<ul>
  <li>They don’t mutate anything outside their scope, meaning that no internal mutation may be visible from outside.</li>
  <li>They don’t mutate their arguments.</li>
  <li>They don’t throw exceptions.</li>
  <li>They return a value.</li>
  <li>They always return the same result when called with the same arguments.</li>
</ul>

<p>Methods satisfying the rules above are called <em>functional methods</em>. However, they
still cannot be considered as the equivalent of functions in functional programming.
As a matter of fact, what they’re missing is the ability to be passed as arguments
or to be returned as result values. Consequently, <em>functional methods</em> cannot be
composed. One can compose <em>functional method</em> applications, but not <em>functional
methods</em> themselves, because they belong to classes.</p>

<p>Things have dramatically changed since 2014, with Java 8, which brought a powerful
new syntactic improvement: the <em>functional interfaces</em>. From this moment on,
functions have become first-class Java citizens, thanks to the <code class="language-plaintext highlighter-rouge">java.util.function.Function</code>
interface and to the <em>lambda expressions</em>. However, this major improvement doesn’t
make Java a functional programming language. Like its ancestors Smalltalk and C++,
it stays an <em>imperative</em> programming language, while becoming <em>functional friendly</em>.</p>
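
<p>As a quick illustration (this snippet isn’t part of the project’s code base), here is what first-class functions and their composition look like since Java 8:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.util.function.Function;

public class Composing
{
  public static void main(String[] args)
  {
    // Functions are now values: they can be stored, passed and composed
    Function&lt;Integer, Integer&gt; doubler = x -&gt; x * 2;
    Function&lt;Integer, Integer&gt; addThree = x -&gt; x + 3;
    // andThen(g) applies this function first, then g
    Function&lt;Integer, Integer&gt; composed = doubler.andThen(addThree);
    System.out.println(composed.apply(5)); // prints 13
  }
}
</code></pre></div></div>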

<p><em>Imperative programming</em> is another programming paradigm where programs are composed
of elements that <em>do</em> things, as opposed to functional programs that are composed
of elements that <em>are</em> things. Doing something implies an initial state, a set of
transitions and an end state. Hence, imperative programs consist in a series of
mutations, from the initial to the final state, separated by condition tests.
As opposed to the imperative style, functional-style programs don’t <em>do</em> things.
For example, a function implementing the addition of two integers, let’s say 2
and 3, doesn’t <em>make</em> 5, but <em>is</em> 5. Consequently, each time you encounter 2 + 3
you can replace it by this function.</p>

<p>Can we do that in Java ? Well, sometimes we can but, very often, doing so would
change the program outcome. If the function that we want to use to replace the
expression doesn’t have any other effect than returning the result, then we can,
but this isn’t generally the case, because we systematically need to mutate
variables, print out something, write to databases, raise exceptions, etc. These
are called <em>side effects</em>.</p>

<p>So, functional programming means writing programs without <em>side effects</em>. And
while we certainly can do that in Java for simple cases as the one in the example,
the question is: “can we do it in Java enterprise grade applications ?”</p>

<p>This article tries to answer this question. And, for doing that, I considered a
simple, yet realistic, use case.</p>

<h2 id="a-simple-yet-realistic-use-case">A Simple Yet Realistic Use Case</h2>

<p>The simple, yet realistic, case considered here, in order to illustrate my point,
is that of an SMS notification service. You may find the project here: https://github.com/nicolasduminil/sms-notifications.
Once you’ve cloned it, you’ll find several implementations of the case, each one being
the subject of a separate module in the Maven multi-module project. We start with
<code class="language-plaintext highlighter-rouge">sms-notification-initial</code>, which is the most traditional imperative implementation
and, iteration by iteration, from <code class="language-plaintext highlighter-rouge">i1</code> to <code class="language-plaintext highlighter-rouge">i5</code>, we successively refactor it until
we get the most functional-style implementation.</p>

<p>The code below shows our initial, full imperative, implementation of the SMS
notification service.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Notification
{
  private static final PhoneNumberUtil phoneNumberUtil = PhoneNumberUtil.getInstance();

  public void sendNotification(String phoneNumber, String region, String message)
  {
    if (isValid (phoneNumber, region))
    {
      SmsService sms = new SmsService();
      sms.send(phoneNumber, message);
    }
    else
      throw new IllegalArgumentException("### Invalid phone number format: %s"
        .formatted(phoneNumber));
  }

  private static boolean isValid(String number, String defaultRegion)
  {
    try
    {
      Phonenumber.PhoneNumber phoneNumber = phoneNumberUtil
       .parse(number, defaultRegion);
      return phoneNumberUtil.isValidNumber(phoneNumber);
    }
    catch (NumberParseException e)
    {
      return false;
    }
  }
  ...
}
</code></pre></div></div>

<p>As you can see, our notification service uses the <code class="language-plaintext highlighter-rouge">com.google.i18n.phonenumbers.PhoneNumberUtil</code>
class to check the validity of the phone number against its associated region. Then
it sends the SMS notification, by means of the <code class="language-plaintext highlighter-rouge">SmsService</code> class, or it
raises an <code class="language-plaintext highlighter-rouge">IllegalArgumentException</code>. The <code class="language-plaintext highlighter-rouge">SmsService</code> itself is just a fictive
one, which only logs a message.</p>
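
<p>For the record, here is a minimal sketch of what such a fictive <code class="language-plaintext highlighter-rouge">SmsService</code> could look like; this is an assumption, the actual class in the repository may differ:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.util.logging.Logger;

public class SmsService
{
  private static final Logger LOG = Logger.getLogger(SmsService.class.getName());

  public void send(String phoneNumber, String message)
  {
    // Fictive implementation: log instead of calling a real SMS gateway
    LOG.info("&gt;&gt;&gt; Sending SMS to %s: %s".formatted(phoneNumber, message));
  }
}
</code></pre></div></div>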

<p>You can test this simple service as shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/nicolasduminil/sms-notifications.git
$ cd sms-notifications
$ mvn test
</code></pre></div></div>

<p>There are a couple of JUnit tests in this module that will succeed.</p>

<p>The code above is purely imperative. The <code class="language-plaintext highlighter-rouge">sendNotification(...)</code> method doesn’t
return anything and it mixes data processing with side effects, like sending an
SMS or throwing an exception. Additionally, it isn’t possible to test the phone
number validation in isolation, without triggering the mentioned side effects.</p>

<p>So, in order to improve this code, one of the first things we can do is to
separate the validation from the side effects.</p>

<h2 id="separating-side-effects">Separating side effects</h2>

<p>In this first iteration, that you can find in the module <code class="language-plaintext highlighter-rouge">sms-notifications-i1</code>,
we separate the side effects from the validation process, as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Notification
{
  private static final PhoneNumberUtil phoneNumberUtil = PhoneNumberUtil.getInstance();

  public final BiFunction&lt;String, String, Boolean&gt; phoneNumberValidator = (number, region) -&gt; {
    try
    {
      Phonenumber.PhoneNumber phoneNumber = phoneNumberUtil.parse(number, region);
      return phoneNumberUtil.isValidNumber(phoneNumber);
    }
    catch (NumberParseException e)
    {
      return false;
    }
  };

  public void sendNotification(String phoneNumber, String region, String message)
  {
    if (phoneNumberValidator.apply (phoneNumber, region))
    {
      SmsService sms = new SmsService();
      sms.send(phoneNumber, message);
    }
    else
      throw new IllegalArgumentException("### Invalid phone number format: %s".formatted(phoneNumber));
  }
}
</code></pre></div></div>

<p>Here, by isolating the phone number validation in the <code class="language-plaintext highlighter-rouge">phoneNumberValidator</code>
function, we separate the validation operation from its side effects. Exceptions
are still thrown by the Google API and we cannot change anything here, but they
are caught and the appropriate result is returned.</p>

<p>If you look in the <code class="language-plaintext highlighter-rouge">TestNotifications</code> class of this module, you may see tests
like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
assertTrue(notification.phoneNumberValidator
  .apply("+33615229808", "FR"), "Salut !");
...
assertFalse(notification.phoneNumberValidator
  .apply("+33615229808123", "FR"), "Salut !");
...
</code></pre></div></div>

<p>that weren’t possible before. The 2nd assertion in the code above expects the
validation to fail because the phone number passed as an input argument isn’t
valid in the given region (it is too long). But there is no way to distinguish the
case of an invalid phone number from the one of an invalid region, or from the one
where the phone number is null or empty. To address this point, we can define a
component able to handle the validation result in a more specific way.</p>

<h2 id="a-more-functional-result">A more functional Result</h2>

<p>This component may be found in the <code class="language-plaintext highlighter-rouge">sms-notifications-i2</code> module and its class
diagram is shown below:</p>

<p><img src="/assets/images/result.png" alt="class diagram" title="Class Diagram" /></p>

<p>The class diagram above shows the <code class="language-plaintext highlighter-rouge">Result</code> interface implemented by the <code class="language-plaintext highlighter-rouge">Success</code> and
<code class="language-plaintext highlighter-rouge">Failure</code> classes. A minimal sketch of this hierarchy could look as follows; this is an assumption, the actual project classes may differ slightly.</p>
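
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public interface Result {}

class Success implements Result {}

class Failure implements Result
{
  private final String message;

  public Failure(String message)
  {
    this.message = message;
  }

  public String getMessage()
  {
    return message;
  }
}
</code></pre></div></div>

<p>Now, the new version of our <code class="language-plaintext highlighter-rouge">Notification</code> class is as follows:</p>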

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Notification
{
  private static final PhoneNumberUtil phoneNumberUtil = PhoneNumberUtil.getInstance();

  public static BiFunction&lt;String, String, Result&gt; phoneNumberValidator = (number, region) -&gt;
  {
    if (number == null)
      return new Failure("### The phone number can not be null");
    else if (number.length() == 0)
      return new Failure("### The phone number can not be empty");
    else
      try
      {
        if (phoneNumberUtil.isValidNumber(phoneNumberUtil.parse(number, region)))
          return new Success();
        else
          return new Failure("### The phone number %s is not valid for region %s".formatted(number, region));
      }
      catch (NumberParseException e)
      {
        return new Failure("### Unexpected exception while parsing the phone number %s".formatted(number));
      }
  };

  public void sendNotification(String phoneNumber, String region, String message)
  {
    Result result = phoneNumberValidator.apply (phoneNumber, region);
    if (result instanceof Success)
    {
      SmsService sms = new SmsService();
      sms.send(phoneNumber, message);
    }
    else
      throw new IllegalArgumentException("### Invalid phone number format: %s".formatted(phoneNumber));
  }
}
</code></pre></div></div>

<p>Running the JUnit tests against this new version will produce the expected output
in the mentioned cases, where the phone numbers or the regions are null or empty.
But this isn’t yet satisfactory, as the method <code class="language-plaintext highlighter-rouge">sendNotification(...)</code> doesn’t
return any result and, consequently, is hardly testable. Worse still, it throws
exceptions, which is a side effect.</p>

<p>So, how could we get rid of these drawbacks ? One of the solutions would be,
instead of sending the SMS or throwing an exception, to return an action that does
whatever we need to do in each case: send the SMS if the validation is successful,
or log an error message otherwise. And this can easily be done, thanks to lambda
functions.</p>

<h2 id="abstracting-the-actions">Abstracting the actions</h2>

<p>Let’s switch now to the <code class="language-plaintext highlighter-rouge">sms-notifications-i3</code> module.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Notification
{
  private static final Logger LOG = Logger.getLogger(Notification.class.getName());
  private static final PhoneNumberUtil phoneNumberUtil = PhoneNumberUtil.getInstance();

  public static BiFunction&lt;String, String, Result&gt; phoneNumberValidator = (number, region) -&gt;
  {
    try
    {
      return number == null
        ? new Failure("### The phone number can not be null")
        : number.length() == 0
          ? new Failure("### The phone number can not be empty")
          : phoneNumberUtil.isValidNumber(phoneNumberUtil.parse(number, region))
            ? new Success()
            : new Failure("### The phone number %s is not valid for region %s"
              .formatted(number, region));
    }
    catch (NumberParseException e)
    {
      return new Failure ("### The phone number %s is not valid for region %s"
        .formatted(number, region));
    }
  };

  public Runnable sendNotification(String phoneNumber, String region, String message)
  {
    Result result = phoneNumberValidator.apply(phoneNumber, region);
    return (result instanceof Success)
      ? () -&gt; sendSms(phoneNumber, message)
      : () -&gt; logError(((Failure) result).getMessage());
  }

  private void sendSms(String phoneNumber, String message)
  {
    new SmsService().send(phoneNumber, message);
  }

  private void logError(String message)
  {
    LOG.info("### Error: %s".formatted(message));
  }
}
</code></pre></div></div>

<p>We took advantage of this new refactoring to simplify the <code class="language-plaintext highlighter-rouge">phoneNumberValidator</code>
function by replacing the <code class="language-plaintext highlighter-rouge">if..then..else</code> structures with the more concise
ternary operator based notation. More importantly, our <code class="language-plaintext highlighter-rouge">sendNotification(...)</code>
method no longer returns <code class="language-plaintext highlighter-rouge">void</code> but a <code class="language-plaintext highlighter-rouge">Runnable</code> which, depending on the
validation success or failure, is a call to either the <code class="language-plaintext highlighter-rouge">sendSms(...)</code> method
or the <code class="language-plaintext highlighter-rouge">logError(...)</code> one. And here we’re touching on one of the most advanced
capabilities of functional programming: the ability to abstract actions
and to handle them as lambda functions which can be passed as input arguments
or returned as result values.</p>

<p>Now the <code class="language-plaintext highlighter-rouge">sendNotification(...)</code> method is almost functional, as it gets
closer and closer to a pure function. It is also much easier to test, as
it now allows successful tests like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
Runnable action = notification.sendNotification("+33615229808", "FR", "Test message");
assertNotNull(action);
assertDoesNotThrow(() -&gt; action.run());
...
</code></pre></div></div>

<p>or unsuccessful ones like:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
Runnable action = notification.sendNotification("+33615229808123", "FR", "Test message");
assertNotNull(action);
action.run();
assertTrue(logHandler.hasLoggedMessage("### Error: The phone number +33615229808123 is not valid for region FR"));
...
</code></pre></div></div>

<p>However, using <code class="language-plaintext highlighter-rouge">instanceof</code> to check whether the result is a success or a failure
is a widely discouraged antipattern. Another problem is the <code class="language-plaintext highlighter-rouge">sendNotification(...)</code>
method’s dependency on <code class="language-plaintext highlighter-rouge">sendSms(...)</code> and <code class="language-plaintext highlighter-rouge">logError(...)</code>. What if we want to invoke
different actions ? Or no action at all, just compose the result with some
other function ? Well, in this case we need to decouple the <code class="language-plaintext highlighter-rouge">sendNotification(...)</code>
method from its success or failure actions.</p>

<h2 id="decoupling-functional-methods-from-their-actions">Decoupling functional methods from their actions</h2>

<p>In order to achieve this goal, we need to refactor the <code class="language-plaintext highlighter-rouge">Result</code> hierarchy so as
to be able to bind actions to its <code class="language-plaintext highlighter-rouge">Success</code> and <code class="language-plaintext highlighter-rouge">Failure</code> implementations. Then
our class diagram becomes as follows:</p>

<p><img src="/assets/images/result2.png" alt="class diagram" title="Class Diagram" /></p>

<p>And here is the listing of our new version of <code class="language-plaintext highlighter-rouge">Notification</code>, refactored as required:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Notification
{
  private static final Logger LOG = Logger.getLogger(Notification.class.getName());
  private static final PhoneNumberUtil phoneNumberUtil = PhoneNumberUtil.getInstance();

  static BiFunction&lt;String, String, Result&lt;String&gt;&gt; phoneNumberValidator = (number, region) -&gt;
  {
    try
    {
      return number == null
        ? new Failure("### The phone number can not be null")
        : number.length() == 0
          ? new Failure("### The phone number can not be empty")
          : phoneNumberUtil.isValidNumber(phoneNumberUtil.parse(number, region))
            ? new Success(number)
            : new Failure("### The phone number %s is not valid for region %s"
              .formatted(number, region));
    }
    catch (NumberParseException e)
    {
      return new Failure("### Unexpected exception %s"
        .formatted(e.getMessage()));
    }
  };

  public void sendNotification (String phoneNumber, String region, String message)
  {
    phoneNumberValidator.apply(phoneNumber, region).ifSuccess(success, failure);
  }

  static Consumer&lt;String&gt; success = to -&gt; sendSms(to, "&gt;&gt;&gt; SMS sent to %s"
    .formatted(to));

  static Consumer&lt;String&gt; failure = msg -&gt; logError(msg);

  static void logError(String message)
  {
    LOG.info("### Error: %s".formatted(message));
  }

  static void sendSms(String phoneNumber, String message)
  {
    new SmsService().send(phoneNumber, message);
  }
}
</code></pre></div></div>

<p>In this new version, the <code class="language-plaintext highlighter-rouge">phoneNumberValidator</code> function returns a parameterized
<code class="language-plaintext highlighter-rouge">Result&lt;String&gt;</code>, the <code class="language-plaintext highlighter-rouge">Success</code> class holds a value of type <code class="language-plaintext highlighter-rouge">T</code>, while the <code class="language-plaintext highlighter-rouge">Failure</code>
one holds a <code class="language-plaintext highlighter-rouge">String</code>. Two consumer functions are now defined, one for success and the
other one for failure. They are both bound to actions using the <code class="language-plaintext highlighter-rouge">ifSuccess(...)</code>
method, which probably isn’t the best name.</p>
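
<p>Here is a possible shape of this refactored hierarchy, given as a sketch under the assumptions above; the actual project classes may differ:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.util.function.Consumer;

public interface Result&lt;T&gt;
{
  void ifSuccess(Consumer&lt;T&gt; onSuccess, Consumer&lt;String&gt; onFailure);
}

class Success&lt;T&gt; implements Result&lt;T&gt;
{
  private final T value;

  public Success(T value)
  {
    this.value = value;
  }

  @Override
  public void ifSuccess(Consumer&lt;T&gt; onSuccess, Consumer&lt;String&gt; onFailure)
  {
    // A success applies the success action to the value it holds
    onSuccess.accept(value);
  }
}

class Failure&lt;T&gt; implements Result&lt;T&gt;
{
  private final String message;

  public Failure(String message)
  {
    this.message = message;
  }

  @Override
  public void ifSuccess(Consumer&lt;T&gt; onSuccess, Consumer&lt;String&gt; onFailure)
  {
    // A failure applies the failure action to its error message
    onFailure.accept(message);
  }
}
</code></pre></div></div>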

<p>During the previous refactoring, we already replaced the <code class="language-plaintext highlighter-rouge">if..then..else</code>
structure with the ternary operator. This operator is considered functional, as
it returns a value and doesn’t have side effects, as opposed to the
<code class="language-plaintext highlighter-rouge">if..then..else</code> controls which, in general, rely on side effects. Accordingly,
programs containing several <code class="language-plaintext highlighter-rouge">if..then..else</code> structures should be refactored, not
only because they aren’t functional, but also because they are hard to read
and maintain.</p>

<p>This isn’t, of course, our case here; however, given that the ternary operator
can also be used in a non-functional way, let’s try to get rid of it as well.</p>

<h2 id="abstracting-control-structures">Abstracting control structures</h2>

<p>Can we do that ? Can we completely remove the conditional structures and operators
from our code ? In order to verify that, let’s start by implementing the following
class:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Condition&lt;T&gt; extends Tuple&lt;Supplier&lt;Boolean&gt;, Supplier&lt;Result&lt;T&gt;&gt;&gt;
{
  public Condition(Supplier&lt;Boolean&gt; condition, Supplier&lt;Result&lt;T&gt;&gt; result)
  {
    super(condition, result);
  }

  public static &lt;T&gt; Condition&lt;T&gt; when(Supplier&lt;Boolean&gt; condition, Supplier&lt;Result&lt;T&gt;&gt; value)
  {
    return new Condition&lt;&gt;(condition, value);
  }

  public static &lt;T&gt; DefaultCondition&lt;T&gt; when(Supplier&lt;Result&lt;T&gt;&gt; value)
  {
    return new DefaultCondition&lt;&gt;(() -&gt; true, value);
  }

  @SafeVarargs
  public static &lt;T&gt; Result&lt;T&gt; select(DefaultCondition&lt;T&gt; defaultCondition, Condition&lt;T&gt;... matchers)
  {
    for (Condition&lt;T&gt; aCondition : matchers)
      if (aCondition.getFirst().get()) return aCondition.getSecond().get();
    return defaultCondition.getSecond().get();
  }
}
</code></pre></div></div>

<p>This class extends <code class="language-plaintext highlighter-rouge">Tuple</code>, a base class that holds a pair of generics and which
is here parameterized with a <code class="language-plaintext highlighter-rouge">Supplier&lt;Boolean&gt;</code> representing a condition, and
a <code class="language-plaintext highlighter-rouge">Supplier&lt;Result&lt;T&gt;&gt;</code> holding the result of the condition’s evaluation.</p>

<p>Then two methods, both named <code class="language-plaintext highlighter-rouge">when</code>, are provided. The 1st one defines the normal
case, pairing a condition with its associated result. The 2nd one defines the default case,
represented by the <code class="language-plaintext highlighter-rouge">DefaultCondition</code> subclass.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class DefaultCondition&lt;T&gt; extends Condition&lt;T&gt;
{
  public DefaultCondition(Supplier&lt;Boolean&gt; condition, Supplier&lt;Result&lt;T&gt;&gt; result)
  {
    super(condition, result);
  }
}
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">select(...)</code> method evaluates the conditions in order and returns the result
of the first one that holds, falling back to the default condition if none does. The figure below shows the complete
class diagram:</p>

<p><img src="/assets/images/condition.png" alt="class diagram" title="Class Diagram" /></p>

<p>And here is the new version of the <code class="language-plaintext highlighter-rouge">Notification</code> class:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Notification
{
  ...
  public BiFunction&lt;String, String, Result&lt;String&gt;&gt; phoneNumberValidator = (number, region) -&gt;
  {
    return select(
      when(() -&gt; new Success&lt;&gt;(number)),
      when(() -&gt; number == null, () -&gt; new Failure&lt;&gt;("### The phone number cannot be null.")),
      when(() -&gt; number.length() == 0, () -&gt; new Failure&lt;&gt;("### The phone number cannot not be empty.")),
      when(() -&gt;
      {
        try
        {
          return !phoneNumberUtil.isValidNumber(phoneNumberUtil.parse(number, region));
        }
        catch (NumberParseException e)
        {
          return false;
        }
      }, () -&gt; new Failure&lt;&gt;("### The phone number %s is not valid for region %s"
        .formatted(number, region))));
  };

  public void sendNotification(String phoneNumber, String region, String message)
  {
    phoneNumberValidator.apply(phoneNumber, region).ifSuccess(success, failure);
  }
  ...
}
</code></pre></div></div>

<p>This way, we have removed the long ternary operator expression from the previous
version, which had itself replaced the <code class="language-plaintext highlighter-rouge">if..then..else</code> control structure. Our
implementation starts to look more like a functional-style one.</p>

<p>However, we still have this ugly <code class="language-plaintext highlighter-rouge">try..catch</code> structure and, unfortunately,
there is not much we can do about it here. Java is, inherently, an imperative programming
language, built on the <code class="language-plaintext highlighter-rouge">try..catch</code> concept. And we’re using an external service,
the Google phone number validator, which throws exceptions. These exceptions should
either be caught, which is ugly, or thrown, in which case we have side effects.</p>

<p>So yes, we need to admit it: we cannot completely eliminate the <code class="language-plaintext highlighter-rouge">try..catch</code>
when calling exception-throwing services. The <code class="language-plaintext highlighter-rouge">try..catch</code> will always exist
somewhere in the codebase when interfacing with imperative APIs.</p>

<h2 id="the-reality-of-functional-programming-in-java">The reality of functional programming in Java</h2>

<p>Pure functional languages like Haskell don’t have exceptions. They use types
like <code class="language-plaintext highlighter-rouge">Maybe</code> or <code class="language-plaintext highlighter-rouge">Either</code> for error handling. Java’s ecosystem is built on exceptions,
so we must bridge between the imperative and the functional world.</p>

<p>The best we can do is isolate the imperative exception handling in specific
boundary methods and try to keep the core business logic purely functional. So,
our <code class="language-plaintext highlighter-rouge">phoneNumberValidator</code> function will always need that <code class="language-plaintext highlighter-rouge">try..catch</code> somewhere,
but we can:</p>

<ul>
  <li>Move it to a dedicated method (cleaner separation), as sketched right after this list.</li>
  <li>Keep the main validation logic functional.</li>
  <li>Minimize the imperative surface area.</li>
</ul>
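
<p>For example, the imperative boundary could be confined to a dedicated method like the sketch below; this is an assumption, not the project’s actual code:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Hypothetical boundary method: the only place where the imperative
// try..catch survives, while everything around it stays functional
private static Result&lt;String&gt; tryParse(String number, String region)
{
  try
  {
    return phoneNumberUtil.isValidNumber(phoneNumberUtil.parse(number, region))
      ? new Success&lt;&gt;(number)
      : new Failure&lt;&gt;("### The phone number %s is not valid for region %s"
        .formatted(number, region));
  }
  catch (NumberParseException e)
  {
    return new Failure&lt;&gt;("### Unexpected exception %s".formatted(e.getMessage()));
  }
}
</code></pre></div></div>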

<p>This is the pragmatic reality of functional programming in Java: we achieve
functional style where possible while acknowledging that complete purity isn’t
feasible when working with exception-based APIs.</p>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Functional Programming" /><category term="Blog" /><summary type="html"><![CDATA[Java, as anybody knows, isn’t a functional language. It doesn’t allow for functional programming. But once having said that, it’s important to mention that there isn’t any general agreed definition of what the functional programming is.]]></summary></entry><entry><title type="html">EIP: Back to Fundamentals - The Canonical Data Model</title><link href="https://nicolasduminil.github.io/posts-archive/eip-canonical-data-model/" rel="alternate" type="text/html" title="EIP: Back to Fundamentals - The Canonical Data Model" /><published>2025-07-31T00:00:00+00:00</published><updated>2025-07-31T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/eip-canonical-data-model</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/eip-canonical-data-model/"><![CDATA[<p>This project demonstrates how to implement a simple, yet realistic, business case
that uses the Canonical Data Model enterprise pattern.</p>

<h2 id="scenario">Scenario</h2>

<p>An online marketplace aggregates products from multiple suppliers with different
data formats:</p>

<ul>
  <li>Supplier A (Electronics): JSON format with nested specifications.</li>
  <li>Supplier B (Fashion): XML format with size/color variants.</li>
  <li>Supplier C (Books): CSV format with ISBN/author details.</li>
</ul>

<p>All supplier formats are transformed to a canonical <code class="language-plaintext highlighter-rouge">Product</code> model for unified
catalog management, search, and display.</p>

<h3 id="sample-data-format-for-supplier-a">Sample Data Format for Supplier A</h3>

<p>This supplier uses JSON as the data format.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "item_id": "ELEC001",
  "name": "Gaming Laptop",
  "cost": 1299.99,
  "specs": {"cpu": "Intel i7", "ram": "16GB"}
}
</code></pre></div></div>

<h3 id="sample-data-format-for-supplier-b">Sample Data Format for Supplier B</h3>

<p>This supplier uses XML as the data format.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;product&gt;
  &lt;sku&gt;FASH002&lt;/sku&gt;
  &lt;title&gt;Designer Jacket&lt;/title&gt;
  &lt;price&gt;299.50&lt;/price&gt;
  &lt;variants&gt;
    &lt;variant size="M" color="Blue"/&gt;
  &lt;/variants&gt;
&lt;/product&gt;
</code></pre></div></div>

<h3 id="sample-data-format-for-supplier-c">Sample Data Format for Supplier C</h3>

<p>This supplier uses CSV as the data format.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>isbn,book_title,author,retail_price
978-0134685991,Effective Java,Joshua Bloch,45.99
</code></pre></div></div>

<h2 id="architecture">Architecture</h2>

<p>The diagram below shows the software architecture of the implementation:</p>

<p><img src="/assets/images/canonical-data-model.png" alt="Canonical Data Model" /></p>

<p>Everything starts with the <code class="language-plaintext highlighter-rouge">ProductGeneratorProcessor</code> which generates random
test products in JSON, XML or CSV notation. So, supplier A provides
electronics products in JSON format, supplier B fashion ones in XML format, while
supplier C provides books in CSV format.</p>

<p>The messages are generated on a time-based frequency, one every 15 seconds, using
the <code class="language-plaintext highlighter-rouge">timer</code> Camel component. Once generated, these messages are passed to a CBR
(<em>Content Based Router</em>) which unmarshals each payload to its
corresponding Java record type, as follows:</p>

<ul>
  <li>JSON messages, coming from supplier A, are unmarshaled to instances of the <code class="language-plaintext highlighter-rouge">ElectronicsProduct</code> record type;</li>
  <li>XML messages, coming from supplier B, are unmarshaled to instances of the <code class="language-plaintext highlighter-rouge">FashionProduct</code> record type;</li>
  <li>CSV messages, coming from supplier C, are unmarshaled to instances of the <code class="language-plaintext highlighter-rouge">BookProduct</code> record type.</li>
</ul>

<p>These Java record instances are further processed by dedicated processors, as
follows:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">ElectronicsProduct</code> instances are transformed by the <code class="language-plaintext highlighter-rouge">ElectronicsTransformer</code> processor to canonical <code class="language-plaintext highlighter-rouge">Product</code> instances;</li>
  <li><code class="language-plaintext highlighter-rouge">FashionProduct</code> instances are transformed by the <code class="language-plaintext highlighter-rouge">FashionTransformer</code> processor to canonical <code class="language-plaintext highlighter-rouge">Product</code> instances;</li>
  <li><code class="language-plaintext highlighter-rouge">BookProduct</code> instances are transformed by the <code class="language-plaintext highlighter-rouge">BookTransformer</code> processor to canonical <code class="language-plaintext highlighter-rouge">Product</code> instances.</li>
</ul>

<p>All these processors are subclasses of the abstract class <code class="language-plaintext highlighter-rouge">ProductTransformer</code>,
which implements the general transformation strategy, while being specialized
by each concrete subclass.</p>
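
<p>This abstract class isn’t listed here, but it might look roughly like the sketch below, where the method names <code class="language-plaintext highlighter-rouge">getSourceType()</code> and <code class="language-plaintext highlighter-rouge">toCanonical()</code> are assumptions:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public abstract class ProductTransformer&lt;T&gt; implements Processor
{
  @Override
  public void process(Exchange exchange) throws Exception
  {
    // General strategy: read the supplier-specific record from the
    // message body and replace it with its canonical representation
    T supplierProduct = exchange.getIn().getBody(getSourceType());
    exchange.getIn().setBody(toCanonical(supplierProduct));
  }

  // Specialized by each concrete subclass
  protected abstract Class&lt;T&gt; getSourceType();

  protected abstract Product toCanonical(T supplierProduct);
}
</code></pre></div></div>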

<p>Last but not least, the <code class="language-plaintext highlighter-rouge">Product</code> instances, ready to be shipped, are just
printed out in the Camel log file. In a real case, of course, they would have
been sent to a delivery channel.</p>

<h2 id="flow">Flow</h2>

<p>The following sequence diagram illustrates the implementation’s flow:</p>

<p><img src="/assets/images/canonical-sd.png" alt="Canonical model sequence diagram" /></p>

<h2 id="key-components">Key Components</h2>

<ul>
  <li><strong>Generators</strong>. A set of generators is available in order to generate test data. They generate data in a supplier-specific format, i.e. JSON for supplier A, XML for supplier B and CSV for supplier C. They all implement the <code class="language-plaintext highlighter-rouge">ProductGenerator</code> interface. See the class diagram below:</li>
</ul>

<p><img src="/assets/images/canonical-generator-cd.png" alt="Canonical model generators class diagram" /></p>

<ul>
  <li><strong>Transformers</strong>. A set of transformers responsible for mapping the specific data model to the canonical one. See the class diagram below:</li>
</ul>

<p><img src="/assets/images/canonical-transformer-cd.png" alt="Canonical model transformers class diagram" /></p>

<ul>
  <li><strong>BookProduct</strong>. A record modeling a Supplier C specific product representation.</li>
  <li><strong>ElectronicsProduct</strong>. A record modeling a Supplier A specific product representation.</li>
  <li><strong>FashionProduct</strong>. A record modeling a Supplier B specific product representation.</li>
  <li><strong>Product</strong>. A record modeling a canonical product representation.</li>
  <li><strong>ProductCatalogRoute</strong>. The Camel main route. Its listing is shown below:</li>
</ul>

<p>Here below is the listing of the <code class="language-plaintext highlighter-rouge">ProductCatalogRoute</code> class which defines the
Camel routes required by our implementation.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
public class ProductCatalogRoute extends RouteBuilder
{
  @Override
  public void configure() throws Exception
  {
    from("timer:generator?period=15000")
     .routeId("dataGenerationRoute")
     .autoStartup(false)
     .process("productGenerator")
     .to("direct:processProduct");
   from("direct:processProduct")
    .routeId("dataProcessingRoute")
    .choice()
      .when(header("supplierType").isEqualTo("ELECTRONICS"))
        .unmarshal().json(JsonLibrary.Jackson, ElectronicsProduct.class)
        .process("electronicsTransformer")
      .when(header("supplierType").isEqualTo("FASHION"))
        .unmarshal().jacksonXml(FashionProduct.class)
        .process("fashionTransformer")
      .when(header("supplierType").isEqualTo("BOOKS"))
        .unmarshal().csv()
        .process("csvToBookTransformer")
        .process("bookTransformer")
    .end()
    .to("log:canonical-product?showBody=true");
  }
}
</code></pre></div></div>

<h2 id="sample-output">Sample output</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>... Body: Product[id=ELEC001, name=Gaming Laptop, price=1299.99, category=Electronics, attributes={specifications={cpu=Intel i7, ram=16GB}}, supplierId=SUPPLIER_A]]
... Body: Product[id=FASH002, name=Designer Jacket, price=299.50, category=Fashion, attributes={variants=[Variant[size=M, color=Blue]]}, supplierId=SUPPLIER_B]]
... Body: Product[id=ELEC001, name=Gaming Laptop, price=1299.99, category=Electronics, attributes={specifications={cpu=Intel i7, ram=16GB}}, supplierId=SUPPLIER_A]]
... Body: Product[id=FASH002, name=Designer Jacket, price=299.50, category=Fashion, attributes={variants=[Variant[size=M, color=Blue]]}, supplierId=SUPPLIER_B]]
... Body: Product[id=FASH002, name=Designer Jacket, price=299.50, category=Fashion, attributes={variants=[Variant[size=M, color=Blue]]}, supplierId=SUPPLIER_B]]
... Body: Product[id=978-0134685991, name=Effective Java, price=45.99, category=Books, attributes={author=Joshua Bloch}, supplierId=SUPPLIER_C]]
... Body: Product[id=FASH002, name=Designer Jacket, price=299.50, category=Fashion, attributes={variants=[Variant[size=M, color=Blue]]}, supplierId=SUPPLIER_B]]
</code></pre></div></div>

<h2 id="key-patterns-demonstrated">Key Patterns Demonstrated</h2>

<ul>
  <li><strong>Canonical Data Model</strong>: Transforms messages from a supplier-specific format to the common canonical one.</li>
  <li><strong>Message Transformer</strong>: Effectively performs the message transformation from a source to a target format.</li>
  <li><strong>Content-Based Router</strong>: Routes messages to the appropriate Message Transformer.</li>
</ul>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Apache Camel" /><category term="Quarkus" /><category term="EIP" /><category term="Blog" /><summary type="html"><![CDATA[This project demonstrates how to implement a simple, yet realistic, business case that uses the Canonical Data Model enterprise pattern.]]></summary></entry><entry><title type="html">EIP: Back to Fundamentals !</title><link href="https://nicolasduminil.github.io/posts-archive/eip-aggregator/" rel="alternate" type="text/html" title="EIP: Back to Fundamentals !" /><published>2025-07-08T00:00:00+00:00</published><updated>2025-07-08T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/eip-aggregator</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/eip-aggregator/"><![CDATA[<p>AI, LLM, ML, NLP, … Unless you’ve been living under a rock for the past two years,
you’ve probably had your fill of these syntagmas. As for me, I can’t read any IT
post or article, on this site or anywhere else in the tech community, without being
bombarded with these acronyms by an endless stream of self-proclaimed AI gurus.
Everyone wants to demonstrate, through pages of listings, how to do RAG or MCP,
all this in order, finally, to be able to ask a model trivial things like the
first names of the four Beatles or what a crocodile eats for dinner.</p>

<p>In my opinion, there is currently an overwhelming amount of AI hype and
buzzword fatigue in the tech community. To such an extent that I felt a
compelling need to return to fundamentals. And, in my case, these fundamentals were the EIP
(<em>Enterprise Integration Patterns</em>). Accordingly, I searched my library for the
Hohpe and Wolf black book, removed the dust from its cover and started to
read it again, from beginning to end.</p>

<p>I have reacted before to posts on this site recommending books, like <em>Clean Architecture</em>,
published 8 years ago, which I consider outdated. It also happened to me recently to
advise against <em>Spring in Action</em>, published initially in 2019 and currently in
its 6th edition, dated 2022. So, this book has been published 6 times in 3 years !
Which obviously wasn’t enough to cover its full spectrum, as it still lacks lots
of topics.</p>

<p>Anyway, if I’m dwelling on this subject here, it is to say that I’m not really
a big fan of old books because, in our field, things change so fast, for
better and for worse. But this EIP book, published in 2003, is incredibly
up to date.</p>

<p>So, after reading it again, more than 20 years later, I thought that I definitely needed to
contribute somehow to promoting these EIPs which, in my opinion, represent the most
important foundation of the software industry. And the only way I found to
contribute is to provide <em>sui generis</em> implementations of these EIPs.</p>

<p>But once this decision was taken, the difficulty of the technology-agnostic requirement
of such implementations appeared immediately. How does one implement these patterns without
getting bound to any technology or product ? As the book’s authors state in its
preface, they would have been tempted to provide implementations as well but,
given the wide diversity of suitable products and technologies, the book
would have been “likely to never finish or else to be published so late as to
be irrelevant”.</p>

<p>It’s valuable to see the authors’ concern to avoid a possible irrelevance of their
work, due to a too-late publishing date, but they may rest assured that this isn’t
the case, even today, more than 20 years later. And since, in any case, a
technology-agnostic implementation would be neither possible nor useful, I
chose the one and only Java-based, enterprise-grade integration platform:
Apache Camel.</p>

<p>This having been said, I’m planning to take, one by one, most of the EIPs in the Hohpe
and Wolf book and to implement them, using Apache Camel and its Quarkus extensions.
And while I’m at it, I’ll try to find credible and realistic use cases, extracted
from my daily experience with enterprise-grade applications, far from the usual “hello
world” examples. I’m not sure how useful my approach might be, but I
really need to contribute to this foundation, if only in the most modest way
possible.</p>

<p>The Hohpe and Wolf book is organized in a very systematic and methodical way,
based on the patterns’ classification. But I won’t follow the same approach.
Instead, I’m proceeding in alphabetical order. And since the first pattern, in
alphabetical order, is the <em>aggregator</em>, I’m starting with it. This might
not seem a very pedagogical approach, as the <em>aggregator</em> is probably one
of the most complex patterns, and the professional practice is to go from
simple to complex, not the other way around. But having gone through the book,
from beginning to end, is one of the prerequisites here; accordingly, I thought
that the order of the patterns isn’t essential.</p>

<p>As you’ll see, each pattern implementation is documented
by its associated README.md file, following the same template. This template
consists of the following paragraphs:</p>

<ul>
  <li>Scenario: this is a short description of the business case chosen to illustrate the pattern.</li>
  <li>Architecture: the software architecture, i.e. the libraries, the frameworks, the dependencies, the extensions, etc. if any, required by the implementation.</li>
  <li>Flow: a simple graphical sketch of the use case showing the involved components in a similar way to a sequence diagram.</li>
  <li>Key components: description of the most important components and their role.</li>
  <li>Business value: optional.</li>
  <li>Test and run: full instructions guiding how to test and run the case.</li>
</ul>

<p>So, let’s start !</p>

<h2 id="the-aggregator">The Aggregator</h2>

<p>This project demonstrates Apache Camel’s <strong>Splitter</strong> and <strong>Aggregator</strong> patterns using a realistic e-commerce scenario.</p>

<h3 id="scenario">Scenario</h3>

<p>An e-commerce platform processes orders that contain items from multiple suppliers. The system:</p>
<ol>
  <li><strong>Splits</strong> orders into individual items</li>
  <li><strong>Aggregates</strong> items by supplier and shipping address to optimize shipments</li>
  <li>Creates consolidated shipments for cost efficiency</li>
</ol>

<h3 id="architecture">Architecture</h3>

<p>The diagram below shows the software architecture of the implementation.</p>

<p><img src="/assets/images/aggregator.png" alt="Aggregator" /></p>

<p>Everything starts with the <code class="language-plaintext highlighter-rouge">OrderGenerator</code> processor, which generates random test
orders. These orders are instances of the <code class="language-plaintext highlighter-rouge">Order</code> record. They are generated at a
time-based frequency, one every 10 seconds, using the <code class="language-plaintext highlighter-rouge">timer</code> Camel component.</p>

<p>Once generated, each order is split into the list of its corresponding <code class="language-plaintext highlighter-rouge">OrderItem</code>
instances by the <code class="language-plaintext highlighter-rouge">OrderSplitter</code> Camel processor. After that, the individual
<code class="language-plaintext highlighter-rouge">OrderItem</code> instances are aggregated, based on their supplier ID and shipping
address, into instances of <code class="language-plaintext highlighter-rouge">Shipment</code>. This is the role of the <code class="language-plaintext highlighter-rouge">ShipmentAggregator</code>
Camel processor, which defines the aggregation strategy.</p>

<p>Last but not least, the <code class="language-plaintext highlighter-rouge">Shipment</code> instances, ready to be delivered, are simply
printed out in the Camel log file. In a real-world case, of course, they would
be sent to a delivery channel.</p>
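
<p>For reference, the article doesn’t reproduce the Camel route that wires these processors together. Here is a minimal sketch of what the <code class="language-plaintext highlighter-rouge">orderProcessing</code> route declared in the <code class="language-plaintext highlighter-rouge">ECommerceRoute</code> class might look like; the correlation expression and the completion timeout are my assumptions, the actual code in the repository may differ:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.apache.camel.builder.RouteBuilder;

@ApplicationScoped
public class ECommerceRoute extends RouteBuilder
{
  @Inject
  OrderGenerator orderGenerator;
  @Inject
  OrderSplitter orderSplitter;
  @Inject
  ShipmentAggregator shipmentAggregator;

  @Override
  public void configure()
  {
    from("timer:orders?period=10000")    // one test order every 10 seconds
      .routeId("orderProcessing")
      .autoStartup(false)                // started manually, e.g. via Hawtio
      .process(orderGenerator)           // generates a random Order
      .process(orderSplitter)            // Order -&gt; List&lt;OrderItem&gt;
      .split(body())                     // one exchange per OrderItem
        // hypothetical correlation key: supplier ID + shipping address
        .aggregate(simple("${body.supplierId}-${body.shippingAddress}"), shipmentAggregator)
          .completionTimeout(5000)       // close a group after 5s of inactivity
          .log("=== SHIPMENT CREATED === ${body}")
        .end()
      .end();
  }
}
</code></pre></div></div>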

<h3 id="flow">Flow</h3>

<p>The following sequence diagram illustrates the implementation’s flow:</p>

<p><img src="/assets/images/aggregator-sd.png" alt="Aggregator sequence diagram" /></p>

<h3 id="key-components">Key Components</h3>

<ul>
  <li><strong>OrderSplitter</strong>: Breaks orders into individual items with context</li>
  <li><strong>ShipmentAggregator</strong>: Groups items by supplier + shipping address</li>
  <li><strong>OrderGenerator</strong>: Creates realistic sample orders</li>
</ul>

<p>The <code class="language-plaintext highlighter-rouge">OrderSplitter</code> class implements the <code class="language-plaintext highlighter-rouge">Processor</code> Camel interface and, in
its <code class="language-plaintext highlighter-rouge">process(Exchange exchange)</code> method, splits an <code class="language-plaintext highlighter-rouge">Order</code> instance, passed as
the input message, into its list of corresponding <code class="language-plaintext highlighter-rouge">OrderItem</code> instances.</p>

<p>Here is the source code:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
@Named("orderSplitter")
public class OrderSplitter implements Processor
{
  @Override
  public void process(Exchange exchange) throws Exception
  {
    Order order = exchange.getIn().getBody(Order.class);
    List&lt;OrderItem&gt; enrichedItems = order.items().stream()
      .map(item -&gt; item.withOrderContext(order.orderId(),
        order.shippingAddress()))
      .toList();
    exchange.getIn().setBody(enrichedItems);
  }
}
</code></pre></div></div>

<p>As for the <code class="language-plaintext highlighter-rouge">ShipmentAggregator</code>, it performs the complementary operation of
grouping the individual <code class="language-plaintext highlighter-rouge">OrderItem</code> instances issued from the splitting process,
using an aggregation key which consists of the concatenation of the supplier ID
and the shipping address.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@ApplicationScoped
@Named("shipmentAggregator")
public class ShipmentAggregator implements AggregationStrategy
{
  @Override
  public Exchange aggregate(Exchange oldExchange, Exchange newExchange)
  {
    OrderItem newItem = newExchange.getIn().getBody(OrderItem.class);
    @SuppressWarnings("unchecked")
    List&lt;OrderItem&gt; items = Optional.ofNullable(oldExchange)
      .map(ex -&gt; (List&lt;OrderItem&gt;) ex.getIn().getBody(List.class))
      .orElse(new ArrayList&lt;&gt;());
    items.add(newItem);
    Exchange exchange = Optional.ofNullable(oldExchange)
      .orElse(newExchange);
    exchange.getIn().setBody(items);
    return exchange;
  }

  public Shipment createShipment(List&lt;OrderItem&gt; items)
  {
    return items.stream()
     .findFirst()
     .map(first -&gt; new Shipment(first.supplierId(), first.shippingAddress(), items))
     .orElse(null);
  }
}
</code></pre></div></div>

<p>The code above accumulates the <code class="language-plaintext highlighter-rouge">OrderItem</code> instances having the same aggregation
key into a single list. It accepts two input arguments:</p>

<ul>
  <li>the <code class="language-plaintext highlighter-rouge">oldExchange</code> which represents the current state accumulated from previous aggregations; it is null initially;</li>
  <li>the <code class="language-plaintext highlighter-rouge">newExchange</code> containing the incoming message;</li>
</ul>

<p>The <code class="language-plaintext highlighter-rouge">oldExchange</code> argument is checked for null: if no previous
accumulation exists, a new item list is instantiated. Otherwise, if
the <code class="language-plaintext highlighter-rouge">oldExchange</code> isn’t null, the list of previously accumulated
<code class="language-plaintext highlighter-rouge">OrderItem</code> instances is extracted from it and the new <code class="language-plaintext highlighter-rouge">OrderItem</code> instance,
carried by the <code class="language-plaintext highlighter-rouge">newExchange</code> argument, is added to it. In both cases, the
resulting list is set as the body of the returned exchange.</p>

<h3 id="business-value">Business Value</h3>

<ul>
  <li><strong>Cost Reduction</strong>: Fewer shipments per supplier</li>
  <li><strong>Efficiency</strong>: Consolidated deliveries</li>
  <li><strong>Scalability</strong>: Handles multiple suppliers automatically</li>
</ul>

<h3 id="running-the-application">Running the Application</h3>

<p>In order to run the application, perform the following steps:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>git clone https://github.com/nicolasduminil/eip.git
<span class="nv">$ </span><span class="nb">cd </span>eip
<span class="nv">$ </span>mvn package
<span class="nv">$ </span>java <span class="nt">-jar</span> aggregator/target/quarkus-app/quarkus-run.jar
</code></pre></div></div>

<p>Now the application is up and running. It will:</p>
<ul>
  <li>Generate sample orders every 10 seconds</li>
  <li>Split orders by supplier</li>
  <li>Aggregate items into optimized shipments</li>
  <li>Log the entire process</li>
</ul>

<p>The route labeled <code class="language-plaintext highlighter-rouge">orderProcessing</code>, which triggers the whole flow, is declared
with <code class="language-plaintext highlighter-rouge">autoStartup(false)</code> in the <code class="language-plaintext highlighter-rouge">ECommerceRoute</code> class. This means that it won’t
be started automatically; instead, in order to give you full control, it should be
handled via the Hawtio console. The following Maven dependency:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
&lt;dependency&gt;
  &lt;groupId&gt;io.hawt&lt;/groupId&gt;
  &lt;artifactId&gt;hawtio-quarkus&lt;/artifactId&gt;
  &lt;version&gt;4.4.1&lt;/version&gt;
&lt;/dependency&gt;
...
</code></pre></div></div>

<p>includes the Hawtio console in your application JAR. Then, by firing your preferred
browser at http://localhost:8080/hawtio, you’ll see something similar to the
picture below:</p>

<p><img src="/assets/images/hawtio.png" alt="Hawtio console" /></p>

<p>Now, go to <code class="language-plaintext highlighter-rouge">Camel-&gt;Routes-&gt;orderProcessing</code> and, in the rightmost pane, select
the tab labeled <code class="language-plaintext highlighter-rouge">Operations</code>. Then scroll down until you see the method
<code class="language-plaintext highlighter-rouge">void start()</code>. Unfold it and click the red <code class="language-plaintext highlighter-rouge">Execute</code> button. The message <code class="language-plaintext highlighter-rouge">Operation
successful</code> should be displayed and the route will start. You can tell because
your Camel log file will show trace messages.</p>

<p>Executing the <code class="language-plaintext highlighter-rouge">String getState()</code> method will show that the route is active.
Whenever you think you have finished experimenting with the use case, you can
execute the <code class="language-plaintext highlighter-rouge">void stop()</code> method and the process will terminate. Don’t hesitate
to play with the different operations exposed here, in the Hawtio console.</p>
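
<p>By the way, the Hawtio operations above are simply the route controller’s JMX operations. If you prefer code to clicking, the same start and stop operations can be performed programmatically; here is a minimal sketch, assuming the <code class="language-plaintext highlighter-rouge">CamelContext</code> is injected via CDI:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.apache.camel.CamelContext;

@ApplicationScoped
public class RouteControl
{
  @Inject
  CamelContext camelContext;

  public void startOrderProcessing() throws Exception
  {
    // programmatic equivalent of the Hawtio "void start()" operation
    camelContext.getRouteController().startRoute("orderProcessing");
  }

  public void stopOrderProcessing() throws Exception
  {
    // programmatic equivalent of the Hawtio "void stop()" operation
    camelContext.getRouteController().stopRoute("orderProcessing");
  }
}
</code></pre></div></div>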

<h3 id="sample-output">Sample Output</h3>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>=== Processing new order ===
Generated order: Order{orderId='ORD-123', customerId='CUST-456', items=5}
Processing item: OrderItem{productId='LAPTOP-1', supplierId='SUPPLIER_ELECTRONICS', quantity=2}
=== SHIPMENT CREATED ===
Shipment: Shipment{id='SHIP-SUPPLIER_ELECTRONICS-123', supplier='SUPPLIER_ELECTRONICS', items=2, value=250.50}
HIGH VALUE shipment (250.50€) - Priority processing
</code></pre></div></div>

<h3 id="key-patterns-demonstrated">Key Patterns Demonstrated</h3>

<ul>
  <li><strong>Splitter Pattern</strong>: <code class="language-plaintext highlighter-rouge">split(body())</code> breaks orders into items</li>
  <li><strong>Aggregator Pattern</strong>: Groups by <code class="language-plaintext highlighter-rouge">aggregationKey</code> (supplier + address)</li>
  <li><strong>Content-Based Router</strong>: Routes high-value shipments differently</li>
</ul>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Apache Camel" /><category term="Quarkus" /><category term="EIP" /><category term="Blog" /><summary type="html"><![CDATA[AI, LLM, ML, NLP, … Unless you’ve been living under a rock for the past two years, you’ve probably had your fill of these syntagma. As for me, I can’t read on this site, or anywhere else in the tech community, any IT post or article, without being bombarded with these acronyms, by an endless stream of self-proclaimed AI gurus. Everyone wants to demonstrate, through pages of listing, how to do RAG or MCP, all this such that, finally, to be able to ask a model stupid things like the first names of the four Beatles or what does a crocodile eat for the dinner.]]></summary></entry><entry><title type="html">The switch..case mammoth</title><link href="https://nicolasduminil.github.io/posts-archive/mammoth/" rel="alternate" type="text/html" title="The switch..case mammoth" /><published>2025-07-03T00:00:00+00:00</published><updated>2025-07-03T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/mammoth</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/mammoth/"><![CDATA[<h1 id="the-switchcase-mammoth">The <code class="language-plaintext highlighter-rouge">switch..case</code> mammoth</h1>

<p>As a Java developer, I think that there isn’t anything uglier than a piece of code having in its middle a <code class="language-plaintext highlighter-rouge">switch</code> with
43 cases. Whenever I see something like this, I remember the Windows desktop applications of my beginnings. C++ was
already <em>the programming language</em> at that time, but most developers were writing C code and compiling it with the
Borland or Microsoft C++ compiler. The other alternative was the IBM C++ compiler for OS/2 and, then, we were dealing with
Presentation Manager, Communication Manager and other OS/2-specific stuff, including Database Manager, which came with
a mini DB2 version. A couple of years later, all this stuff got unified in Windows NT 4 but, by that time, I had
already moved to Unix and Linux, for good.</p>

<p>Anyway, I still remember these huge <code class="language-plaintext highlighter-rouge">switch..case</code> structures, spanning several listing pages, supposed to process all the
possible messages, like <code class="language-plaintext highlighter-rouge">WM_INITDIALOG</code>, <code class="language-plaintext highlighter-rouge">WM_COMMAND</code>, etc., and to react to different user actions through menus,
controls, accelerators and others. They were, already at that time, broadly inadvisable and largely discouraged, yet
commonly adopted. So, imagine my surprise when, recently on this site, I came across a post discussing a design solution
for exception handling based on a 43-case <code class="language-plaintext highlighter-rouge">switch</code> statement. Hence the idea to write this blog post.</p>

<h2 id="a-realistic-use-case">A realistic use case</h2>

<p>In order to illustrate my point, I was looking for a use case and, while at it, for a realistic one, if possible. The
previously mentioned Windows applications would have been perfect for that, but I left this field a couple of decades ago.
Accordingly, I tried to find something in FinTech, which is my current field and, since most of the clients I’m working
for are banks and other financial organisations, I came to imagine a simplified derivatives example.</p>

<p>Okay so, for starters, derivatives are securities whose values are contingent on the values of underlying assets, such
as interest rates or commodities like oil. They “derive” their values from those of the underlying assets. There are
several types of derivatives: options, forwards, futures, swaps, warrants, etc. and, without going into details, they all
have prices, which are calculated in a specific way, depending on the derivative type.</p>

<p>In order to clarify expectations, I have prepared a simple Java project (https://github.com/nicolasduminil/switch-the-mammoth.git).
If you look at it, you can see the class <code class="language-plaintext highlighter-rouge">Title</code> which implements several methods for the price calculation, specific to each
type of derivative. Let’s look at it:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Title extends Observable
{
  private final TitleType titleType;

  public Title(TitleType titleType)
  {
    this.titleType = titleType;
  }

  public BigDecimal recalculate (CalculationData calculationData) throws IllegalTitleTypeException
  {
    BigDecimal result = performRecalculations(calculationData);
    this.notifyObservers();
    return result;
  }

  public BigDecimal performRecalculations(CalculationData calculationData) throws IllegalTitleTypeException
  {
    BigDecimal subtract = calculationData.spotPrice().subtract(calculationData.strikePrice());
    BigDecimal rate = calculationData.riskFreeRate().multiply(calculationData.timeToExpiry());
    switch (titleType)
    {
      case OPTION:
        // Option value: max(0, spotPrice - strikePrice) + timeValue
        BigDecimal intrinsicValue = subtract.max(BigDecimal.ZERO);
        BigDecimal timeValue = calculationData.volatility()
          .multiply(calculationData.timeToExpiry()).multiply(new BigDecimal("0.1"));
        return intrinsicValue.add(timeValue);
      case FUTURE:
        // Future value = spotPrice * e^(riskFreeRate * timeToExpiry)
        BigDecimal multiplier = BigDecimal.valueOf(Math.exp(rate.doubleValue()));
        return calculationData.spotPrice().multiply(multiplier);
      case FORWARD:
        // Forward value = spotPrice * (1 + riskFreeRate * timeToExpiry)
        BigDecimal rateComponent = BigDecimal.ONE.add(rate);
        return calculationData.spotPrice().multiply(rateComponent);
      case SWAP:
        // Swap value = notional * (fixedRate - floatingRate) * timeToExpiry
        BigDecimal rateDiff = calculationData.fixedRate().subtract(calculationData.floatingRate());
        return calculationData.notional().multiply(rateDiff).multiply(calculationData.timeToExpiry());
      case WARRANT:
        // Warrant value = option value * dilution factor
        BigDecimal optionValue = subtract.max(BigDecimal.ZERO);
        BigDecimal dilutionFactor = new BigDecimal("0.95"); // 5% dilution
        return optionValue.multiply(dilutionFactor);
      default:
        throw new IllegalTitleTypeException("### Illegal title type %s".formatted(titleType.name()));
    }
  }
}
</code></pre></div></div>

<p>As you can see, each derivative type, be it option, future, forward, etc., has its own price calculation algorithm. These
algorithms are all implemented in the <code class="language-plaintext highlighter-rouge">performRecalculations(...)</code> method. It is invoked by the <code class="language-plaintext highlighter-rouge">recalculate(...)</code> method
and, once the recalculations are done, all the class observers are notified, so that prices can eventually be updated.</p>

<p>Okay, so as you have probably guessed, now that we have looked at this class, my point is that, should you ever see
some code designed in this manner, you absolutely have to refactor it. But why is that, what might be wrong with this code?
Why do you need to refactor it when all the KISS (<em>Keep It Simple Stupid</em>) advocates will tell you how easy to understand
and convenient it is? And as a matter of fact, I have to admit that, for being stupid, it is really stupid and, hence,
perfect for any KISS apologist. So, to answer this question, I could talk about the OCP (<em>Open Closed Principle</em>), tight
coupling, testability, reusability, extensibility, modularity and other “[a-z]*ity” words. But instead, I prefer
to say just this: <strong>this code is ugly</strong>.</p>

<p>Yes, you got it right, ugly! And yes, ugliness, or more exactly esthetics, is also a software architecture and design
criterion. In this respect, software architecture is similar to civil architecture. Imagine two house projects side by
side, one having only one large room with everything inside, and a second one with different living spaces, bathrooms and
kitchens, designed to fulfil the needs of the inhabitants, where the architect has meticulously designed every
detail and carefully planned the finishing touches. Which one would you prefer? Well, software architecture is similar:
it has to be beautiful.</p>

<h2 id="refactoring-the-mammoth-step-1">Refactoring the mammoth: step 1</h2>

<p>Okay, so let’s now refactor our <code class="language-plaintext highlighter-rouge">switch..case</code> mammoth. Our first approach will be using polymorphism. Having said
that, I’m aware that I’ve just lost almost all my OOP-phobic readers who hate polymorphism because it is not supported by Rust.
Well, when I’m saying that Rust doesn’t support polymorphism, what I’m trying to say is that they needed to change
the definition of polymorphism in order to claim that Rust supports it. Anyway, Rust is beyond OOP and,
accordingly, it doesn’t have to support polymorphism, nor anything else. Rust is beyond everything.
But this is another discussion.</p>

<p>Let’s look at the class diagram below:</p>

<p><img src="/assets/images/derivatives.png" alt="class diagram" title="Class Diagram" /></p>

<p>As we can see, this class diagram models the derivatives business domain in the form of a class hierarchy whose root is the
<code class="language-plaintext highlighter-rouge">AbstractTitle</code> class which, as its name implies, is an abstract one. It extends <code class="language-plaintext highlighter-rouge">Observable</code> so that it supports observers,
which it can notify as soon as recalculations are done. Each derivative type is, subsequently, modelled as a subclass of
this abstract class. This is important in order to acknowledge the fact that options, futures, forwards and warrants are all
derivatives.</p>

<p>The recalculations are done by two methods: <code class="language-plaintext highlighter-rouge">recalculate(...)</code> and <code class="language-plaintext highlighter-rouge">performCalculations(...)</code>, as shown by the activity
diagram below:</p>

<p><img src="/assets/images/activity.png" alt="activity diagram" title="Activity diagram" /></p>

<p>Here is the code:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public abstract class AbstractTitle extends Observable implements Derivative
{
  protected TitleType titleType;

  protected AbstractTitle(TitleType titleType)
  {
    this.titleType = titleType;
  }

  public BigDecimal recalculate(CalculationData calculationData) throws IllegalTitleTypeException
  {
    BigDecimal titlePrice = performCalculations(calculationData);
    this.notifyObservers();
    return titlePrice;
  }

  public abstract BigDecimal performCalculations(CalculationData calculationData) throws IllegalTitleTypeException;
}
</code></pre></div></div>

<p>So, the <code class="language-plaintext highlighter-rouge">recalculate(...)</code> method is in charge of the price calculation and, since this process is different for each
type of derivative, the effective algorithm implementation is delegated to the <code class="language-plaintext highlighter-rouge">performCalculations(...)</code> method,
declared here as abstract. Then, it is the obligation of each subclass to implement this method. For example, here is
how it looks in the <code class="language-plaintext highlighter-rouge">Option</code> class:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Option extends AbstractTitle
{
  protected Option(TitleType titleType)
  {
    super(titleType);
  }

  // Option value: max(0, spotPrice - strikePrice) + timeValue
  @Override
  public BigDecimal performCalculations(CalculationData calculationData)
  {
    BigDecimal intrinsicValue = calculationData.spotPrice()
      .subtract(calculationData.strikePrice()).max(BigDecimal.ZERO);
    BigDecimal timeValue = calculationData.volatility()
      .multiply(calculationData.timeToExpiry())
      .multiply(new BigDecimal("0.1"));
    return intrinsicValue.add(timeValue);
  }

  @Override
  public TitleType getTitleType()
  {
    return TitleType.OPTION;
  }
}
</code></pre></div></div>

<p>What we’ve implemented here is the <em>template method</em> pattern, a close relative of the
<em>strategy</em> pattern: the invariant part of the algorithm lives in the abstract base class, while
the variable part is deferred to subclasses. But what about this
<code class="language-plaintext highlighter-rouge">Derivative</code> interface that can be seen in the class diagram above? Well, it allows us to handle
derivatives in a generic way. Now, given this new implementation, our derivative price calculation is as simple as this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
Derivative option = ... //some abstract factory to build options
option.recalculate(...);
...
</code></pre></div></div>
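
<p>The abstract factory itself isn’t shown in the article; a minimal sketch, assuming the subclasses suggested by the class diagram and a factory living in the same package as the protected constructors, could look like this. Note that a residual <code class="language-plaintext highlighter-rouge">switch</code>, confined to a single, well-identified creation point, is the usual price of this refactoring; the pricing logic itself stays polymorphic:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public final class TitleFactory
{
  private TitleFactory() {}

  // the only place where the derivative type is switched on
  public static Derivative of(TitleType titleType)
  {
    return switch (titleType)
    {
      case OPTION -&gt; new Option(titleType);
      case FUTURE -&gt; new Future(titleType);
      case FORWARD -&gt; new Forward(titleType);
      case SWAP -&gt; new Swap(titleType);
      case WARRANT -&gt; new Warrant(titleType);
    };
  }
}
</code></pre></div></div>

<p>With such a factory, the snippet above becomes <code class="language-plaintext highlighter-rouge">Derivative option = TitleFactory.of(TitleType.OPTION);</code>.</p>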

<p>Let’s now look more carefully at how we could further improve our design.</p>

<h2 id="refactoring-the-mammoth-step-2">Refactoring the mammoth: step 2</h2>

<p>Okay, so we refactored our initial <code class="language-plaintext highlighter-rouge">Title</code> class to replace the <code class="language-plaintext highlighter-rouge">switch..case</code> mammoth by polymorphism and,
as a result of this refactoring, we have 6 classes instead of one. Here, our friends, the KISS advocates, will throw up
their hands in horror. And they are right to point out that our design here doesn’t observe the KISS principle,
as it isn’t stupid at all. This refactoring is what they call <em>over-engineering</em> and they identify it as the most
serious issue in the software industry. As opposed to them, I think that the most serious issues of the software industry
are poor design and over-simplification, leading to stupid implementations and ugly code.</p>

<p>So, let’s now go one step further and apply the <em>registry</em> pattern as well. Look at the
<code class="language-plaintext highlighter-rouge">CalculationRegistry</code> class below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class CalculationRegistry&lt;T, R&gt;
{
  private final Map&lt;T, Function&lt;R, BigDecimal&gt;&gt; strategies = new HashMap&lt;&gt;();

  public void register(T key, Function&lt;R, BigDecimal&gt; strategyFunc)
  {
    strategies.put(key, strategyFunc);
  }

  public BigDecimal apply(T key, R request)
  {
    return strategies.getOrDefault(key, r -&gt; BigDecimal.ZERO).apply(request);
  }

  public void unregister(T key)
  {
    strategies.remove(key);
  }

  public void clear()
  {
    strategies.clear();
  }

  public Map&lt;T, Function&lt;R, BigDecimal&gt;&gt; getStrategies()
  {
    return strategies;
  }
}
</code></pre></div></div>

<p>This is a generic class parameterized with a derivative type and a calculation request type. The <code class="language-plaintext highlighter-rouge">T</code> generic
argument represents a derivative type, while the <code class="language-plaintext highlighter-rouge">R</code> one represents the request data passed to the strategy that
calculates the given derivative’s price. The class implements the <em>registry</em> pattern, quite commonly used for the kind of
processing we’re dealing with here, by maintaining a map which associates a calculation strategy with a derivative type. For example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CalculationRegistry&lt;TitleType, CalculationData&gt; calculationRegistry = new CalculationRegistry&lt;&gt;();
calculationRegistry.register(TitleType.OPTION, data -&gt;
{
  BigDecimal intrinsicValue = data.spotPrice()
    .subtract(data.strikePrice()).max(BigDecimal.ZERO);
  BigDecimal timeValue = data.volatility()
    .multiply(data.timeToExpiry())
    .multiply(new BigDecimal("0.1"));
  return intrinsicValue.add(timeValue);
});
</code></pre></div></div>

<p>The code above registers the strategy required for the price calculation of options. Now, in order to effectively
perform the operation:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>BigDecimal optionPrice = calculationRegistry.apply(TitleType.OPTION, testData);
</code></pre></div></div>

<p>This way we’re embracing the functional programming style available in Java since its 8th release. It allows us to
use strategies, i.e. methods, as data, by passing them as input arguments and storing them in maps. Don’t hesitate to
look at the unit test <code class="language-plaintext highlighter-rouge">TestCalculationRegistry</code>, to make sure you understand how everything works, and to run these tests.</p>
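
<p>Note, by the way, that the two refactoring steps compose nicely: since <code class="language-plaintext highlighter-rouge">performCalculations(...)</code> has exactly the shape of a <code class="language-plaintext highlighter-rouge">Function&lt;CalculationData, BigDecimal&gt;</code>, the polymorphic implementations from step 1 can be registered as method references. A small sketch, assuming an accessible constructor or factory:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CalculationRegistry&lt;TitleType, CalculationData&gt; registry = new CalculationRegistry&lt;&gt;();

// reuse the step 1 strategy instead of re-writing the lambda
Option option = new Option(TitleType.OPTION);
registry.register(TitleType.OPTION, option::performCalculations);

BigDecimal optionPrice = registry.apply(TitleType.OPTION, testData);
</code></pre></div></div>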

<p>Conclusion: while over-engineering might be an issue, leading to hard-to-maintain code, the most important
issues of the software industry are over-simplification and poor design.
So, keep it simple but not simplistic and, above all, not stupid !</p>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="OOP" /><category term="Blog" /><summary type="html"><![CDATA[The switch..case mammoth]]></summary></entry><entry><title type="html">SaC It Up: Dive Deep into DevSecOps with Java, Quarkus and Keycloak</title><link href="https://nicolasduminil.github.io/posts-archive/devsecops/" rel="alternate" type="text/html" title="SaC It Up: Dive Deep into DevSecOps with Java, Quarkus and Keycloak" /><published>2025-06-07T00:00:00+00:00</published><updated>2025-06-07T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/devsecops</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/devsecops/"><![CDATA[<h1 id="sac-it-up-dive-deep-into-devsecops-with-java-quarkus-and-keycloak">SaC It Up: Dive Deep into DevSecOps with Java, Quarkus and Keycloak</h1>

<p>In the ever-advancing world of the software industry, code and security often
feel like antagonistic forces. But what if security could move at the pace of code?
Enter <em>Security as Code</em> (SaC): the next phase of the DevSecOps evolution. By
including security rules, expectations and policies directly in the development lifecycle,
SaC shifts security left, turning traditionally manual and repetitive tasks into
automated, version-controlled artifacts. This isn’t just a technical upgrade
but a cultural change where developers, security teams and operations speak the
same language: code.</p>

<p>While software delivery pipelines progressively become automated, making security
an integral part of the code is no longer just a best practice, but a necessity. From
this perspective, SaC is considered the
natural progression of DevSecOps. Addressing security strategies, controls and policies
as development artifacts which can be tested, audited and versioned, just like infrastructure
and application code, not only scales better but also ensures security
is enforced consistently across environments and teams.</p>

<p>In this article, we’ll explore how SaC allows DevOps and DevSecOps teams to build fast,
to ensure appropriate security and to scale with confidence in modern cloud-native environments
based on the Java, Quarkus and Keycloak platforms.</p>

<h2 id="from-devops-to-sac-a-short-piece-of-history">From DevOps to SaC: a short piece of history</h2>

<p>The term DevOps, a contraction of “Development” and “Operations”, emerged
in the late 2000s as a response to the traditional software development process,
where development and IT operations were often disconnected. It aimed at breaking
down barriers between development and operations teams, so as to favor faster
deployment cycles, increased collaboration and automation of the software delivery
process through CI/CD platforms.</p>

<p>However, in those early days of DevOps, security was often an afterthought. Traditional
security practices were manual and slow, to such an extent that they became a bottleneck
in the DevOps process and in the whole development lifecycle. Starting in 2012,
organizations realized that security had to be integrated into the DevOps workflow.
The core idea was to “shift security left”, i.e. to bring it earlier in the
development process. A couple of years later, in 2015, the term DevSecOps was
coined, standing for “Development, Security and Operations”, as it made security
a shared responsibility across these three fields.</p>

<p>DevSecOps transformed security from a separate team’s post-deployment concern into
a shared responsibility, embedded from the start. IaC (<em>Infrastructure as Code</em>) was
one of the most important approaches to enforcing DevSecOps practices, by using the
power of programming languages to automate security strategies, configurations
and policies. And since a continuously increasing part of IaC was dedicated
to making security repeatable, scalable and version-controlled, this part ended up
becoming a discipline in itself and was named <em>Security as Code</em> (SaC).</p>

<p>Today SaC is considered the next logical maturity step for organizations
that have already embraced DevSecOps. As a matter of fact, if DevSecOps made security
collaborative and brought it into the earliest stages of the development lifecycle,
SaC makes it reliable, scalable and automated.</p>

<h2 id="the-iam-service-a-must-have-for-any-sac-platform">The IAM Service: a must-have for any SaC platform</h2>

<p>During the software development evolution from DevOps to DevSecOps and then to SaC,
the requirements around <em>Identity and Access Management</em> (IAM) services evolved as well.
Once a secondary concern handled by infrastructure or external providers, IAM services
gradually became first-class citizens of the development lifecycle, driven by the need for
automated, auditable and scalable security.</p>

<p>With IaC permitting the automated deployment of infrastructure at scale, managing
who can access what within these systems becomes more complex and more critical.
This is where IAM comes into the picture, providing the foundation for secure, role-based
control across cloud-native environments. In this context, IAM has become
an essential service of infrastructure automation,
ensuring that only the right users, services and machines have access to the
right resources.</p>

<p>With the growth of DevSecOps and IaC, development teams have increasingly
gotten into the habit of including security policies and controls earlier
in the development process. This meant delegating authentication and authorization
to centralized IAM platforms. By leveraging standards like OpenID Connect (OIDC)
and OAuth 2.0, these platforms allowed developers to offload security logic from
applications while still maintaining strict control over access policies. However,
manually configuring IAM, as was often the case, used to dramatically limit
its scalability and its repeatability.</p>

<p>SaC was the critical point at which IAM configurations were no longer manually
managed but instead treated as code. Among the many available IAM solutions,
Keycloak, a Red Hat open-source identity server, stands out as a powerful platform
designed to provide centralized authentication and authorization for modern
applications. With tools like Keycloak, teams can now codify realms, clients,
roles and access policies, store them in version control, and deploy them automatically
using tools like Terraform, Ansible or Helm. This approach makes IAM a fully
integrated, testable and repeatable infrastructure element which guarantees
consistent and traceable security across environments.</p>

<p>Keycloak’s extensive support for automation makes it an ideal candidate for IaC.
Its CLI, REST API and declarative configuration import/export features allow
organizations to enforce zero-trust principles, enable fine-grained access control
and maintain compliance, by incorporating Keycloak configurations into the broader
DevSecOps toolchain, without sacrificing development speed. This integration
ensures that identity policies evolve with the application, not as an afterthought
but as a core part of the software lifecycle.</p>

<h2 id="introducing-keycloak">Introducing Keycloak</h2>

<p>Keycloak is an IAM server dedicated to supplying identity services to modern applications
such as SPAs (<em>Single Page Applications</em>), mobile applications or REST APIs. Started
at Red Hat in 2014 as an open-source project, it has grown little by little into
a well-recognized product, with a solid community and a strong user base.</p>

<p>Keycloak supports industry-standard protocols like OAuth 2.0, OpenID Connect
and SAML 2.0, thereby sparing developers the need to master the full complexity
of the authentication and authorization process, by delegating this responsibility
to the server, while guaranteeing a high security level to applications that don’t
have access to the users’ credentials.</p>

<p>It is also important to mention that Keycloak provides a wide range of authentication
mechanisms, including but not limited to MFA (<em>Multi-Factor Authentication</em>) and
SA (<em>Strong Authentication</em>), using OTPs (<em>One-Time Passwords</em>),
security devices, WebAuthn, or a combination of them all. Thanks to its session management
capabilities, Keycloak is an SSO (<em>Single Sign-On</em>) service as well, allowing
users to access several applications while only having to authenticate once.</p>

<p>As with any IAM server, the notion of user is central to Keycloak but, as opposed to
other IAM servers, Keycloak comes with its own user database. For simplicity’s
sake and in order to avoid possible licensing issues, this default database is
a very simple H2 file-based one, which shouldn’t be used in production. Instead,
any other production-ready database, like Oracle, PostgreSQL, MySQL, MariaDB,
etc., may be configured. Additionally, Keycloak provides a strong caching layer
designed to avoid database hits as much as possible. And since the vast majority
of organizations use LDAP directories as their single source
of truth for user management and digital identities, Keycloak
supports integration with different LDAP directory implementations like Microsoft
Active Directory, Red Hat Directory Server, ApacheDS, OpenLDAP, etc.</p>

<h2 id="sac-and-keycloak">SaC and Keycloak</h2>

<p>While Keycloak itself is not a SaC tool, the way it and its configurations are
managed and deployed can absolutely become part of a SaC strategy. Its extensive support
for automation through its CLI, REST API and declarative import/export makes
it an ideal platform for SaC. Here are the most essential criteria showing how
Keycloak fits into SaC:</p>

<ol>
  <li>Configuration as code.
    <ul>
      <li>Defining realms, clients, roles, users, identity providers, etc. using <code class="language-plaintext highlighter-rouge">kcadm</code> scripts and storing these scripts in Git repositories.</li>
      <li>Defining realms, clients, roles, users, identity providers, etc. in JSON or YAML files, storing these files in Git repositories and importing them using the <code class="language-plaintext highlighter-rouge">kcadm</code> tool.</li>
      <li>Exporting Keycloak realm configurations as JSON files, storing them in Git repositories and re-importing them during deployments.</li>
    </ul>
  </li>
  <li>Automated deployment.
    <ul>
      <li>Using OCI (<em>Open Container Initiative</em>) compliant images to run Keycloak and dynamically apply configurations described by versioned scripts or JSON/YAML files.</li>
      <li>Using tools like Terraform, Ansible, Helm or Kubernetes Operators to deploy and configure Keycloak.</li>
    </ul>
  </li>
  <li>Programmatic access policies.
    <ul>
      <li>Using RBAC (<em>Role Based Access Control</em>) policies as JSR 250 annotations in versioned Java code, as shown in the sketch after this list.</li>
      <li>Using RBAC (<em>Role Based Access Control</em>) policies as versioned JSON files.</li>
    </ul>
  </li>
  <li>CI/CD integration.
    <ul>
      <li>Using CI/CD pipelines to automatically test and deploy security artifacts.</li>
      <li>Making security repeatable, auditable and scalable.</li>
    </ul>
  </li>
</ol>
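
<p>To illustrate the programmatic access policies of point 3, here is a minimal sketch of RBAC expressed as versioned Java code, with JSR 250 annotations on a JAX-RS resource; the resource and the role name are hypothetical:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import jakarta.annotation.security.PermitAll;
import jakarta.annotation.security.RolesAllowed;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/api")
public class GreetingResource
{
  @GET
  @Path("/public")
  @PermitAll                    // anonymous access allowed
  public String publicHello()
  {
    return "hello, anonymous";
  }

  @GET
  @Path("/secured")
  @RolesAllowed("manager")      // requires an access token carrying this role
  public String securedHello()
  {
    return "hello, manager";
  }
}
</code></pre></div></div>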

<p>One of the most classical scenarios of implementing SaC with Keycloak would probably
include the use of versioned <code class="language-plaintext highlighter-rouge">kcadm</code> scripts to create OAuth 2.0 clients, users,
groups, roles, authentication flows, authorization policies, etc., and applying them
automatically using OCI-compliant images or tools like Terraform, Ansible, Helm
or Kubernetes Operators. However, doing the same thing by manually clicking around
the Keycloak administration console, without versioning and documenting these
one-off changes, would <em>not</em> be SaC.</p>

<h2 id="running-keycloak">Running Keycloak</h2>

<p>While Keycloak may be installed locally, like any other software, by downloading
and uncompressing it, the easiest way to run it is as an OCI compliant image.</p>

<p>Here is how you can run it as a Docker image:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker run -d --name keycloak \
    --rm -e KEYCLOAK_ADMIN=admin \
    -e KEYCLOAK_ADMIN_PASSWORD=admin \
    -p 8080:8080 quay.io/keycloak/keycloak:latest start-dev
</code></pre></div></div>

<p>This command will pull the Docker image <code class="language-plaintext highlighter-rouge">quay.io/keycloak/keycloak:latest</code> from
the Red Hat repository, if it isn’t already present locally, and store it
there. Then, the Docker daemon will run it in the background (option <code class="language-plaintext highlighter-rouge">-d</code>), listening
for HTTP traffic on the container TCP port 8080, mapped onto the same TCP port of
the host (option <code class="language-plaintext highlighter-rouge">-p</code>). The temporary administrator user name, as well as the
associated password, are <code class="language-plaintext highlighter-rouge">admin</code> (options <code class="language-plaintext highlighter-rouge">-e KEYCLOAK_ADMIN</code> and
<code class="language-plaintext highlighter-rouge">-e KEYCLOAK_ADMIN_PASSWORD</code>). Finally, the name of the running container
is <code class="language-plaintext highlighter-rouge">keycloak</code> (option <code class="language-plaintext highlighter-rouge">--name</code>) and, when it is stopped,
the container will be automatically removed (option <code class="language-plaintext highlighter-rouge">--rm</code>).</p>

<p>You can check that everything is working as expected by executing the following
command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker images
REPOSITORY                     TAG               IMAGE ID       CREATED        SIZE
...
quay.io/keycloak/keycloak      latest            152827b20b9e   2 months ago   443MB
...
</code></pre></div></div>

<p>The output above shows that the Docker image <code class="language-plaintext highlighter-rouge">quay.io/keycloak/keycloak:latest</code>
was pulled and installed locally.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker ps
CONTAINER ID   NAMES      IMAGE                              PORTS                                        STATUS
...
ded779e9c153   keycloak   quay.io/keycloak/keycloak:latest   8443/tcp, 0.0.0.0:8080-&gt;8080/tcp, 9000/tcp   Up 6 seconds
...
</code></pre></div></div>

<p>Here you can see that the Docker container named <code class="language-plaintext highlighter-rouge">keycloak</code> is up and running.
You can see its log file as shown below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker logs keycloak --details --follow
Updating the configuration and installing your custom providers, if any. Please wait.
...
2025-05-15 16:06:34,954 INFO  [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 6489ms
Running the server in development mode. DO NOT use this configuration in production.
...
</code></pre></div></div>

<p>Last but not least, firing your preferred browser at http://localhost:8080 and
connecting as <code class="language-plaintext highlighter-rouge">admin/admin</code> in the login dialog shown below:</p>

<p><img src="/assets/images/keycloak1.png" alt="Keycloak logging" /></p>

<p>will allow you to access the Keycloak
administration console. This proves that your IAM server is fully operational.</p>

<h2 id="getting-started-with-keycloak-cli">Getting started with Keycloak CLI</h2>

<p>As already mentioned above, Keycloak comes with an administration console which
allows you to configure and manage the IAM server. But using this administration
console wouldn’t be a SaC compliant approach because, whatever you do with it:</p>

<ul>
  <li>isn’t repeatable;</li>
  <li>isn’t deterministic;</li>
  <li>isn’t versionable;</li>
  <li>isn’t auditable;</li>
  <li>isn’t documented;</li>
  <li>is repetitive;</li>
  <li>is error prone.</li>
</ul>

<p>So, here is where the Keycloak CLI (<em>Command Line Interface</em>) comes into play.</p>

<p>The Keycloak CLI consists of a <code class="language-plaintext highlighter-rouge">bash</code> script named <code class="language-plaintext highlighter-rouge">kcadm.sh</code>, found in the <code class="language-plaintext highlighter-rouge">bin</code>
directory of the server. You can run it as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ export PATH=$PATH:$KEYCLOAK_HOME/bin
$ kcadm.sh
</code></pre></div></div>

<p>Using this script, you can do whatever you would do by clicking around the administration
console, but in a controlled and fully SaC-compliant mode. Let’s now have a quick
overview of the most essential Keycloak concepts:</p>

<ul>
  <li><strong>Realms</strong>. A realm is a logical namespace grouping different security artifacts like applications, services, users, groups, roles, etc. They are isolated from one another and can only manage the artifacts that they control.</li>
  <li><strong>Clients</strong>. Before applications are able to use Keycloak services, they need to be registered first, as Keycloak clients. They represent basic entities that may request Keycloak authentication and authorization.</li>
  <li><strong>Users</strong>. The notion of users is the same as with any other kind of server, i.e. entities able to log in with Keycloak. They are stored in the Keycloak internal database or, in the case of user federation, in external LDAP directories. Users belong to and log into realms.</li>
  <li><strong>Groups</strong>. Users can be grouped in user groups. This facilitates the management of their common attributes.</li>
  <li><strong>Roles</strong>. Roles are permission types that can be defined at either the realm or the client level. They are assigned to specific users or user groups.</li>
  <li><strong>Role mappers</strong>. These Keycloak artifacts are used in order to assign roles, i.e. sets of permissions, to specific users or user groups.</li>
</ul>

<p>Now that we have a basic understanding of the most important Keycloak artifacts,
let’s dive into the writing of <code class="language-plaintext highlighter-rouge">kcadm</code> scripts that handle them. The first thing
to do when starting to use Keycloak as a security provider is to create a new
realm. Here are the required steps:</p>

<h3 id="configure-the-temporary-admin-credentials">Configure the temporary admin credentials.</h3>

<p>Keycloak comes with an already configured security realm named <code class="language-plaintext highlighter-rouge">master</code>. As its
name implies, it is the master of realms, the place where the server administrators
create the accounts allowing them to manage any other realm created on the same
server instance. So, it is used by the server itself and, while you could use it to
manage your own realm, this isn’t recommended.</p>

<p>When Keycloak is installed on-prem and, hence, an installation and configuration
process has been executed, the server comes with a default administrator in the
<code class="language-plaintext highlighter-rouge">master</code> realm. The default credentials for this administrator are <code class="language-plaintext highlighter-rouge">admin/admin</code>.
This isn’t the case when Keycloak is run as an OCI-compliant image: here, the
first thing to do, before getting access to the <code class="language-plaintext highlighter-rouge">master</code> realm, is to
set up its temporary credentials. With the Keycloak CLI, this can be done using the
following command:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kcadm.sh config credentials \
   --server &lt;server-url&gt; \
   --realm master \
   --user &lt;user-name&gt; \
   --password &lt;user-password&gt;
</code></pre></div></div>

<p>Of course, this command can only be executed once the Keycloak server has
started. Here <code class="language-plaintext highlighter-rouge">&lt;server-url&gt;</code> is the full URL of the Keycloak server, for example
http://localhost:8080, while the options <code class="language-plaintext highlighter-rouge">--user</code> and <code class="language-plaintext highlighter-rouge">--password</code> enable you to
define the user name and, respectively, the associated password of the <code class="language-plaintext highlighter-rouge">master</code>
realm administrator.</p>

<h3 id="creating-a-new-realm">Creating a new realm</h3>

<p>Having defined these temporary credentials, you can now create a new realm:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kcadm.sh create realms -s realm=&lt;realm-name&gt; -s enabled=true
</code></pre></div></div>

<p>Here the <code class="language-plaintext highlighter-rouge">-s</code> option, for <code class="language-plaintext highlighter-rouge">set</code>, allows you to set attribute values. In this
case, we’re creating a new realm whose name is defined by the <code class="language-plaintext highlighter-rouge">-s realm</code> option
and, since realms aren’t enabled by default, we need to enable it using the argument
<code class="language-plaintext highlighter-rouge">-s enabled=true</code>.</p>
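
<p>As mentioned earlier, the CLI isn’t the only automation channel: the same operations can be scripted against the Keycloak Admin REST API. Here is a minimal sketch using the <code class="language-plaintext highlighter-rouge">keycloak-admin-client</code> Java library, reusing the server URL and the temporary credentials configured above; error handling is omitted:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.representations.idm.RealmRepresentation;

public class CreateRealm
{
  public static void main(String[] args)
  {
    // equivalent of "kcadm.sh config credentials" against the master realm
    Keycloak keycloak = KeycloakBuilder.builder()
      .serverUrl("http://localhost:8080")
      .realm("master")
      .clientId("admin-cli")
      .username("admin")
      .password("admin")
      .build();

    // equivalent of "kcadm.sh create realms -s realm=myrealm -s enabled=true"
    RealmRepresentation realm = new RealmRepresentation();
    realm.setRealm("myrealm");
    realm.setEnabled(true);
    keycloak.realms().create(realm);
  }
}
</code></pre></div></div>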

<h3 id="creating-users">Creating users</h3>

<p>Now is the time to create the Keycloak users.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kcadm.sh create users \
   -r &lt;realm-name&gt; \
   -s username=&lt;user-name&gt; \
   -s enabled=true \
   -s "emailVerified=true" \
   -s "email=&lt;user-email&gt;" \
   -s "firstName=&lt;user-first-name&gt;" \
   -s "lastName=&lt;user-last-name&gt;"
$ kcadm.sh set-password -r &lt;realm-name&gt; \
    --username &lt;user-name&gt; \
    --new-password &lt;user-password&gt;
</code></pre></div></div>

<p>The sequence above creates a new user in the newly created realm and defines the
associated password. Note that users have several properties, like their
associated first and last name, as well as their email address. These properties
are initialized by means of <code class="language-plaintext highlighter-rouge">-s "name=value"</code> options. Also, the boolean
property <code class="language-plaintext highlighter-rouge">emailVerified</code> helps to define trusted users, whose email address has
been verified after their creation.</p>
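
<p>For completeness, here is what the same user creation could look like through the admin client library, continuing the sketch from the previous section; the user attributes are, of course, placeholders:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import jakarta.ws.rs.core.Response;
import org.keycloak.admin.client.CreatedResponseUtil;
import org.keycloak.admin.client.resource.RealmResource;
import org.keycloak.representations.idm.CredentialRepresentation;
import org.keycloak.representations.idm.UserRepresentation;

// "keycloak" is the admin client built in the previous sketch
RealmResource realm = keycloak.realm("myrealm");

UserRepresentation user = new UserRepresentation();
user.setUsername("john");
user.setEnabled(true);
user.setEmailVerified(true);
user.setEmail("john@acme.com");
user.setFirstName("John");
user.setLastName("Doe");
Response response = realm.users().create(user);
String userId = CreatedResponseUtil.getCreatedId(response);

// equivalent of "kcadm.sh set-password"
CredentialRepresentation password = new CredentialRepresentation();
password.setType(CredentialRepresentation.PASSWORD);
password.setValue("secret");
password.setTemporary(false);
realm.users().get(userId).resetPassword(password);
</code></pre></div></div>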

<h3 id="creating-clients">Creating clients</h3>

<p>Creating clients is a more complicated operation, due to the large number of
properties and parameters that need to be defined. This is why, in practice,
all these parameters and properties are stored in JSON files that are used as
input for <code class="language-plaintext highlighter-rouge">kcadm</code> commands. Here is an example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kcadm.sh create clients -r &lt;realm-name&gt; -f &lt;input-file&gt;
</code></pre></div></div>

<p>In this example, <code class="language-plaintext highlighter-rouge">&lt;input-file&gt;</code> is the full path of a local JSON file containing
the description of the new client that has to be created. We’ll come back later
with more details concerning the client types as well as their properties.</p>

<h3 id="creating-roles-and-assigning-them-to-users">Creating roles and assigning them to users</h3>

<p>In order to assign permissions to users, we use Keycloak roles. These roles
can be assigned, as explained, to users, in which case we’re talking about realm
roles, or to clients. Here is an example of creating a new realm role and
assigning it to a user:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ kcadm.sh create roles -r &lt;realm-name&gt; -s name=&lt;role-name&gt;
$ kcadm.sh add-roles --uusername &lt;user-name&gt; --rolename &lt;role-name&gt; -r &lt;realm-name&gt;
</code></pre></div></div>

<p>The sequence above creates a new role, whose name is defined by the option
<code class="language-plaintext highlighter-rouge">-s name=&lt;role-name&gt;</code>, in the realm whose name is defined by the option <code class="language-plaintext highlighter-rouge">-r &lt;realm-name&gt;</code>.
Then, this role is assigned to the user whose name is defined by the option
<code class="language-plaintext highlighter-rouge">--uusername &lt;user-name&gt;</code>.</p>
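
<p>And the admin client equivalent of this role sequence, still continuing the sketch above, might be:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.util.List;
import org.keycloak.representations.idm.RoleRepresentation;

// create a new realm role, then assign it to the user created earlier
RoleRepresentation role = new RoleRepresentation();
role.setName("manager");
realm.roles().create(role);

RoleRepresentation created = realm.roles().get("manager").toRepresentation();
realm.users().get(userId).roles().realmLevel().add(List.of(created));
</code></pre></div></div>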

<p>Once these steps are completed, we have a fairly complete new realm, providing
all the required artifacts, which should allow us to run the sample application,
as shown in the next section. Of course, the mentioned steps don’t have to be
executed manually: we’ll demonstrate how to automate them, in the most authentic
SaC way, using tools like <code class="language-plaintext highlighter-rouge">docker</code> and <code class="language-plaintext highlighter-rouge">docker-compose</code>, integrated with Quarkus.</p>

<h2 id="getting-started-with-the-sample-application">Getting started with the sample application.</h2>

<p>In order to illustrate all the concepts introduced above, we provide a sample
application, available here: https://github.com/nicolasduminil/iam.git.
It’s a Java application, using Quarkus, the famous supersonic and subatomic stack.
It consists of several Maven modules or subprojects, as follows:</p>

<ul>
  <li>The <code class="language-plaintext highlighter-rouge">front-end</code> Maven module which deploys a web application which uses the Jakarta Faces and PrimeFaces extension for Quarkus.</li>
  <li>The <code class="language-plaintext highlighter-rouge">back-end</code> Maven module which exposes a simple REST API invoked by the <code class="language-plaintext highlighter-rouge">front-end</code> module.</li>
  <li>The <code class="language-plaintext highlighter-rouge">infra</code> Maven module which orchestrates the other ones, including the Keycloak server.</li>
</ul>

<p>Let’s look in greater detail at each one of these modules.</p>

<h3 id="technical-requirements">Technical requirements</h3>

<p>The sample application is a Java application; accordingly, you need to have Java
21 or later installed on your box. You could use a different Java version but, in
this case, you need to slightly modify the master <code class="language-plaintext highlighter-rouge">pom.xml</code> file to
align it with your Java version.</p>

<p>You also need to have a local copy of the GitHub repository associated with the
project. If you have Git installed, you can clone the repository by running this
command in a terminal:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/nicolasduminil/iam.git
</code></pre></div></div>

<p>Alternatively, you can download a ZIP of the same repository mentioned above.</p>

<p>The sample application uses Keycloak as an IAM service; accordingly, you need
to have it running, either by downloading and installing it, or by running it
as an OCI-compliant image, using Docker, Podman or any other tool you prefer.
Here, we’re using Docker and, consequently, if you want to run the sample application
exactly as it is, you need a local Docker infrastructure.</p>

<p>And since we’re using Maven as our build engine, you need to have it installed
as well.</p>

<h3 id="understanding-the-sample-application">Understanding the sample application</h3>

<p>The sample application consists of two parts: a frontend web application and a
backend REST API.
The frontend web application is a classical web application, written in Java,
with Quarkus and the PrimeFaces extension for Quarkus.</p>

<blockquote>
  <p><strong><em>NOTE:</em></strong>  The fact of having written the web application in Java, with
PrimeFaces, which is an implementation of the Jakarta Faces specifications,
might be surprising. As a matter of fact, it would have been more usual to
write it in a JavaScript library, like Angular, Vue.js, etc.
The reason we did it this way is that Jakarta Faces is a great web framework
whose implementations offer hundreds of ready-to-use widgets and other visual
controls. Compared with Angular, where the visual components are a part of
external libraries, like Material, NG-Bootstrap, Clarity, Kendo, Nebular, and
many others, Jakarta Faces implementations not only provide way more widgets
and features, but are also part of the official JSR 372 specifications and,
in this respect, they are standard, as opposed to the mentioned libraries,
which evolve with their authors’ prevailing moods, without any guarantee of
consistency and stability.
For more arguments in choosing Jakarta Faces implementations for web applications
please see my article on DZone: https://shorturl.at/Iv01O.</p>
</blockquote>

<p>As we want to focus on the features that Keycloak, as an enterprise IAM service,
can offer, the sample application is very simple. Furthermore, to make it as simple
as possible to run it, we’re using Quarkus.</p>

<p>The web application demonstrates the following features:</p>

<ul>
  <li>It uses the Keycloak <code class="language-plaintext highlighter-rouge">discovery</code> endpoint.</li>
  <li>It uses the OAuth 2.0 <code class="language-plaintext highlighter-rouge">authorization code</code> grant type.</li>
  <li>It uses OpenID Connect to obtain an ID token for the given <code class="language-plaintext highlighter-rouge">authorization code</code>.</li>
  <li>It uses the OpenID Connect protocol to log in against the Keycloak service.</li>
  <li>It shows the ID and the access token.</li>
  <li>It refreshes the access token.</li>
  <li>It invokes the <code class="language-plaintext highlighter-rouge">userinfo</code> Keycloak endpoint and displays the returned data.</li>
  <li>It invokes the public and the secured backend endpoints using RBAC (<em>Role Based Access Control</em>).</li>
</ul>

<p>The backend REST API, also implemented with Quarkus, is very simple as well.
It exposes two endpoints:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">/public</code>: A publicly available endpoint with no security</li>
  <li><code class="language-plaintext highlighter-rouge">/secured</code>: A secured endpoint requiring an access token with the myrealm global role</li>
</ul>
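<p>To give an idea of how little code this requires, here is a minimal sketch of
what such a resource could look like with Quarkus and Jakarta REST. The class
and method names are illustrative, not the actual ones from the repository; only
the two paths and the <code class="language-plaintext highlighter-rouge">manager</code> role (discussed later) are taken from the
application:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import jakarta.annotation.security.PermitAll;
import jakarta.annotation.security.RolesAllowed;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/")
public class BackEndResource
{
  @GET
  @Path("public")
  @PermitAll // no security: anyone may call this endpoint
  public String publicEndpoint()
  {
    return "This is the public endpoint";
  }

  @GET
  @Path("secured")
  @RolesAllowed("manager") // requires an access token carrying the manager realm role
  public String securedEndpoint()
  {
    return "This is the secured endpoint";
  }
}
</code></pre></div></div>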

<p>Using Quarkus with its embedded Undertow web server allows us to make the code
as easy to understand and as simple to run as possible for anyone familiar with
the Java programming language. The following diagram shows the relationship between
the frontend, the backend, and the Keycloak service. The frontend authenticates
the users against the Keycloak server and then invokes the backend, which uses
the Keycloak-defined roles to validate the RBAC request:</p>

<p><img src="/assets/images/overview.png" alt="Application overview" /></p>

<p>Now let’s look in more detail at how all these pieces come together.</p>

<h3 id="running-the-sample-application">Running the sample application</h3>

<p>In order to run the sample application, once you’ve cloned the GitHub repository
associated with the project, all you need to do is execute the following Maven
commands:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd &lt;app-directory&gt;
$ mvn clean install
</code></pre></div></div>

<p>Of course, if that’s the first time you’re running the sample application, then
you don’t need the <code class="language-plaintext highlighter-rouge">clean</code> phase. You’ll see a whole crowd of Maven output lines
and, if everything is okay, you’ll see the following build result:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[INFO] IAM :: The Master POM .............................. SUCCESS [  0.258 s]
[INFO] IAM :: The Domain Module ........................... SUCCESS [  1.433 s]
[INFO] IAM :: The Common Module ........................... SUCCESS [  0.760 s]
[INFO] IAM :: The Back-End Module ......................... SUCCESS [ 13.560 s]
[INFO] IAM :: The Front-End Module ........................ SUCCESS [ 12.560 s]
[INFO] IAM :: The Infrastructure Module ................... SUCCESS [ 40.865 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
</code></pre></div></div>

<p>The durations shown above might be different in your case. Now, you can start the
<code class="language-plaintext highlighter-rouge">front-end</code> application by pointing your preferred browser at http://localhost:8082.
You should see the following welcome screen:</p>

<p><img src="/assets/images/welcome.png" alt="Welcome screen" /></p>

<p>I took the greatest care and attention while designing the <code class="language-plaintext highlighter-rouge">front-end</code>, not
only to showcase the most important Keycloak features, but also to demonstrate
the suitability of the Jakarta Faces-compliant implementations, in this case
PrimeFaces, for UI-based applications. And as you’ll see later, when we examine
the <code class="language-plaintext highlighter-rouge">front-end</code> details, this specification, together with its implementations,
provides a way more robust architecture than the one offered by JavaScript-based
libraries.</p>

<p>So, let’s start exploring this UI. A menu available in the menu bar allows you
to select the desired OAuth 2.0 grant type and offers the following options:</p>

<ul>
  <li>authorization code;</li>
  <li>resource owner password;</li>
  <li>client credentials.</li>
</ul>

<p>First, you need to know that the OAuth 2.0 protocol defines <em>grant types</em> as
standardized methods describing how a client application can obtain
authorization to access protected resources. They represent different flows
through which an application can receive an access token to act on behalf
of a user. RFC 6749 (https://datatracker.ietf.org/doc/html/rfc6749)
provides all the required details.</p>

<p>So, our sample application allows you to exercise all these grant types. To
begin, hover your mouse over the menu labeled <code class="language-plaintext highlighter-rouge">OAuth 2.0 Grant Types</code> and select
the first menu item, named <code class="language-plaintext highlighter-rouge">Authorization code</code>. You’ll see the following dialog
box in the lower part of the screen:</p>

<p><img src="/assets/images/discovery.png" alt="Discovery screen" /></p>

<p>As you can notice, several ordered steps are proposed to you, in the manner of
a simplified workflow. Start with the first one, labeled <code class="language-plaintext highlighter-rouge">Discovery</code>, and click
the button of the same name. A new input text area will be displayed, containing
all the functional endpoints proposed by the Keycloak server.</p>

<p>The <code class="language-plaintext highlighter-rouge">Discovery</code> function is an optional specification that an OAuth 2.0 provider
may or may not decide to implement. The idea comes from the necessity
to associate REST endpoints with the OAuth 2.0 standard features. Instead of
defining these endpoints at the specification level, which would certainly weigh
it down a lot, the implementors are free to craft them as they want. And since
they would differ from one implementation to another, the server has to
provide the <code class="language-plaintext highlighter-rouge">Discovery</code> endpoint which, when invoked, will return all the other
endpoints attached to the OAuth 2.0 standard operations.</p>

<p>In our case, you can find, in the newly displayed input text area labeled
<code class="language-plaintext highlighter-rouge">Keycloak OpenID Connect provider configuration</code>, the following entries:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">authorization_endpoint</code>: the URL to use for authentication requests;</li>
  <li><code class="language-plaintext highlighter-rouge">token_endpoint</code>: the URL to use for token requests;</li>
  <li><code class="language-plaintext highlighter-rouge">introspection_endpoint</code>: the URL to use for introspection requests;</li>
  <li><code class="language-plaintext highlighter-rouge">userinfo_endpoint</code>: the URL to use for UserInfo requests;</li>
  <li><code class="language-plaintext highlighter-rouge">grant_types_supported</code>: the list of supported grant types;</li>
  <li><code class="language-plaintext highlighter-rouge">response_types_supported</code>: the list of supported response types;</li>
  <li>etc.</li>
</ul>
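<p>Behind the scenes, this metadata comes from the standard OpenID Connect
discovery endpoint, <code class="language-plaintext highlighter-rouge">/.well-known/openid-configuration</code>. Should you want to
query it by hand, outside the sample application, a minimal Java sketch could
look like this (assuming, as in our setup, a Keycloak server listening on
localhost:8080 with a realm named <code class="language-plaintext highlighter-rouge">myrealm</code>):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DiscoveryClient
{
  public static void main(String[] args) throws Exception
  {
    // The standard OpenID Connect discovery endpoint exposed by Keycloak
    HttpRequest request = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:8080/realms/myrealm/.well-known/openid-configuration"))
      .GET()
      .build();
    // Prints the JSON document listing all the other functional endpoints
    System.out.println(HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
      .body());
  }
}
</code></pre></div></div>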

<p>Spend a short moment scrolling through the endpoints that the Keycloak server
makes available and that the <code class="language-plaintext highlighter-rouge">Discovery</code> function returns as metadata. Then, continue with
the next step of our workflow and click on the <code class="language-plaintext highlighter-rouge">Authentication</code> tab. The following
dialog box will be displayed:</p>

<p><img src="/assets/images/authentication.png" alt="Authentication screen" /></p>

<p>Here you need to provide all the information required by the <code class="language-plaintext highlighter-rouge">authorization
code</code> grant type. We’ll come back to this grant type, and all the others, in
finer detail; for now, just proceed as follows:</p>

<ul>
  <li>in the combo list box labeled <code class="language-plaintext highlighter-rouge">Client ID</code>, select <code class="language-plaintext highlighter-rouge">fe-facc</code>; this is the ID of the Keycloak client prepared on purpose for this kind of grant type (more on that later);</li>
  <li>select <code class="language-plaintext highlighter-rouge">code</code>, if not already selected, in the combo list box labeled <code class="language-plaintext highlighter-rouge">Response type</code>;</li>
  <li>accept the default value of <code class="language-plaintext highlighter-rouge">profile email</code> for the combo check box labeled <code class="language-plaintext highlighter-rouge">Scope</code>; the scope <code class="language-plaintext highlighter-rouge">openid</code> is mandatory for Keycloak, so it will be added automatically;</li>
  <li>accept the default value of <code class="language-plaintext highlighter-rouge">login</code> for the combo list box labeled <code class="language-plaintext highlighter-rouge">Prompt</code>;</li>
  <li>accept the default value of <code class="language-plaintext highlighter-rouge">3600</code> for the input text control named <code class="language-plaintext highlighter-rouge">Max age</code>;</li>
  <li>keep the input text control labeled <code class="language-plaintext highlighter-rouge">Login hint</code> empty or, if you prefer, type in <code class="language-plaintext highlighter-rouge">john</code>, which is the user name you need to use for authentication purposes.</li>
</ul>

<p>Now click on the <code class="language-plaintext highlighter-rouge">Generate</code> button and the following HTTP request will appear in
the input text area labeled <code class="language-plaintext highlighter-rouge">Authorization request</code>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://quarkus.oidc.client-idlocalhost:8080/realms/myrealm/protocol/openid-connect/auth
  client_id=fe-facc
  redirect_uri=http://localhost:8082/callback
  scope=profile+email+openid
  response_type=code
  prompt=login
  max_age=3600
</code></pre></div></div>

<p>This allows you to better understand how the <code class="language-plaintext highlighter-rouge">authorization code</code> grant type
works. Please notice that the <code class="language-plaintext highlighter-rouge">quarkus.oidc.client-idlocalhost</code> host name above
is the name that the DNS (<em>Domain Name System</em>) associates with <code class="language-plaintext highlighter-rouge">localhost</code>.</p>

<p>Now that you have seen what the <code class="language-plaintext highlighter-rouge">authorization code</code> request looks like, send
it by clicking the <code class="language-plaintext highlighter-rouge">Send authorization request</code> button. At this point, the
Keycloak service will take the helm and will display the login dialog. Type <code class="language-plaintext highlighter-rouge">john</code>
as the user name and <code class="language-plaintext highlighter-rouge">password1</code> as the password. The authentication process
against the Keycloak service should succeed and you should now see the response
to the <code class="language-plaintext highlighter-rouge">authorization code</code> request. It’s a very long character string without any
particular meaning other than the ability to be exchanged for an ID token.</p>

<p>Now, click on the <code class="language-plaintext highlighter-rouge">Token</code> tab and, in the newly displayed dialog box, click on
the <code class="language-plaintext highlighter-rouge">Send token request</code> button. You’ll be presented with three input text areas,
the first of which will contain the following HTTP request, sent to the Keycloak
server in order to obtain the access token:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://quarkus.oidc.client-idlocalhost:8080/realms/myrealm/protocol/openid-connect/token
  client_id=fe-facc
  redirect_uri=http://localhost:8082/callback
  scope=email+profile+openid+openid
  client_secret=********************************
  code=a647bb3d-3...
</code></pre></div></div>

<p>Please notice the token endpoint, which is <code class="language-plaintext highlighter-rouge">realms/myrealm/protocol/openid-connect
/token</code>. The <code class="language-plaintext highlighter-rouge">authorization code</code> provided in the request under the <code class="language-plaintext highlighter-rouge">code</code> parameter
has been truncated since it is too long and irrelevant for humans.</p>

<p>The 2nd input text area in the dialog is the JWT (<em>JSON Web Token</em>) header,
whose only fields meaningful to us are:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">alg</code>: the algorithm used for the token encoding which, in this case, is RS256;</li>
  <li><code class="language-plaintext highlighter-rouge">typ</code>: the type of the token which, in this case, is JWT.</li>
</ul>
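<p>For reference, once Base64URL-decoded, such a header is nothing more than a
small JSON document along these lines (the exact set of fields may vary from one
token to another):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "alg": "RS256",
  "typ": "JWT"
}
</code></pre></div></div>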

<p>Last but not least, the 3rd input text area contains the JWT payload, as shown
below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "exp": 1748451175,
  "iat": 1748450875,
  "jti": "f47e28a8-1106-43fa-bc55-29c69735d005",
  "iss": "http://localhost:8080/realms/myrealm",
  "aud": "fe-facc",
  "sub": "139d80d5-0cf9-4edb-a2ab-8aed2c121acd",
  "typ": "ID",
  "azp": "fe-facc",
  "sid": "a2fbba19-3ec2-45c4-866b-0179502d3a76",
  "at_hash": "SVG6Dl6cqfTy6IxzBy1urw",
  "email_verified": true,
  "realm_access": {
      "roles": [
          "default-roles-myrealm",
          "manager",
          "offline_access",
          "uma_authorization"
      ]
  },
  "name": "John Doe",
  "preferred_username": "john",
  "given_name": "John",
  "family_name": "Doe",
  "email": "john.doe@emailcom"
}
</code></pre></div></div>

<p>The listing above shows the structure of a JWT payload. The JSON elements that
you’re seeing are called <em>claims</em>. Here are the most important ones:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">iss</code>: this is the issuer URL, in our case the Keycloak realm;</li>
  <li><code class="language-plaintext highlighter-rouge">aud</code>: the audience; identifies the intended recipients or consumers of the token, essentially, who is meant to accept and process this token; it typically matched the <code class="language-plaintext highlighter-rouge">client_id</code>;</li>
  <li><code class="language-plaintext highlighter-rouge">typ</code>: the token type, in our case an OpenID Connect token;</li>
  <li><code class="language-plaintext highlighter-rouge">azp</code>: the authorized party; represents the party to whom the ID token was issued, in this case the OAuth 2.0 client having the ID <code class="language-plaintext highlighter-rouge">fe-facc</code>;</li>
  <li><code class="language-plaintext highlighter-rouge">realm-access</code>: the parent element encapsulating the properties which define the acess rules to a Keycloak realm;</li>
  <li><code class="language-plaintext highlighter-rouge">roles</code>: this is a Keycloak specific claim that represents the user’s realm-level roles. The roles <code class="language-plaintext highlighter-rouge">default-roles-myrealm</code>, <code class="language-plaintext highlighter-rouge">offline_access</code> and <code class="language-plaintext highlighter-rouge">uma_authorization</code> are standard, automatic roles, while <code class="language-plaintext highlighter-rouge">manager</code> is a custom one, used in our application, for RBAC purposes;</li>
</ul>

<p>The remaining claims, from <code class="language-plaintext highlighter-rouge">name</code> to <code class="language-plaintext highlighter-rouge">email</code>, are self-explanatory.</p>
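<p>By the way, there is nothing magic in decoding such a token: a JWT is just
three Base64URL-encoded segments, header, payload, and signature, separated by
dots. The following minimal sketch, which is not part of the sample application,
shows how the header and the payload above could be inspected by hand:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtInspector
{
  public static void main(String[] args)
  {
    // A JWT has the form header.payload.signature, each segment Base64URL-encoded
    String[] parts = args[0].split("\\.");
    Base64.Decoder decoder = Base64.getUrlDecoder();
    // Prints the JSON header, then the JSON payload (the signature stays binary)
    System.out.println(new String(decoder.decode(parts[0]), StandardCharsets.UTF_8));
    System.out.println(new String(decoder.decode(parts[1]), StandardCharsets.UTF_8));
  }
}
</code></pre></div></div>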

<p>Okay, so we got an authorization code, we exchanged it for an ID token on
behalf of the OpenID Connect protocol, by logging in to Keycloak as the user
<code class="language-plaintext highlighter-rouge">john</code>, and we examined the token’s content. As a JWT, the token has a header and a payload.</p>

<p>Let’s now try to refresh our access token. Click on the <code class="language-plaintext highlighter-rouge">Refresh</code> tab and, then,
on the <code class="language-plaintext highlighter-rouge">Send refresh request</code> button. The following dialog will be shown on the screen.</p>

<p><img src="/assets/images/refresh.png" alt="Refresh screen" /></p>

<p>Here you can see that, in order to refresh the JWT, the following request has
been sent to the Keycloak server:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://quarkus.oidc.client-idlocalhost:8080/realms/myrealm/protocol/openid-connect/token
  grant_type=refresh_token
  refresh_token=eyJhbGciOi...
  client_id=fe-facc
  client_secret=********************************
  scope=profile+email+openid
</code></pre></div></div>

<p>We’re passing the refresh token received during the initial authentication as a
request parameter, together with the client ID and secret. Also, please notice
that, this time, the grant type is <code class="language-plaintext highlighter-rouge">refresh_token</code>. And here is the server’s
response:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "access_token": "eyJhbGciOi...",
  "refresh_token": "eyJhbGciOi...",
  "refresh_expires_in": 1800,
  "not-before-policy": 0,
  "scope": "openid profile email",
  "id_token": "eyJhbGciOi...",
  "token_type": "Bearer",
  "session_state": "70abc3f9-75b1-45a6-893f-4a0ceae68c89",
  "expires_in": 300
}
</code></pre></div></div>

<p>In order to save space, we replaced the irrelevant token content with “…”. But
don’t be confused: even if the ID, access and refresh tokens all start with a
similar header, their full content isn’t the same.</p>
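<p>If you want to reproduce this exchange by hand, outside the sample application,
the following minimal sketch sends such a refresh request with the JDK’s HTTP
client. The refresh token and the client secret are passed as command-line
arguments since they are, of course, not reproduced here; the endpoint, the
client ID and the scope are the ones shown above:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RefreshTokenClient
{
  public static void main(String[] args) throws Exception
  {
    // args[0] is the refresh token, args[1] the client secret
    String form = "grant_type=refresh_token"
      + "&amp;refresh_token=" + args[0]
      + "&amp;client_id=fe-facc"
      + "&amp;client_secret=" + args[1]
      + "&amp;scope=profile+email+openid";
    // RFC 6749 requires the parameters to be form-encoded in the request body
    HttpRequest request = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:8080/realms/myrealm/protocol/openid-connect/token"))
      .header("Content-Type", "application/x-www-form-urlencoded")
      .POST(HttpRequest.BodyPublishers.ofString(form))
      .build();
    // The response is the JSON document shown above, with fresh token values
    System.out.println(HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
      .body());
  }
}
</code></pre></div></div>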

<p>Let’s see the <code class="language-plaintext highlighter-rouge">UserInfo</code> feature now. Remember that this endpoint is a standard
part of the OpenID Connect protocol, built as an identity layer on top of OAuth 2.0.
Click on the <code class="language-plaintext highlighter-rouge">UserInfo</code> tab and, then, on the button labeled <code class="language-plaintext highlighter-rouge">Send UserInfo
Request</code>. You’ll see the following request displayed:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>http://quarkus.oidc.client-idlocalhost:8080/realms/myrealm/protocol/openid-connect/
</code></pre></div></div>

<p>and the following response:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "sub": "25c7280c-4dc4-4fcf-b482-7693daa1971f",
  "email_verified": true,
  "realm_access": {
    "roles": [
        "default-roles-myrealm",
        "manager",
        "offline_access",
        "uma_authorization"
    ]
  },
  "name": "John Doe",
  "preferred_username": "john",
  "given_name": "John",
  "family_name": "Doe",
  "email": "john.doe@emailcom"
}
</code></pre></div></div>

<p>The last thing you can do is to invoke the backend service, by clicking on the
<code class="language-plaintext highlighter-rouge">Invoke service</code> tab and, then, on the <code class="language-plaintext highlighter-rouge">Invoke public service</code> and, respectively,
<code class="language-plaintext highlighter-rouge">Invoke secure service</code> buttons. The services’ response messages will be displayed,
proving this way that the RBAC works as expected. More on that later.</p>

<p>Okay, so we have walked through the OAuth 2.0 <code class="language-plaintext highlighter-rouge">authorization code</code> grant type;
let’s have a look now at the other two. In the menu bar, hover over the <code class="language-plaintext highlighter-rouge">OAuth 2.0 Grant
Types</code> menu and, this time, select the <code class="language-plaintext highlighter-rouge">Resource owner password</code> menu item.
You’ll see the following dialog box:</p>

<p><img src="/assets/images/ropc-token.png" alt="ROPC login screen" /></p>

<p>In this dialog box, you need to select the <code class="language-plaintext highlighter-rouge">fe-ropc</code> client ID in the combo list
box labeled <code class="language-plaintext highlighter-rouge">Client ID</code> and type the password <code class="language-plaintext highlighter-rouge">password1</code> in the text field
of the same name. Then click the <code class="language-plaintext highlighter-rouge">Send token request</code> button. You’ll see the
screen below:</p>

<p><img src="/assets/images/ropc-send-token.png" alt="ROPC send login screen" /></p>

<p>Now the JWT request, header and payload, that we have already discussed,
will be displayed. The <code class="language-plaintext highlighter-rouge">Invoke service</code> function will work now exactly as in the
case of the <code class="language-plaintext highlighter-rouge">authorization code</code> grant type.</p>

<p>Reset again and go to the <code class="language-plaintext highlighter-rouge">Client credentials</code> tab. Here, select the client ID
<code class="language-plaintext highlighter-rouge">fa-sac</code> and click the <code class="language-plaintext highlighter-rouge">Send token request</code> button. The same JWT request, header
and payload, that you have already seen several times, will be displayed
again.</p>

<p>This concludes our Keycloak showcase with the OpenID Connect protocol and the
OAuth 2.0 grant types.</p>

<blockquote>
  <p><strong><em>NOTE:</em></strong> During your exercises with the example application, you might spend
some time with different operations and your authorization code might expire.
Please notice that the property <code class="language-plaintext highlighter-rouge">max_age</code> of the <code class="language-plaintext highlighter-rouge">authorization_code</code> isn’t
related to the <code class="language-plaintext highlighter-rouge">authorization_code</code> validity but specifies the maximum time,
since the user’s last authentication, that the Keycloak server will accept.
The <code class="language-plaintext highlighter-rouge">authorization_code</code> validity is much shorter, usually around 30 to 60 seconds. So,
should you spend longer than that on different operations, you need to
either get a new <code class="language-plaintext highlighter-rouge">authorization code</code> and, then, refresh the tokens, or
simply restart the applications using the command: <code class="language-plaintext highlighter-rouge">mvn -pl infra exec:exec@restart</code>.
This command will restart your containers. If you prefer to stop your Keycloak
service, start it again and reconfigure the realm, then the following
command is for you: <code class="language-plaintext highlighter-rouge">mvn -pl infra exec:exec@stop exec:exec@start</code>.</p>
</blockquote>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Quarkus" /><category term="Keycloak" /><category term="Security" /><category term="DevOps" /><category term="DevSecOps" /><category term="Blog" /><summary type="html"><![CDATA[SaC It Up: Dive Deep into DevSecOps with Java, Quarkus and Keycloak]]></summary></entry><entry><title type="html">Concurrency and Parallelism in Java - Part 2</title><link href="https://nicolasduminil.github.io/posts-archive/concurrency-and-parallelism-2/" rel="alternate" type="text/html" title="Concurrency and Parallelism in Java - Part 2" /><published>2025-02-10T00:00:00+00:00</published><updated>2025-02-10T13:05:34+00:00</updated><id>https://nicolasduminil.github.io/posts-archive/concurrency-and-parallelism-2</id><content type="html" xml:base="https://nicolasduminil.github.io/posts-archive/concurrency-and-parallelism-2/"><![CDATA[<h1 id="concurrency-and-parallelism-in-java-part-2">Concurrency and Parallelism in Java (Part 2)</h1>

<p>In a <a href="http://www.simplex-software.fr/posts-archive/concurrency-and-parallelism/">previous post</a>, we’ve been looking at a couple of interesting
aspects concerning concurrency and parallelism in Java. We’ve seen that, with
parallel processing, the maximum number of parallel tasks that can be executed
at any given moment,
i.e. the maximum number of simultaneously running platform threads,
is equal to the number of available CPU cores. If the number of currently
active threads is greater than the number of available CPU cores, then the number
of threads waiting for resources will be equal to the difference between the total
number of active threads and the number of available CPU cores.</p>

<p>In conclusion, the greater the difference between the number of currently active
threads and the number of available CPU cores, the greater the number of blocked
threads waiting for the CPU. But would this impact the application’s overall
performance and, if yes, how?</p>

<p>One of the most common ways to address parallel processing in Java is through
the <a href="https://en.wikipedia.org/wiki/Work_stealing">work stealing</a> design pattern
implemented by the <code class="language-plaintext highlighter-rouge">ForkJoinPool</code>. Java provides two categories of <code class="language-plaintext highlighter-rouge">ForkJoinPool</code>:</p>

<ul>
  <li>the common <code class="language-plaintext highlighter-rouge">ForkJoinPool</code>, a JVM-wide pool shared across the whole application and used by default by parallel streams;</li>
  <li>customized <code class="language-plaintext highlighter-rouge">ForkJoinPool</code>s, created explicitly for specific use cases and supposed to provide more control over resource usage.</li>
</ul>

<p>The Java common <code class="language-plaintext highlighter-rouge">ForkJoinPool</code> parallelism level defaults to a number of threads equal to the
number of available CPU cores minus 1. It is configurable and can be set via the system property
<code class="language-plaintext highlighter-rouge">java.util.concurrent.ForkJoinPool.common.parallelism</code>. For example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "16");
</code></pre></div></div>

<p>sets the Java common <code class="language-plaintext highlighter-rouge">ForkJoinPool</code> maximum number of parallel threads to 16. Note
that this property is read when the common pool is initialized, so it must be set
before the pool is first used.</p>

<p>As for the customized <code class="language-plaintext highlighter-rouge">ForkJoinPool</code>s, their parallelism level is initialized at
instantiation time, via an input argument. This input argument is optional
and, if missing, the parallelism defaults to the number of available CPU cores,
i.e. <code class="language-plaintext highlighter-rouge">Runtime.getRuntime().availableProcessors()</code>.</p>

<p>Now, an interesting question is: what’s the relationship between
this parallelism level and the number of the platform’s available CPU cores? The
typical recommendation
would be to set the parallelism level to the number of cores, or slightly higher.
For example, for CPU-intensive tasks, set the parallelism level to the number
of cores, while for I/O-intensive tasks, set it according to the formula below:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>parallelism = number of cores * waiting time / service time
</code></pre></div></div>

<p>Here <code class="language-plaintext highlighter-rouge">waiting time</code> is the time spent waiting for I/O operations (like network
calls, disk operations, etc.) and <code class="language-plaintext highlighter-rouge">service time</code> is the actual CPU processing
time. For example, let’s say that our task makes a database query which takes
100 ms, after which it processes the results for 20 ms. Then, on an 8-core machine:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>paralallism = 8 * (1 + 100/20) = 48
</code></pre></div></div>

<p>The reasoning behind this formula is:</p>

<ul>
  <li>During I/O operations, CPU cores are idle.</li>
  <li>While one thread is waiting for I/O, other threads can use the CPU.</li>
  <li>The ratio (waiting time / service time) helps determine how many additional threads can effectively use the CPU during I/O waits.</li>
  <li>A higher waiting-to-service time ratio justifies more threads since cores would otherwise be idle during I/O waits.</li>
</ul>

<p>Let’s look at an implementation trying to simulate such a computing model (the code is available in the <a href="https://github.com/nicolasduminil/concurrency-and-parallelism-in-java.git">GitHub repository</a>):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class TestForkJoinPool
{
  private static final Logger LOG = Logger.getLogger(TestForkJoinPool.class.getName());
  private static final int CORES = Runtime.getRuntime().availableProcessors();
  private static final double WAITING_TIME = 100;
  private static final double SERVICE_TIME = 20;
  private static final int OPTIMAL_PARALLELISM = (int) (CORES * (1 + WAITING_TIME / SERVICE_TIME));

  @Test
  public void testForkJoinPool() throws Exception
  {
    LOG.info("&gt;&gt;&gt; Setting parallelism to %s for %d available CPU cores (waiting/service ratio: %.1f)"
     .formatted(OPTIMAL_PARALLELISM, CORES, WAITING_TIME / SERVICE_TIME));
    Instant start = Instant.now();
    try (var pool = new ForkJoinPool(OPTIMAL_PARALLELISM))
    {
      pool.submit(() -&gt; run()).get();
    }
    Duration duration = Duration.between(start, Instant.now());
    LOG.info("Threads: %d, Duration: %d ms"
      .formatted(OPTIMAL_PARALLELISM, duration.toMillis()));
  }

  private static void run()
  {
    long count = Stream.generate(() -&gt;
    {
      try
      {
        TimeUnit.MILLISECONDS.sleep(100);
        simulateCpuWork(20);
        return 1;
      }
      catch (InterruptedException e)
      {
        throw new RuntimeException(e);
      }
    })
    .parallel()
    .limit(10)
    .count();
  }

  private static void simulateCpuWork(long milliseconds)
  {
    long startTime = System.nanoTime();
    double result = 0;
    while (System.nanoTime() - startTime &lt; milliseconds * 1_000_000)
      result += Math.sin(result) + Math.cos(result);
  }
}
</code></pre></div></div>

<blockquote>
  <p><strong><em>NOTE:</em></strong>  In this example we’ve used a custom <code class="language-plaintext highlighter-rouge">ForkJoinPool</code> but the test
result would have been the same for the common <code class="language-plaintext highlighter-rouge">ForkJoinPool</code>.</p>
</blockquote>

<p>In the code above, we’re simulating 10 operations, each consisting of an
I/O-intensive part taking 100 ms and a CPU-intensive one taking 20 ms,
so a total sequential duration of 1 200 ms.</p>

<p>Running this test on my laptop, which has 8 available CPU cores, I get this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Feb 10, 2025 2:40:48 PM fr.simplex_software.workshop.tests.TestForkJoinPool testForkJoinPool
INFO: &gt;&gt;&gt; Setting parallelism to 48 for 8 available CPU cores (waiting/service ratio: 5.0)
Feb 10, 2025 2:40:48 PM fr.simplex_software.workshop.tests.TestForkJoinPool testForkJoinPool
INFO: Threads: 48, Duration: 255 ms
</code></pre></div></div>

<p>As you can see, it takes 255 ms to perform the 10 operations whose total sequential
duration is 1 200 ms. In order to check
the validity of the mentioned formula, I instantiated the <code class="language-plaintext highlighter-rouge">ForkJoinPool</code> with a non-optimal
number of threads, for example:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
try (var pool = new ForkJoinPool(1))
...
</code></pre></div></div>

<p>This time the total duration was 1 445 ms, i.e. more than 5 times slower,
meaning that the optimal setting completes the work much faster. But what happens
if I set the parallelism level to a number of threads higher than the optimal one?
For example, doing:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
try (var pool = new ForkJoinPool(64))
...
</code></pre></div></div>

<p>I was expecting to see degraded performance but, surprisingly, the test performs
almost as fast as when using the optimal value or, in the worst case, just a little
bit slower.</p>

<p>In order to illustrate this let’s modify the test as follows:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Test
public void compareThreadCounts() throws Exception
{
  int[] threadCounts = {
    1,
    CORES,
    OPTIMAL_PARALLELISM,
    OPTIMAL_PARALLELISM * 2,
    OPTIMAL_PARALLELISM * 4
  };

  for (int threadCount : threadCounts)
  {
    LOG.info("&gt;&gt;&gt; Have set parallelism to %s for %d available CPU cores (waiting/service ratio: %.1f)"
      .formatted(threadCount, CORES, WAITING_TIME / SERVICE_TIME));
    Instant start = Instant.now();
    try (var pool = new ForkJoinPool(threadCount))
    {
      pool.submit(() -&gt; run()).get();
    }
    Duration duration = Duration.between(start, Instant.now());
    LOG.info("Threads: %d, Duration: %d ms"
      .formatted(threadCount, duration.toMillis()));
  }
}
...
</code></pre></div></div>

<p>Running this test on my machine I’m getting the following results:</p>

<table>
  <thead>
    <tr>
      <th>Nb. of threads</th>
      <th>Duration</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td>1443 ms</td>
    </tr>
    <tr>
      <td>8</td>
      <td>361 ms</td>
    </tr>
    <tr>
      <td>48</td>
      <td>253 ms</td>
    </tr>
    <tr>
      <td>96</td>
      <td>264 ms</td>
    </tr>
    <tr>
      <td>192</td>
      <td>279 ms</td>
    </tr>
  </tbody>
</table>

<p>As you can see, the best result is obtained for the optimal parallelism level.
When setting it to lower values, the test is significantly slower, but when
setting it to higher values, the test is just a bit slower.</p>

<p>In conclusion, this formula works but, before using it, you need to take into
account the fact that it isn’t a panacea. It’s based on idealized assumptions
about workload distribution,
and real-world performance can vary due to many factors.</p>

<p>Accordingly, the following best practices have to be observed when applying it:</p>

<ol>
  <li>Use the formula as a starting point, not a fixed rule.</li>
  <li>Benchmark with your specific workload.</li>
  <li>Monitor system resources (CPU, memory, etc.).</li>
  <li>Consider implementing adaptive thread pool sizing.</li>
  <li>Watch for signs of thread contention.</li>
</ol>

<p>The optimal parallelism level is more of a minimum threshold for good performance
rather than a strict maximum. As long as you’re not seeing degraded performance
or resource exhaustion, having more threads than the calculated optimal can be
perfectly fine.</p>

<p>If you’re interested in these topics then you might like <a href="https://shorturl.at/ohTjM"><img src="/assets/images/executors.jpg" alt="50 Shades of Java Executors" /></a></p>]]></content><author><name>Nicolas DUMINIL</name></author><category term="Java" /><category term="Concurrency" /><category term="Parallelism" /><category term="Blog" /><summary type="html"><![CDATA[Concurrency and Parallelism in Java (Part 2)]]></summary></entry></feed>