Containerizing Enterprise Java: From WAR to Docker
Patterns and pitfalls of containerizing legacy Java WAR applications running on JBoss and WebLogic
Enterprise Java is where containers go to earn their keep. It is easy to containerize a Node.js service or a Go binary. Those are simple processes with minimal runtime dependencies. But take a 15-year-old Java application deployed as a WAR file on JBoss EAP, dependent on JNDI datasources, JMS queues, and a dozen server-level configuration files, and suddenly containerization becomes a genuine engineering challenge.
I have spent the last several months leading exactly this effort at a major entertainment company, and the lessons have been hard-won. This is a practical guide for teams facing the same problem.
Understanding the Legacy Stack
The typical enterprise Java application we encountered was not a Spring Boot service with an embedded Tomcat. It was a Java EE application packaged as a WAR file, deployed into JBoss EAP 6.4 (Red Hat's commercially supported application server, whose upstream project is now known as WildFly), relying on the application server for:
- JNDI datasources: database connections configured in the server, looked up by name in the application
- JMS queues: message-driven beans consuming from queues configured in the server
- Security realms: LDAP authentication configured at the server level
- Shared libraries: common JARs deployed as JBoss modules, shared across applications
- JVM tuning: garbage collection flags, heap sizes, and thread pool configurations baked into server startup scripts
The application and the server were deeply coupled. You could not run one without the other. This is by design in the Java EE model, and it is the source of most containerization complexity.
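To make the coupling concrete: a legacy WAR typically declares its datasource dependency in WEB-INF/web.xml and resolves it by JNDI name at runtime, while the actual connection details live entirely in the server configuration. The names below are illustrative, not from our actual application:

```xml
<!-- WEB-INF/web.xml: the application names the dependency... -->
<resource-ref>
    <res-ref-name>jdbc/CommerceDS</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>
<!-- ...but the connection URL, credentials, and pool settings live in
     the server's standalone.xml, outside the WAR entirely. -->
```

Deploy the same WAR into a server that lacks the matching datasource definition and it fails at startup, which is exactly why the WAR alone is not a complete deployable unit.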
The Dockerfile: Embedding the Application Server
The fundamental decision is whether to treat the application server as infrastructure (managed separately from the application) or as part of the application's runtime (embedded in the container). We chose the latter. In a containerized world, the application server is just another dependency, no different from the JDK or a native library.
FROM registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.9
# Copy server configuration
COPY configuration/standalone-full.xml \
/opt/eap/standalone/configuration/standalone-full.xml
# Copy JBoss modules (shared libraries, JDBC drivers)
COPY modules/ /opt/eap/modules/
# Deploy the WAR
COPY target/commerce-app.war /opt/eap/standalone/deployments/
# JVM tuning via environment variables
ENV JAVA_OPTS="-Xms512m -Xmx2048m \
-XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-Djboss.bind.address=0.0.0.0 \
-Djboss.bind.address.management=0.0.0.0"
EXPOSE 8080 9990
The Red Hat base image includes JBoss EAP preconfigured for OpenShift, but the patterns apply regardless of the base image. The key insight is that the standalone configuration file, the modules directory, and the WAR file together constitute the complete deployable unit.
Externalizing Configuration
Legacy Java applications love configuration files. Properties files, XML files, YAML files, scattered across the filesystem. In a container world, configuration needs to come from the environment, not from files baked into the image.
We adopted a three-tier approach:
Tier 1: Environment variables for simple values. Database URLs, feature flags, service endpoints. JBoss supports variable substitution in standalone.xml:
<connection-url>${env.DATABASE_URL}</connection-url>
<user-name>${env.DATABASE_USER}</user-name>
<password>${env.DATABASE_PASSWORD}</password>
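In context, those substitutions sit inside a full datasource definition in standalone-full.xml. A minimal sketch, with a hypothetical JNDI name, pool name, and driver name:

```xml
<datasource jndi-name="java:jboss/datasources/CommerceDS"
            pool-name="CommerceDS" enabled="true">
    <!-- Resolved from the container environment at server startup -->
    <connection-url>${env.DATABASE_URL}</connection-url>
    <driver>oracle</driver>
    <security>
        <user-name>${env.DATABASE_USER}</user-name>
        <password>${env.DATABASE_PASSWORD}</password>
    </security>
</datasource>
```

The same image then connects to a different database in each environment purely by changing the injected variables.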
Tier 2: Mounted ConfigMaps for complex configuration. Some configuration was too complex for environment variables. LDAP realm configurations, JMS queue definitions, and logging configurations were mounted as volumes into the container at the expected filesystem paths.
Tier 3: Secrets management for credentials. Database passwords, API keys, and SSL certificates were stored in Kubernetes Secrets (encrypted with Sealed Secrets in Git) and mounted as files or injected as environment variables.
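The three tiers come together in the pod spec. A hedged sketch of what this looked like for us (image name, ConfigMap, and Secret names are hypothetical):

```yaml
containers:
  - name: commerce-app
    image: registry.example.com/commerce-app:1.4.2
    env:
      # Tier 1: simple values as plain environment variables
      - name: DATABASE_URL
        value: jdbc:oracle:thin:@db.internal:1521/COMMERCE
      # Tier 3: credentials injected from a Kubernetes Secret
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: commerce-db
            key: password
    volumeMounts:
      # Tier 2: complex configuration mounted at the expected path
      - name: logging-config
        mountPath: /opt/eap/standalone/configuration/logging.properties
        subPath: logging.properties
volumes:
  - name: logging-config
    configMap:
      name: commerce-logging
```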
The goal was a single container image that could run in any environment, from local development to production, with the environment determined entirely by external configuration.
JVM Memory in Containers
This topic deserves its own section because it has caused more production incidents than any other containerization issue I have seen.
The JVM, particularly older versions, does not understand container memory limits. A JVM running inside a container with a 2GB memory limit sees the host machine's total memory (say, 64GB) and sizes its heap accordingly. The container runtime then OOM-kills the process when it exceeds 2GB, and you get cryptic exit codes with no useful diagnostics.
For JDK 8 (update 131 and later), the experimental CGroup memory awareness flag helps:
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
For JDK 10 and later, container awareness is on by default, and the non-experimental -XX:+UseContainerSupport flag was backported to JDK 8 in update 191. But many enterprise applications are stuck on older JDK 8 updates that support neither flag, leaving explicit heap sizing as the only safe option.
Our rule of thumb: set the JVM heap to approximately 75% of the container memory limit. The remaining 25% accommodates the metaspace, thread stacks, native memory, and the JVM's own overhead. For a container with a 2GB limit:
-Xms512m -Xmx1536m -XX:MaxMetaspaceSize=256m
This leaves roughly 256MB for native memory and thread stacks, which is tight but workable for most applications.
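Rather than hardcoding the heap for every memory limit, the 75% rule can be applied in the container entrypoint. A minimal sketch, assuming a hypothetical CONTAINER_MEMORY_MB variable (in Kubernetes this can be injected via the Downward API's resourceFieldRef):

```shell
#!/bin/sh
# 75% of the container limit, in integer megabytes.
heap_for_limit() {
    echo $(( $1 * 3 / 4 ))
}

# CONTAINER_MEMORY_MB is assumed to be injected by the orchestrator;
# default to 2048 for local runs.
CONTAINER_MEMORY_MB="${CONTAINER_MEMORY_MB:-2048}"
HEAP_MB="$(heap_for_limit "$CONTAINER_MEMORY_MB")"
export JAVA_OPTS="-Xms512m -Xmx${HEAP_MB}m -XX:MaxMetaspaceSize=256m"
echo "$JAVA_OPTS"
# exec /opt/eap/bin/standalone.sh -c standalone-full.xml
```

For a 2048MB limit this computes -Xmx1536m, matching the flags above, and the same image works unchanged when the limit is raised.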
JDBC Drivers and JBoss Modules
JBoss uses a module system for shared libraries, and JDBC drivers are typically installed as modules. This means the Oracle JDBC driver is not in the WAR file; it is in a JBoss module directory with its own module.xml descriptor.
modules/
  com/
    oracle/
      jdbc/
        main/
          ojdbc8.jar
          module.xml
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="com.oracle.jdbc">
    <resources>
        <resource-root path="ojdbc8.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
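The module name is what the server configuration refers to when it declares the JDBC driver. A sketch of the corresponding driver declaration in standalone-full.xml (the xa-datasource-class shown is the standard Oracle one, but verify it against your driver version):

```xml
<drivers>
    <driver name="oracle" module="com.oracle.jdbc">
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
    </driver>
</drivers>
```

Datasources then reference the driver by its name ("oracle"), never by JAR path, which is why the module directory must travel with the image.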
This module directory gets copied into the container image. It is an additional artifact that must be versioned and managed alongside the WAR file. We stored it in the same Git repository as the Dockerfile and treated it as part of the build.
Health Checks That Actually Work
JBoss provides a management interface on port 9990 that can be used for health checks, but it requires authentication by default. For container health checks, we added a simple servlet that reported application readiness:
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/health")
public class HealthCheckServlet extends HttpServlet {

    // Injected from the server-managed JNDI datasource
    // (the lookup name here is an example; use your own)
    @Resource(lookup = "java:jboss/datasources/CommerceDS")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try (Connection conn = dataSource.getConnection()) {
            conn.createStatement().execute("SELECT 1 FROM DUAL");
            resp.setStatus(200);
            resp.getWriter().write("OK");
        } catch (SQLException e) {
            resp.setStatus(503);
            resp.getWriter().write("Database unavailable");
        }
    }
}
The health check validates not just that the JVM is running, but that the application can reach its database. This distinction matters: a JVM that is alive but cannot connect to its database is not healthy, and the orchestrator should replace it.
Startup Time: The Hidden Problem
JBoss EAP takes time to start. Scanning deployments, initializing subsystems, establishing connection pools, and deploying the WAR file can take 60 to 120 seconds for a complex application. In a world where Kubernetes expects containers to be ready in seconds, this creates real problems.
Liveness probes that fire too early will kill the container before it finishes starting, creating a restart loop. We used Kubernetes startup probes (introduced as an alpha feature in Kubernetes 1.16, which was freshly released at the time) to handle this:
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
This gives the application 300 seconds (30 failures times 10 seconds) to start before Kubernetes considers it failed. Once the startup probe succeeds, the liveness probe takes over with shorter intervals.
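The liveness and readiness probes that run after startup can then use tight intervals. A typical configuration (the thresholds below are reasonable starting points, not universal values):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 5
  failureThreshold: 2
```

A failed readiness probe removes the pod from the service's load balancer; only a failed liveness probe restarts it, so the readiness thresholds can be stricter.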
The Migration Path
Not every WAR application should remain a WAR application forever. Containerization is often the first step in a longer modernization journey. Once the application runs in a container, you have options:
- Migrate from JBoss to embedded Tomcat or Undertow. Remove the application server dependency entirely by embedding the servlet container in the application.
- Extract services. Identify bounded contexts within the monolith and extract them as independent microservices.
- Upgrade the JDK. Containerization often motivates JDK upgrades because you control the runtime entirely.
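The shape of the first option is worth seeing: with an embedded server, the HTTP listener lives inside the application process and there is no external JBoss instance to configure or deploy into. Undertow or embedded Tomcat would be the real choice; the JDK's built-in com.sun.net.httpserver is used here only to keep the sketch dependency-free:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class EmbeddedServer {

    // Body served by the /health endpoint.
    static String healthBody() {
        return "OK";
    }

    public static void main(String[] args) throws Exception {
        // The server is just another object the application creates:
        // no standalone.xml, no deployments directory, no modules.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = healthBody().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // serves until the process is stopped
    }
}
```

The container then runs a plain `java -jar` process, and everything the earlier sections externalized (heap sizing, configuration, probes) applies unchanged.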
We treated containerization as the prerequisite for all other modernization work. Getting the application into a container, running in Kubernetes, with CI/CD and proper health checks, created the foundation for everything that followed.
The work is unglamorous. There are no conference talks about fixing JNDI lookups in Docker containers. But for enterprises with decades of Java applications, this is the work that unlocks the cloud.