Java Containerization

Section 1

Installing Docker Manually

You can refer to Docker’s official documentation for detailed installation instructions. For Ubuntu for example, please refer to this document: https://docs.docker.com/engine/installation/ubuntulinux/

You can use these simple commands to install Docker:

apt-get update
apt-get -y install wget
wget -qO- https://get.docker.com/ | sh
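
Once the script completes, it is worth a quick sanity check before moving on. The commands below are a generic verification step (not part of the official installation instructions):

docker --version
docker info
docker run --rm hello-world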

Section 2

Provision a Docker-enabled Linux Host on Any Cloud

Sign Up on http://dchq.io for a free account

  • Register a Cloud Provider – navigate to Cloud Providers and register an end-point for one of the following: VMware vSphere, OpenStack, Cloudstack, AWS, Google Compute Engine, Microsoft Azure, Rackspace, DigitalOcean, IBM SoftLayer, and others.
  • Create a Cluster – navigate to Clusters and create a new cluster with Docker networking selected
  • Provision a Docker-enabled Host – navigate to Machines and provision a Docker-enabled Linux host on the cloud provider & cluster of your choosing.

You can refer to the detailed documentation here: http://dchq.co/docker-infrastructure-as-a-service.html

Deploying a simple Tomcat container with a sample Java WAR file using the Docker CLI

Section 3

Create a Docker Hub account for storing your images

Section 4

Build a custom Tomcat image with a sample Java WAR file on your Linux machine

Clone this GitHub project.

git clone https://github.com/dchqinc/basic-docker-tomcat-example.git

Build your own custom Tomcat image using the Dockerfile cloned from GitHub and push the image to your Docker Hub repository.

docker login
docker build -t <your-username>/tomcat:latest .
docker push <your-username>/tomcat:latest

Here’s the basic Dockerfile used in this GitHub project.

FROM tomcat:8.0.21-jre8

COPY ./software/ /usr/local/tomcat/webapps/

The image is built using the Tomcat image with the tag 8.0.21-jre8 and copies the sample Java WAR file into the /usr/local/tomcat/webapps/ directory.
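
Before pushing the image, you can confirm the WAR file actually landed in the webapps directory. This is a generic check (the image name and path follow the example above):

docker images <your-username>/tomcat
docker run --rm <your-username>/tomcat:latest ls /usr/local/tomcat/webapps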

Section 5

Run the container using the Docker CLI

docker run -p 8080:8080 -d --name tomcat  <your-username>/tomcat:latest
Section 6

Access the sample application

You can access the sample application at this URL: http://host-ip:8080/sample
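
If you prefer the command line, a quick curl against the same URL should return an HTTP 200 once Tomcat has finished deploying the WAR file (replace host-ip with your host's address; this is a generic check, not part of the original steps):

curl -I http://host-ip:8080/sample/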

Section 7

Access the logs

You can use this simple command to check the Catalina logs of the Tomcat container:

docker logs tomcat
Section 8

Check the files inside the container

You can run these commands to enter the container and check the files under the webapps directory:

docker exec -it tomcat bash
ls -lrt /usr/local/tomcat/webapps

Deploying a simple Tomcat container with a sample Java WAR file using DCHQ

Section 9

Build a custom Tomcat image with a sample Java WAR file using GitHub and DCHQ

Sign Up on http://dchq.io for a free account (no credit card required)

Register your Docker Hub account by navigating to Cloud Providers and then selecting Docker Registries from the + dropdown

Navigate to Image Builds and create new build by selecting GitHub from the + dropdown

Enter this GitHub URL: https://github.com/dchqinc/basic-docker-tomcat-example.git

Run the build using the “play” button

You can refer to the detailed documentation here: http://dchq.co/docker-compose.html

Section 10

Create a Docker Compose template for the custom Tomcat image

Sign Up on http://dchq.io for a free account

Navigate to App & Machine and select Docker Compose after clicking on the + button. Provide the following YAML file.

tomcat:
  image: your-username/tomcat:latest
  mem_min: 500m
  cpu_shares: 1
  publish_all: true

image: your-username/tomcat:latest – This is the Docker image that will be pulled from a registry to launch a container. By default, all images are pulled from Docker Hub. In order to pull from a private repository, the registry_id parameter needs to be added; it should reference the ID of the Docker registry you registered. To pull from an official repository (like mysql), you can simply enter image: mysql:latest. The tag name refers to the tagged images available in a repository.

mem_min: 500m – mem_min refers to the minimum amount of memory you would like to allocate to a container. In this case, the container will be allocated at least 500MB of memory and will continue using resources from the host based on the load.

cpu_shares: 1 – cpu_shares refers to the amount of CPU allocated to the container.

publish_all: true – If the value is true, this parameter binds all the ports exposed in the Dockerfile to random ports between 32000 and 59000 on the host. In this case, port 8080 is exposed in the Dockerfile, so a random port on the host will be bound to port 8080 in the container.
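
When ports are published this way, you can discover which host port was actually bound with standard Docker CLI commands on the host (assuming the container is named tomcat, as in the earlier CLI example):

docker port tomcat 8080
docker ps --filter name=tomcat --format "{{.Names}}: {{.Ports}}"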

Section 11

Run the custom Tomcat image using DCHQ

Navigate to the Library and click Customize on the applications you would like to run (e.g. Basic Tomcat).

Select the Cluster of your choosing and then click Run

Deploy a multi-tier Names Directory Java application

Section 12

Configuring the web.xml and webapp-config.xml files in the Java application

You can clone this sample “Names Directory” Java application from GitHub.

git clone https://github.com/dchqinc/dchq-docker-java-example.git

This is the most important step in “Dockerizing” your Java application. In order to leverage the environment variables you can pass when running containers, you will need to make sure that your application is configured in a way that will allow you to change certain properties at request time – like:

  • The database driver you would like to use
  • The database URL
  • The database credentials
  • Any other parameters that you would like to change at request time (e.g. the min/max connection pool size, idle timeout, etc.)

To achieve this, you need to pass environment variables in the context file storing your JNDI datasource connection details. Instead of hard-coding the database information, this file should use environment variables that can be overridden at request time. Here is the documentation on setting up JNDI connection details in Tomcat: https://tomcat.apache.org/tomcat-6.0-doc/jndi-datasource-examples-howto.html

Here’s documentation on defining a context in Tomcat: https://tomcat.apache.org/tomcat-6.0-doc/config/context.html#Defining_a_context

In this example, we will configure web.xml to use the bootstrap Servlet to start up the Spring context.

https://github.com/dchqinc/dchq-docker-java-example/blob/master/src/main/webapp/WEB-INF/web.xml

    <servlet>
        <servlet-name>DispatcherServlet</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/spring/webapp-config.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet> 

You will notice that the contextConfigLocation is referencing /WEB-INF/spring/webapp-config.xml

Next, we will need to configure parameters in the webapp-config.xml file to reference host environment variables that will be passed at request time.

https://github.com/dchqinc/dchq-docker-java-example/blob/master/src/main/webapp/WEB-INF/spring/webapp-config.xml

    <bean id="dataSource" class="snaq.db.DBPoolDataSource" destroy-method="release">
        <property name="driverClassName" value="${database_driverClassName}"/>
        <property name="url" value="${database_url}"/>
        <property name="user" value="${database_username}"/>
        <property name="password" value="${database_password}"/>
        <property name="minPool" value="1"/>
        <property name="maxPool" value="10"/>
        <property name="maxSize" value="10"/>
        <property name="idleTimeout" value="60"/>
    </bean>

You will notice that specific dataSource properties are referencing the following environment variables that will be passed on at request time:

  • database_driverClassName
  • database_url
  • database_username
  • database_password

If you are unable to change the context files in your Java application, then you can use DCHQ’s plug-in framework to execute custom scripts to search for hard-coded parameter values and replace them with the right database connection details. DCHQ automatically retrieves information about the container IP, port and environment variable values for the connected database and allows you to inject this information inside Tomcat or other application servers that may need to connect to it.
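
As a rough illustration of what such a script plug-in might look like, here is a minimal, hypothetical BASH sketch that replaces hard-coded JDBC settings in a Tomcat context file with values passed in as plug-in arguments (the file path and substitution patterns are assumptions made for this example, not part of the DCHQ product):

#!/bin/bash
# Hypothetical plug-in sketch: overwrite hard-coded JDBC settings in context.xml
# with values supplied as plug-in arguments ($database_url, $database_username, $database_password)
CONTEXT_FILE=/usr/local/tomcat/conf/context.xml
sed -i "s|url=\"[^\"]*\"|url=\"${database_url}\"|" "$CONTEXT_FILE"
sed -i "s|username=\"[^\"]*\"|username=\"${database_username}\"|" "$CONTEXT_FILE"
sed -i "s|password=\"[^\"]*\"|password=\"${database_password}\"|" "$CONTEXT_FILE"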

Section 13

Using the liquibase bean to initialize the connected database

We typically recommend initializing the database schema as part of the Java application deployment itself. This way, you don’t have to worry about maintaining separate SQL files that need to be executed on the database separately.

However, if you already have those SQL files and you still prefer executing them on the database separately, then DCHQ can help you automate this process through its plug-in framework. You can refer to this section for more information.

In order to include the SQL scripts for creating the database tables in the Java application, you will need to configure the webapp-config.xml file to use the liquibase bean, which checks the dataSource and runs any new statements from upgrade.sql. Liquibase tracks which changelog statements have run against each database.

https://github.com/dchqinc/dchq-docker-java-example/blob/master/src/main/webapp/WEB-INF/spring/webapp-config.xml

    <bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
        <property name="dataSource" ref="dataSource" />
        <property name="changeLog" value="/WEB-INF/upgrade/upgrade.sql" />
    </bean>

Here’s the actual upgrade.sql file with the SQL statements for initializing the schema on the connected MySQL, PostgreSQL or Oracle database.

https://github.com/dchqinc/dchq-docker-java-example/blob/master/src/main/webapp/WEB-INF/upgrade/upgrade.sql

--liquibase formatted sql

--changeset admin:1 dbms:mysql
CREATE TABLE IF NOT EXISTS `NameDirectory` (
    `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    `firstName` VARCHAR(50) NOT NULL,
    `lastName` VARCHAR(50) NOT NULL,
    `createdTimestamp` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`)
)
ENGINE=InnoDB;

--changeset admin:1 dbms:postgresql
CREATE TABLE "NameDirectory" (
    id SERIAL NOT NULL,
    "firstName" VARCHAR(50) NOT NULL,
    "lastName" VARCHAR(50) NOT NULL,
    "createdTimestamp" TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT timestamp 'now ()' NOT NULL,
    PRIMARY KEY(id)
)
    WITH (oids = false);

--changeset admin:1 dbms:oracle
CREATE TABLE NameDirectory (
    id NUMBER(10) NOT NULL,
    firstName VARCHAR2(50) NOT NULL,
    lastName VARCHAR2(50) NOT NULL,
    createdTimestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT id_pk PRIMARY KEY (id)
);
CREATE SEQUENCE NameDirectory_seq;
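
Liquibase records every applied changeset in a DATABASECHANGELOG table that it creates in the target database. Once the application is running against a MySQL container (as in the examples later in this article), you can verify that the schema was initialized with a check like the one below; the container name and credentials are taken from the later CLI example and may differ in your setup:

docker exec mysql mysql -uroot -ppassword names -e "SHOW TABLES; SELECT ID, AUTHOR, FILENAME FROM DATABASECHANGELOG;"
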
Section 14

Deploy a Multi-Tier Java Application using the Docker CLI

Section 15

Building a Docker image for Tomcat with the Java WAR file using the Docker CLI

You can create a very simple Dockerfile that copies the Java WAR file into the /usr/local/tomcat/webapps directory.

First, you can simply wget the actual Java WAR file from the GitHub project.

wget https://github.com/dchqinc/dchq-docker-java-example/raw/master/dbconnect.war

Then create the Dockerfile:

FROM tomcat:8.0.21-jre8

RUN ["rm", "-rf", "/usr/local/tomcat/webapps/ROOT"]
COPY dbconnect.war /usr/local/tomcat/webapps/ROOT.war

CMD ["catalina.sh", "run"]

Finally, build your “Names Directory” Tomcat image using the Dockerfile and push the image to your Docker Hub repository.

docker login
docker build -t your-username/tomcat-names:latest .
docker push your-username/tomcat-names:latest

Section 16

Run the two-tier Java application using the Docker CLI

First, you can run the MySQL container.

docker run -d -e MYSQL_USER=root -e MYSQL_DATABASE=names -e MYSQL_ROOT_PASSWORD=password --name mysql mysql:latest

Then you can run the Tomcat container, which already contains the Names Directory Java WAR file. The Tomcat container will be linked to MySQL.

docker run -d -p 8080:8080 --name names-directory --link mysql:mysql -e database_driverClassName=com.mysql.jdbc.Driver -e database_url=jdbc:mysql://mysql:3306/names -e database_username=root -e database_password=password your-username/tomcat-names

You can access the Names Directory application at this URL: http://host-ip:8080
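
Before opening the browser, you can confirm that Tomcat started and that the database connection details were picked up. These are generic Docker checks using the container names from the commands above:

docker logs names-directory
docker exec names-directory ls /usr/local/tomcat/webapps
curl -I http://host-ip:8080/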

Section 17

Deploy a Multi-Tier Java Application using DCHQ

Section 18

Creating Docker Compose application templates that can be re-used on any Linux host running anywhere

Once logged in to DCHQ (either the hosted DCHQ.io or on-premise version), a user can navigate to App & Machine and then click on the + button to create a new Docker Compose template.

We have created 52 application templates using the official images from Docker Hub for the same “Names Directory” Java application – but for different application servers and databases. https://github.com/dchqinc/dchq-docker-java-example https://github.com/dchqinc/dchq-docker-java-solr-mongo-cassandra-example

The templates include examples of the following application stacks (for the same Java application):

  • Apache HTTP Server (httpd) & Nginx – for load balancing
  • Tomcat, Jetty, WebSphere and JBoss – for the application servers
  • Solr – for the full-text search
  • MySQL, MariaDB, PostgreSQL, Oracle XE, Mongo and Cassandra – for the databases

Docker Service Discovery using Plug-in Lifecycle Stages

Across all these application templates, you will notice that some of the containers are invoking BASH, Perl, Python or Ruby script plug-ins in order to configure the container at different life-cycle stages.

These plug-ins can be created by navigating to Plug-ins. Once the script is provided, the DCHQ agent will execute this script inside the container. A user can specify arguments that can be overridden at request time and post-provision. For a BASH script, anything preceded by the $ sign is considered an argument – for example:

$file_url can be an argument that allows developers to specify the download URL for a WAR file. This can be overridden at request time and post-provision when a user wants to refresh the Java WAR file on a running container.

The plug-in ID needs to be provided when defining the YAML-based application template. For example, to invoke a BASH script plug-in for Nginx, we would reference the plug-in ID as follows:

LB:
  image: nginx:latest
  publish_all: true
  mem_min: 50m
  host: host1
  plugins:
    - !plugin
      id: 0H1Nk
      restart: true
      lifecycle: on_create, post_scale_out:AppServer, post_scale_in:AppServer, post_start:AppServer, post_stop:AppServer
      arguments:
        # Use container_private_ip if you're using Docker networking
        - servers=server {{AppServer | container_private_ip}}:8080;
        # Use container_hostname if you're using Weave networking
        #- servers=server {{AppServer | container_hostname}}:8080;

The service discovery framework in DCHQ provides event-driven life-cycle stages that executes custom scripts to re-configure application components. This is critical when scaling out clusters for which a load balancer may need to be re-configured or a replica set may need to be re-balanced.

You will notice that the Nginx plug-in is getting executed during these different stages or events:

  • When the Nginx container is created – in this case, the container IP’s of the application servers are injected into the default configuration file to facilitate the load balancing to the right services
  • When the application server cluster is scaled in or scaled out – in this case, the updated container IP’s of the application servers are injected into the default configuration file to facilitate the load balancing to the right services
  • When the application servers are stopped or started – in this case, the updated container IP’s of the application servers are injected into the default configuration file to facilitate the load balancing to the right services

So the service discovery framework here is doing both service registration (by keeping track of the container IP’s and environment variable values) and service discovery (by executing the right scripts during certain events or stages).

Here are the parameters supported when invoking a plug-in:

  • id – this is the ID of the plug-in. It can be retrieved from Manage > Plugins and then clicking Edit on your plug-in of choice.
  • restart – this is a Boolean parameter. If set to true, then the container is restarted after executing the plug-in.
  • arguments – you can override the arguments specified in the plug-in here. The arguments can be overridden when creating the template, when deploying the application and post-provision.

The lifecycle parameter in plug-ins allows you to specify the exact stage or event at which to execute the plug-in. If no lifecycle is specified, then by default the plug-in will be executed on_create. Here are the supported lifecycle stages:

  • on_create – executes the plug-in when creating the container
  • on_start – executes the plug-in after a container starts
  • on_stop – executes the plug-in before a container stops
  • on_destroy – executes the plug-in before destroying a container
  • post_create – executes the plug-in after the container is created and running
  • post_start[:Node] – executes the plug-in after another container starts
  • post_stop[:Node] – executes the plug-in after another container stops
  • post_destroy[:Node] – executes the plug-in after another container is destroyed
  • post_scale_out[:Node] – executes the plug-in after another cluster of containers is scaled out
  • post_scale_in[:Node] – executes the plug-in after another cluster of containers is scaled in

The application servers (Tomcat, Jetty, JBoss and WebSphere) are also invoking a BASH script plug-in to deploy the Java WAR file from the accessible GitHub URL.

https://github.com/dchqinc/dchq-docker-java-example/raw/master/dbconnect.war

Using plug-ins and the host parameter to deploy highly-available Docker Java applications

You will notice that the cluster_size parameter allows you to specify the number of containers to launch (with the same application dependencies). This allows you to deploy a cluster of application servers for example.

The host parameter allows you to specify the host you would like to use for container deployments. This is possible if you have selected Weave as the networking layer when creating your clusters. That way you can ensure high-availability for your application server clusters across different hosts (or regions) and you can comply with affinity rules to ensure that the database runs on a separate host for example. Here are the values supported for the host parameter:

  • host1, host2, host3 , etc. – selects a host randomly within a data-center (or cluster) for container deployments
  • IP Address 1, IP Address 2, etc. – allows a user to specify the actual IP addresses to use for container deployments
  • Hostname 1, Hostname 2, etc. – allows a user to specify the actual hostnames to use for container deployments
  • Wildcards (e.g. “db-*”, or “app-srv-*”) – to specify the wildcards to use within a hostname

Environment Variable Bindings Across Images

Additionally, a user can create cross-image environment variable bindings by making a reference to another image’s environment variable. In this case, we have made several bindings – including database_url=jdbc:mysql://{{MySQL|container_ip}}:3306/{{MySQL|MYSQL_DATABASE}} – in which the database container IP is resolved dynamically at request time and is used to ensure that the application servers can establish a connection with the database.

Here is a list of supported environment variable values:

  • {{alphanumeric | 8}}  – creates a random 8-character alphanumeric string. This is most useful for creating random passwords.
  • {{Image Name | ip}}  – allows you to enter the host IP address of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database.
  • {{Image Name | container_ip}}  – allows you to enter the name of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
  • {{Image Name | container_private_ip}}  – allows you to enter the internal IP of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
  • {{Image Name | port_Port Number}}  – allows you to enter the Port number of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database. In this case, the port number specified needs to be the internal port number – i.e. not the external port that is allocated to the container. For example, {{PostgreSQL | port_5432}} will be translated to the actual external port that will allow the middleware tier to establish a connection with the database.
  • {{Image Name | Environment Variable Name}}  – allows you to enter the value an image’s environment variable into another image’s environment variable. The use cases here are endless – as most multi-tier applications will have cross-image dependencies.

Here are a few example templates – but you can check out the GitHub projects for 50 more examples. https://github.com/dchqinc/dchq-docker-java-example https://github.com/dchqinc/dchq-docker-java-solr-mongo-cassandra-example

3-Tier Java (Nginx – Tomcat – MySQL)

LB:
  image: nginx:latest
  publish_all: true
  mem_min: 50m
  host: host1
  plugins:
    - !plugin
      id: 0H1Nk
      restart: true
      lifecycle: on_create, post_scale_out:AppServer, post_scale_in:AppServer, post_stop:AppServer, post_start:AppServer
      arguments:
        # Use container_private_ip if you're using Docker networking
        - servers=server {{AppServer | container_private_ip}}:8080;
        # Use container_hostname if you're using Weave networking
        #- servers=server {{AppServer | container_hostname}}:8080;
AppServer:
  image: tomcat:8.0.21-jre8
  mem_min: 600m
  host: host1
  cluster_size: 1
  environment:
    - database_driverClassName=com.mysql.jdbc.Driver
    - database_url=jdbc:mysql://{{MySQL|container_hostname}}:3306/{{MySQL|MYSQL_DATABASE}}
    - database_username={{MySQL|MYSQL_USER}}
    - database_password={{MySQL|MYSQL_ROOT_PASSWORD}}
  plugins:
    - !plugin
      id: oncXN
      restart: true
      arguments:
        - file_url=https://github.com/dchqinc/dchq-docker-java-example/raw/master/dbconnect.war
        - dir=/usr/local/tomcat/webapps/ROOT.war
        - delete_dir=/usr/local/tomcat/webapps/ROOT
MySQL:
  image: mysql:latest
  host: host1
  mem_min: 400m
  environment:
    - MYSQL_USER=root
    - MYSQL_DATABASE=names
    - MYSQL_ROOT_PASSWORD={{alphanumeric|8}}

Multi-Tier Java (ApacheLB-JBoss-Solr-Mongo)

HTTP-LB:
  image: httpd:latest
  publish_all: true
  mem_min: 50m
  host: host1
  plugins:
    - !plugin
      id: uazUi
      restart: true
      lifecycle: on_create, post_scale_out:AppServer, post_scale_in:AppServer
      arguments:
        # Use container_private_ip if you're using Docker networking
        - BalancerMembers=BalancerMember http://{{AppServer | container_private_ip}}:8080
        # Use container_hostname if you're using Weave networking
        #- BalancerMembers=BalancerMember http://{{AppServer | container_hostname}}:8080
AppServer:
  image: jboss/wildfly:latest
  mem_min: 600m
  host: host1
  cluster_size: 1
  environment:
    - mongo_url={{Mongo|container_private_ip}}:27017/dchq
    - solr_host={{Solr|container_private_ip}}
    - solr_port=8983
  plugins:
    - !plugin
      id: oncXN
      restart: true
      arguments:
        - file_url=https://github.com/dchqinc/dchq-docker-java-solr-mongo-cassandra-example/raw/master/dbconnect.war
        - dir=/opt/jboss/wildfly/standalone/deployments/ROOT.war
        - delete_dir=/opt/jboss/wildfly/standalone/deployments/ROOT
Solr:
  image: solr:latest
  mem_min: 300m
  host: host1
  publish_all: false
  plugins:
    - !plugin
      id: doX8s
      restart: true
      arguments:
        - file_url=https://github.com/dchqinc/dchq-docker-java-solr-mongo-cassandra-example/raw/master/names.zip
Mongo:
  image: mongo:latest
  host: host1
  mem_min: 400m

2-Tier Java (WebSphere – Oracle-XE)

AppServer:
  image: websphere-liberty:webProfile6
  publish_all: true
  mem_min: 600m
  host: host1
  cluster_size: 1
  environment:
    - database_driverClassName=oracle.jdbc.OracleDriver
    - database_url=jdbc:oracle:thin:@//{{Oracle|container_ip}}:1521/{{Oracle|sid}}
    - database_username={{Oracle|username}}
    - database_password={{Oracle|password}}
    - LICENSE=accept
  plugins:
    - !plugin
      id: rPuVb
      restart: true
      arguments:
        - file_url=https://github.com/dchqinc/dchq-docker-java-example/raw/master/dbconnect.war
        - dir=/opt/ibm/wlp/usr/servers/defaultServer/dropins/dbconnect.war
        - delete_dir=/opt/ibm/wlp/usr/servers/defaultServer/dropins/dbconnect
Oracle:
  image: wnameless/oracle-xe-11g:latest
  host: host1
  mem_min: 400m
  environment:
    - username=system
    - password=oracle
    - sid=xe
Section 19

Accessing The In-Browser Terminal For The Running Containers

A command prompt icon should be available next to the containers’ names on the Live Apps page. This allows users to enter the container using a secure communication protocol through the agent message queue. A white list of commands can be defined by the Tenant Admin to ensure that users do not make any harmful changes on the running containers.

For the Tomcat deployment for example, you can use the command prompt to make sure that the Java WAR file was deployed under the /usr/local/tomcat/webapps/ directory.

Section 20

Monitoring the CPU, Memory & I/O Utilization of the Running Containers

Once the application is up and running, a user can monitor the CPU, Memory, & I/O of the running containers to get alerts when these metrics exceed a pre-defined threshold. This is especially useful when developers are performing functional & load testing.

A user can perform historical monitoring analysis and correlate issues to container updates or build deployments. This can be done by clicking on the Stats button. A custom date range can be selected to view CPU, Memory and I/O historically.
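
For comparison, on a plain Docker host the standard CLI gives a live (though not historical) view of the same metrics, for example for the Tomcat container from the earlier CLI walkthrough:

docker stats tomcat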

Section 21

Redeploying Containers When a New Image is Pushed into a Docker Registry

A user can set up a container “re-deployment” policy that can be triggered when a new image is pushed into a Docker registry. This allows users to create continuous delivery workflows based on Docker image builds. This can be done by clicking on the Actions menu of the running application and then selecting Redeploy. A user can then select the Registry and the name of the Repository.
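
Outside of DCHQ, the manual equivalent of such a re-deployment is simply pulling the new image tag and recreating the container. A rough sketch using the image and container names from the earlier CLI walkthrough:

docker pull your-username/tomcat-names:latest
docker rm -f names-directory
docker run -d -p 8080:8080 --name names-directory --link mysql:mysql -e database_driverClassName=com.mysql.jdbc.Driver -e database_url=jdbc:mysql://mysql:3306/names -e database_username=root -e database_password=password your-username/tomcat-names:latest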

Section 22

Enabling the Continuous Delivery Workflow with Jenkins to Update the WAR File of the Running Application when a Build is Triggered

Many developers may wish to update the running application server containers with the latest Java WAR file instead of re-deploying containers. This may be a common practice in DEV/TEST environments. For that, DCHQ allows developers to enable a continuous delivery workflow with Jenkins. This can be done by clicking on the Actions menu of the running application and then selecting Continuous Delivery. A user can select a Jenkins instance that has already been registered with DCHQ, the actual Job on Jenkins that will produce the latest WAR file, and then a plug-in to grab this build and deploy it on a running application server. Once this policy is saved, DCHQ will grab the latest WAR file from Jenkins any time a build is triggered and deploy it on the running application server.
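
The deployment step itself boils down to dropping the freshly built WAR file into the application server's deployment directory. As a rough, hypothetical illustration against the Tomcat container from the earlier examples (DCHQ performs this through its plug-in framework, not with these exact commands):

# Remove the old expanded application and copy the new build artifact into the
# running container; Tomcat's auto-deploy picks up the updated ROOT.war
docker exec names-directory rm -rf /usr/local/tomcat/webapps/ROOT
docker cp dbconnect.war names-directory:/usr/local/tomcat/webapps/ROOT.war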

Section 23

Scaling out the Tomcat Application Server Cluster

If the running application becomes resource constrained, a user can scale out the application to meet the increasing load. Moreover, a user can schedule the scale out during business hours and the scale in during weekends, for example.

To scale out the cluster of Tomcat servers from 1 to 2, a user can click on the Actions menu of the running application and then select Scale Out. A user can then specify the new size for the cluster and then click on Run Now.

As the scale out is executed, the Service Discovery framework will be used to update the load balancer. For Apache HTTP Server, for example, a plug-in updates the httpd.conf file to inject the application server container IP’s and ensure that the load balancer is routing traffic to the new application server containers added as part of the scale out.

An application time-line is available to track every change made to the application for auditing and diagnostics. This can be accessed from the expandable menu at the bottom of the page of a running application.

Alerts and notifications are also available for when containers or hosts are down or when the CPU & Memory Utilization of either hosts or containers exceed a defined threshold.




