The following section of the guide details the addition of a second OpenAM server to the initial deployment. This second server turns what was initially a test deployment into one capable of handling a limited production environment: throughput capacity is significantly increased, providing a model for further scaling out, and the mirrored environment greatly improves availability. It is assumed that the second OpenAM server is comparable to the first, as described in the Introduction to this guide, has the hostname openam-2, and is accessible through local DNS at openam-2.example.com.
Overview of System
The key benefits of the additional deployment are scalability and availability. Scalability is implemented through a load balancer, which provides a single point of entry through which users can reach any number of OpenAM servers. In this example a software load balancer called HAProxy is used, although the same role could be filled by a different software load balancer, such as Apache, or by a hardware load balancer. HAProxy is chosen because it is resource-light and feature-rich, providing features such as sticky-cookie load balancing while creating very little load on the server on which it runs. As such, in a light-use environment it is feasible to install it alongside the web server.
The new OpenAM server will be running an identical set of services to the first, with replication of both the internal OpenAM data store, providing configuration and settings, and the OpenDS external user data. This means that each of the servers automatically provides a backup of the other, mirroring both the data and configuration. The servers also have the ability to negotiate and pass sessions between themselves. This means that if a user authenticates with one server and receives a valid token, should they then be redirected to a different server to validate the session it will be able to contact the first server and verify the validity of the session.
The final stage of the deployment is focused entirely on availability, setting up a failover system for the user session data. This covers the case where a server goes down, so that a user session on a different machine can no longer be validated through peer communication between the two machines. For this eventuality a separate session data store is used to provide backup facilities, using OpenMQ as a highly available asynchronous messaging service and Berkeley DB to store the sessions, as packaged in ssoSessionTools.zip. This system is once again deployed to both of the OpenAM servers and mirrored between them.
Second OpenAM Server Installation and Configuration
Initially, follow the OpenAM Server Installation section (Section 2) of this guide for the second server, as with the first. Follow the alternate settings for OpenDS topology configuration if this was planned from the first deployment; otherwise install identically to the first server and use the guide Configuring Data Replication with dsReplication from the OpenDS wiki to set up the replication.
Configuring the second OpenAM Instance
To begin the Configurator process for the second server, first access the new OpenAM deployment at http://openam-2.example.com:8080/openam.
Step 1 : General
The administrator password on this instance must be the same as the first instance. This allows the two machines to communicate between themselves. The installation process cannot complete if this password is not correct.
Password : cangetinam
Confirm Password : cangetinam
Step 2 : Server Settings
These should be set up to match those of the first OpenAM server. Once again the defaults suffice for the majority of the settings, although it is suggested that the configuration directory be ~/openam/conf instead of the default ~/openam.
Server URL : http://openam-2.example.com:8080
Cookie Domain : .example.com
Platform Locale : en_US
Configuration Directory : ~/openam/conf
Step 3 : Configuration Data Store Settings
This is where the second OpenAM server connects to the first and obtains most of its remaining settings.
- Select 'Add to Existing Deployment?' button
- Type in the Server URL 'http://openam.example.com:8080/openam'
When this is done correctly, a view containing the various default port numbers used by the OpenAM installation is shown. All of these can be accepted at their defaults. This process connects to the existing OpenAM deployment and imports its settings; Step 4 is therefore skipped, as it is configured automatically during this process.
Step 5 : Site Configuration
This step provides a quick way to add a server to a site, for use behind a load balancer. While it is simple to set up and add servers to existing sites, we will follow through this part of the configuration process separately, using the OpenAM Console.
Will this instance be deployed behind a load balancer as part of a site configuration? : No
Step 7 : Summary
Accept the summary and complete the installation process for the second OpenAM server.
Setting Up the Load Balancer
This is an example step and as such should be approached with a level of awareness. While it does provide a solution for load balancing, it is a simple solution rather than necessarily the best one. This section therefore attempts to cover the additional reasoning behind the choices and settings made. HAProxy was chosen to provide the load balancer functionality for this example because it is one of the few load balancers, and probably the simplest, to provide sticky-cookie functionality.
It is greatly preferable that a user, once authenticated on a given OpenAM server, always receives authorization from that same server, in order to limit inter-server communication and the resulting delay in receiving authorization. The load balancer must therefore forward a given user to the same machine repeatedly. The most common method for doing this is choosing the server based upon the IP address of the client, as it can be done at the internet layer of the IP stack, saving decoding and making the process more efficient. The problem is that when a user's ISP does not provide consistent IP addresses, the routing will likewise be inconsistent. The other problem with this method is that for small numbers of users it can produce uneven balancing between servers.
Sticky cookie load balancing makes use of a cookie to identify a given user and direct them accordingly. This provides advantages in that it can still be done with round-robin load balancing, and users can be directed to the least loaded server between login sessions, providing a more even balance. Furthermore support for sticky cookie load balancing is built into OpenAM, allowing it to assign the cookie, expire it upon logout and deal with extraordinary circumstances, such as server failure, more efficiently than would otherwise be possible. This is the amlbcookie set by the server. It is possible to use a cookie set by the load balancer, however this will be less effective than using the amlbcookie, as it will not expire on logout and will not provide the possibility of OpenAM handing off a session to another instance in case of failure.
Configuring a health probe from the load balancer to OpenAM
You should configure a probe from your load balancer to be sent to the OpenAM server periodically in order to check whether the service is still up and running. A good probe is an HTTP GET of the isAlive.jsp page, for example http://openam.example.com:8080/openam/isAlive.jsp, which returns a success page while the server is running.
HAProxy can be downloaded from http://haproxy.1wt.eu/ as a pre-compiled Linux binary, or may be available through your chosen system's package manager.
Create a file called haproxy.cfg in ~/, and add the following, replacing the three [ipaddress_of_X] instances with the relevant addresses.
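As a minimal sketch, a configuration along the following lines can be used. The listen-block name, cookie values, connection limits and timeouts are illustrative assumptions, and both OpenAM instances are assumed to listen on port 8080; check the HAProxy configuration manual for your version before relying on it.

```
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  50s
    timeout server  50s

# Single point of entry for both OpenAM servers
listen openam [ipaddress_of_loadbalancer]:8080
    balance roundrobin
    # HAProxy inserts its own cookie to keep a client on one server
    cookie SERVERID insert indirect
    # Alternative: learn OpenAM's own amlbcookie instead
    # appsession amlbcookie len 32 timeout 3h
    server openam-1 [ipaddress_of_openam-1]:8080 cookie openam01 check
    server openam-2 [ipaddress_of_openam-2]:8080 cookie openam02 check
```

The `check` keyword on each server line enables the periodic health probe discussed above.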
Start HAProxy using the command
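Assuming the configuration file was saved as ~/haproxy.cfg, the invocation takes the following form; the -f flag points HAProxy at its configuration file, and -c validates the configuration without starting the proxy.

```shell
# Optionally validate the configuration first
haproxy -c -f ~/haproxy.cfg
# Start the load balancer
haproxy -f ~/haproxy.cfg
```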
This will start HAProxy with its own cookie being used to provide stickiness. The commented-out appsession line should be usable in place of the preceding cookie line; however, in testing it did not seem to provide any stickiness.
In order for access to OpenAM through the load balancer to appear equivalent to accessing a single instance, rather than a redirect to one or other of the servers, it is necessary to set up a site. To do this:
- Log into the OpenAM Console as amAdmin
- Go to 'Configuration -> Servers & Sites'
- Create a new Site
- Name : Entry
- Primary URL : http://website.example.com:8080/openam
- Save the site.
- Click on the first OpenAM Server
- In the 'Parent Site' drop-down list, select 'Entry'
- Save the change
- Repeat the previous step for the second OpenAM Server
Looking in the Servers and Sites tab, both of the OpenAM servers should now be listed within Assigned Servers for Site Name 'Entry'.
Setting Up Session Failover
The installation and configuration of the session failover tools should be performed on both of the OpenAM machines and so it is assumed that this guide will be followed through for each of the OpenAM servers. The session failover tools are included in the tools folder of the OpenAM archive, as a zip file called ssoSessionTools.zip. This file contains all of the tools necessary to provide session failover for OpenAM. Unzip this archive into the ~/openam/ folder, which currently only contains the conf/ folder containing all the configuration for OpenAM, to create a folder ~/openam/ssoSessionTools.
To install the ssoSessionTools
- cd ~/openam/ssoSessionTools
- chmod +x setup
- ./setup
When prompted, choose the name of the folder to install the session failover scripts into, with 'sfoscripts' being used in this example.
Configuring the Failover Session Tools
The first of the session tools to configure is OpenMQ, the message queue broker between OpenAM and the Berkeley DB storage. The first task is to start up an instance of the message queue daemon and configure its users. The default distribution of OpenMQ includes a guest user that needs to be disabled, and an admin user must be created and activated. The following set of commands changes into the binary directory for OpenMQ, starts an instance called openambroker running on port 7777, deactivates the guest user, creates an administrative user called openamuser with the password cangetin, and finally lists all running imq services.
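A sketch of these commands, under the assumption that OpenMQ was unpacked beneath ~/openam/ssoSessionTools/jmq/imq (adjust the path to match your installation):

```shell
# Change into the OpenMQ binary directory (path is an assumption)
cd ~/openam/ssoSessionTools/jmq/imq/bin
# Start a broker instance called openambroker on port 7777, in the background
./imqbrokerd -name openambroker -port 7777 &
# Deactivate the default guest user for this instance
./imqusermgr update -u guest -a false -i openambroker
# Create an administrative user openamuser with password cangetin
./imqusermgr add -u openamuser -g admin -p cangetin -i openambroker
# List all running imq services
ps -ef | grep imq
```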
The next step is to shut down the message queue broker. To do this, use the kill command followed by the pid of each of the imq services running, as listed by the ps command.
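For example (the pid shown is a placeholder for each process id that ps reports):

```shell
# Find the process IDs of the running imq services
ps -ef | grep imq
# Stop each listed process by its pid
kill <pid>
```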
Now that OpenMQ has been set up, the next step is to configure the amsfo process. This involves editing the amsfo.conf file.
The following attributes within the amsfo.conf file need to be located and set as follows:
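The attribute list varies between releases; as an illustrative sketch, values consistent with the rest of this guide would be along these lines. The attribute names are assumptions and should be checked against the amsfo.conf shipped in your ssoSessionTools.

```
CLUSTER_LIST=openam.example.com:7777,openam-2.example.com:7777
BROKER_INSTANCE_NAME=openambroker
BROKER_PORT=7777
USER_NAME=openamuser
```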
The final stage of setting up the session failover tools is creating an encrypted password file. This is done using the amsfopassword script. The following command will create a password file called .password in the sfoscripts folder. In order to use this file with a different name or location, it is necessary to change the PASSWORDFILE property in the amsfo.conf file accordingly.
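Assuming the sfoscripts install folder chosen earlier, the invocation takes roughly the following form; the exact flag usage is an assumption, so run amsfopassword without arguments to confirm the syntax for your release.

```shell
cd ~/openam/ssoSessionTools/sfoscripts/bin
# -e gives the plain-text password to encrypt, -f the output file
./amsfopassword -e cangetin -f ~/openam/ssoSessionTools/sfoscripts/.password
```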
Adding Failover to OpenAM
The session failover is added to OpenAM as a secondary session configuration instance.
- Log into OpenAM as amAdmin
- Go into Configuration->Global->Session
- Create a new Secondary Configuration Instance
- Session Store User : openamuser
- Session Store Password : cangetin
- Session Store Password (confirm) : cangetin
- Maximum Wait Time : 5000
- Database Url : openam.example.com:7777,openam-2.example.com:7777
- Session Failover Enabled : (True) Enabled
Activating Session Failover
Session failover is started by running the amsfo script from the sfoscripts/bin/ folder, before restarting OpenAM so that it can begin using the tools. The first step is to run the amsfo script on both machines.
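A sketch of the invocation, assuming the sfoscripts install folder chosen earlier; the start argument is the conventional way to launch the failover components.

```shell
cd ~/openam/ssoSessionTools/sfoscripts/bin
./amsfo start
```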
Unfortunately the secondary configuration attribute within OpenAM is not hot-swappable, and as such the OpenAM instances have to be restarted in order to begin using the session failover.
Make sure that Tomcat has completely shut down before restarting it. One way to test this is to look at the open ports, using netstat -an | grep 8080, to see whether Tomcat is still using the port. Once Tomcat has been shut down on both servers, it is time to restart it.
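Using the Tomcat path that appears later in this guide (adjust to your installation), the sequence on each server looks like:

```shell
# Stop Tomcat
./apache-tomcat-2.0.26/bin/shutdown.sh
# Confirm nothing is still listening on 8080 before restarting
netstat -an | grep 8080
# Start Tomcat again
./apache-tomcat-2.0.26/bin/startup.sh
```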
Once Tomcat is up and running on both servers again the failover service should be working.
With failover now activated and running, it should be possible to seamlessly continue a login session even if the OpenAM server hosting the active session is taken down. To test this:
- Go to http://website.example.com:8080/openam
- Log in as amAdmin
- Go to the sessions tab and look at which of the servers the session is active on
- On the server hosting the active session, shut down Tomcat using the command ./apache-tomcat-2.0.26/bin/shutdown.sh
- Go back to the logged in web-browser session and continue using the OpenAM console
Assuming that the setup process has been successful, the session should remain active: the load balancer finds the server offline and redirects to the alternate OpenAM instance, which retrieves the session from the secondary session store.