


This document describes how to configure and deploy OpenAM to maximise performance and throughput.

OpenAM Server

The OpenAM server has a number of areas that can be tuned to increase performance.


There are a number of general points that should be considered when tuning OpenAM for performance. Ensure the following:

  • Debug level is set to error
  • Session failover debugging is disabled
  • Associated container logging is set to a low (error/severe) level
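For example, the debug level maps to the standard OpenAM property com.iplanet.services.debug.level (settable per server in the console or in the configuration properties); the other valid levels are off, warning and message:

```
com.iplanet.services.debug.level=error
```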


OpenAM communicates to LDAP in two key areas that can be tuned:

  • LDAP data store
  • LDAP authentication module

In the LDAP data store, the key tuning parameters are:


LDAP Connection Pool Minimum Size

The minimum LDAP connection pool size; a good tuning value for this property is 10.

LDAP Connection Pool Maximum Size

The maximum LDAP connection pool size; a good tuning value for this property is 65. Ensure your LDAP server can cope with the maximum number of connections across all the OpenAM servers.

The configuration for the data store can be found in the OpenAM console under Access Control > Data Stores. Each data store has its own LDAP connection pool, and therefore each data store will need its own tuning.
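To size the directory server, multiply the maximum pool size by the number of OpenAM servers and data stores. A quick sketch, where the server and data store counts are illustrative assumptions:

```shell
# Rough directory-server capacity check: maximum LDAP connections opened
# by all OpenAM data store pools combined.
OPENAM_SERVERS=4     # assumed number of OpenAM servers in the deployment
DATA_STORES=1        # assumed data stores configured per server
POOL_MAX=65          # LDAP Connection Pool Maximum Size per data store
echo "$((OPENAM_SERVERS * DATA_STORES * POOL_MAX)) connections"
```

The directory server's connection limit must comfortably exceed this figure, with headroom for the authentication module pools as well.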

In the LDAP authentication module, the key tuning parameters are:


Default LDAP Connection Pool Size

The minimum and maximum LDAP connection pool size used by the LDAP authentication module. This should be tuned to 10:65 for production.

The configuration for the LDAP authentication module connection pool can be found in the console under Configuration > Authentication > Core.


The LDAP data store contains a cache of LDAP data that has been loaded previously; the key parameters here are:

Caching (on/off)

Turns the caching feature in the LDAP data store on or off.

Maximum Age of Cached Items

The default is 10 minutes and does not normally need tuning.

Maximum Size of the Cache

The default is 10k, which is very small for a cache; a 1 MB cache (1048576) is a better starting point.

These parameters are configured alongside the other LDAP data store parameters; each store has its own cache and they must be configured separately.


OpenAM has two thread pools used to send notifications to clients. The Service Management Service (SMS) thread pool can be tuned in the console under Configuration > Servers and Sites > Default Server Settings > SDK.

Notification Pool Size

This is the size of the thread pool used to send notifications; in production the default should be fine unless lots of clients are registering for SMS notifications.

The session service has its own thread pool to send notifications; this is configured under Configuration > Servers and Sites > Default Server Settings > Session.




Notification Pool Size


This is the size of the thread pool used to send notifications; in production this should be around 25-30.

Notification Thread Pool Threshold


This is the maximum number of notifications in the queue waiting to be sent. The default value should be fine in the majority of installations.


The session service has a number of properties that can be tuned for a production environment under Configuration > Servers and Sites > Default Server Settings > Session.




Maximum Sessions


In production this value can safely be set into the hundreds of thousands. The maximum session limit is really governed by the maximum size of the JVM heap, which must be tuned appropriately to match the expected number of concurrent sessions.

Sessions Purge Delay


This should be zero to ensure sessions are purged immediately.


In this section we cover the performance tuning for various J2EE containers into which OpenAM can be deployed. 


Tomcat can be used as a standalone J2EE web container with basic web server connectivity. However, Tomcat is not designed to provide the full functionality expected from a web server such as Apache. In production environments where performance needs to be maximised and full web server functionality is required, integrating Tomcat with the Apache Web Server is the preferred option.

Clients (such as a browser or the Apache Web Server) communicate with Tomcat by means of Connectors. Tomcat defines the Connectors in the $TOMCAT_HOME/conf/server.xml file. By default there is an HTTP connector running on port 8080, which allows a browser to access Tomcat applications over HTTP on port 8080. The HTTP connector can also be configured to run in a secure mode using SSL/TLS. The AJP connector uses the AJP 1.3 protocol (Apache Java Protocol) and is designed for efficient communication between Apache and Tomcat.

General Configuration

Apache should be configured as required for the deployment; if Apache is just front-ending Tomcat then the vanilla configuration is an ideal starting point. In terms of performance, Apache can be deployed in prefork or worker mode. Prefork mode uses multiple single-threaded processes to service requests, whereas worker mode uses multiple multi-threaded processes. Worker mode is the preferred approach as it is more efficient than prefork. The choice between prefork and worker is made at build time; compiling Apache with shared modules allows the Multi-Processing Module (MPM) to be selected in the configuration.

Proxy Configuration

Apache uses the mod_proxy_ajp module to forward requests to Tomcat using the AJP protocol.

In the Apache web server configuration file; ensure the module is loaded using this directive:

LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

In order to forward requests from Apache Web Server to Tomcat, the following directive must be used:

ProxyPass /openam/ ajp://localhost:8009/openam/

This causes Apache to forward all requests starting with the URI /openam/ to the Tomcat container using the AJP protocol. Tomcat's default server configuration defines an AJP Connector listening on port 8009. Additional ProxyPass directives can be used to forward to other applications deployed within Tomcat.
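Putting the Apache side together, a minimal sketch (module paths assume a standard Apache layout; note that mod_proxy_ajp depends on mod_proxy, so both must be loaded):

```
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

ProxyPass /openam/ ajp://localhost:8009/openam/
```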

On the Tomcat side, the AJP connector must be enabled in the server.xml file.

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />


Even with Tomcat and Apache Web Server linked using the AJP protocol further performance enhancements can be gained by linking the two processes at the operating system level. The Apache Web Server provides the Apache Portable Runtime that allows for dynamic modules to be installed into the Apache process at runtime. Tomcat has a native APR module that allows Tomcat to be linked directly into the Apache Web Server process. Tomcat uses the APR module to provide better scalability by being able to directly leverage operating system functionality and resources.
The Tomcat APR module must be compiled from source and then integrated into the Apache Tomcat environment. As with Apache itself, a compiler such as gcc is required to build the native module.

# cd /opt/tomcat6/bin
# gtar zxf tomcat-native.tar.gz
# cd tomcat-native-1.1.20-src/jni/native
# ./configure --with-apr=/opt/httpd-2.2.16/bin --with-java-home=/opt/jdk1.6.0_20/ --prefix=/opt/tomcat6/ --with-os-type=include/solaris
# make ; make install

This build assumes the Apache Web Server is installed in /opt/httpd-2.2.16 and the JDK in /opt/jdk1.6.0_20; update these values as appropriate for your environment.

When Tomcat6 is started, the following JVM options should be set:
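The exact options depend on where the native library was installed. A hedged sketch for $TOMCAT_HOME/bin/setenv.sh, assuming the library landed in /opt/tomcat6/lib (consistent with the --prefix used in the build above):

```shell
# Make the Tomcat Native/APR library visible to the JVM
# (path is an assumption; adjust to your installation).
CATALINA_OPTS="$CATALINA_OPTS -Djava.library.path=/opt/tomcat6/lib"
export CATALINA_OPTS
```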


When Tomcat6 starts, if the Native library is being loaded correctly, you should see the following entry in the catalina.out log file.

28-Jul-2010 13:43:13 org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.20.
28-Jul-2010 13:43:13 org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].

The Tomcat Native library provides Tomcat with native performance in certain key areas, such as transferring data from process to process.

Performance Tuning

The key area of performance tuning for Apache, once it is configured to run in worker mode, is to ensure there are enough processes and threads available to service the expected number of client requests. Apache performance is configured in the conf/extra/httpd-mpm.conf file.

<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
The key properties in this file are ThreadsPerChild and MaxClients as these together control the maximum number of concurrent requests that can be processed by Apache. The default configuration will allow for 150 concurrent clients spread across 6 processes of 25 threads each.
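The relationship between these properties can be checked with simple arithmetic, using the values from the configuration above:

```shell
# Worker MPM sizing: Apache spawns MaxClients / ThreadsPerChild processes
# to service MaxClients concurrent requests.
MAX_CLIENTS=150
THREADS_PER_CHILD=25
echo "$((MAX_CLIENTS / THREADS_PER_CHILD)) processes"
```

Keep MaxClients an exact multiple of ThreadsPerChild, otherwise Apache rounds the process count and the effective client limit differs from the configured value.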

If you would like to use the agent notification feature, the MaxSpareThreads, ThreadLimit and ThreadsPerChild default values must not be altered; otherwise the notification queue listener thread cannot be registered. Any other values in the worker MPM can be customised; for example, it is possible to use a combination of MaxClients and ServerLimit to achieve a high number of concurrent clients.

Performance tuning of Tomcat relies on tuning the connector being used by Apache. Fortunately the tuning parameters are similar across the connectors, but they must be applied to the correct Connector definition.

    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" connectionTimeout="60000" maxThreads="256" backlog="4096"/>

connectionTimeout

The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented.

maxThreads

The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled.

backlog

The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused.

Finding the correct values for these parameters will depend on each environment, but generally it is a good idea to bring connectionTimeout down to 60 seconds and increase the backlog to around 4096. Increasing the backlog gives Tomcat a chance to process requests if there is a sudden surge in incoming requests. The maxThreads parameter should be tuned to ensure there are enough threads to accommodate the number of concurrent requests that could be received by Apache and passed through to Tomcat.

More information on the Tomcat connector parameters can be found in the Tomcat documentation for the HTTP and AJP connectors.


JVM tuning is a complex process and will not be covered in detail here. This section gives some initial guidance on configuring the JVM for running OpenAM. These settings provide a strong foundation for the JVM before a more detailed garbage collection tuning exercise, or simply serve as best-practice configuration for production.

Heap Size

-Xms and -Xmx

At least 1024m; for production environments 2048m-3072m, depending on the available physical memory in the server and whether a 32-bit or 64-bit JVM is in use. These parameters control the amount of memory allocated to the JVM by the operating system; in a production environment the two values should be set the same.

-server

Ensures the server JVM is used.

-XX:PermSize and -XX:MaxPermSize

256m is a good starting point. Controls the size of the permanent generation in the JVM.

-XX:MaxDirectMemorySize

128m; this is only required when deploying on machines with 1 GB of RAM or less (e.g. the AWS micro instance).

-Dsun.net.client.defaultReadTimeout

Controls the read timeout in the Java HTTP client implementation.

-Dsun.net.client.defaultConnectTimeout

Controls the connect timeout in the Java HTTP client implementation.
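As a sketch, the heap-related options above might be combined as follows (the sizes are examples only and must match the server's physical memory):

```shell
# Illustrative production heap settings: -Xms equals -Xmx, server JVM,
# fixed permanent generation (sizes are assumptions, not requirements).
JAVA_OPTS="-server -Xms2048m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=256m"
export JAVA_OPTS
```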

Garbage Collection

-verbose:gc

Enables verbose GC reporting.

-Xloggc:<file>

Location of the verbose GC log file.

-XX:+PrintClassHistogram

Prints a heap histogram when a SIGQUIT signal is received by the JVM.

-XX:+PrintGCDetails

Prints out detailed GC timings.

-XX:+HeapDumpOnOutOfMemoryError

Out of memory errors will generate a heap dump automatically.

-XX:HeapDumpPath=<path>

Location of the heap dump.

-XX:+UseConcMarkSweepGC

Use the concurrent mark sweep garbage collector.

-XX:+UseCMSCompactAtFullCollection

Aggressive compaction at full collection.

-XX:+CMSClassUnloadingEnabled

Allow class unloading during CMS sweeps.
These settings can be seen as a recommended starting point for a more detailed performance tuning exercise.
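As a hedged starting point, the garbage collection settings described above correspond to the following standard HotSpot flags; the log and heap dump paths are assumptions and should be adjusted to suit the environment:

```shell
# GC logging, OOM diagnostics and CMS collector options for the OpenAM JVM.
# Paths below are illustrative; point them at a writable log directory.
GC_OPTS="-verbose:gc \
 -Xloggc:/var/log/openam/gc.log \
 -XX:+PrintGCDetails \
 -XX:+PrintClassHistogram \
 -XX:+HeapDumpOnOutOfMemoryError \
 -XX:HeapDumpPath=/var/log/openam/heap.hprof \
 -XX:+UseConcMarkSweepGC \
 -XX:+UseCMSCompactAtFullCollection \
 -XX:+CMSClassUnloadingEnabled"
export GC_OPTS
```

These would typically be appended to JAVA_OPTS or CATALINA_OPTS before starting the container.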
