IG ClientHandler and ReverseProxyHandler Configuration
ReverseProxyHandler is configured to communicate as the client to the downstream protected application. For conciseness, the term ClientHandler is used below to represent both implementations.
The specific settings that manage this communication are:
|Setting|Description|
|connections|The number of available connections to the downstream remote application.|
|numberOfWorkers|The number of IG worker threads allocated to service inbound requests and propagate them to the downstream application. Note that IG has an asynchronous threading model, so a worker thread is not consumed blocking for a response from the downstream server. By default, this value is set to the number of available cores.|
|connectionTimeout|The maximum time to connect to a server-side socket before timing out and abandoning the connection attempt.|
|soTimeout|The maximum time a request is expected to take before a response is received, after which the request is deemed to have failed.|
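As a sketch, a ClientHandler tuned with these settings might look like the following route fragment. The values are illustrative placeholders, not recommendations; they should come from your own load testing, and the option names should be confirmed against the reference documentation for your IG version:

```json
{
  "name": "ClientHandler",
  "type": "ClientHandler",
  "config": {
    "connections": 64,
    "numberOfWorkers": 8,
    "connectionTimeout": "10 seconds",
    "soTimeout": "30 seconds"
  }
}
```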
Configuration of the IG container (Tomcat, Jetty, or standalone Vert.x) and of IG itself should be done with regard to:
- The performance goals, together with the capabilities and limitations of the downstream system:
- Expect some increase in response time with IG inserted as a proxy in front of the protected application, due to the extra network hop and processing required.
- IG and its container are constrained by the limitations of the downstream server and the response times of the protected application. This includes the downstream web container configuration, its JVM configuration and tuning, resource types (e.g. compiled resources), and so on.
With that in mind, the configuration of IG as a proxy should be conducted as follows:
- Start with the configuration of the downstream server and protected application:
- Ensure that the web container and JVM are tuned and able to achieve performance targets.
- Test and confirm in a pre-production environment under expected load and with common use-cases.
- Ensure that the web container configuration forms the basis of configuring IG and its web container.
- Configure IG and its web container, based on the limitations of the downstream server and protected application:
- Configure the IG ClientHandler based on the downstream server configuration (see below).
- Configure the IG web container (e.g. Tomcat) to correspond with the downstream server configuration:
- At this stage, IG and its web container should replicate the number of connections and timeouts of the downstream application.
- Test and tune the IG numberOfWorkers and the IG web container threads (maxThreads) to determine the optimum throughput.
- Tune the IG web container JVM to support the desired throughput:
- Ensure there is sufficient memory to accommodate peak load for the required connections. See Tuning the JVM.
- Ensure IG and its container timeouts support latency in the protected application.
- This phase should involve an incremental optimisation exercise to settle on the best performing memory and garbage collection settings.
- Vertical scaling:
- Look to increase hardware resources, as required.
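For the JVM-tuning step above, a minimal sketch of container JVM settings follows, using Tomcat's standard CATALINA_OPTS environment variable. The heap size and garbage collector shown are placeholders to be refined through incremental load testing, not recommended values:

```shell
# Placeholder JVM settings for the IG web container (Tomcat).
# Heap sizes and GC choice must be validated under peak load.
CATALINA_OPTS="-Xms2g -Xmx2g -XX:+UseG1GC -Xlog:gc*:file=gc.log"
export CATALINA_OPTS
```

Fixing -Xms equal to -Xmx avoids heap resizing under load; GC logging supports the incremental optimisation exercise described above.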
Configuring the IG (Tomcat) Web Container Uniformly with the ClientHandler
The relationship between the Tomcat container and the IG webapp is as follows: Tomcat's maxThreads is the number of Tomcat HTTP request threads, and an IG worker thread (numberOfWorkers) picks up from a Tomcat request thread to propagate the request downstream, via the ClientHandler.
The Tomcat version and the choice of Tomcat HTTP Connector are very important when configuring IG; IG should be configured in conjunction with the configuration of its Tomcat container. Notably:
- If using a BIO Connector (Tomcat 3.x to 8.x):
- Tomcat maxThreads should be aligned closely with the number of configured Tomcat connections. The IG worker-thread count can be set much lower, because IG uses an asynchronous threading model: an IG thread is freed as soon as the request is propagated, and can then service another blocking Tomcat request thread.
- Assumptions should be ratified in a pre-production performance test environment using real-life use cases.
- If using a NIO Connector:
- maxThreads can be set a lot lower than it would be with a BIO Connector. The NIO Connector also uses an asynchronous threading model, freeing up request threads once the request is handed over to the IG worker threads.
- Therefore, the IG worker-thread configuration (numberOfWorkers) should be closely aligned with the Tomcat request threads (maxThreads).
- It is still necessary to test the IG worker configuration incrementally in this deployment, measuring throughput and errors, to identify the optimum throughput.
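As an illustration of the NIO case, a Tomcat server.xml connector might be configured along these lines. The protocol and attribute names are standard Tomcat; the numeric values are placeholders that must come from your own testing:

```xml
<!-- server.xml: NIO connector; numeric values are illustrative only -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="10000"
           acceptCount="100"
           connectionTimeout="20000" />
```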
Notes on Threading
The numberOfWorkers option defaults to the number of available cores. This is a sensible starting point given IG's asynchronous threading model: in theory, one thread per core should maximise use of available CPU time (for example, in between I/O operations). In reality, however, some requests do block because of IG dependencies; for example, the ResourceHandler serves static resources from disk.
Additionally, performance testing has indicated that some improvement can be seen by increasing numberOfWorkers. It is therefore advisable to test and optimise in a pre-production environment, under load and with realistic use cases, to understand whether increasing this value leads to better throughput. Test incrementally, doubling the value from the number of cores up to some large maximum based on the number of concurrent connections. The aim of the exercise is to identify the plateau in throughput.
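The incremental test described above can be sketched as a simple sweep. The core count, ceiling, and load-test command below are all placeholders for your own environment and tooling:

```shell
# Sweep candidate worker-thread counts, doubling from the core count
# up to a chosen ceiling. At each step, reconfigure IG with the new
# value and re-run the load test (the ab command is a placeholder).
CORES=4   # substitute the machine's core count
MAX=64    # ceiling based on expected concurrent connections
n=$CORES
while [ "$n" -le "$MAX" ]; do
  echo "measure throughput with numberOfWorkers=$n"
  # ab -n 100000 -c 200 https://ig.example.com/app/
  n=$((n * 2))
done
```

Plot throughput against each tested value; the optimum is where the curve plateaus.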
Configuring IG Standalone and the ClientHandler
IG standalone is a server in its own right, rather than a web application hosted in a web container. It is implemented on the Vert.x application framework.
IG standalone is configured through its admin.json file, as described in the Gateway Guide (version 7.0+). A number of first-class options are available, and the full set of Vert.x-specific options also remains available (except where expressly disallowed because first-class options take precedence). These can be configured for the server overall and per connector:
- Server configuration goes in the root vertx object.
- This also contains the warning-log configuration options: the blocked-thread check interval and the maximum time in a thread.
- See the Vert.x VertxOptions API doc for details.
- Connector configuration goes in the connectors config, in each contained connector's vertx object.
- This contains various maximum sizes pertaining to headers, data chunks, and WebSocket frames and messages.
- It also contains options supporting message compression.
- See the Vert.x HttpServerOptions API doc for details.
In the example below, specific WebSocket configuration is provided for the overall server (root vertx object) and for a connector (the connector's vertx object).
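A sketch of such an admin.json fragment follows. The option names are standard Vert.x VertxOptions and HttpServerOptions fields, but the exact admin.json shape and the values shown are assumptions to verify against the Gateway Guide for your IG version:

```json
{
  "vertx": {
    "maxEventLoopExecuteTime": 4000000000
  },
  "connectors": [
    {
      "port": 8080,
      "vertx": {
        "maxWebSocketFrameSize": 128000,
        "maxWebSocketMessageSize": 256000
      }
    }
  ]
}
```

The root vertx object applies server-wide (VertxOptions), while each connector's vertx object applies per listener (HttpServerOptions).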
As described in Configuration Strategy, and as with IG on Tomcat, there is a relationship between the Vert.x HTTP server configuration and the IG ClientHandler configuration. The configuration of the downstream server should influence the IG ClientHandler and, in turn, the overall IG server configuration.
Specific Vert.x options of interest are covered in the following section.
Standalone (Vert.x) Troubleshooting Options
The following options may be useful for runtime monitoring and troubleshooting, but may affect performance:
|Option|Description|
|blockedThreadCheckInterval|Interval at which Vert.x checks for blocked threads and logs a warning. Default: 1 second.|
|maxEventLoopExecuteTime|Maximum execute time before Vert.x logs a warning. Default: 2 seconds.|
|warningExceptionTime|Threshold at which warning logs are accompanied by a stack trace to identify the cause. Default: 5 seconds.|
|logActivity|Log network activity.|
Turn on Vert.x metrics gathering:
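For example, metrics might be enabled through the root vertx object as sketched below. The mapping of VertxOptions.metricsOptions into admin.json is an assumption; verify it against the documentation for your IG version:

```json
{
  "vertx": {
    "metricsOptions": {
      "enabled": true
    }
  }
}
```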
See Vert.x Dropwizard Metrics for more details on the types of metrics captured. Currently, only Dropwizard metrics are supported.
- KB FAQ: IG Performance and Tuning
- KB HowTo: How do I collect data for troubleshooting high CPU utilization or Out of Memory errors on IG/OpenIG (All versions) servers?
- KB Solutions: 502 Bad Gateway or SocketTimeoutException when using IG (All versions)
- Vert.x Guide for Java Developers
- Vert.x Core Manual: Writing Verticles
- Vert.x Core Manual: Specifying the Number of Instances
- Vert.x API: VertxOptions
- Vert.x API: HttpServerOptions
- Vert.x Dropwizard Metrics
- Vert.x: scaling the number of instances per thread (Stack Overflow, by @tsegismont)
- Vert.x 2 (legacy): Performance Tuning