Dear e-Spirit community,
we are currently observing the following in our production Tomcat instances:
- heavy usage of the survivor space within the new generation
- very little usage of the old generation
of the Oracle JVM. It seems that the deployed and running FirstSpirit web applications create large numbers of short-lived Java objects that reside only in the:
- eden (1 GB)
- survivor (from / to: 1GB each)
spaces of the new generation; the old generation stays nearly untouched and unused. Below is a JConsole screenshot of a Tomcat instance in the production environment:

The current JVM configuration looks like this:
-Xmx10000m \
-Xms10000m \
-Xmn3000m \
-XX:PermSize=170m \
-XX:MaxPermSize=512m \
-XX:+DisableExplicitGC \
-Djava.awt.headless=true \
-d64 \
-verbose:gc \
-XX:+PrintGCTimeStamps \
-XX:+PrintGCDateStamps \
-XX:+PrintGCDetails \
-XX:+PrintGCApplicationStoppedTime \
-XX:+PrintTenuringDistribution \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+CMSParallelRemarkEnabled \
-XX:+CMSClassUnloadingEnabled \
-XX:+CMSIncrementalMode \
-XX:SurvivorRatio=1 \
-XX:TargetSurvivorRatio=80 \
-XX:InitialTenuringThreshold=5 \
-XX:MaxTenuringThreshold=10 \
-XX:-UseLargePages \
-XX:SoftRefLRUPolicyMSPerMB=20 \
-XX:+UseBiasedLocking \
-XX:ParallelGCThreads=7 \
-XX:+HeapDumpOnOutOfMemoryError
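For reference, this configuration explains the pool sizes mentioned above: with -Xmn3000m and -XX:SurvivorRatio=1, the new generation is split into three equal parts, i.e. an eden of about 1000 MB and two survivor spaces (from/to) of about 1000 MB each. -XX:TargetSurvivorRatio=80 lets the JVM target a survivor occupancy of 80% of 1000 MB = 800 MB, which matches the "Desired survivor size 838860800 bytes" (800 * 1024 * 1024 bytes) shown in the GC log excerpt below.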
When taking a closer look at the GC log output enriched via "PrintTenuringDistribution", we see that the majority of objects (counted in bytes) survive no more than 5 GC runs in the survivor space:
Desired survivor size 838860800 bytes, new threshold 10 (max 10)
- age 1: 149813296 bytes, 149813296 total
- age 2: 150828560 bytes, 300641856 total
- age 3: 74822664 bytes, 375464520 total
- age 4: 116746640 bytes, 492211160 total
- age 5: 152362528 bytes, 644573688 total
- age 6: 45243968 bytes, 689817656 total
- age 7: 8149584 bytes, 697967240 total
- age 8: 14459600 bytes, 712426840 total
- age 9: 7236896 bytes, 719663736 total
- age 10: 30650336 bytes, 750314072 total
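To quantify the claim above: of the 750,314,072 bytes currently held in the survivor space, 644,573,688 bytes are at age 5 or younger, i.e. roughly 86% of the surviving data has lived through at most 5 minor collections, while only about 14% ages further towards the old generation.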
It seems that too many blocking GCs on the Tomcat side lead to the following error message on the Apache side, where a PING timeout of 5s is configured for the "mod_proxy_ajp" module:
[Tue Sep 02 09:24:11.475890 2014] [proxy_ajp:error] [pid 25964:tid 33] (70007)The timeout specified has expired: AH01030: ajp_ilink_receive() can't receive header
Hint: the above timeout error message occurs whenever one of the following timeout thresholds configured in Apache is exceeded:
1. timeout (reply timeout)
2. ping (CPING/CPONG timeout)
Exceeding the "connectiontimeout" attribute does not provoke the above Apache error message! The threshold can be exceeded either by a GC or by a long-running request that cannot be answered within the timeout value. Here, we focus on (2), the CPING/CPONG timeout. An example of where these attributes live in the Apache configuration is sketched below.
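For illustration only (the host, port and context path are placeholders, not our actual values), these attributes are set as connection parameters on the AJP worker, e.g. in a ProxyPass directive:

ProxyPass /fs ajp://tomcat-host:8009/fs ping=5 timeout=300 connectiontimeout=10

Here "ping" is the CPING/CPONG timeout that triggers the error above when exceeded, "timeout" is the reply timeout, and "connectiontimeout" only limits the initial connection setup and therefore does not produce this error.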
The assumption is that a blocking GC on the Tomcat side provoked the PING error:
Total time for which application threads were stopped: 12.1258575 seconds
Such a stop-the-world pause of 12.1 seconds clearly exceeds the configured 5-second ping threshold. However, as Apache currently does not log PING requests, we cannot see how long the ping actually took and compare that value with the GC duration.
Do you have any suggestions for improving this behaviour on an Oracle JVM 6 update 81 environment, so that a CPING connection check can complete within 5s without being disturbed by a blocking GC? Our current ideas are (a sketch of the resulting options follows the list):
- increasing the new generation to reduce GC runs in eden by setting "Xmn" to "5000m"
- increasing the eden-to-survivor ratio by raising the "SurvivorRatio" setting to 2 (eden twice the size of each survivor space, from and to)
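To make the two ideas concrete, a minimal sketch of the changed options, with all other flags kept as above (the resulting sizes are just the arithmetic consequence of the proposed values, not measured figures):

-Xmn5000m \
-XX:SurvivorRatio=2 \

Combined, the 5000 MB new generation would then be split into an eden of about 2500 MB and two survivor spaces of about 1250 MB each (eden twice the size of one survivor space, new generation = eden + 2 survivor spaces).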
We are looking forward to hearing from you.