This article will provide you with a quick reference guide on how to calculate the memory footprint of Java VM processes running on IBM AIX 5.3+ OS.
This is a
complementary post to my original article on this subject: how to monitor the Java native memory on AIX. I highly recommend this read to any individual
involved in production support or development of Java applications deployed on AIX.
Why is this knowledge important?
From my perspective, basic knowledge of how the OS manages the memory allocation of your JVM processes is very important. We often overlook this monitoring aspect and focus only on the Java heap itself.
From my experience, most Java memory related problems are observed in the Java heap itself, such as garbage collection problems, leaks, etc. However, I’m confident that you will face situations in the future involving native memory problems or OS memory challenges. Proper knowledge of your OS and its virtual memory management is crucial for proper root cause analysis, recommendations and solutions.
AIX memory vs. pages
As you may have seen from my earlier post, the AIX Virtual Memory Manager (VMM) is responsible for managing memory requests from the system and its applications. Virtual memory is partitioned into units called pages; each page is allocated either in physical RAM or stored on disk (paging space) until it is needed. Each page can have a size of 4 KB (small page), 64 KB (medium page) or 16 MB (large page). Typically, for a 64-bit Java process, you will see a mix of all of the above.
What about the topas command?
The
typical reflex when supporting applications on AIX is to run the topas command, similar
to Solaris top. Find
below an example of output from AIX 5.3:
As you can see, the topas command is not very helpful for getting a clear view of the memory utilization since it does not provide the breakdown view that we need for our analysis. It is still useful to get a rough idea of the paging space utilization, which can quickly reveal your top "paging space" consumer processes. The same can be achieved via the ps aux command.
AIX OS command to the rescue:
svmon
The AIX svmon command is by far my preferred command to deep dive into the Java process memory utilization. This is a very powerful command, similar to Solaris pmap. It allows you to monitor the current memory page allocation of each segment, e.g. Java Heap vs. native heap segments. Analyzing the svmon output will allow you to calculate the memory footprint for each page type (4 KB, 64 KB, and 16 MB).
Now find
below a real example which will allow you to understand how the calculation is
done:
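Since each page type has a different size, the calculation is simply the number of pages of each type multiplied by its page size, summed across the segments of the process. Find below a minimal sketch of this arithmetic in Java; the page counts are hypothetical values used for illustration only (assuming they were extracted from an svmon -P <pid> snapshot of your Java process):

public class AIXFootprintCalculator {
    public static void main(String[] args) {
        // Hypothetical svmon page counts; replace with the values from your own snapshot
        long smallPages  = 120000; // 4 KB pages
        long mediumPages = 26000;  // 64 KB pages
        long largePages  = 10;     // 16 MB pages

        long totalBytes = (smallPages  * 4L * 1024)
                        + (mediumPages * 64L * 1024)
                        + (largePages  * 16L * 1024 * 1024);

        // With the hypothetical counts above, this prints a footprint of ~2.2 GB
        System.out.printf("Java process footprint: %.1f GB%n",
                totalBytes / (1024.0 * 1024 * 1024));
    }
}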
As you can see, the total footprint of our Java process was found to be 2.2 GB, which is aligned with the current Java heap settings. You should be able to easily perform the same memory footprint analysis in your own AIX environment.
I hope
this article has helped you to understand how to calculate the Java process
size on AIX OS. Please feel free to post any comment or question.
As you may have seen from my previous tutorials and case studies, Java Heap Space OutOfMemoryError problems can be complex to pinpoint and resolve. One of the common problems I have observed in Java EE production systems is OutOfMemoryError: unable to create new native thread; this error is thrown when the HotSpot JVM is unable to create any more Java threads.
This
article will revisit this HotSpot VM error and provide you with recommendations
and resolution strategies.
If you are
not familiar with the HotSpot JVM, I first recommend that you look at a high
level view of its internal HotSpot JVM memory spaces. This knowledge is important in
order for you to understand OutOfMemoryError problems related to the native (C-Heap)
memory space.
OutOfMemoryError: unable to create new native
thread – what is it?
Let’s
start with a basic explanation. This HotSpot JVM error is thrown when the
internal JVM native code is unable to create a new Java thread. More precisely,
it means that the JVM native code was unable to create a new “native” thread
from the OS (Solaris, Linux, MAC, Windows...).
We can
clearly see this logic from the OpenJDK 1.6 and 1.7 implementations as per
below:
Unfortunately
at this point you won’t get more detail than this error, with no indication of
why the JVM is unable to create a new thread from the OS…
HotSpot JVM: 32-bit or 64-bit?
Before you
go any further in the analysis, one fundamental fact that you must determine from
your Java or Java EE environment is which version of HotSpot VM you are using
e.g. 32-bit or 64-bit.
Why is it so
important? What you will learn shortly is that this JVM problem is very often related
to native memory depletion; either at the JVM process or OS level. For now
please keep in mind that:
- A 32-bit JVM process is in theory allowed to grow up to 4 GB (often much less on some older 32-bit Windows versions).
- For a 32-bit JVM process, the C-Heap is in a race with the Java Heap and PermGen space, e.g. C-Heap capacity = 2-4 GB – Java Heap size (-Xms, -Xmx) – PermGen size (-XX:MaxPermSize).
- A 64-bit JVM process is in theory allowed to use most of the available OS virtual memory, up to 16 EB (16 million TB).
As you can
see, if you allocate a large Java Heap (2 GB+) for a 32-bit JVM process, the
native memory space capacity will be reduced automatically, opening the door for
JVM native memory allocation failures.
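To put hypothetical numbers on this: a 32-bit JVM started with -Xmx2048m and -XX:MaxPermSize=512m, on an OS allowing a ~4 GB process address space, leaves roughly 4 GB - 2 GB - 0.5 GB = ~1.5 GB for the C-Heap, thread stacks and native libraries; on an OS with a 2 GB or 3 GB process size limit, the margin is even smaller.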
For a
64-bit JVM process, your main concern, from a JVM C-Heap perspective, is the
capacity and availability of the OS physical, virtual and swap memory.
OK great but how does native memory affect Java
threads creation?
Now back to our primary problem. Another fundamental JVM aspect to understand is that Java threads created by the JVM require native memory from the OS. You should now start to understand the source of your problem…
The high
level thread creation process is as per below:
- A new Java thread is requested from the Java program & JDK
- The JVM native code then attempts to create a new native thread from the OS
- The OS then attempts to create a new native thread as per its attributes, which include the thread stack size. Native memory is then allocated (reserved) from the OS to the Java process native memory space, assuming the process has enough address space (e.g. 32-bit process) to honour the request
- The OS will refuse any further native thread & memory allocation if the 32-bit Java process size has depleted its memory address space, e.g. a 2 GB, 3 GB or 4 GB process size limit
- The OS will also refuse any further thread & native memory allocation if the virtual memory of the OS is depleted (including Solaris swap space depletion, since thread access to the stack can generate a SIGBUS error, crashing the JVM: http://bugs.sun.com/view_bug.do?bug_id=6302804)
In
summary:
- Java thread creation requires native memory available from the OS, for both 32-bit & 64-bit JVM processes
- For a 32-bit JVM, Java thread creation also requires memory available from the C-Heap or process address space
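To make this error tangible, find below a minimal sketch (for illustration only, not production code) that simply keeps creating parked Java threads until the OS refuses to allocate another native thread and stack, at which point the JVM throws the error we are discussing:

import java.util.concurrent.CountDownLatch;

public class NativeThreadOOMSimulator {

    public static void main(String[] args) {
        final CountDownLatch neverReleased = new CountDownLatch(1);
        long createdThreads = 0;
        try {
            while (true) {
                // Each new Thread triggers a native thread + stack allocation from the OS
                Thread thread = new Thread(new Runnable() {
                    public void run() {
                        try {
                            neverReleased.await(); // park the thread forever
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                thread.start();
                createdThreads++;
            }
        } catch (OutOfMemoryError oom) {
            // Typically reported as: java.lang.OutOfMemoryError: unable to create new native thread
            System.out.println("Native thread creation failed after "
                    + createdThreads + " threads: " + oom.getMessage());
        }
    }
}

The number of threads you can create before the failure depends on your JVM bitness, the thread stack size (-Xss) and the OS limits discussed later in this article.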
Problem diagnostic
Now that you understand native memory and JVM thread creation a little better, it is time to look at your problem. As a starting point, I suggest that you follow the analysis approach below:
- Determine if you are using a 32-bit or 64-bit HotSpot JVM
- When the problem is observed, take a JVM Thread Dump and determine how many threads are active (a simple thread count monitoring sketch is shown after this list)
- Closely monitor the Java process size utilization before and during the OOM problem replication
- Closely monitor the OS virtual memory utilization before and during the OOM problem replication, including the swap memory space utilization if using Solaris OS
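As a complement to JVM Thread Dump analysis, the sketch below (illustrative only) uses the standard java.lang.management API to sample the live and peak thread counts of the current JVM, data points that you can then correlate with your Java process size and OS virtual memory observations:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountSampler {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        while (true) {
            // Sample the current and peak number of live threads every 30 seconds
            System.out.println("Live threads: " + threadMXBean.getThreadCount()
                    + " | peak: " + threadMXBean.getPeakThreadCount());
            Thread.sleep(30000);
        }
    }
}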
Proper data gathering as per above will give you the data points needed to perform the first level of investigation. The next step will be to look at the possible problem patterns and determine which one is applicable to your problem case.
Problem pattern #1 – C-Heap depletion (32-bit
JVM)
From my
experience, OutOfMemoryError: unable to create new native thread is quite
common for 32-bit JVM processes. This problem is often observed when too many
threads are created vs. C-Heap capacity.
JVM Thread
Dump analysis and Java process size monitoring will allow you to determine if
this is the cause.
Problem pattern #2 – OS virtual memory
depletion (64-bit JVM)
In this scenario, the OS virtual memory is fully depleted. This could be due to a few 64-bit JVM processes taking a lot of memory, e.g. 10 GB+, and / or other high memory footprint rogue processes. Again, Java process size & OS virtual memory monitoring will allow you to determine if this is the cause. Also, please verify that you are not hitting an OS related threshold such as ulimit -u or NPROC (max user processes); default limits are usually low and will prevent you from creating more than, say, 1024 threads per Java process. An example of how to check this is shown below.
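For example (exact defaults and configuration files vary per OS and distribution, so treat this as an assumption to validate on your own platform), running ulimit -u as the user that owns the JVM process will display the current max user processes value, and on many Linux distributions it can be raised via an nproc entry in /etc/security/limits.conf.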
Problem pattern #3 – OS virtual memory
depletion (32-bit JVM)
The third scenario is less frequent but can still be observed. The diagnostic can be a bit more complex, but the key analysis point will be to determine which processes are causing a full OS virtual memory depletion. Your 32-bit JVM processes could be either the source or the victim, e.g. rogue processes using most of the OS virtual memory and preventing your 32-bit JVM processes from reserving more native memory for their thread creation process.
Please
note that this problem can also manifest itself as a full JVM crash (as per below sample) when running
out of OS virtual memory or swap space on Solaris.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
You now
understand your problem and know which problem pattern you are dealing with.
You are now ready to provide recommendations to address the problem…are you?
Your work is
not done yet, please keep in mind that this JVM OOM event is often just a “symptom”
of the actual root cause of the problem. The root cause is typically much
deeper so before providing recommendations to your client I recommend that you
really perform deeper analysis. The last thing you want to do is to simply
address and mask the symptoms. Solutions such as increasing OS physical /
virtual memory or upgrading all your JVM processes to 64-bit should only be
considered once you have a good view on the root cause and production environment
capacity requirements.
The next
fundamental question to answer is how many threads were active at the time of
the OutOfMemoryError? In my experience with Java EE production systems, the
most common root cause is actually the application and / or Java EE container
attempting to create too many threads at a given time when facing non-happy paths such as threads stuck in remote IO calls, thread race conditions, etc. In this scenario, the Java EE container can start creating too many threads while attempting to honour incoming client requests, putting increased pressure on the C-Heap and native memory allocation. Bottom line, before blaming
the JVM, please perform your due diligence and determine if you are dealing
with an application or Java EE container thread tuning problem as the root
cause.
Once you
understand and address the root cause (source of thread creations), you can
then work on tuning your JVM and OS memory capacity in order to make it more
fault tolerant and better “survive” these sudden thread surge scenarios.
Recommendations:
- First, quickly rule out any obvious OS memory (physical & virtual memory) & process capacity (e.g. ulimit -u / NPROC) problem.
- Perform a JVM Thread Dump analysis and determine the source of all the active threads vs. an established baseline. Determine what is causing your Java application or Java EE container to create so many threads at the time of the failure.
- Please ensure that your monitoring tools closely track both your Java VM process size & OS virtual memory. This crucial data will be required in order to perform a full root cause analysis. Please remember that a 32-bit Java process size is limited to between 2 GB and 4 GB depending on your OS.
- Look at all running processes and determine if your JVM processes are actually the source of the problem or the victims of other processes consuming all the virtual memory.
- Revisit your Java EE container thread configuration & JVM thread stack size. Determine if the Java EE container is allowed to create more threads than your JVM process and / or OS can handle.
- Determine if the Java Heap size of your 32-bit JVM is too large, preventing the JVM from creating enough threads to fulfill your client requests. In this scenario, you will have to consider reducing your Java Heap size (if possible), vertical scaling or upgrading to a 64-bit JVM (an illustrative set of JVM settings is shown after this list).
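As an illustration only (the right values depend entirely on your application footprint, baseline thread count and OS, so treat the numbers below as placeholders rather than a recommendation), a 32-bit HotSpot JVM tuned to leave more address space for native thread creation could use startup arguments along these lines:

-server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -Xss256k

Reducing the Java Heap and / or the thread stack size (-Xss) frees address space for the C-Heap and thread stacks, but always validate such changes against verbose GC data and a proper load test before rolling them out.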
Capacity planning analysis to the rescue
As you may
have seen from my past article on the Top 10 Causes of Java EE Enterprise Performance Problems, lack of capacity planning analysis is often the source of
the problem. Any comprehensive load and performance testing exercise should also properly determine the Java EE container thread, JVM & OS native memory requirements for your production environment, including impact measurements of "non-happy" paths. This approach
will allow your production environment to stay away from this type of problem
and lead to better system scalability and stability in the long run.
Please
provide any comment and share your experience with JVM native thread troubleshooting.
Today we will revisit a common Java HotSpot VM problem that you have probably already faced at some point in your JVM troubleshooting on Solaris OS, especially on a 32-bit JVM.
This article will provide you with a description of this particular type of OutOfMemoryError, the common problem patterns and the recommended resolution approach.
If you are not familiar with the different HotSpot memory spaces, I recommend that you first review the article Java HotSpot VM Overview before going any further in this reading.
java.lang.OutOfMemoryError: Out of swap space? – what is it?
This error message is thrown by the Java HotSpot VM (native code) following a failure to allocate native memory from the OS to the HotSpot C-Heap or to dynamically expand the Java Heap. This problem is very different from a standard OutOfMemoryError (normally due to an exhaustion of the Java Heap or PermGen space).
A typical error found in your application / server logs is:
Exception in thread "main" java.lang.OutOfMemoryError: requested 53459 bytes for ChunkPool::allocate. Out of swap space?
Also, please note that depending on the OS that you use (Windows, AIX, Solaris etc.), some OutOfMemoryError occurrences due to C-Heap exhaustion may not give you detail such as “Out of swap space”. In this case, you will need to review the OOM error Stack Trace, determine the computing task that triggered the OOM and identify which OutOfMemoryError problem pattern your problem is related to (Java Heap, PermGen or Native Heap exhaustion).
Ok so can I increase the Java Heap via –Xms & -Xmx to fix it?
Definitely not! This is the last thing you want to do, as it will make the problem worse. As you learned from my other article, the Java HotSpot VM is split between 3 memory spaces (Java Heap, PermGen, C-Heap). For a 32-bit VM, all these memory spaces compete with each other for memory. Increasing the Java Heap space will further reduce the capacity of the C-Heap and reserve more memory from the OS.
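As a quick illustration with hypothetical numbers: on a 32-bit VM limited to roughly 4 GB of address space, moving from -Xmx2048m to -Xmx2560m hands another 512 MB to the Java Heap and takes that same 512 MB away from what is left for the PermGen space and the C-Heap, which is often exactly where these native memory allocation failures start to appear.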
Your first task is to determine if you are dealing with a C-Heap depletion or OS physical / virtual memory depletion.
Now let’s see the most common patterns of this problem.
Common problem patterns
There are multiple scenarios which can lead to a native OutOfMemoryError. I will share with you what I have seen in my past experience as the most common patterns.
-Native Heap (C-Heap) depletion due to too many Java EE applications deployed on a single 32-bit JVM (combined with large Java Heap e.g. 2 GB) * most common problem *
-Native Heap (C-Heap) depletion due to a non-optimal Java Heap size e.g. Java Heap too large for the application(s) needs on a single 32-bit JVM
-Native Heap (C-Heap) depletion due to too many created Java Threads e.g. allowing the Java EE container to create too many Threads on a single 32-bit JVM
-OS physical / virtual memory depletion preventing the HotSpot VM from allocating native memory to the C-Heap (32-bit or 64-bit VM)
-OS physical / virtual memory depletion preventing the HotSpot VM from expanding its Java Heap or PermGen space at runtime (32-bit or 64-bit VM)
Please keep in mind that each HotSpot native memory problem can be unique and requires its own troubleshooting & resolution approach.
Find below a list of high level steps you can follow in order to further troubleshoot:
-First, determine if the OOM is due to C-Heap exhaustion or OS physical / virtual memory depletion. For this task, you will need to perform close monitoring of your OS memory utilization and Java process size. For example, on Solaris a 32-bit JVM process size can grow to about 3.5 GB (technically a 4 GB limit), after which you can expect native memory allocation failures. The Java process size monitoring will also allow you to determine if you are dealing with a native memory leak (growing over time / several days…)
-The OS vendor and version that you use are important as well. For example, some versions of Windows (32-bit) by default support a process size of up to 2 GB only (leaving you with minimal flexibility for Java Heap and Native Heap allocations). Please review your OS documentation and determine what the maximum process size is, e.g. 2 GB, 3 GB, 4 GB or more (64-bit OS)
-Like the OS, it is also important that you review and determine if you are using a 32-bit VM or 64-bit VM. Native memory depletion for a 64-bit VM typically means that your OS is running out of physical / virtual memory
-Review your JVM memory settings. For a 32-bit VM, a Java Heap of 2 GB+ can really start to add pressure on the C-Heap, depending on how many applications you have deployed, how many Java Threads are created, etc. In that case, please determine if you can safely reduce your Java Heap by about 256 MB (as a starting point) and see if it helps improve your JVM memory “balance”.
-Analyze the verbose GC output or use a tool like JConsole to determine your Java Heap footprint. This will allow you to determine whether you can reduce your Java Heap in a safe manner or not (illustrative verbose GC startup arguments are shown after this list)
-When the OutOfMemoryError is observed, generate a JVM Thread Dump and determine how many Threads are active in your JVM; the more Threads, the more native memory your JVM will use. You will then be able to combine this data with the OS, Java process size and verbose GC data, allowing you to determine where the problem is
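For reference, the verbose GC data mentioned above can be captured on the HotSpot VM with startup arguments along these lines (illustrative only; adjust the GC log file path to your environment):

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:<GC log file path>

The resulting log shows the Java Heap utilization before and after each collection, which is exactly the data you need in order to judge whether the Java Heap can be safely reduced.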
Once you have a clear view of the situation in your environment and root cause, you will be in a better position to explore potential solutions as per below:
-Reduce the Java Heap (if possible / after close monitoring of the Java Heap) in order to give that memory back to the C-Heap / OS
-Increase the physical RAM / virtual memory of your OS (only applicable if depletion of the OS memory is observed; especially for a 64-bit OS & VM)
-Upgrade your HotSpot VM to 64-bit (for some Java EE applications, a 64-bit VM is more appropriate) or segregate your applications to different JVM’s (this increases demand on your hardware but reduces C-Heap utilization per JVM)
-Native memory leaks are trickier and require deeper analysis, such as a review of the Solaris pmap / AIX svmon data and of any third party libraries (e.g. monitoring agents). Please also review the Oracle Sun Bug database and determine if the HotSpot version you use is exposed to known native memory problems
Still struggling with this problem? Don’t worry, simply post a comment / question at the end of this article. I also encourage you to post your problem case to the root cause analysis forum.
This article will provide you with an overview of the JRockit Java Heap Space vs. the HotSpot VM. It will also provide you with some background on Oracle’s future plans regarding JRockit & HotSpot.
Oracle JRockit VM Java Heap: 2 different memory spaces
The JRockit VM memory is split between 2 memory spaces:
-The Java Heap (YoungGen and OldGen)
-The Native memory space (Classes pool, C-Heap, Threads...)
Memory Space: Java Heap
Start-up arguments and tuning: -Xmx (maximum Heap size), -Xms (minimum Heap size), e.g. -Xmx1024m -Xms1024m
Monitoring strategies: verbose GC, JMX API, JRockit Mission Control tools suite
Description: The JRockit Java Heap is typically split between the YoungGen (short-lived objects) and the OldGen (long-lived objects).

Memory Space: Native memory space
Start-up arguments and tuning: not configurable directly. For a 32-bit VM, the native memory space capacity = 2-4 GB – Java Heap (process size limit of 2 GB, 3 GB or 4 GB depending on your OS). For a 64-bit VM, the native memory space capacity = physical server total RAM & virtual memory – Java Heap.
Monitoring strategies: total process size check on Windows and Linux, pmap command on Solaris & Linux, JRockit JRCMD tool
Description: The JRockit native memory space stores the Java class metadata, Threads and objects such as library files, other JVM and third party native code objects.
Where is the PermGen space?
Similar to the IBM VM, there is no PermGen space for the JRockit VM. The PermGen space is only applicable to the HotSpot VM. The JRockit VM is using the Native Heap for Class metadata related data. Also, as you probably saw from my other article, Oracle Sun is also starting to remove the PermGen space for the HotSpot VM.
Why is the JRockit VM Java process using more memory than HotSpot VM?
The JRockit VM tends to use more native memory in exchange for better performance. JRockit does not have an interpretation mode (it is compilation only), so due to its additional native memory needs, its process size tends to be a couple of hundred MB larger than the equivalent Sun JVM size. This should not be a big problem unless you are using a 32-bit JRockit with a large Java Heap requirement; in this scenario, the risk of OutOfMemoryError due to Native Heap depletion is higher for a JRockit VM (for a 32-bit VM, the bigger the Java Heap, the less memory is left for the Native Heap).
What is Oracle’s plan for JRockit?
The current Oracle JVM strategy is to merge both HotSpot and JRockit product lines into a single JVM project that will include the best features of each VM. This will also simplify JVM tuning since, right now, failure to understand the differences between these 2 VM’s can lead to bad tuning recommendations and performance problems.
Please feel free to post any comment or question on the JRockit VM.
This short article will provide you with a high level overview of the different Java memory spaces for the IBM VM.
This understanding is quite important given the implementation & naming convention differences between HotSpot & IBM VM.
IBM VM: 2 different memory spaces
The IBM VM memory is split between 2 memory spaces:
-The Java Heap (nursery and tenured spaces)
-The Native Heap (C-Heap)
Memory Space: Java Heap
Start-up arguments and tuning: -Xmx (maximum Heap size), -Xms (minimum Heap size), e.g. -Xmx1024m -Xms1024m; GC policy e.g. -Xgcpolicy:gencon (enable the gencon GC policy)
Monitoring strategies: verbose GC, JMX API, IBM monitoring tools
Description: The IBM Java Heap is typically split between the nursery and tenured spaces (YoungGen, OldGen). The gencon GC policy (a combination of concurrent and generational GC) is typically used for Java EE platforms in order to minimize the GC pause time.

Memory Space: Native Heap (C-Heap)
Start-up arguments and tuning: not configurable directly. For a 32-bit VM, the C-Heap capacity = 4 GB – Java Heap. For a 64-bit VM, the C-Heap capacity = physical server total RAM & virtual memory – Java Heap.
Monitoring strategies: svmon command
Description: The C-Heap stores class metadata objects including library files, other JVM and third party native code objects.
Where is the PermGen space?
This is by far the most typical question I get from Java EE support individuals supporting an IBM VM environment for the first time. The answer: there is no PermGen space for the IBM VM. The PermGen space is only applicable to the HotSpot VM. The IBM VM uses the Native Heap for Class metadata related data. Also, as you probably saw from my other article, Oracle / Sun is also starting to remove the PermGen space from the HotSpot VM.
The next article will provide you with a tutorial on how to enable and analyze verbose GC for an IBM VM. Please feel free to post any comment or question on the IBM VM.
This case study describes the complete root cause analysis and resolution of a Java ZipFile OutOfMemoryError problem triggered during the deployment of an Oracle Weblogic Integration 9.2 application.
Environment Specs
·Java EE server: Oracle Weblogic Integration 9.2 MP2
·OS: Sun Solaris 5.10
·JDK: Sun Java VM 1.5.0_10-b03 - 32-bit
·Java VM arguments: -server -Xms2560m -Xmx2560m -XX:PermSize=256m -XX:MaxPermSize=512m
·Platform type: BPM
Monitoring and troubleshooting tools
·Solaris pmap command
·Java verbose GC (for Java Heap and PermGen memory monitoring)
Problem overview
·Problem type: java.lang.OutOfMemoryError at java.util.zip.ZipFile.open(Native Method)
An OutOfMemoryError problem was observed during the deployment process of one of our Weblogic Integration 9.2 applications.
Gathering and validation of facts
As usual, a Java EE problem investigation requires gathering of technical and non technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:
·What is the client impact? HIGH (full outage of our application)
·Recent change of the affected platform? Yes, a minor update to the application was done along with an increase of the minimum and maximum Java Heap size from 2 GIG (2048m) to 2.5 GIG (2560m) in order to reduce the frequency of full garbage collections
·Any recent traffic increase to the affected platform? No
·How long has this problem been observed? Since the increase of the Java Heap to 2.5 GIG
·Is the OutOfMemoryError happening on start-up or under load? The OOM is triggered at deployment time only, with no traffic to the environment
·What was the utilization of the Java Heap at that time? Really low at 20% utilization only (no traffic)
·What was the utilization of the PermGen space at that time? Healthy at ~ 70% utilization and not leaking
·Did a restart of the Weblogic server resolve the problem? No
-Conclusion #1: The problem trigger seems to be related to the 500 MB increase of the Java Heap to 2.5 GIG. ** This problem initially puzzled the troubleshooting team until more deep dive analysis was done as per below **
Weblogic log file analysis
A first analysis of the problem was done by reviewing the OOM error in the Weblogic managed server log.
java.lang.OutOfMemoryError
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:203)
at java.util.jar.JarFile.<init>(JarFile.java:132)
at java.util.jar.JarFile.<init>(JarFile.java:97)
at weblogic.utils.jars.JarFileDelegate.<init>(JarFileDelegate.java:32)
at weblogic.utils.jars.VirtualJarFactory.createVirtualJar(VirtualJarFactory.java:24)
at weblogic.application.ApplicationFileManagerImpl.getVirtualJarFile(ApplicationFileManagerImpl.java:194)
at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:162)
at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:88)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:66)
at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:314)
As you can see, the OutOfMemoryError is thrown during the loading of our application (EAR file). The Weblogic server relies on the Java JDK ZipFile class to load any application EAR / jar files.
-Conclusion #2: The OOM error is triggered during a native call (ZipFile.open(Native Method)) from the Java JDK ZipFile to load our application EAR file. This native JVM operation requires proper native memory and virtual address space available in order to execute its loading operation. The conclusion at this point was that our Java VM 1.5 was running out of native memory / virtual address space at deployment time.
Sun Java VM native memory and MMAP files
Before we go any further in the root cause analysis, you may want to revisit the internal structure of the Sun Java HotSpot VM for JDK 1.5 and JDK 1.6. A proper understanding of the internal Sun Java VM is quite important, especially when dealing with native memory problems.
When using JDK 1.4 / 1.5, any JAR / ZIP file loaded by the Java VM gets mapped entirely into an address space. This means that the more EAR / JAR files you are loading into a single JVM, the higher the native memory footprint of your Java process.
This also means that the higher your Java Heap and PermGen space, the less memory is left for your native memory spaces such as the C-Heap and MMAP Files, which can definitely be a problem if you are deploying too many separate applications (EAR files) to a single 32-bit Java process.
Please note that Sun came up with improvements in JDK 1.6 (Mustang) and changed the behaviour so that the JAR file's central directory is still mapped, but the entries themselves are read separately; reducing the native memory requirement.
I suggest that you review the Sun Bug Id link below for more detail on such JDK 1.4 / 1.5 limitation.
Native memory footprint: Solaris pmap to the rescue!
The Solaris pmap command allows Java EE application developers to analyse how an application uses memory by providing a breakdown of all allocated address spaces of the Java process.
Now back to our OOM problem, pmap snapshots were generated following the OutOfMemoryError and analysis of the data below did reveal that we were getting very close to the upper 4 GIG limit of a 32-bit process on Solaris following our Java Heap increase to 2.5 GIG.
Now find below the reduced raw data along with a snapshot and an explanation of the findings.
As you can see in the above pmap analysis, our 32-bit Java process size was getting very close to the 4 GIG limit; leaving no room for additional EAR file deployment.
We can see a direct correlation between the Java Heap increase and the appearance of this OutOfMemoryError. Since the Java 1.5 VM maps the entire EAR file into a native memory address space, proper native memory and address space must be available to fulfil such a ZipFile.open() native operation.
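Putting rough numbers on it (approximate figures derived from the settings above): 2560 MB of Java Heap + up to 512 MB of PermGen + several hundred MB of thread stacks, C-Heap allocations and mapped JAR / EAR files already brings the process to roughly 3.5 GB+, leaving very little of the ~4 GB address space for the additional MMAP operation triggered by ZipFile.open() at deployment time.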
The solution was to simply revert back our Java Heap increase which did allow the deployment of our EAR file.
Other long term solutions will be discussed shortly, such as vertical scaling of the Weblogic Integration platform (adding more JVM’s / managed servers to existing physical servers), switching to a 64-bit JVM and / or upgrading to the Sun Java VM 1.6.
Conclusion and recommendations
- When facing an OutOfMemoryError with the Sun Java VM, always do proper analysis and your due diligence to determine which memory space is the problem (Java Heap, PermGen, native / C-Heap)
- When facing an OutOfMemoryError due to native memory exhaustion, always generate Solaris pmap snapshots of your Java process and do your analysis. Do not increase the Java Heap (Xms / Xmx) as this will make the problem even worse
- Be very careful before attempting to increase your Java Heap or PermGen space. Always ensure that you understand your total Java VM memory footprint and that you have enough native memory available for the non Java Heap memory allocations such as MMAP Files for your application EAR files