[JDK 6u21]: does CMS report concurrent mode failure on perm gen expansion ?

Srinivas Ramakrishna ysr1729 at gmail.com
Thu Sep 13 22:15:12 UTC 2012


Right, it's a bug for CMS to cause a full gc for perm gen expansion. There
were a couple of releases in 6uXX (for suitable values of XX)
where this regression had crept in unnoticed before it was caught and
fixed. Try a more recent 7uXX (pre-perm-gen-removal changes), where you
should find the perm gen expanding without a stop-world full gc (and maybe
triggering a CMS concurrent cycle instead; my memory is hazy and
the source code is not close at hand). As you guys noted, the workaround is to
max-size the perm gen and enable class unloading.
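
A minimal sketch of that workaround as command-line flags; the 256m size is an illustrative assumption, not a recommendation, and MyApp is a placeholder:

```shell
# Workaround sketch: pre-size the perm gen to its maximum so it never
# needs to expand at runtime, and enable class unloading so the CMS
# concurrent cycle can reclaim perm gen space. Tune the 256m value to
# your application's actual class footprint.
java -XX:+UseConcMarkSweepGC \
     -XX:PermSize=256m -XX:MaxPermSize=256m \
     -XX:+CMSClassUnloadingEnabled \
     MyApp
```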

-- ramki

On Thu, Sep 13, 2012 at 10:09 PM, Charlie Hunt <chunt at salesforce.com> wrote:

> Hi Jon,
>
> As often is the case, thanks for confirming my suspicions!
>
> Increasing PermSize is definitely in order. :-)
>
> thanks again,
>
> charlie ...
>
> On Sep 13, 2012, at 2:37 PM, Jon Masamitsu wrote:
>
>  Charlie,
>
> It is probably a case of mislabeling.  I think in earlier
> releases we were just dumbly labeling any full GC as
> a concurrent mode failure.  But it is a genuine concurrent mode
> failure if the perm gen fills up.  Ramki had done some
> work to start a concurrent collection for the perm gen with
> a different trigger.
>
> Increase your PermSize so that we don't have to look
> at it :-).
>
> Jon
>
>
> On 09/13/12 11:26, Charlie Hunt wrote:
>
> I'm looking at a GC log and surprised by the "concurrent mode failure"
> messages.
>
>  I'm seeing the following GC log entries as the first 8 GCs:
>
>  3.398: [Full GC 3.398: [CMS (concurrent mode failure)[YG occupancy:
> 594957 K (1887488 K)]3.489: [weak refs processing, 0.0000140 secs]:
> 0K->0K(6291456K), 0.1308660 secs] 594957K->594957K(8178944K), [CMS Perm :
> 21247K->21247K(21248K)], 0.1310260 secs] [Times: user=0.13 sys=0.03,
> real=0.14 secs]
>
>  3.529: [Full GC 3.529: [CMS (concurrent mode failure)[YG occupancy:
> 594957 K (1887488 K)]3.617: [weak refs processing, 0.0000110 secs]:
> 0K->0K(6291456K), 0.1010290 secs] 594957K->594957K(8178944K), [CMS Perm :
> 21248K->21248K(42496K)], 0.1011530 secs] [Times: user=0.12 sys=0.00,
> real=0.10 secs]
>
>  3.638: [Full GC 3.638: [CMS (concurrent mode failure)[YG occupancy:
> 662071 K (1887488 K)]3.730: [weak refs processing, 0.0000140 secs]:
> 0K->0K(6291456K), 0.1068710 secs] 662071K->662071K(8178944K), [CMS Perm :
> 21319K->21319K(42688K)], 0.1069610 secs] [Times: user=0.15 sys=0.00,
> real=0.11 secs]
>
>  3.772: [Full GC 3.772: [CMS (concurrent mode failure)[YG occupancy:
> 729184 K (1887488 K)]3.915: [weak refs processing, 0.0000140 secs]:
> 0K->0K(6291456K), 0.1582170 secs] 729184K->729184K(8178944K), [CMS Perm :
> 21499K->21499K(42880K)], 0.1583500 secs] [Times: user=0.20 sys=0.00,
> real=0.16 secs]
>
>  3.962: [Full GC 3.962: [CMS (concurrent mode failure)[YG occupancy:
> 804686 K (1887488 K)]4.107: [weak refs processing, 0.0000130 secs]:
> 0K->0K(6291456K), 0.1598050 secs] 804686K->804686K(8178944K), [CMS Perm :
> 21702K->21702K(43072K)], 0.1599420 secs] [Times: user=0.21 sys=0.00,
> real=0.16 secs]
>
>  4.497: [Full GC 4.497: [CMS (concurrent mode failure)[YG occupancy:
> 939150 K (1887488 K)]4.672: [weak refs processing, 0.0000130 secs]:
> 0K->0K(6291456K), 0.1895350 secs] 939150K->939150K(8178944K), [CMS Perm :
> 22202K->22202K(43408K)], 0.1896680 secs] [Times: user=0.23 sys=0.00,
> real=0.19 secs]
>
>  5.054: [Full GC 5.054: [CMS (concurrent mode failure)[YG occupancy:
> 992483 K (1887488 K)]5.274: [weak refs processing, 0.0000130 secs]:
> 0K->0K(6291456K), 0.2347170 secs] 992483K->992483K(8178944K), [CMS Perm :
> 24058K->24058K(44408K)], 0.2348700 secs] [Times: user=0.28 sys=0.00,
> real=0.23 secs]
>
>  6.662: [Full GC 6.662: [CMS (concurrent mode failure)[YG occupancy:
> 1292895 K (1887488 K)]6.953: [weak refs processing, 0.0000140 secs]:
> 0K->0K(6291456K), 0.3063690 secs] 1292895K->1292895K(8178944K), [CMS Perm :
> 30692K->30692K(48120K)], 0.3065860 secs] [Times: user=0.66 sys=0.01,
> real=0.30 secs]
>
>  After these 8 GCs, the GC log is much more in line with what I'd expect,
> i.e. ParNew GCs, initial-marks, remarks, etc.
>
>  In each of the above GCs:
> - young gen is increasing in occupancy
> - old gen remains empty
> - perm gen space size is expanding
>
>  I can understand a Full GC being reported since the perm gen is expanding,
> but I don't understand the reporting of "concurrent mode failure".  Am I
> missing something?  Is it possible HotSpot reports a concurrent mode
> failure when the perm gen is expanding?
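
[The expansion is visible in the "CMS Perm" fields of the entries above: occupancy barely moves while committed capacity grows. A rough parsing sketch; the regex is an assumption about the exact log format:

```python
import re

# Sample "CMS Perm" fields copied from the log entries above.
lines = [
    "[CMS Perm : 21247K->21247K(21248K)]",
    "[CMS Perm : 21248K->21248K(42496K)]",
    "[CMS Perm : 21319K->21319K(42688K)]",
]

# Capture used-before, used-after, and committed capacity, all in KB.
pat = re.compile(r"CMS Perm : (\d+)K->(\d+)K\((\d+)K\)")

for line in lines:
    before, after, capacity = map(int, pat.search(line).groups())
    # Occupancy is nearly flat, but capacity roughly doubles between
    # the first two entries: the perm gen is expanding, not filling.
    print(before, after, capacity)
```
]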
>
>  Here's the set of command line options in use:
> -Xmx8G -Xms8G -Xmn1500m -XX:+UseCompressedOops -XX:-UseGCOverheadLimit
> -XX:+UseConcMarkSweepGC -XX:MaxDirectMemorySize=5000m
>
>  thanks,
>
>  charlie ...
>
>  PS: I didn't go look at / investigate the OpenJDK HotSpot source code
> since this is JDK 6u21, and something may have changed between 6u21 and
> what's in OpenJDK.
>
>
>


More information about the hotspot-gc-dev mailing list