hprof format question

Simon Roberts simon at dancingcloudservices.com
Tue Oct 23 22:26:26 UTC 2018

Hi all, I'm hoping this is the correct list for a question on the hprof
file format (1.0.2)?

I found this information:

and have been working on a small project to read these files. (Yes, I know
that NetBeans/VisualVM and Eclipse both have such libraries, and a number
of other tools have been derived from those, but as far as I can tell they
are all fundamentally built on the notion of fully decoding everything and
creating in-memory representations of the entire heap. I want to pull out
only certain pieces of information--specifically object counts by
class--from a large, ~20 GB, dump file, and those tools simply give up the
ghost on a file that size.)

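For what it's worth, a minimal sketch of the streaming approach I mean:
read the hprof header, then walk the top-level records by tag and length,
skipping each body rather than decoding it, so memory use stays constant
regardless of dump size. The field layout is as described in the 1.0.2
document; the class and method names here are my own invention:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class HprofScan {
    /**
     * Reads the hprof header (NUL-terminated version string, u4 identifier
     * size, u8 timestamp), then walks the top-level records, returning the
     * u1 tag of each one. Record bodies are skipped, never materialized.
     */
    static List<Integer> readRecordTags(DataInputStream in) throws IOException {
        while (in.readByte() != 0) { }            // skip "JAVA PROFILE 1.0.2\0"
        int idSize = in.readInt();                // u4: size of object IDs
        in.readLong();                            // u8: timestamp, unused here
        List<Integer> tags = new ArrayList<>();
        int tag;
        while ((tag = in.read()) != -1) {         // u1 tag; -1 at end of file
            in.readInt();                         // u4 time delta, unused
            long length = Integer.toUnsignedLong(in.readInt()); // u4 body length
            in.skipNBytes(length);                // skip the body entirely
            tags.add(tag);
        }
        return tags;
    }
}
```

(For counting objects by class one would decode only LOAD_CLASS records and
the INSTANCE_DUMP sub-records inside the heap dump segments, and skip
everything else the same way.)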
Anyway, my code reads the file pretty well so far, except that the file I
want to analyze seems to contradict the specification in the document
mentioned above. Specifically, after processing about five
HEAP_DUMP_SEGMENT records with around 1.5 million sub-records in each, I
come across some ROOT_JNI_LOCAL records. The first 15 follow the format
specified in the above document (one 8-byte "ID" and two 4-byte values).
But the 16th omits the two 4-byte values (well, it might simply have more,
but visual analysis shows that after the 8-byte ID, I have a new block tag
and a believable structure).
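For reference, the documented layout I'm reading for a ROOT_JNI_LOCAL
sub-record is the object ID followed by a thread serial number (u4) and a
frame number (u4). A minimal decoder under that assumption (the names are
mine, and I'm assuming the 8-byte identifier size from the file header):

```java
import java.io.DataInputStream;
import java.io.IOException;

public class JniLocal {
    final long objectId;       // heap object ID (idSize bytes; 8 here)
    final int threadSerial;    // u4: serial number of the owning thread
    final int frameNumber;     // u4: frame number in the thread's stack trace

    JniLocal(long objectId, int threadSerial, int frameNumber) {
        this.objectId = objectId;
        this.threadSerial = threadSerial;
        this.frameNumber = frameNumber;
    }

    /** Decodes one ROOT_JNI_LOCAL body per the 1.0.2 document. */
    static JniLocal read(DataInputStream in) throws IOException {
        return new JniLocal(in.readLong(), in.readInt(), in.readInt());
    }
}
```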

My current intention is to write a "smart" parser that checks whether
reading the "normal" format would succeed, and falls back to the alternate
format if it sees that the normal one would leave the next block starting
with an illegal tag value. Clearly this is not ideal!
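The lookahead could be sketched with mark/reset: after the 8-byte ID, peek
ahead past where the two u4 fields would end, and only consume them if the
byte there is a plausible sub-record tag. The VALID_TAGS set (a subset of
the sub-record tags listed in the 1.0.2 document) and the method name are
my own, not from any library:

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.util.Set;

public class SpeculativeParse {
    // Heap-dump sub-record tags from the 1.0.2 document (subset).
    static final Set<Integer> VALID_TAGS =
        Set.of(0xFF, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
               0x20, 0x21, 0x22, 0x23);

    /**
     * Called once the 8-byte ID of a ROOT_JNI_LOCAL has been read. Decides
     * whether the two trailing u4 fields are present by checking whether
     * skipping them would leave the stream at a plausible sub-record tag.
     * Returns true if the long (documented) form looks right; the stream
     * position is unchanged either way.
     */
    static boolean longFormLooksRight(BufferedInputStream in) throws IOException {
        in.mark(9);                          // enough to peek 8 + 1 bytes
        byte[] peek = new byte[9];
        int n = in.readNBytes(peek, 0, 9);
        in.reset();                          // rewind; nothing consumed
        // Long form: the 9th byte ahead would be the next sub-record's tag.
        return n == 9 && VALID_TAGS.contains(peek[8] & 0xFF);
    }
}
```

This is still a heuristic, of course: a short-form record whose following
bytes happen to line up with a legal tag would fool it, which is exactly
why I'd rather find an authoritative description of the format.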

Is there any more detailed, newer, or better information? Or anything else
I should know in pursuing this tool (or simply an object-frequency-by-
classname result) from an hprof 1.0.2 format file?

(And yes, I'm pursuing a putative memory leak :)

Thanks for any input (including "dude, this is the wrong list!")

Simon Roberts
(303) 249 3613

More information about the code-tools-dev mailing list