RFR: 7885: Graphical rendering of dependency view fails due to heap memory drain [v2]

Vincent Alexander Beelte duke at openjdk.org
Wed Aug 23 19:44:29 UTC 2023


On Fri, 4 Aug 2023 15:25:11 GMT, Virag Purnam <vpurnam at openjdk.org> wrote:

>> The issue occurs when multiple views are enabled in JMC, mainly the Dependency View and the Heatmap View. (The Flame Graph view had the same problem, but it was fixed because its implementation is now based on a Swing component.)
>> 
>> The Dependency View and the Heatmap View are still implemented in JavaScript, and in one particular scenario JMC fails with "**java.lang.OutOfMemoryError: Java heap space**".
>> 
>> **Scenario:**
>> Views are re-rendered for each selection in a table. On every table click, four threads call the method "**toJsonString(IItemCollection items)**" in the "**IItemCollectionJsonSerializer**" class, which appends the items to a StringBuilder and then writes them to a StringWriter. When multiple JFR files are open in the editor, or when table contents are selected in quick succession, several threads run the method concurrently and all try to append and write at the same time. This results in "**java.lang.OutOfMemoryError: Java heap space**".
>> 
>> ![image](https://github.com/openjdk/jmc/assets/97600378/ae24614c-c640-4dc0-9c4c-7f70ee2f164f)
>> 
>> ![image](https://github.com/openjdk/jmc/assets/97600378/a41434d7-7bb1-47a0-bb7f-6a8b8e17af30)
>> 
>> 
>> **Possible solution:** Making the method "**toJsonString(IItemCollection items)**" synchronized. I have made the change and created this PR; a sketch of the idea follows below.
>> 
>> Could you please review and provide your comments? If there is a better way to solve this issue, could you please suggest it?
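>> 
>> A minimal, self-contained sketch of that change, with illustrative names rather than the actual JMC code:
>> 
>>     import java.io.StringWriter;
>> 
>>     // Hypothetical stand-in for IItemCollectionJsonSerializer; the
>>     // synchronized keyword is the only point of the example.
>>     public class JsonSerializerSketch {
>>         // Concurrent callers now queue up, so at most one large
>>         // StringWriter/StringBuilder is alive at a time instead of
>>         // four per table click.
>>         public static synchronized String toJsonString(Iterable<?> items) {
>>             StringWriter writer = new StringWriter();
>>             for (Object item : items) {
>>                 writer.write(item.toString()); // the real code appends JSON fields here
>>             }
>>             return writer.toString();
>>         }
>>     }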
>
> Virag Purnam has updated the pull request incrementally with one additional commit since the last revision:
> 
>   7885: Graphical rendering of dependency view fails due to heap memory drain

Sure, go ahead and adapt that idea however you see fit. I had already gotten approval from Accenture to contribute it myself, but I never did because in the end I felt it wasn't enough of an improvement.
Even with that change I could easily make the JSONs big enough that they wouldn't fit within Java's array size limits, and I didn't even need abnormally large recordings to do it.
I think fully removing the JSONs is not easily possible, but you might want to explore changing the format, unless there are consumers outside the scope of what you can change.
I imagine there is a lot of duplicate content in those JSONs, such as stack traces or class names, so you could implement manual dictionary compression (I believe that is the term for what I am proposing).
Instead of, for example,

{
    "people": [
        {
            "firstName": "John",
            "lastName": "Smith"
        },
        {
            "firstName": "Max",
            "lastName": "Miller"
        },
        {
            "firstName": "Max",
            "lastName": "Smith"
        },
        {
            "firstName": "John",
            "lastName": "Miller"
        }
    ]
}

you could do this:

{
    "people": [
        {
            "firstName": 0,
            "lastName": 1
        },
        {
            "firstName": 2,
            "lastName": 3
        },
        {
            "firstName": 2,
            "lastName": 1
        },
        {
            "firstName": 0,
            "lastName": 3
        }
    ],
    "strings": ["John", "Smith", "Max", "Miller"]
}

In this example the second JSON isn't actually smaller in character count, because the names are too short, but it should illustrate the idea.
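
A small sketch of how such a string pool could be built during serialization; the class and method names here are illustrative, not JMC API:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class StringPool {
        // Insertion-ordered so that an entry's index matches its position
        // in the emitted "strings" array.
        private final Map<String, Integer> indexByString = new LinkedHashMap<>();

        // Returns a stable index for s, adding it to the pool on first use.
        public int intern(String s) {
            return indexByString.computeIfAbsent(s, k -> indexByString.size());
        }

        // The contents of the "strings" array to emit at the end of the JSON.
        public List<String> toList() {
            return new ArrayList<>(indexByString.keySet());
        }

        public static void main(String[] args) {
            StringPool pool = new StringPool();
            System.out.println(pool.intern("John"));  // 0
            System.out.println(pool.intern("Smith")); // 1
            System.out.println(pool.intern("John"));  // 0 again, deduplicated
            System.out.println(pool.toList());        // [John, Smith]
        }
    }

While writing each record, the serializer would emit pool.intern(name) instead of the quoted string, then append pool.toList() as the "strings" array once all records have been written.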

-------------

PR Comment: https://git.openjdk.org/jmc/pull/511#issuecomment-1690536265

