CMake replacing Autotools?

erik.joelsson at oracle.com erik.joelsson at oracle.com
Fri Mar 19 13:39:36 UTC 2021


On 2021-03-19 03:14, Thomas Stüfe wrote:
> On Fri, Mar 19, 2021 at 11:04 AM Andrew Haley <aph at redhat.com> wrote:
>
>> On 3/19/21 9:22 AM, Thomas Stüfe wrote:
>>>> 2. More choices to actually build the project: Use integrated build
>>>> tools of IDEs (Visual Studio, Xcode) or use Ninja, which is faster than
>>>> gmake
>>>>
>>> Is gmake really where we lose time? Did you analyze this or is this just
>> an
>>> assumption? I would have thought it's things like single threaded jmod,
>>> jlink, and subprocess spawning.
>> I'm sure it is. The other slow thing is linking HotSpot.
>>
> What is so slow with gmake? Rule processing?
>
> It also depends on the platform, I guess. Eg on Cygwin, the fork emulation
> is extremely slow.
>
I have done pretty extensive work optimizing our build's performance 
over the years. There are many ways to measure performance, so first we 
need to establish what kind of build we are even measuring.

For a full images build ("make images"), on a reasonably sized machine 
(8-16 hardware threads), we scale pretty well and use most CPUs most of 
the time. There isn't much additional concurrency to gain here. The 
obvious single-threaded steps are hotspot linking and jlink. In such a 
build, hotspot is mostly linked in parallel with all the Java 
compilation, so it's not an issue. Jlinking the JDK image does stick out 
as something we can't do much in parallel with, unless we also build the 
test or docs image. For a hotspot-only build ("make hotspot"), the 
hotspot linking will stick out as a single-threaded step. Note that 
cmake/ninja will not help with any of this. The potential speedup from 
ninja is also very limited, as the rule processing of our make scripts 
does not amount to any significant part of a full build.
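To put rough numbers on why those serial steps matter, here is a back-of-the-envelope Amdahl's-law estimate. The 10% serial fraction is purely illustrative, not a measurement of our build:

```shell
# Illustrative Amdahl's-law calculation: if ~10% of a full "make images"
# build were serial (hotspot link + jlink), 16 threads could give at most
# ~6.4x over a single-threaded build, no matter which build tool drives it.
awk 'BEGIN {
  s = 0.10    # assumed serial fraction (illustrative, not measured)
  n = 16      # hardware threads
  printf "max speedup on %d threads: %.2fx (ideal: %dx)\n", n, 1/(s + (1 - s)/n), n
}'
```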

On Windows specifically, we do have an issue with fork being 
inefficient. We also have a less efficient file system, making file 
operations more expensive in general. I have two big (though old) 
identical workstations, one running Windows and one Linux. Very rough 
numbers are 5 minutes for "make images" on the Linux machine and 10 on 
the Windows machine, though these differences vary wildly across 
hardware. Using native Windows tools here would certainly help to some 
extent. OTOH, we have WSL, which is already considerably more performant 
than Cygwin (very rough numbers, maybe 8-9 minutes for the same build). 
The setup is a bit trickier than Cygwin, but once set up, it works 
really well in my experience.
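A crude way to see the fork cost for yourself (a hypothetical micro-benchmark, not part of our build) is to time a loop of trivial process spawns on each platform; on Cygwin this is typically far slower than on Linux or WSL:

```shell
# Spawn 200 short-lived processes; "env true" forces a real fork/exec
# rather than a shell builtin. This approximates the per-recipe process
# cost make pays. Compare wall-clock time across Linux, WSL and Cygwin.
n=200
start=$(date +%s)
i=0
while [ "$i" -lt "$n" ]; do
  env true
  i=$((i + 1))
done
end=$(date +%s)
echo "spawned $n processes in $((end - start))s"
```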

The area where ninja would provide the most benefit is incremental 
builds, especially when very little work is actually needed, as it 
processes rules much faster than make. We have worked hard at making 
incremental builds as efficient and fast as possible, but our build is 
also pretty big, so the time it takes is still noticeable, especially on 
Windows.
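The difference is easy to reproduce outside the JDK with a synthetic no-op build (a hypothetical micro-benchmark, not from the OpenJDK tree): make has to parse and evaluate every rule even when there is nothing to do, which is exactly the case ninja is optimized for:

```shell
# Generate a Makefile with 500 small rules, build once, then time the
# no-op incremental rebuild, which is pure rule-processing overhead.
dir=$(mktemp -d)
{
  printf 'all:'
  for i in $(seq 1 500); do printf ' out%s' "$i"; done
  printf '\n'
  for i in $(seq 1 500); do
    printf 'out%s: src%s\n\tcp src%s out%s\n' "$i" "$i" "$i" "$i"
  done
} > "$dir/Makefile"
for i in $(seq 1 500); do touch "$dir/src$i"; done
make -s -C "$dir"        # first build does real work
time make -s -C "$dir"   # no-op rebuild: nothing to run, all rules still parsed
```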

All this said, when picking a build system, compatibility issues are the 
number one concern. If the support matrix of CMake does not completely 
cover the support matrix of OpenJDK, it's a no-go for me. I would also 
be hesitant to be at the mercy of a third party's platform support when 
a new port of OpenJDK needs to be made.

Regarding IDE integration, our build system is able to produce 
compile_commands.json, which several IDEs know how to consume.
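For example (the target name here is taken from the OpenJDK build documentation, so please verify against your checkout; the paths in the sample entry are made up):

```shell
# From the root of an OpenJDK checkout (verify target name locally):
#   bash configure
#   make compile-commands
# This should produce build/<config>/compile_commands.json, a JSON
# compilation database whose entries look roughly like this:
entry='{"directory":"/jdk/build/linux-x86_64-server-release","command":"g++ -c os_linux.cpp","file":"src/hotspot/os/linux/os_linux.cpp"}'
printf '%s\n' "$entry"
```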

Another big objection I have is the amount of work required to rewrite 
the build system (again). I would expect a rewrite like this to take 
several man-months, just for OpenJDK itself (not counting forced 
downstream work for custom extensions to the build, as well as all the 
systems currently interacting with the build, which I'm sure exist in 
more places than just Oracle).

/Erik





More information about the build-dev mailing list