FW: Announcing Finalists for the OpenJDK Community Innovator's Challenge

Kelly O'Hair Kelly.Ohair at Sun.COM
Thu Mar 20 16:56:30 UTC 2008



Andrew Haley wrote:
> Kelly O'Hair wrote:
>> Excellent points Steve. And good summary.
>>
>> The hotspot nmake Makefiles are tied to the MS Visual Studio (VC)
>> compilers,
>> and I'm pretty sure nmake.exe is now only delivered with the VC product.
>> It's not clear how you can match this build performance.
>> However, it's also not clear how much of this benefit comes from Hotspot's
>> use of VC pre-compiled headers (PCH). Windows builds with nmake/VC/PCH
>> take a few minutes, versus 20-35 minute builds on equivalent Linux/Solaris
>> systems. So it's significant, and something (the performance) that we want
>> to keep. Whether a GNU Makefile using VC/PCH can match nmake is an open
>> question.
> 
> This is interesting.  I guess the Linux build isn't using PCH too?

The gcc compilers and Sun Studio compilers we have used in the past either
didn't have PCH capability or didn't have a stable enough implementation.
It's been my experience that each PCH implementation is unique in some way,
with varied implementation techniques and varied performance benefits.
The gcc we have used (version 3 based) did not have a good PCH solution
(gcc 4 supposedly has one now?),
and the Sun Studio compilers only recently got a stable PCH system.
The two are different and may need source changes to make them work well.
PCH certainly requires special Makefile support, and each would be a
completely separate effort; neither is a trivial task.
As a side note, in the process of making PCH work well, the end result is
often a set of sources that builds even slower without PCH. This is because
the optimum PCH situation is the 'single include file' case, where all
sources include everything so that they can all share the same precompiled
header.
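
For what it's worth, a rough sketch of that single-include-file setup with a
gcc 4 style PCH (the file names here are made up for illustration, not what
Hotspot actually uses):

   # Precompile one 'include everything' header, using the same flags
   # as the normal compiles:
   g++ -O2 -x c++-header -o precompiled.hpp.gch precompiled.hpp
   # Any .cpp whose first include is "precompiled.hpp" then picks up
   # precompiled.hpp.gch automatically, as long as gcc finds the .gch
   # before the plain header.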

I do quite a bit of build work, and Linux/Solaris builds just have a completely
different impact on a system than a Windows build.
Solaris seems to do the best with many processes, even with many fewer CPUs,
but older Solaris boxes have slower disk access and slower processors.
With Solaris you can just throw more CPUs at it.
The older Linux systems did ok, but you had to be careful about overloading
the system (newer Linux systems may be much better about this).
So with Solaris/Linux the approach to getting a faster build is to:
   * Use /tmp, or at least local disk, and avoid all NFS writes
   * Use the ALT_PARALLEL_COMPILE_JOBS=N and HOTSPOT_BUILD_JOBS=N options
     (where N is at least 2, or maybe twice the number of CPUs)
   * Use something like ccache for repeated builds, i.e. cache and reuse the .o files
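
As a rough example, on a 2-CPU Linux box that might look something like this
(the job count is just illustrative, and where you put the workspace depends
on your local setup):

   # run from a workspace on local disk (or /tmp), not NFS
   export ALT_PARALLEL_COMPILE_JOBS=4   # at least 2, or ~2x the CPU count
   export HOTSPOT_BUILD_JOBS=4
   make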

On Windows, the above options are either not available or show little
benefit. So far PCH has been the best answer, and the Hotspot team has done
that work, but the rest of the jdk's native sources aren't quite as
normalized as the Hotspot sources.
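
For reference, the VC side of the same idea looks roughly like this (again
with made-up file names, not the actual Hotspot nmake setup):

   rem create the .pch once from a stub source that includes the big header
   cl /c /Yc"precompiled.hpp" /Fp"precompiled.pch" precompiled.cpp
   rem the remaining sources reuse it
   cl /c /Yu"precompiled.hpp" /Fp"precompiled.pch" foo.cpp bar.cpp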

Maybe the Windows issue is the higher cost of process startup/warmup?
Perhaps fewer processes, each with more work to do, is a better situation
on Windows? I'm guessing, of course...

> 
>> I have tried using parallel GNU make and batch compiles in the jdk
>> builds, and seen benefits on Linux and Solaris, but not much with
>> Windows.
> 
> Ah, that is important: IME builds scale almost linearly with the number
> of processors.

What is IME?

-kto

> 
> Andrew.


