Review Request: 8004352: build-infra: Limit JOBS on large machines
Erik Joelsson
erik.joelsson at oracle.com
Mon Feb 18 11:58:51 UTC 2013
New webrev:
http://cr.openjdk.java.net/~erikj/8004352/webrev.root.02/
Moved the cap last. Changed the 1000 to 1100. Also, scaling down to 90%
is now only done when the number of cores is the limiting factor.
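For reference, the revised calculation could be sketched roughly like this in shell. This is only an illustration under my reading of the mail, not the actual configure code from the webrev; the variable names and example values are made up, and num_cores/memory_size would really be probed by configure.

```shell
#!/bin/sh
# Sketch of the revised default JOBS calculation: memory divisor 1100,
# 90% scaling only when cores are the limiting factor, hard cap last.
# Example values; configure would probe these on the build machine.
num_cores=32
memory_size=24000   # in MB

mem_jobs=$((memory_size / 1100))
if [ "$num_cores" -le "$mem_jobs" ]; then
    # Cores are the limiting factor: leave some headroom above 4 cores.
    jobs=$num_cores
    if [ "$jobs" -gt 4 ]; then
        jobs=$((jobs * 9 / 10))     # scale to 90%, rounded down
    fi
else
    jobs=$mem_jobs
fi

# The hard cap is applied last.
if [ "$jobs" -gt 16 ]; then
    jobs=16
fi
echo "JOBS=$jobs"
```

With the example values above, memory limits the count to 21 jobs, so no 90% scaling is applied, and the cap then brings it down to 16.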
I read your bug and now I understand what you mean about the constants.
I agree that would be a good idea.
/Erik
On 2013-02-18 12:19, Erik Joelsson wrote:
>
>
> On 2013-02-15 18:12, Mike Duigou wrote:
>> A couple of comments;
>>
>> - Can you reverse steps #2 and #3? Having the hard cap last makes
>> more sense.
> I agree and will do.
>> - In JDK-8007327 I requested some memory size definitions such as
>> MEMORY_SIZE_NORMAL_HEAP for sizing of JVM heaps. The explicit "1000"
>> should probably be replaced with the MEMORY_SIZE_NORMAL_HEAP
>> constant. Elsewhere the value is "1100".
>>
> I can't find any other reference to this constant, but I can certainly
> change it to 1100. I just wanted something lower than 1024, since the
> amount of memory a machine reports is usually somewhat below the exact
> power of two, and the division is rounded down.
>> - The CONCURRENT_BUILD_JOBS is no longer JOBS * 2. Do we think that's
>> right?
> I think it's correct that you can squeeze out a bit more performance
> from the hotspot (or jdk native) build by using cores * 2 as the
> number of jobs, but any value higher than cores will affect the third
> bullet point below (don't interfere with my browser experience while
> I'm building). I also found it weird to have a different setting for
> hotspot and the jdk native build and thought this a good opportunity
> to unify the settings.
>
> The problem is that if someone increases JOBS to cores * 2 for maximum
> speed, there will most likely be problems in the java source
> generation in the jdk, which is more memory intensive. My assumption
> is that the speed loss from cores * 2 to cores * 1 is small enough to
> not matter much.
>
> We could fix this, but I'm not sure it's worth the effort. The user
> just wants to say something like "go full speed while I make coffee",
> "leave some room so my terminal doesn't lock up" or "I'm running 4
> builds in parallel, tune it down please". This could be expressed as a
> percentage, with a suitable default. For each part of the build where
> we make a submake call, we could add a call that calculates a suitable
> value for the -j flag. Inputs would be a factor indicating memory to
> cpu requirements, the factor from the user (set at configure or at
> make invocation), the number of cpus (from configure) and the amount
> of memory (from configure).
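That proposal could be sketched as a small shell helper. The function name, argument order and exact formula here are my assumptions for illustration only; nothing like this exists in the patch.

```shell
#!/bin/sh
# Hypothetical helper computing a -j value for one submake call.
# Arguments:
#   $1 mem_per_job - MB of memory one job of this build part needs
#                    (the memory-to-cpu factor)
#   $2 load_factor - user preference in percent, 100 = full speed
#   $3 num_cores   - probed at configure time
#   $4 memory_size - probed at configure time, in MB
calc_jobs() {
    mem_jobs=$(($4 / $1))
    jobs=$3
    if [ "$mem_jobs" -lt "$jobs" ]; then
        jobs=$mem_jobs              # memory is the limiting factor
    fi
    jobs=$((jobs * $2 / 100))       # apply the user's load factor
    if [ "$jobs" -lt 1 ]; then
        jobs=1                      # always run at least one job
    fi
    echo "$jobs"
}

# e.g. a memory-hungry part on an 8-core, 16000 MB machine at 90% load
calc_jobs 1100 90 8 16000
```

Each part of the build would pass its own memory factor, while the load factor comes once from the user, so "tune it down please" is a single knob.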
>
> /Erik
>> Mike
>>
>> On Feb 15 2013, at 04:06 , Erik Joelsson wrote:
>>
>>> The current default for number of parallel build jobs is just equal
>>> to the number of cores in the system. While this works well on many
>>> machines, there have been several reports of this not working. I
>>> have been trying to come up with a better scheme for the default and
>>> the following is something that I think will do. It has been active
>>> in the build-infra forest for a while now.
>>>
>>> The complaints that were raised were usually concerning one of the
>>> following:
>>>
>>> * A big machine with very many cpus didn't scale well when using all
>>> of them, so there is no point in trying to; it just made the build
>>> less stable. Suggestion: introduce a hard cap.
>>> * A machine with lots of cpus but not as much memory would run out
>>> of memory. Suggestion: limit the number of jobs based on memory as
>>> well as cpus.
>>> * The build eats up all my resources, leaving my browser unusable.
>>> Suggestion: don't use everything unless asked to.
>>>
>>> So this is what I ended up with:
>>> 1. Take the min of num_cores and memory_size/1000
>>> 2. Cap at 16
>>> 3. If more than 4 cores, shave it to 90% rounded down to leave some
>>> room.
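The three steps above can be sketched like this (a minimal illustration with the constants from the mail and an example 8-core, 16000 MB machine; not the actual configure code):

```shell
#!/bin/sh
# Sketch of the original three-step default, in the order listed.
num_cores=8
memory_size=16000   # in MB

# 1. Take the min of num_cores and memory_size / 1000.
jobs=$((memory_size / 1000))
if [ "$num_cores" -lt "$jobs" ]; then
    jobs=$num_cores
fi

# 2. Cap at 16.
if [ "$jobs" -gt 16 ]; then
    jobs=16
fi

# 3. If more than 4 cores, shave to 90%, rounded down.
if [ "$num_cores" -gt 4 ]; then
    jobs=$((jobs * 9 / 10))
fi
echo "JOBS=$jobs"
```

On the example machine this gives min(8, 16) = 8 jobs, no capping, then 90% of 8 rounded down, i.e. 7.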
>>>
>>> The user can still override this in two ways. Either by using any of
>>> the configure arguments:
>>>
>>> --with-num-cores      number of cores in the build system,
>>>                       e.g. --with-num-cores=8 [probed]
>>> --with-memory-size    memory (in MB) available in the build system,
>>>                       e.g. --with-memory-size=1024 [probed]
>>> --with-jobs           number of parallel jobs to let make run
>>>                       [calculated based on cores and memory]
>>>
>>> Or by setting JOBS=<number> on the make command line.
>>>
>>> Also note that this change sets CONCURRENT_BUILD_JOBS for hotspot to
>>> the value of JOBS, so that it too will be overridden by the above.
>>>
>>> http://cr.openjdk.java.net/~erikj/8004352/webrev.root.01/
>>>
>>> /Erik