RFR(XL): 8181917: Refactor UL LogStreams to avoid using resource area

Erik Helin erik.helin at oracle.com
Fri Jul 21 08:24:34 UTC 2017


On 07/20/2017 09:11 AM, Thomas Stüfe wrote:
> Hi all,
> 
> after the last irc discussion with Erik here the new webrev with the 
> changes he requested:
> 
> Full: 
> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.05/webrev/
> Delta: 
> http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-04-to-05/webrev/

This looks good to me, Reviewed. Thank you Thomas for "hanging in there" 
with this big patch, really solid work!

> I built the change on Windows (32-bit too :P), Linux and AIX. I am 
> currently re-running jtreg tests on Linux; no issues so far.

I took the patch for a spin in our build system and it worked fine. I 
will sponsor this patch.

Thanks,
Erik

> The changes:
> 
> - I removed all stray "Unconditional write?" comments I had used before 
> to mark sites which did logging unconditionally. Marcus had opened a 
> follow-up bug report for that issue 
> (https://bugs.openjdk.java.net/browse/JDK-8182466), and I added the 
> affected sites I found to the comment section of that bug.
> 
> - In g1GCPhaseTimes.cpp, I changed the logging code in 
> G1GCPhaseTimes::debug_phase() to use LogTarget instead of Log, to be 
> consistent with G1GCPhaseTimes::trace_phase() (see the sketch below the list).
> 
> - In the new LogStream implementation (logStream.cpp), I added temporary 
> variables in LogStream::LineBuffer::append() and 
> LogStream::LineBuffer::try_ensure_cap().
> 
> - I also enlarge the line buffer a bit more aggressively: once the 
> initial 64-byte buffer is exhausted, we now enlarge in 256-byte steps.
> 
> - Finally, I renamed LineBuffer::ptr() to LineBuffer::buffer().
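
About the LogTarget vs. Log distinction in the debug_phase() item above: for
readers not familiar with the UL API, the two styles look roughly as follows.
This is an illustrative sketch only; the tags (gc, phases), the message and
the wrapper function are made up here, not the actual debug_phase() code.

    #include "logging/log.hpp"
    #include "logging/logStream.hpp"

    // Hypothetical call site, just to show the two styles side by side.
    static void log_example(uint value) {
      // Log-based: convenient for simple one-line messages.
      Log(gc, phases) log;
      log.debug("example value: %u", value);

      // LogTarget-based: check is_enabled() first, then print via a stream,
      // so any expensive formatting only happens when the level is enabled.
      LogTarget(Debug, gc, phases) lt;
      if (lt.is_enabled()) {
        LogStream ls(lt);
        ls.print_cr("example value: %u", value);
      }
    }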
> A note about memory consumption of the LogStream line buffer: I do not 
> think we need to worry overly about memory consumption, at least not if 
> we think the old solution was fine. The old solution used stringStream, 
> which uses NEW_RESOURCE_ARRAY to enlarge the line buffer when it is 
> exhausted. This is resource area, so it *always* leaks the previous 
> allocation, which is not deallocated until the resource area stack is 
> reset. Even worse, for larger line lengths the growth strategy in 
> stringStream seems to be to enlarge by just the requested amount, which 
> means that every single write would cause a reallocation and hence a new 
> allocation of the underlying buffer (see stringStream::write()). I am 
> not 100% sure; this is just from dry-reading the code.
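
To make that contrast concrete, here is a deliberately simplified sketch of
the two growth strategies. It is not the actual stringStream or LineBuffer
code; the function names are made up, error handling is omitted, and the
usual HotSpot-internal headers are assumed.

    // Resource-area style growth (roughly the behavior described above):
    // the old buffer cannot be freed individually, so every enlargement
    // leaves the previous copy behind until the ResourceMark is popped.
    static void grow_resource_area_style(char*& buf, size_t& cap, size_t needed) {
      if (cap < needed) {
        char* newbuf = NEW_RESOURCE_ARRAY(char, needed); // old buf stays allocated
        memcpy(newbuf, buf, cap);
        buf = newbuf;
        cap = needed;   // grows only by exactly what was asked for
      }
    }

    // C-heap style growth with a step (roughly what the new LineBuffer does):
    // the old buffer is freed, and rounding up in 256-byte steps avoids a
    // reallocation on every small append.
    static void grow_c_heap_style(char*& buf, size_t& cap, size_t needed) {
      if (cap < needed) {
        size_t newcap = align_size_up(needed, 256);
        char* newbuf = (char*) os::malloc(newcap, mtLogging);
        memcpy(newbuf, buf, cap);
        os::free(buf);
        buf = newbuf;
        cap = newcap;
      }
    }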
> 
> Kind Regards, Thomas
> 
> 
> 
> On Mon, Jul 17, 2017 at 3:44 PM, Thomas Stüfe <thomas.stuefe at gmail.com> wrote:
> 
>     Hi Erik,
> 
>     thank you for your review!
> 
>     new Webrev:
>     http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.04/webrev/
>     delta to last:
>     http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/delta-03-to-04/webrev/
> 
>     Changes:
> 
>     - I changed the allocation function to handle the case where we
>     already reached the reasonable_max cap. You are right, in that case
>     reallocating is unnecessary.
>     - Also, added log_info() for that case as you suggested.
>     - Finally, added a fix to the LogStream gtest which somehow slipped
>     through the last webrev.
> 
>     Changes are now rebased to the current tip.
> 
>     Thanks!
> 
>     ..Thomas
> 
>     On Fri, Jul 14, 2017 at 12:36 PM, Erik Helin <erik.helin at oracle.com> wrote:
> 
>         On 07/13/2017 08:46 AM, Thomas Stüfe wrote:
> 
>             Hi Erik!
> 
> 
>             On Mon, Jul 10, 2017 at 11:40 AM, Erik Helin <erik.helin at oracle.com> wrote:
> 
>                  Hi Thomas,
> 
>                  On 06/30/2017 08:32 PM, Thomas Stüfe wrote:
> 
>                      Hi Erik,
> 
>                      thank you for the review!
> 
>                      New Version:
>             http://cr.openjdk.java.net/~stuefe/webrevs/8181917-refactor-ul-logstream/all.webrev.03/webrev/
> 
> 
>                  I think you still have a problem with "runaway" loggers in this version:
> 
>                  +// try_ensure_cap tries to enlarge the capacity of the internal buffer
>                  +// to the given atleast value. May fail if either OOM happens or atleast
>                  +// is larger than a reasonable max of 1 M. Caller must not assume
>                  +// capacity without checking.
>                  +void LogStream::LineBuffer::try_ensure_cap(size_t atleast) {
>                  +  assert(_cap >= sizeof(_smallbuf), "sanity");
>                  +  if (_cap < atleast) {
>                  +    const size_t reasonable_max = 1 * M;
>                  +    size_t newcap = align_size_up(atleast + 64, 64);
>                  +    assert(_cap <= reasonable_max, "sanity");
>                  +    // Cap out at a reasonable max to prevent runaway leaks.
>                  +    if (newcap > reasonable_max) {
>                  +      newcap = reasonable_max;
>                  +    }
>                  +
>                  +    char* const newbuf = (char*) os::malloc(newcap, mtLogging);
>                  +    if (newbuf == NULL) { // OOM. Leave object unchanged.
>                  +      return;
>                  +    }
>                  +    if (_pos > 0) { // preserve old content
>                  +      memcpy(newbuf, _buf, _pos + 1); // ..including trailing zero
>                  +    }
>                  +    if (_buf != _smallbuf) {
>                  +      os::free(_buf);
>                  +    }
>                  +    _buf = newbuf;
>                  +    _cap = newcap;
>                  +  }
>                  +  assert(_cap >= atleast, "sanity");
>                  +}
> 
>                  with the above code, even when _cap has already reached
>                  reasonable_max, you will always allocate newbuf with os::malloc.
>                  For debug builds, we are fine, since the JVM will assert on
>                  _cap <= reasonable_max, but for product builds we are not.
>                  Would it be better to first check if _cap == reasonable_max?
>                  As in the following snippet:
> 
>                  void LogStream::LineBuffer::try_ensure_cap(size_t atleast) {
>                    if (_cap < atleast) {
>                      const size_t reasonable_max = 1 * M;
>                      if (_cap == reasonable_max) {
>                        // max memory used, "die" in debug builds, log in product builds
>                        log_warning(logging)("Max memory used for message: %s\n", _buf);
>                        ShouldNotReachHere();
>                        return;
>                      }
>
>                      size_t newcap = align_size_up(atleast + 64, 64);
>                      if (newcap > reasonable_max) {
>                        newcap = reasonable_max;
>                      }
>                      char* const newbuf = (char*) os::malloc(newcap, mtLogging);
>                      // rest of logic
>                    }
>                  }
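
Either way, since try_ensure_cap() is allowed to fail (OOM, or hitting the
1 M cap), whatever calls it still has to re-check the capacity before
writing. An illustrative caller, not the actual LineBuffer::append() from
the patch and with an assumed signature, might look like this:

    void LogStream::LineBuffer::append(const char* s, size_t len) {
      const size_t needed = _pos + len + 1;   // +1 for the trailing zero
      try_ensure_cap(needed);                 // may fail; capacity not guaranteed
      if (_cap < needed) {
        len = _cap - _pos - 1;                // truncate to what still fits
      }
      memcpy(_buf + _pos, s, len);
      _pos += len;
      _buf[_pos] = '\0';
    }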
> 
>                  Thanks,
>                  Erik
> 
> 
>             I'm not sure I understand. Are you concerned that _cap would
>             grow beyond reasonable_max? Because I do not see a way this
>             could happen. Or do you want to also treat the *attempt* to
>             grow _cap beyond reasonable_max as a loggable offence?
> 
> 
>         What I am trying to say is that with your current patch, even if
>         _cap == reasonable_max, you will still allocate a new 1M buffer,
>         copy all the chars over, and then free the old one. My proposal
>         simply skips the allocation, copying and freeing if _cap ==
>         reasonable_max. The result will be the same, just less stress on
>         malloc.
> 
>             If the latter, I am not sure about logging. If this is an
>             error which may happen at a customer site without popping up
>             first in in-house tests (which it conceivably is, because
>             logging is very content-dependent), I would rather hide the
>             error in production code instead of flooding stderr with
>             tons of log warnings or crashing with a guarantee, no?
> 
> 
>         I see what you mean, but perhaps log_info(logging) at least? I
>         want to be able to discover if this happens in production, for
>         example if a customer complains about truncated logs. A
>         log_info(logging) will only be printed if -Xlog:logging=info is
>         specified on the command-line.
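
For reference, getting that diagnostic output is just a matter of enabling
the logging tag at info level on the command line, for example (here simply
running -version):

    java -Xlog:logging=info -version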
> 
>         Thanks,
>         Erik
> 
>             ..Thomas
> 

