Request for Review: Backport of JDK-8206183: Possible construct EMPTY_STACK and allocation stack, etc. on first use.
Zhengyu Gu
zgu at redhat.com
Mon Sep 17 14:10:02 UTC 2018
Hi Leslie,
Thank you for doing this.
Could you send me the formal webrev? I can host it for you and take it
through the formal review process.
Are you a committer? If not, I can sponsor it once it is approved.
-Zhengyu
On 09/17/2018 10:01 AM, Leslie Zhai wrote:
> Hi Zhengyu,
>
> I would like to backport the fix for:
>
> https://bugs.openjdk.java.net/browse/JDK-8206183
>
> To OpenJDK 8 updates dev:
>
> http://hg.openjdk.java.net/jdk8u/jdk8u-dev
>
> The fix is mostly the same as the version that was committed in JDK 12.
>
> Please review the patch, thanks a lot!
>
> diff -r 16b9bbfaa450 src/share/vm/services/mallocSiteTable.cpp
> --- a/src/share/vm/services/mallocSiteTable.cpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/services/mallocSiteTable.cpp Mon Sep 17 21:44:58 2018 +0800
> @@ -28,34 +28,10 @@
> #include "runtime/atomic.hpp"
> #include "services/mallocSiteTable.hpp"
>
> -/*
> - * Early os::malloc() calls come from initializations of static variables, long before entering any
> - * VM code. Upon the arrival of the first os::malloc() call, malloc site hashtable has to be
> - * initialized, along with the allocation site for the hashtable entries.
> - * To ensure that malloc site hashtable can be initialized without triggering any additional os::malloc()
> - * call, the hashtable bucket array and hashtable entry allocation site have to be static.
> - * It is not a problem for hashtable bucket, since it is an array of pointer type, C runtime just
> - * allocates a block memory and zero the memory for it.
> - * But for hashtable entry allocation site object, things get tricky. C runtime not only allocates
> - * memory for it, but also calls its constructor at some later time. If we initialize the allocation site
> - * at the first os::malloc() call, the object will be reinitialized when its constructor is called
> - * by C runtime.
> - * To workaround above issue, we declare a static size_t array with the size of the CallsiteHashtableEntry,
> - * the memory is used to instantiate CallsiteHashtableEntry for the hashtable entry allocation site.
> - * Given it is a primitive type array, C runtime will do nothing other than assign the memory block for the variable,
> - * which is exactly what we want.
> - * The same trick is also applied to create NativeCallStack object for CallsiteHashtableEntry memory allocation.
> - *
> - * Note: C++ object usually aligns to particular alignment, depends on compiler implementation, we declare
> - * the memory as size_t arrays, to ensure the memory is aligned to native machine word alignment.
> - */
> -
> -// Reserve enough memory for NativeCallStack and MallocSiteHashtableEntry objects
> -size_t MallocSiteTable::_hash_entry_allocation_stack[CALC_OBJ_SIZE_IN_TYPE(NativeCallStack, size_t)];
> -size_t MallocSiteTable::_hash_entry_allocation_site[CALC_OBJ_SIZE_IN_TYPE(MallocSiteHashtableEntry, size_t)];
> -
> // Malloc site hashtable buckets
> MallocSiteHashtableEntry* MallocSiteTable::_table[MallocSiteTable::table_size];
> +const NativeCallStack* MallocSiteTable::_hash_entry_allocation_stack = NULL;
> +const MallocSiteHashtableEntry* MallocSiteTable::_hash_entry_allocation_site = NULL;
>
> // concurrent access counter
> volatile int MallocSiteTable::_access_count = 0;
> @@ -73,9 +49,6 @@
> * time, it is in single-threaded mode from JVM perspective.
> */
> bool MallocSiteTable::initialize() {
> - assert(sizeof(_hash_entry_allocation_stack) >= sizeof(NativeCallStack), "Sanity Check");
> - assert(sizeof(_hash_entry_allocation_site) >= sizeof(MallocSiteHashtableEntry),
> - "Sanity Check");
> assert((size_t)table_size <= MAX_MALLOCSITE_TABLE_SIZE, "Hashtable overflow");
>
> // Fake the call stack for hashtable entry allocation
> @@ -91,17 +64,19 @@
> }
> pc[0] = (address)MallocSiteTable::new_entry;
>
> - // Instantiate NativeCallStack object, have to use placement new operator. (see comments above)
> - NativeCallStack* stack = ::new ((void*)_hash_entry_allocation_stack)
> - NativeCallStack(pc, MIN2(((int)(sizeof(pc) / sizeof(address))), ((int)NMT_TrackingStackDepth)));
> + static const NativeCallStack stack(pc, MIN2(((int)(sizeof(pc) / sizeof(address))), ((int)NMT_TrackingStackDepth)));
> + static const MallocSiteHashtableEntry entry(stack, mtNMT);
>
> - // Instantiate hash entry for hashtable entry allocation callsite
> - MallocSiteHashtableEntry* entry = ::new ((void*)_hash_entry_allocation_site)
> - MallocSiteHashtableEntry(*stack, mtNMT);
> + assert(_hash_entry_allocation_stack == NULL &&
> + _hash_entry_allocation_site == NULL,
> + "Already initailized");
> +
> + _hash_entry_allocation_stack = &stack;
> + _hash_entry_allocation_site = &entry;
>
> // Add the allocation site to hashtable.
> - int index = hash_to_index(stack->hash());
> - _table[index] = entry;
> + int index = hash_to_index(stack.hash());
> + _table[index] = const_cast<MallocSiteHashtableEntry*>(&entry);
>
> return true;
> }
> @@ -204,6 +179,9 @@
> _table[index] = NULL;
> delete_linked_list(head);
> }
> +
> + _hash_entry_allocation_stack = NULL;
> + _hash_entry_allocation_site = NULL;
> }
>
> void MallocSiteTable::delete_linked_list(MallocSiteHashtableEntry* head) {
> @@ -211,7 +189,7 @@
> while (head != NULL) {
> p = head;
> head = (MallocSiteHashtableEntry*)head->next();
> - if (p != (MallocSiteHashtableEntry*)_hash_entry_allocation_site) {
> + if (p != hash_entry_allocation_site()) {
> delete p;
> }
> }
> diff -r 16b9bbfaa450 src/share/vm/services/mallocSiteTable.hpp
> --- a/src/share/vm/services/mallocSiteTable.hpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/services/mallocSiteTable.hpp Mon Sep 17 21:44:58 2018 +0800
> @@ -42,7 +42,7 @@
>
> public:
> MallocSite() :
> - AllocationSite<MemoryCounter>(NativeCallStack::EMPTY_STACK), _flags(mtNone) {}
> + AllocationSite<MemoryCounter>(NativeCallStack::empty_stack()), _flags(mtNone) {}
>
> MallocSite(const NativeCallStack& stack, MEMFLAGS flags) :
> AllocationSite<MemoryCounter>(stack), _flags(flags) {}
> @@ -250,7 +250,13 @@
> }
>
> static inline const NativeCallStack* hash_entry_allocation_stack() {
> - return (NativeCallStack*)_hash_entry_allocation_stack;
> + assert(_hash_entry_allocation_stack != NULL, "Must be set");
> + return _hash_entry_allocation_stack;
> + }
> +
> + static inline const MallocSiteHashtableEntry* hash_entry_allocation_site() {
> + assert(_hash_entry_allocation_site != NULL, "Must be set");
> + return _hash_entry_allocation_site;
> }
>
> private:
> @@ -259,15 +265,11 @@
>
> // The callsite hashtable. It has to be a static table,
> // since malloc call can come from C runtime linker.
> - static MallocSiteHashtableEntry* _table[table_size];
> + static MallocSiteHashtableEntry* _table[table_size];
> + static const NativeCallStack* _hash_entry_allocation_stack;
> + static const MallocSiteHashtableEntry* _hash_entry_allocation_site;
>
>
> - // Reserve enough memory for placing the objects
> -
> - // The memory for hashtable entry allocation stack object
> - static size_t _hash_entry_allocation_stack[CALC_OBJ_SIZE_IN_TYPE(NativeCallStack, size_t)];
> - // The memory for hashtable entry allocation callsite object
> - static size_t _hash_entry_allocation_site[CALC_OBJ_SIZE_IN_TYPE(MallocSiteHashtableEntry, size_t)];
> NOT_PRODUCT(static int _peak_count;)
> };
>
> diff -r 16b9bbfaa450 src/share/vm/services/memTracker.cpp
> --- a/src/share/vm/services/memTracker.cpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/services/memTracker.cpp Mon Sep 17 21:44:58 2018 +0800
> @@ -67,10 +67,6 @@
> os::unsetenv(buf);
> }
>
> - // Construct NativeCallStack::EMPTY_STACK. It may get constructed twice,
> - // but it is benign, the results are the same.
> - ::new ((void*)&NativeCallStack::EMPTY_STACK) NativeCallStack(0, false);
> -
> if (!MallocTracker::initialize(level) ||
> !VirtualMemoryTracker::initialize(level)) {
> level = NMT_off;
> diff -r 16b9bbfaa450 src/share/vm/services/memTracker.hpp
> --- a/src/share/vm/services/memTracker.hpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/services/memTracker.hpp Mon Sep 17 21:44:58 2018 +0800
> @@ -31,8 +31,8 @@
>
> #if !INCLUDE_NMT
>
> -#define CURRENT_PC NativeCallStack::EMPTY_STACK
> -#define CALLER_PC NativeCallStack::EMPTY_STACK
> +#define CURRENT_PC NativeCallStack::empty_stack()
> +#define CALLER_PC NativeCallStack::empty_stack()
>
> class Tracker : public StackObj {
> public:
> @@ -83,9 +83,9 @@
> extern volatile bool NMT_stack_walkable;
>
> #define CURRENT_PC ((MemTracker::tracking_level() == NMT_detail && NMT_stack_walkable) ? \
> - NativeCallStack(0, true) : NativeCallStack::EMPTY_STACK)
> + NativeCallStack(0, true) : NativeCallStack::empty_stack())
> #define CALLER_PC ((MemTracker::tracking_level() == NMT_detail && NMT_stack_walkable) ? \
> - NativeCallStack(1, true) : NativeCallStack::EMPTY_STACK)
> + NativeCallStack(1, true) : NativeCallStack::empty_stack())
>
> class MemBaseline;
> class Mutex;
> diff -r 16b9bbfaa450 src/share/vm/services/virtualMemoryTracker.cpp
> --- a/src/share/vm/services/virtualMemoryTracker.cpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/services/virtualMemoryTracker.cpp Mon Sep 17 21:44:58 2018 +0800
> @@ -167,7 +167,7 @@
> // higher part
> address high_base = addr + sz;
> size_t high_size = top - high_base;
> - CommittedMemoryRegion high_rgn(high_base, high_size, NativeCallStack::EMPTY_STACK);
> + CommittedMemoryRegion high_rgn(high_base, high_size, NativeCallStack::empty_stack());
> return add_committed_region(high_rgn);
> } else {
> return false;
> diff -r 16b9bbfaa450 src/share/vm/services/virtualMemoryTracker.hpp
> --- a/src/share/vm/services/virtualMemoryTracker.hpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/services/virtualMemoryTracker.hpp Mon Sep 17 21:44:58 2018 +0800
> @@ -305,7 +305,7 @@
>
>
> ReservedMemoryRegion(address base, size_t size) :
> - VirtualMemoryRegion(base, size), _stack(NativeCallStack::EMPTY_STACK), _flag(mtNone),
> + VirtualMemoryRegion(base, size), _stack(NativeCallStack::empty_stack()), _flag(mtNone),
> _all_committed(false) { }
>
> // Copy constructor
> diff -r 16b9bbfaa450 src/share/vm/utilities/nativeCallStack.cpp
> --- a/src/share/vm/utilities/nativeCallStack.cpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/utilities/nativeCallStack.cpp Mon Sep 17 21:44:58 2018 +0800
> @@ -27,8 +27,6 @@
> #include "utilities/globalDefinitions.hpp"
> #include "utilities/nativeCallStack.hpp"
>
> -const NativeCallStack NativeCallStack::EMPTY_STACK(0, false);
> -
> NativeCallStack::NativeCallStack(int toSkip, bool fillStack) :
> _hash_value(0) {
>
> diff -r 16b9bbfaa450 src/share/vm/utilities/nativeCallStack.hpp
> --- a/src/share/vm/utilities/nativeCallStack.hpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/share/vm/utilities/nativeCallStack.hpp Mon Sep 17 21:44:58 2018 +0800
> @@ -52,9 +52,6 @@
> * from it.
> */
> class NativeCallStack : public StackObj {
> - public:
> - static const NativeCallStack EMPTY_STACK;
> -
> private:
> address _stack[NMT_TrackingStackDepth];
> unsigned int _hash_value;
> @@ -63,6 +60,10 @@
> NativeCallStack(int toSkip = 0, bool fillStack = false);
> NativeCallStack(address* pc, int frameCount);
>
> + static inline const NativeCallStack& empty_stack() {
> + static const NativeCallStack EMPTY_STACK(0, false);
> + return EMPTY_STACK;
> + }
>
> // if it is an empty stack
> inline bool is_empty() const {
>
> ----- 8< -------- 8< -------- 8< -------- 8< -------- 8< -------- 8< ---
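>
> For reference, the heart of the change is replacing eagerly constructed
> statics (placement new into reserved size_t buffers, plus the global
> NativeCallStack::EMPTY_STACK) with C++ construct-on-first-use
> function-local statics; that is also why the explicit placement-new of
> EMPTY_STACK in memTracker.cpp can be dropped. A minimal sketch of the
> idiom, using a simplified stand-in class rather than the actual JDK
> types:
>
> #include <cstdio>
>
> class CallStack {
>  public:
>   CallStack() { printf("constructed on first use\n"); }
>
>   // Construct-on-first-use: the function-local static is initialized
>   // the first time empty_stack() is called, so callers never observe
>   // an object that a later static initializer would re-construct.
>   static const CallStack& empty_stack() {
>     static const CallStack EMPTY_STACK;
>     return EMPTY_STACK;
>   }
> };
>
> int main() {
>   const CallStack& s1 = CallStack::empty_stack();
>   const CallStack& s2 = CallStack::empty_stack(); // same object, constructed once
>   return (&s1 == &s2) ? 0 : 1;                    // exits 0: both refer to one instance
> }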
>
> Applied the patch, then built with clang-8:
>
> $ strings build/linux-x86_64-normal-server-fastdebug/images/j2sdk-image/bin/java | grep clang
> LLVM China clang version 8.0.0 (git at github.com:llvm-mirror/clang.git 7f223b8fbf26fa0e4d8f98847a53c4ba457720f0) (git at github.com:llvm-mirror/llvm.git 841e300fb15be4f9931d18d2f24f48cb59ef24a8) (based on LLVM 8.0.0svn)
>
> $ ./build/linux-x86_64-normal-server-fastdebug/images/j2sdk-image/bin/java -version
>
> # To suppress the following error report, specify this argument
> # after -XX: or in .hotspotrc: SuppressErrorAt=/os_linux_x86.cpp:882
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # Internal Error (/home/xiangzhai/project/jdk8u-llvm-dev/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:882), pid=16503, tid=0x00007f532ceaf700
> # assert(((intptr_t)os::current_stack_pointer() & (StackAlignmentInBytes-1)) == 0) failed: incorrect stack alignment
>
> Just a workaround for JDK-8186780:
>
> diff -r 16b9bbfaa450 src/os_cpu/linux_x86/vm/os_linux_x86.cpp
> --- a/src/os_cpu/linux_x86/vm/os_linux_x86.cpp Fri Sep 14 07:58:22 2018 -0700
> +++ b/src/os_cpu/linux_x86/vm/os_linux_x86.cpp Mon Sep 17 21:54:24 2018 +0800
> @@ -878,7 +878,7 @@
>
> #ifndef PRODUCT
> void os::verify_stack_alignment() {
> -#ifdef AMD64
> +#if defined(AMD64) && !defined(__clang__)
> assert(((intptr_t)os::current_stack_pointer() & (StackAlignmentInBytes-1)) == 0, "incorrect stack alignment");
> #endif
> }
>
> ----- 8< -------- 8< -------- 8< -------- 8< -------- 8< -------- 8< ---
>
> $ ./build/linux-x86_64-normal-server-fastdebug/images/j2sdk-image/bin/java -version
>
> openjdk version "1.8.0-internal-fastdebug"
> OpenJDK Runtime Environment (build 1.8.0-internal-fastdebug-xiangzhai_2018_09_15_19_30-b00)
> OpenJDK 64-Bit Server VM (build 25.71-b00-fastdebug, mixed mode)
>