RFR (M): 8184334: Generalizing Atomic with templates
Kim Barrett
kim.barrett at oracle.com
Sat Jul 22 02:43:54 UTC 2017
> On Jul 21, 2017, at 9:43 AM, Andrew Haley <aph at redhat.com> wrote:
> However, if we are using -fno-strict-aliasing,
> then I do not believe that there is any need for the complexity of the
> casting that your patch does. We should be able to do the operations
> that we need with no type casting at all, with suitable use of
> template functions.
Some of the casting exists in order to target the (new) generic API at
the existing specialized_xxx operations. Erik and I discussed making
changes to the specialized layer, but decided not to at this time.
That's the layer where all the platform-specific assembly code is
located, and having to make significant changes across all platforms
in that layer seemed too risky for the immediate goal of a cleaner
client-level API. As it is, it still got touched quite a bit, but it
could have been a lot worse.
I think that with a different API for the specialized layer it would
be pretty easy to eliminate the pointer casts for some gcc-based
targets. For example, I think something like the following
(completely untested code) could work for linux_x64, and doesn't
involve any casts between pointer types. Instead, it relies on
implicit type erasure at the interface between C++ and inline
assembly.
class Atomic ... {
  ...
  template<size_t byte_size> struct SpecializedAdd;
  ...
};
// *d += i, for integral *d
template<typename I, typename D>
inline D Atomic::add(I i, D volatile* d) {
  STATIC_ASSERT(IsInteger<I>::value);
  STATIC_ASSERT(IsInteger<D>::value);
  D v = i;              // Let -Wconversion or similar complain if unsafe
  return SpecializedAdd<sizeof(D)>()(v, d);
}
// *d += i, for pointer *d
template<typename I, typename D>
inline D* Atomic::add(I i, D* volatile* d) {
  STATIC_ASSERT(IsInteger<I>::value);
  STATIC_ASSERT(sizeof(ptrdiff_t) == sizeof(D*));
  ptrdiff_t v = i;      // Let -Wconversion or similar complain if unsafe
  v *= sizeof(D);       // Scale by pointee size for pointer arithmetic
  return SpecializedAdd<sizeof(D*)>()(v, d);
}
template<>
struct Atomic::SpecializedAdd<4> {
  template<typename I, typename D>
  D operator()(I i, D volatile* d) const {
    I addend = i;
    __asm__ volatile("lock xaddl %0,(%2)"
                     : "=r" (addend)
                     : "0" (addend), "r" (d)
                     : "cc", "memory");
    // For integer case, I == D, and cast is a nop.
    // For pointer case (D is P*), cast from I => P*.
    return reinterpret_cast<D>(addend + i);
  }
};
#ifdef _LP64
template<>
struct Atomic::SpecializedAdd<8> {
  template<typename I, typename D>
  D operator()(I i, D volatile* d) const {
    I addend = i;
    __asm__ volatile("lock xaddq %0,(%2)"
                     : "=r" (addend)
                     : "0" (addend), "r" (d)
                     : "cc", "memory");
    // For integer case, I == D, and cast is a nop.
    // For pointer case (D is P*), cast from I => P*.
    return reinterpret_cast<D>(addend + i);
  }
};
#endif // _LP64
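Call sites for the above would then look like this (again untested),
with no casts anywhere in sight:

// Hypothetical call sites for the sketch above.
volatile int _count = 0;
int* volatile _top = NULL;

void example() {
  (void)Atomic::add(2, &_count);  // integral case: _count += 2
  (void)Atomic::add(1, &_top);    // pointer case: _top advances by sizeof(int)
}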
Maybe this is the kind of thing you are suggesting?
But what do we do when we need a helper stub function, like
_Atomic_cmpxchg_long for linux_x86? That can't be dealt with using
templates. And for some targets, that's what we have for pretty much
everything.
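For reference, that stub has a fixed, concretely-typed signature
(approximately the following; I'm quoting from memory), with the body
hand-written in linux_x86_32.s:

extern "C" jlong _Atomic_cmpxchg_long(jlong exchange_value,
                                      volatile jlong* dest,
                                      jlong compare_value);

A specialization analogous to SpecializedAdd above (SpecializedCmpxchg<8>
here is hypothetical) could only wrap it by reintroducing casts at the
call boundary:

template<>
struct Atomic::SpecializedCmpxchg<8> {
  template<typename T>
  T operator()(T exchange_value, T volatile* dest, T compare_value) const {
    // The stub's parameter types are fixed, so the pointer cast is
    // unavoidable; a template can't reach into hand-written assembly.
    return static_cast<T>(
      _Atomic_cmpxchg_long(static_cast<jlong>(exchange_value),
                           reinterpret_cast<volatile jlong*>(dest),
                           static_cast<jlong>(compare_value)));
  }
};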