Records -- Using them as JPA entities and validating them with Bean Validation

Gunnar Morling gunnar at hibernate.org
Wed Apr 11 18:27:38 UTC 2018


2018-04-11 19:34 GMT+02:00 Brian Goetz <brian.goetz at oracle.com>:

>
>
> This is the high-order bit; if we can't address this then the rest don't
> matter.
>
>>   Are there concrete criteria that we can use to reason about when it
>> would try to create a proxy?
>>
>
> One criterion is whether there are lazily loaded references to entities.
> E.g. consider this model:
>
>
> Right.  In this case, then I think it's fair to say that the aggregate
> doesn't meet the goals for records, which is "the state, the whole state,
> and nothing but the state."  An entity with lazily materialized properties
> really has some hidden external state, which may in fact be the entire
> database.
>
> But, if the framework will detect that the domain class is final, and not
> attempt to lazily load anything, then this may be fine (though potentially
> limiting.)
>

Yes, while technically things still work that way, the effect on
application performance can be very undesirable. Especially as it is
quite implicit, and unfortunately many users don't keep a close eye on
the SQL statements created by their JPA provider. So I'd advise against
using records for entities in general, unless one is very clear about
the implications.
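
Just to make that implicit cost concrete, here is a small made-up
mapping (Invoice/Customer are example names of my own, not from this
thread): with a proxy-based provider such as Hibernate ORM, lazy
loading of a to-one association relies on generating a subclass proxy
of the target entity, which isn't possible for a final type such as a
record, so the association would effectively be fetched eagerly:

    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;

    @Entity
    class Customer {
        @Id Long id;
        String name;
    }

    @Entity
    class Invoice {
        @Id Long id;

        // Only a hint: honoured by instantiating a generated Customer
        // subclass (proxy). If Customer were final -- as a record is --
        // the provider typically has to fall back to eager fetching, so
        // loading an Invoice also loads its Customer right away.
        @ManyToOne(fetch = FetchType.LAZY)
        Customer customer;
    }

That's exactly the kind of extra, easily overlooked SQL I mean above.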

> Yes, indeed Hibernate ORM will choose one of them in this case. It's not a
> mapping a user would typically use themselves, but this shouldn't matter
> here.
>
>
> So, if the annotation were lowered onto both the field and the getter,
> then it's quite possible things would "just work" -- in this case.
>
> As you probably saw, there was some discussion that suggested that the
> best thing to do would be to create a new Target kind for records.  In this
> case, of course, frameworks would have to be updated, but then there would
> be no "guessing" about which was meant.
>

I'd welcome such a new target kind. As said before, I think it'd also be
useful to have an API which tells, for a field/getter/parameter
annotation, whether it's derived from a record annotation.
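
Just to sketch the shape I have in mind (everything below is
hypothetical, the names are made up and no such API exists today):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Field;
    import java.lang.reflect.Method;

    @Target({
            ElementType.FIELD,
            ElementType.METHOD,
            ElementType.PARAMETER
            // ...plus the new, dedicated target kind for record components
    })
    @Retention(RetentionPolicy.RUNTIME)
    @interface PositiveAmount {
    }

    interface RecordIntrospection {

        // "Was this field/getter/parameter annotation derived from an
        // annotation given on a record component?" -- the query a
        // framework could use to treat the field and getter copies as
        // one logical annotation instead of two independent ones.
        boolean isDerivedFromRecordComponent(Field field);

        boolean isDerivedFromRecordComponent(Method getter);
    }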

> For Bean Validation, things are a bit worse. The spec is very clear about
> the fact that a constraint annotation should only be put to a field *or*
> the corresponding getter, as otherwise both constraints would be checked
> when validating an instance of the type hosting the field and getter.
>
>
> Because of the strong connection between the field and getter in this
> case, reflection will probably be able to tell you that "method x() is a
> getter for field x", which a framework could use to determine that this is
> a harmless conflict.
>

I tend to disagree on that one. At least for Bean Validation, the
semantics are very clear in this regard: if there's @Min(1) int
getMyInt(), that's telling the BV engine to validate the "myInt"
property, retrieving the value by calling getMyInt(). It's similar for
the corresponding field (you might think of a case where one constraint
should be validated against the field's value and another one against
the value as returned by the getter, which at least theoretically may
differ). So a BV provider shouldn't make any assumptions about such a
link between a field and a getter, but should always retrieve the value
via the annotated member.
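
To spell this out with a contrived example of my own (not taken from
the spec document), putting constraints on both the field and its
getter yields two independent constraints, possibly even evaluated
against different values:

    import javax.validation.Validation;
    import javax.validation.Validator;
    import javax.validation.constraints.Min;

    public class MyBean {

        @Min(1)
        private int myInt;

        public MyBean(int myInt) {
            this.myInt = myInt;
        }

        // An independent property-level constraint; the value is
        // obtained by invoking this getter, which here deliberately
        // differs from the raw field value.
        @Min(10)
        public int getMyInt() {
            return myInt * 10;
        }

        public static void main(String[] args) {
            Validator validator =
                    Validation.buildDefaultValidatorFactory().getValidator();
            // Both the field constraint (checked against 0) and the
            // getter constraint (checked against 0 * 10 = 0) are
            // violated, so two violations are reported, not one.
            System.out.println(validator.validate(new MyBean(0)).size());
        }
    }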

But the new constraint target and information about annotations being
derived from records may help to address this issue.

> Agreed. But I also think the question of custom equals()/hashCode()
> methods is important.
>
>
> The constraint on equals/hashCode is a contingent one, and it relates to
> the possible existence of ancillary fields.  We're still working out the
> details here, so we should have some more clarity once that happens.
>