JEP 411 Headaches: Instrumenting private methods in the JDK for authorization checkpoints.

Peter Firmstone peter.firmstone at zeus.net.au
Mon Jul 26 01:05:32 UTC 2021


Hi Alan,

My apologies regarding finalizer removal; that's a positive step. I thought it 
was motivated by "the code is trusted so we don't need to worry about 
finalizer attacks".

Also, apologies for the long email, but it seems appropriate for an 
information handover.

Atomic construction guarantee: if invariant checks during construction 
throw an exception prior to calling Object's zero-arg constructor, no 
object instance is created. Creation of the object is atomic; either it 
succeeds with all invariants satisfied, or no object is created, so an 
instance cannot exist in a broken state where invariants aren't 
satisfied.  We apply this constructor safety guarantee to 
deserialization of data: the code performing deserialization is required 
to check invariants, and we require that users are authenticated. This 
avoids parsing untrusted data while still validating user data.  We 
enforce it using SM infrastructure.

I understand JEP 411 is a business decision: there wasn't enough 
adoption following the fallout of applets, with businesses and users 
running afoul of untrusted code and suffering ongoing (public) pain. 
The remaining attempts of JEP 411 to explain why POLP is a technical 
failure apply only to the default implementation and are incorrect when 
applied to other implementations. It is a commercial failure, as the 
low adoption rates cited in JEP 411 suggest, but that is due to a lack 
of investment in tooling. I suspect OpenJDK has underestimated 
adoption, although probably not by a big margin, and that removal will 
be more painful than OpenJDK anticipates.  I have a perfectly good, 
reliable, publicly available working example (for years) contrary to 
JEP 411's technical claims.

OpenJDK's decision has been made, and those affected must assess and 
make their own decisions. The following only serves to share my 
thoughts and insights; no need to read further if it is not of 
interest.  Our Java programs are going into care and maintenance while 
we assess suitable replacement development platforms.

<--------->

Applets relied on SM (perhaps SM only exists due to their success), but 
applets themselves weren't the cause of their own demise; for that we 
have Java Serialization to thank. Otherwise applets were a commercial 
success, and had they remained so, SM would have remained too; its fate 
now appears inexorably tied to that of applets.

Serialization needed an atomic replacement before 2008, when it was 
becoming obvious that Java serialization was insecure. OpenJDK could 
still fix Java serialization without using whitelist filters 
(ironically, whitelisting is a complication of SM that reduced 
adoption, and the same will likely occur with serialization whitelists 
if tooling isn't provided) by removing the ability to serialize 
circular object graphs, or disabling it by default.  We had circular 
object graphs in JGDMS (which heavily utilised Java serialization), but 
we refactored them out after implementing atomic deserialization, and 
we did this in a way that didn't require breaking the serial form 
compatibility of existing classes (unless they contained circular 
references).  This keeps serial data invariant validation code with the 
object implementation, rather than in a separate whitelist (and it is 
more powerful and less complex than whitelisting), reducing complexity 
and maintenance. Because failure is atomic, an attacker cannot 
formulate a gadget chain; type safety is also read ahead and checked 
prior to deserialization of data.  The development of atomic 
serialization started with atomic deserialization, which was completed 
a few years ago; atomic serialization itself was under development, 
with new explicit public API methods used for serialization, to avoid 
any issues with reflection and module access controls. We were still 
using Java serialization to serialize, but an alternative 
AtomicObjectInputStream to deserialize.
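To illustrate the pattern (a minimal sketch: the ReadArgs interface and 
Period class below are hypothetical simplifications for this email, not 
the actual JGDMS API), all invariants are validated before the 
superclass constructor runs, so a failed check means no instance ever 
exists:

    import java.io.InvalidObjectException;

    // Hypothetical stand-in for the argument reader an atomic stream
    // would hand to a deserialization constructor; not the JGDMS API.
    interface ReadArgs {
        int readInt(String name) throws InvalidObjectException;
    }

    public final class Period {
        private final int start;
        private final int end;

        // Deserialization constructor: validate() throws before
        // Object.<init> runs, so deserialization either yields a fully
        // valid Period or no object at all; failure is atomic.
        Period(ReadArgs in) throws InvalidObjectException {
            this(validate(in.readInt("start"), in.readInt("end")));
        }

        private Period(int[] checked) {
            this.start = checked[0];
            this.end = checked[1];
        }

        private static int[] validate(int start, int end)
                throws InvalidObjectException {
            if (end < start) throw new InvalidObjectException("end < start");
            return new int[] { start, end };
        }
    }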

SM performance isn't an issue; my policy implementation is high scaling 
and has no hotspots. Neither is deployment: we have tools to generate 
policy files (more than one) and have been doing so for many years. The 
first tool was written by Sun Microsystems circa 2004, I think; it 
still required policy file editing, but it listed the permissions 
required. The second tool was written approximately 8 years ago.  Our 
use cases have passed the test of time.  I don't believe people 
hand-author policy files in this age of computing: I've seen examples 
of policy generation tools by other authors on GitHub.  Sure, some 
developers might grant AllPermission to get something running, or for 
tests, but I haven't seen anyone serious about security in production 
do that.  I don't use the built-in policy provider (it has a blocking 
permission cache that negatively impacts performance); my policy 
implementation doesn't have a cache, and it's many magnitudes faster 
and high scaling, thanks to shared immutability, thread-confined 
mutability, and the garbage collector. The last remaining pain point is 
SocketPermission and DNS.  If SocketPermission were changed to use URI 
RFC 3986 normalization and had netmask wildcards, that would address 
the remaining issue of too many SocketPermission grants being required 
in policy files.  I managed to avoid most DNS calls by using a 
PermissionComparator, which finds the closest match if it doesn't find 
an exact one, to reduce the number of actual permission checks.  We 
also had to implement our own ClassLoader to avoid the DNS calls 
originating from SecureClassLoader.
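To illustrate the DNS avoidance (much simplified; this sketch is not 
the JGDMS PermissionComparator): SocketPermission.implies() may resolve 
host names via DNS, so checking for an exact match first avoids that 
cost in the common case:

    import java.security.Permission;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Simplified illustration, not the JGDMS implementation: look for
    // an exact match on a permission's string form before falling back
    // to implies(), which for SocketPermission may trigger DNS lookups.
    final class ExactMatchFirst {
        private final Map<String, Permission> granted = new ConcurrentHashMap<>();

        private static String key(Permission p) {
            return p.getClass().getName() + '|' + p.getName() + '|' + p.getActions();
        }

        void grant(Permission p) {
            granted.put(key(p), p);
        }

        boolean check(Permission p) {
            if (granted.containsKey(key(p))) return true;  // no DNS required
            for (Permission g : granted.values()) {
                if (g.implies(p)) return true;  // may resolve host names
            }
            return false;
        }
    }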

Java's memory model has more voodoo than AccessController; the JMM 
makes AccessController look simple. Programming is voodoo to a 
non-programmer; everything is relative and based on experience.  I 
guess you are suggesting the complexity-to-use ratio is too high. I can 
imagine what it was like being at a presentation when someone enabled 
SecurityManager and it stopped working; why the default provider wasn't 
fixed then I'm not sure, budget perhaps? If I had been at that 
presentation, I'd have generated the policy file on the first pass, 
then made the new policy file the default policy.  It would have added 
about 10 minutes to the presentation as we stepped through the swim 
lanes, or whatever people like to call executing the desired 
functionality (preferably automated test cases); at least it would look 
like we knew what we were doing. Then you validate it by attempting to 
do something you shouldn't. This leaves the JVM in a very locked-down 
state (which can also prevent user errors). Occasionally you forget to 
execute some necessary functionality (not have a test case for it), or 
don't allow the program to run long enough to capture all necessary 
permissions, but the missing permissions are quickly sorted by running 
the tool a second time, which appends them to the policy files.
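The first pass can be as simple as a security manager that records 
instead of denies (a minimal sketch, not our actual tool, which also 
scopes grants by ProtectionDomain):

    import java.security.Permission;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of first-pass policy capture: grant everything, but record
    // each requested permission as a policy-file grant line for review.
    public final class RecordingSecurityManager extends SecurityManager {
        private final Set<String> seen = ConcurrentHashMap.newKeySet();

        @Override
        public void checkPermission(Permission perm) {
            seen.add("permission " + perm.getClass().getName()
                    + " \"" + perm.getName() + "\", \"" + perm.getActions() + "\";");
        }

        @Override
        public void checkPermission(Permission perm, Object context) {
            checkPermission(perm);
        }

        public Set<String> capturedGrants() {
            return seen;  // written out to a policy file after the run
        }
    }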

I would have preferred that we reduced the number of permissions. These 
can be removed without breaking backward compatibility; the policy just 
treats a removed permission as an UnresolvedPermission.
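For example (the class name is illustrative), a stale grant resolves to 
an UnresolvedPermission, which never implies anything, so removal fails 
closed rather than breaking the policy:

    import java.security.Permission;
    import java.security.UnresolvedPermission;
    import java.util.PropertyPermission;

    public class UnresolvedDemo {
        public static void main(String[] args) {
            // A grant whose permission class is no longer on the class
            // path is retained by the policy as an UnresolvedPermission.
            Permission stale = new UnresolvedPermission(
                    "com.example.RemovedPermission", "target", "action", null);
            // UnresolvedPermission.implies() always returns false.
            System.out.println(stale.implies(
                    new PropertyPermission("user.dir", "read")));  // false
        }
    }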

Many permissions are being replaced by better access controls (as they 
should be), such as the more recent changes to package/module access 
and reflection, and this could be a gradual process as unnecessary 
permissions are removed.  We aren't trying to sandbox code; we use SM 
for company authorization rules.  The code is trusted, but if it's from 
a third party, or from another company with a business relationship 
(e.g. an approved vendor), we need to place some constraints on it; 
these aren't intended to defend against malicious code.  Our intent is 
to prevent parsing of malicious data and loading of malicious code.
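For example, a policy grant along these lines (the codebase, signer 
alias, and permissions are illustrative) constrains what an approved 
vendor's code may touch, without pretending to contain genuinely 
malicious code:

    grant codeBase "https://vendor.example.com/lib/*", signedBy "approvedVendor" {
        permission java.io.FilePermission "${user.home}${/}appdata${/}-", "read,write";
        permission java.net.SocketPermission "service.example.com:443", "connect";
    };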

I would have preferred to build a model around the permissions required 
for authorization of users and trusted code, rather than one focused on 
sandboxes and malicious code.  If a user or service doesn't 
authenticate, they cannot dynamically load code, because they are not 
granted permission to do so; they also cannot generally deserialize 
untrusted data. All of this depends on SM infrastructure.

We need to control access to networks, files, user credentials, 
properties (such as certificate store locations), and class loading. 
Primarily we do this by authenticating users; we also allow some 
authenticated users and services to dynamically download and load 
classes.
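Authentication ties into the policy through principal-based grants; a 
minimal sketch (the principal name, permission, and helper below are 
illustrative):

    grant principal javax.security.auth.x500.X500Principal "CN=ServiceAdmin" {
        permission java.lang.RuntimePermission "createClassLoader";
    };

With a grant like that, class loading only succeeds when run under an 
authenticated Subject:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.security.PrivilegedAction;
    import javax.security.auth.Subject;

    final class AuthenticatedLoading {
        // Subject.doAs runs the action with the Subject's principals, so
        // the principal-based grant above applies only after
        // authentication; unauthenticated callers fail the
        // createClassLoader permission check.
        static ClassLoader loadFrom(Subject authenticated, URL[] codebase) {
            return Subject.doAs(authenticated,
                    (PrivilegedAction<ClassLoader>) () -> new URLClassLoader(codebase));
        }
    }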

Due to JEP 411, we have no future Java upgrade path: what is currently 
possible with authorization will not be possible in future versions of 
Java, and design assumptions throughout our software are built on SM 
infrastructure.  When we remove SM from our systems, it enables 
deserialization of untrusted data from unauthenticated users, and it 
allows downloading and class loading of untrusted code.  Our software 
uses TLS over IPv6 for its point-to-point connectivity and is able to 
dynamically discover compatible services over global networks, so we 
cannot use a firewall to protect it either.

The library we use and provide publicly can be found here, in case it's 
of interest: https://github.com/pfirmstone/JGDMS

At the end of the day, choosing any development platform is a risk, and 
this was one of the risks of choosing a commercial platform (at that 
time Java was a commercial platform, and it's still funded by 
commercial interests today). Budgets dictate both the initial 
development compromises and the eventual demise, when adoption falls 
and a feature is no longer commercially viable for the people 
responsible for maintaining it.

Due to JEP 411, all our Java development will be put into care and 
maintenance. I'm currently looking at our options for other programming 
languages and platforms on which to base new development. Haskell looks 
interesting and seems to have better type safety; owing to its academic 
background, its developers are focused on solving very difficult 
problems and doing so in a way that is provably correct, using 
mathematical theorems. I think that's why it has taken years longer to 
stabilize.  By not having made compromises, it will likely be useful 
for much longer, even with some change.  It seems unlikely that an 
academic language would lose features due to budget constraints; it is 
more likely that inadequate or problematic features will be addressed 
and upgraded or replaced.

It is not so much that JEP 411 might break backward compatibility; we 
can live with that. What we are unable to address is that it removes a 
feature that cannot be re-implemented and has no replacement, which 
exposes us to low-probability but unacceptable consequences.

There are no hard feelings; it's just a consequence of our original 
platform adoption choice, and we knew there were risks.  It's time to 
move on and deal with it.  No doubt Java will be useful to many people 
for many years to come, and many don't require an authorization layer, 
or chose something other than SM to implement one.  With no Java 
upgrade path, we are free to choose from what is available now, rather 
than being bound by a choice made 20 years ago. In any case, it's 
likely a choice we would have needed to make eventually; JEP 411 has 
only brought it forward.  If Haskell is a magnitude more efficient, as 
its proponents claim, then it may ultimately provide an overall cost 
saving.  We haven't made a choice yet, though; it's still under 
investigation.

I do appreciate that you took the time to respond to my emails.

Regards,

Peter.

On 26/07/2021 12:44 am, Alan Bateman wrote:
> On 23/07/2021 23:33, Peter Firmstone wrote:
>> I think it's worth noting that there isn't a way to securely run code 
>> with malicious intent now, so I'm surprised that at this late stage 
>> you were still providing support for sandboxing (whack-a-mole).
>>
>> It's just for us many assumptions have been made on a Java platform 
>> with SM, using POLP (not sandboxing) as this was one of the 
>> foundational principles of secure coding guidelines (just like 
>> following concurrency best practice, we were following security 
>> best practice). Sandboxing is an all-or-nothing approach: if you 
>> had a trusted applet that was signed, it had AllPermission, if you 
>> had an unsigned applet, then it had no permissions.  Sandboxing was 
>> one of the use cases for SM, when combined with ClassLoader 
>> visibility, but we never realized that OpenJDK developers meant 
>> sandboxing == authorization access controls.
>>
>> When you remove that pillar, everything it's supporting collapses, 
>> not just sandboxing, so when you say you are removing support for 
>> sandboxing, we say, good idea, but we didn't realize you were saying 
>> you were removing support for all authorization access controls.   
>> Reduced and revised authorization and access control would have been 
>> acceptable, as tightening reflection visibility using a different 
>> form of access control removes the need for authorization based 
>> reflection access checks, but also removing atomic construction 
>> guarantees just seems like you're doing this at a rapid pace without 
>> the community understanding what you have in mind, and this may have 
>> more uses than just stopping finalizer attacks. 
> I'm not 100% sure what you mean by "atomic construction guarantee" 
> here. This JEP does not propose to change anything with finalization 
> or do anything with the registration of finalizers after Object.<init> 
> runs. Our exchange in the previous mails was about classes (using 
> ClassLoader as the example) that specify a SM permission check in 
> their constructors, something that is strongly discouraged as the 
> checks are easy to bypass. The idiom that we use in the JDK to prevent 
> bypassing these SM permission checks with a finalizer attack is to 
> check in a static method that returns a dummy parameter for the 
> invokespecial. My point in the previous mail is that when the SM 
> permission checks eventually go away then many of the uses of this 
> idiom can go away too.
>
> That said, there is strong desire to eventually remove finalization 
> too. Finalization was deprecated several years ago and the Java 
> platform defines APIs that provide much more flexible and efficient 
> ways to run cleanup actions when an object becomes unreachable. So 
> another multi-year/multi-release effort to remove a problematic 
> feature, just nothing to do with this JEP.
>
> As regards POLP, the focus of the SM architecture when it was 
> enhanced in Java 1.2: the JEP attempts to explain why this has been a failure. 
> AccessController is voodoo that most developers have never encountered 
> so anyone trying to run with SM ends up running counter to the 
> principle of least privilege by granting all permissions.
>
> -Alan.



