Java Security: JEP 411: Deprecate the Security Manager for Removal - What about Serialization?

Peter Firmstone peter.firmstone at zeus.net.au
Mon May 3 22:35:27 UTC 2021


On 4/05/2021 5:12 am, Sean Mullan wrote:
> -bcc jdk-dev
> -cc security-dev
>
> On 4/30/21 10:04 PM, Peter Firmstone wrote:
>> <SNIP>
>>
>> In our software we use a ProtectionDomain to represent a remote 
>> server, because a thread only runs with the user's Subject (and that 
>> Subject must be carefully preserved for other threads), there is no 
>> way to represent the remote Server's Subject in a local domain , 
>> other than with a ProtectionDomain.   Our software is peer to peer, 
>> clients can be servers and servers can also be clients.  Code to 
>> interact with the server is downloaded via Maven and loaded.  Any 
>> permissions granted to a user are injected into the stack when run 
>> as the client Subject, to authenticate the user for the server and 
>> establish a secure connection, calls made by the client are run with 
>> the user's Subject on the server, again for access control purposes.  
>> This functionality is beyond the capability of Java RMI, we aren't 
>> using Java RMI to do this.  This is very important to allow us to 
>> make fine grained access control decisions, or perform event 
>> notification callbacks over secure connections, without this feature, 
>> we can't make a secure connection with a callback, and you know what 
>> happens when you have to do something, but cannot do it securely?   
>> We only grant network access directly back to the server, downloaded 
>> code has been verified and is not expected to cause denial of 
>> service, by consuming resources etc, but we don't want to grant third 
>> party access to files, or random network connections, we still have 
>> privacy obligations for third party information.
>>
>> We can allow a third party to use unsigned certificates to sign their 
>> jar files or use a checksum and we verify them using a secure 
>> connection to the server, prior to loading.   We then dynamically 
>> grant permissions to the server's self signed Certificate (used to 
>> sign the jar file), or a ProtectionDomain, after authenticating the 
>> server and receiving a check sum or certificate from it.  So the 
>> client authenticates the server using signed TLS certificates (EG by 
>> letsencrypt.org or a trusted CA). We use self signed certificates on 
>> Jar files if we sign them, we are actually trusting the server entity 
>> in this case, eg a trusted company, but also placing restrictions on 
>> them.
>
> I am probably missing something, but I don't understand how this is 
> secure if you are using TLS server certificates as the basis for 
> authenticating signed code. These are two very different use cases.


Clarifying the level of trust:

 1. You have a trusted party, whom you trust to write their own code.
 2. You run that code on your systems dynamically.
 3. You trust the other party, but you either haven't audited their
    code, or it isn't practical to do so.
 4. Using the principle of least privilege, you limit the ability of the
    other party's code so that it is unable to observe data it
    shouldn't, eg data belonging to a third party with whom you also do
    business.
 5. While a trusted party (eg a supplier) could write code that causes
    denial of service, eg by using up all available memory, there is no
    motivation for them to do so.  However, there may be motivation for
    them to see quotes on your system from another supplier, if they
    have access.

The code is loaded dynamically, but before the JVM loads it, we 
authenticate the party that is asking us to load their code, after which 
we are basically asking them the question: is this the code you want us 
to load?  Please check that it hasn't been tampered with.  The trusted 
party gives us a checksum, or a self signed certificate they used to 
sign the jar, so we are satisfied that we have received the software 
unaltered from the trusted party, and not a MITM attack, and we load it.  
However, we limit the permissions of this software using the principle 
of least privilege.  They don't get file permissions; they are only 
allowed to connect to the server they authenticated with.  If a third 
party uses the same jar file, they don't gain the permissions granted to 
other parties, as it will be loaded into a separate ClassLoader with the 
permissions granted to that party only.
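
To make that concrete, here is a rough sketch in plain JDK terms.  The 
class and method names are hypothetical, and our real implementation 
uses a dynamic policy provider rather than replacing the Policy like 
this; the sketch only shows the shape of the flow: verify the jar 
against the digest received over the authenticated connection, then load 
it in its own ClassLoader whose CodeSource is granted nothing but a 
SocketPermission back to that server.

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.CodeSource;
import java.security.MessageDigest;
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.security.ProtectionDomain;

/** Hypothetical sketch, not our actual implementation. */
public class LeastPrivilegeLoader {

    /** Compare the downloaded jar's SHA-256 against the digest the
     *  authenticated server supplied over the TLS connection. */
    static boolean verify(Path jar, byte[] expectedSha256) throws Exception {
        byte[] actual = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(jar));
        return MessageDigest.isEqual(actual, expectedSha256);
    }

    /** Load the verified jar in its own ClassLoader; the policy grants
     *  its CodeSource only a socket connection back to the server. */
    static ClassLoader load(Path jar, String serverHost) throws Exception {
        URL url = jar.toUri().toURL();
        Policy base = Policy.getPolicy();          // keep existing grants
        Policy.setPolicy(new Policy() {
            @Override
            public PermissionCollection getPermissions(CodeSource cs) {
                if (cs != null && url.equals(cs.getLocation())) {
                    Permissions perms = new Permissions();
                    // Only a connection back to the authenticated server.
                    perms.add(new java.net.SocketPermission(
                            serverHost, "connect,resolve"));
                    return perms;
                }
                return base.getPermissions(cs);
            }
            @Override
            public boolean implies(ProtectionDomain pd, Permission p) {
                CodeSource cs = pd.getCodeSource();
                if (cs != null && url.equals(cs.getLocation())) {
                    return getPermissions(cs).implies(p);
                }
                return base.implies(pd, p);
            }
        });
        System.setSecurityManager(new SecurityManager());
        return new URLClassLoader(new URL[] { url },
                LeastPrivilegeLoader.class.getClassLoader());
    }
}

In practice the grant is keyed to the authenticated principal and made 
dynamically at runtime, not hard coded as above.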


>
>> If we remove access control, third parties will be able to open local 
>> network connections freely and use Java Serialization over 
>> unsecured connections, exposing us to an attacker who can use a 
>> gadget attack. Presently they cannot open a network connection, 
>> access files or do much of anything without Permission.  All those 
>> protections will be removed with this JEP.
>>
>> from https://community.letsencrypt.org/t/do-you-support-code-signing/370
>>
>>> Code-signing certificates as they’re used today are part of systems 
>>> that try to decide whether a software source is malicious or 
>>> legitimate. I don’t think Let’s Encrypt could easily play that kind 
>>> of role when issuing certificates free of charge with an automated 
>>> process without checking the real-world identity of the applicant. 
>>> We could confirm that a code signing certificate applicant controls 
>>> a domain name like iurewnrjewknkjqoiw.biz 
>>> <http://iurewnrjewknkjqoiw.biz>, but that doesn’t give users or 
>>> operating system developers much ability to know whether software 
>>> that that applicant publishes is trustworthy or malicious.
>>
>> The JVM is one of very few platforms that has sufficient capability 
>> to allow us to do this.
>>
>> If I could choose my pain, I would choose to remove Java Serialization 
>> first, before SecurityManager because while I understand the 
>> maintenance burden needs to be reduced for the ongoing viability of 
>> the Java platform, security is still of utmost importance, as the 
>> vulnerabilities of Java Serialization killed off client development.
>
> I don't think it is appropriate to block deprecation of the Security 
> Manager until serialization is removed. Note that we have added 
> mechanisms such as Serialization Filters [1] to help applications 
> secure their serialization dependencies and that do not require a 
> Security Manager to be enabled. We also are continuing to look at 
> other improvements in this area, as well as introducing new features 
> such as Records that can be serialized more securely [2].

I'm not suggesting blocking deprecation of SecurityManager; I'm 
requesting that removal of SecurityManager be blocked until after 
Serialization has been removed.  So deprecate SecurityManager, just 
don't mark it for removal yet; mark SecurityManager for removal after 
Serialization has been removed.  What follows is the reasoning for my 
request.

We currently use SecurityManager and policy to prevent un-trusted 
connections, which could otherwise use serialization or access sensitive 
data.  We only allow a limited re-implementation of a subset of 
serialization over trusted connections, and permission must be granted 
before it can be used.  If no trust is established (over TLS), then no 
Serialization.  Because we load code dynamically, we are not able to 
profile it in advance, so we cannot create serialization whitelists; we 
have no way of knowing class names in advance, and our only choice will 
be to disable Serialization entirely.  At least with security policy, we 
can establish the permissions in advance.
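
For contrast, this is roughly what the filtering approach (JEP 290) asks 
of us; a minimal sketch, with placeholder package names, showing that 
the allow-list has to name classes before the code that defines them has 
even been downloaded:

import java.io.InputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// Sketch of a pattern based allow-list filter.  The packages below are
// placeholders; the point is that they must be known at configuration
// time, which is impossible when the classes arrive with dynamically
// downloaded code.
public class FilterExample {
    static ObjectInputStream filtered(InputStream in) throws Exception {
        ObjectInputFilter allowList = ObjectInputFilter.Config.createFilter(
                "com.example.model.*;java.util.*;!*"); // reject everything else
        ObjectInputStream ois = new ObjectInputStream(in);
        ois.setObjectInputFilter(allowList);
        return ois;
    }
}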

Presently the dynamically loaded code contains a list of requested 
Permissions in META-INF; however, they may not all be granted.  The 
security policy has a list of Permissions that are allowed to be granted 
based on the remote principal, and the permissions granted will be the 
intersection of these two lists, requested and allowed.  We can log 
permissions that are not granted.  This occurs dynamically at runtime.  
A requested Permission may also be a subset of the Permissions allowed, 
because of implies checks.  So dynamically loaded code really does 
operate under least privilege principles.  You can't do that with 
Serialization filtering.
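
A simplified sketch of that intersection step follows; the names are 
illustrative only, not our actual API.  Each requested Permission is 
granted only if the allowed set for the authenticated principal implies 
it, and refusals are logged for review.

import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.util.Collection;

// Illustrative only: grant the intersection of what the code requests
// (its META-INF list) and what the local policy allows for the remote
// principal.  Because the test is implies(), a request may be a subset
// of a broader allowed grant, eg SocketPermission("host:1024-",
// "connect") implies SocketPermission("host:8080", "connect").
final class GrantCalculator {
    static PermissionCollection intersect(Collection<Permission> requested,
                                          PermissionCollection allowed) {
        Permissions granted = new Permissions();
        for (Permission p : requested) {
            if (allowed.implies(p)) {
                granted.add(p);
            } else {
                System.err.println("Not granted (logged): " + p);
            }
        }
        granted.setReadOnly();
        return granted;
    }
}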

Prior to the introduction of the serialization filtering mechanism, I 
re-implemented a subset of Java Serialization, focused on addressing 
vulnerabilities caused by gadget attacks.

We did this at a time when other companies' solution to Java 
vulnerabilities was to remove Java altogether, which is why Java applets 
are no longer used.  Instead, we knuckled down and addressed the 
vulnerabilities.

I had to remove circular links because they introduce security 
vulnerabilities.  I also limited the number of bytes the stream can 
download before it must be reset; otherwise an IOException is thrown and 
control is returned to the caller.  It uses constructors, and all 
classes are expected to validate their invariants.  We've discussed it 
previously.  Apart from Collection classes, the serial form of existing 
classes has not been altered; instead, classes have new constructors 
that are used to validate their de-serialized fields.  All fields also 
undergo their own validation process.  J.B. called it atomic 
serialization when we first discussed it, so that's what we call it.
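
As a simplified illustration of the constructor approach (not the actual 
atomic serialization API), the deserialized fields are handed to a 
constructor that enforces the class invariants before any instance 
becomes reachable:

import java.io.InvalidObjectException;

// Simplified sketch of constructor based validation: the fields read
// from the stream are passed to a constructor that checks invariants
// before the object exists, instead of mutating a half built instance
// in readObject.
public final class Period {
    private final long start;
    private final long end;

    public Period(long start, long end) throws InvalidObjectException {
        if (end < start) {
            throw new InvalidObjectException("end before start");
        }
        this.start = start;
        this.end = end;
    }
    // The deserialization framework invokes this constructor with the
    // fields it has read; no partially initialized instance is ever
    // visible to attacker controlled code.
}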

My first step was to re-implement deserialization, and we have been 
using that for some time through the new public deserialization API; I'm 
currently implementing a public API for serialization.  After this I 
will introduce more serialization protocols that use the same API, and 
at that time I may have to change the serial form of some classes, as 
Java serialization has a lot of complex features that other 
serialization protocols lack.  The deserialization API is capable of 
supporting multiple serial forms, to allow class implementations to 
change them.

Certain Java Collection classes are vulnerable to denial of service 
attacks, so they are not serialized directly; instead I have serializers 
that transfer the data, which is validated during deserialization before 
populating the Java Collection classes via constructors.  This means any 
Collection class can be serialized, even those that don't implement 
Serializable.  They all share the same serial form, which basically 
respects the Collection interfaces of Map, Set and List, and their bytes 
can be compared for equality; for example, if two Maps are equal, their 
serialized bytes will also be equal.
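
The general shape of those serializers, as a rough sketch rather than 
the real implementation (which shares one serial form across all Map 
implementations), is as follows: only the entries travel in the stream, 
they are validated on arrival, and the target Map is rebuilt through a 
constructor.

import java.io.InvalidObjectException;
import java.io.Serializable;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative serializer for Map: only the keys and values are
// transferred; the Map itself (HashMap, TreeMap, even one that isn't
// Serializable) is reconstructed locally, via a constructor, after the
// entries have been validated.
final class MapSerializer<K, V> implements Serializable {
    private static final long serialVersionUID = 1L;
    private final Object[] keys;
    private final Object[] values;

    MapSerializer(Map<K, V> map) {
        keys = map.keySet().toArray();
        values = map.values().toArray();
    }

    @SuppressWarnings("unchecked")
    Map<K, V> rebuild() throws InvalidObjectException {
        if (keys.length != values.length) {
            throw new InvalidObjectException("corrupt serial form");
        }
        Map<K, V> map = new LinkedHashMap<>();
        for (int i = 0; i < keys.length; i++) {
            // Each key and value is validated here before the Map is
            // populated; nulls are rejected as an example check.
            map.put((K) Objects.requireNonNull(keys[i]), (V) values[i]);
        }
        return map;
    }
}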

Furthermore, serialization filters have the same complexity flaws as the 
SecurityManager model, but in our case we use SecurityManager to grant 
a limited set of privileges that we are able to establish prior to 
loading dynamic code.  We already have a profiling tool that generates 
policy files.

Once SecurityManager is removed, third party code will have all the 
permissions granted to the JVM, so it will be able to view files and 
make network connections.  The serialization filters carry a similar 
level of complexity, but with less protection and less time for tool 
development.  No doubt this will require us to rethink the structure of 
our software and access control to sensitive data.

Once we had fixed Policy performance issues and created profiling tools 
for the initial creation of policy files, which are used as a guide to 
create the requested permission lists and to enable permissions to be 
granted dynamically at runtime, security became a significant benefit 
rather than a burden.
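
A stripped down sketch of the profiling idea (not our actual tool): a 
SecurityManager that records every permission checked, instead of 
enforcing anything, so a first cut policy file can be drafted from a 
normal run and then reviewed and tightened.

import java.security.Permission;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Profiling only: nothing is ever denied, and each distinct permission
// checked is printed once, in policy file syntax, so the output can
// seed an initial policy for later review.
public class ProfilingSecurityManager extends SecurityManager {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    @Override
    public void checkPermission(Permission perm) {
        String grant = "permission " + perm.getClass().getName()
                + " \"" + perm.getName() + "\", \"" + perm.getActions() + "\";";
        if (seen.add(grant)) {
            System.err.println(grant);
        }
    }

    @Override
    public void checkPermission(Permission perm, Object context) {
        checkPermission(perm);
    }
}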

To quote Waratek (reference: 
https://www.waratek.com/java-serialization-filtering/):

> To configure Serialization Filtering, the application needs to first 
> be fully profiled. *Profiling* an app can be a complex process that 
> requires specialized tools and has to be performed by domain experts. 
> Typically, the process requires the app to run normally for a period 
> of time in order for all its paths to be executed. A dynamic profiling 
> tool can log the class names that are required for normal operation. 
> This list of class names will then be the basis of configuring the 
> white/black list of the Serialization Filters. And even after going 
> through this process, there is no guarantee that all of the execution 
> paths were run and all the required class names were logged. Of 
> course, the same process needs to be performed every time a new 
> release goes into production or even when a third-party library must 
> be upgraded. The lifecycle of this process becomes even more complex 
> since such any change in the Serialization Filters will first need to 
> go through QA and UAT before it reaches production.
>
> The Serialization Filtering mechanism follows a very similar approach 
> to the Security Manager 
> <https://docs.oracle.com/javase/7/docs/api/java/lang/SecurityManager.html>. 
> The Security Manager also works based on a whitelist 
> <https://docs.oracle.com/javase/tutorial/security/tour2/step3.html> 
> and suffers from the same scalability problems. Java’s Security 
> Manager has proved to be unsuitable for enterprise, large-scale 
> environments, given that *it moves the responsibility of protecting 
> the system to the user*. The user is responsible for understanding the 
> application’s security requirements and technical details and 
> correctly configuring the security policy, which in essence is a 
> whitelist of permissions. Such security policies are typically very 
> complicated in enterprise applications that change frequently and 
> integrate with numerous different systems and components. The 
> *operational cost* of correctly configuring and maintaining such 
> security policies is so high that Security Manager is rarely deployed 
> in production environments [6 
> <http://www.hpl.hp.com/techreports/98/HPL-98-79.pdf>] [7 
> <http://tomcat.apache.org/tomcat-7.0-doc/security-howto.html>].
>

I would elaborate that the above problems with SecurityManager have been 
addressed in practice, as we've had many years to address them; these 
solutions are not included with Java, of course.  The responsibility is 
not with the user, but with developers and administrators.

Also, it has become apparent to me that Java is following in the 
footsteps of Unix.  First, workstations were replaced by cheaper PCs, so 
the workstation market was lost; then Linux, the PC version of Unix, ate 
Unix's lunch in the server market.  Java no longer has a client market: 
it is no longer on phones or in the browser, and apart from some desktop 
deployments it has largely retreated to servers.  Be careful not to 
diminish Java's market too much, lest Android eat Java's server market 
lunch.  Android has a newer, fine grained security model; I don't know 
if it can be applied to a server environment.  I don't mean to be 
inflammatory, just giving you ammunition, should you require it.

-- 
Regards,
  
Peter Firmstone
Zeus Project Services Pty Ltd.


>
> --Sean
>
> [1] 
> https://docs.oracle.com/en/java/javase/16/core/serialization-filtering1.html#GUID-3ECB288D-E5BD-4412-892F-E9BB11D4C98A
> [2] https://inside.java/2021/04/06/record-serialization-in-practise/
