A possible JEP to replace SecurityManager after JEP 411
Peter Firmstone
peter.firmstone at zeus.net.au
Sat Apr 23 06:57:10 UTC 2022
Hi Martin,
I'm curious, you sound like you arrived at this opinion through
experience? My opinion is that, rather than being an upper-layer-only
concern, authorization requires lower-layer intervention and controls,
with the upper layers providing the decision-making context.
My reason for asking is that we're basically waiting for finalizers to
be disabled, so that we can instrument the Java API with access controls
to replace SM.
In our implementation, we use the SM infrastructure like this:
1. Distributed computing: protection domains and ClassLoaders are used
   for service identity and isolation at the client. (A simplification,
   because a server can also be a client and vice versa.)
2. All client subjects are authenticated over secure connections.
   Threads on the server are run with the client subject, from the
   client JVM, for access control decisions, e.g. do I trust the data
   enough to parse it? Callbacks for client listeners (services) are
   run with the server's subject at the client. (See the first sketch
   after this list.)
3. We re-implemented Java deserialization, with a public API that uses
   constructors. Unlike Perl's taint mode, we rely on the authenticated
   subject's principals to determine whether to allow permission to
   deserialize (parse data). SM allows us to handle tainted data
   because we put permission checks into our deserialization
   implementation: if there's no authenticated subject, or the remote
   end doesn't have the required principal, then deserialization
   doesn't proceed at all; no one vouches for the data, so it cannot
   be trusted. Our deserialization implementation provides an atomic
   input validation API to validate (sanitize) data from trusted
   sources. In theory it would allow us to parse untrusted data, but
   we use authentication to reduce our exposure. Rather than a
   bolted-on, external white-listing filter mechanism, it's a
   class-level implementation concern. (See the second sketch after
   this list.)
4. Clients dynamically download requested proxy jar files (streams are
   not annotated as in RMI). Prior to download, the client
   authenticates the service's server; after authentication, the
   client loads the jar files and deserializes the proxy object state
   into a designated ClassLoader (unique to the service identity;
   services that share jar file URIs will not share ClassLoaders and
   don't resolve to the same class type). After authentication, the
   service provides URIs and advisory permissions, and the client may
   dynamically grant the intersection of the permissions it has
   permission to grant and those the service requests. (See the third
   sketch after this list.)
5. Our services are discoverable over multicast IPv6 (globally and on
local networks, usually the two are kept somewhat separate).
6. We have service constraints. These are upper-layer controls that
   lower layers use to ensure, for example, that connections use
   strongly encrypted TLS protocols, or that a connection can be
   authenticated with a particular principal. If a service is obtained
   from another service, our lower-layer communications ensure that
   the same constraints apply to the second service; the client may
   apply new constraints after receiving a service proxy. (See the
   fourth sketch after this list.)
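To make item 2 concrete, here's a minimal sketch (my illustration, not
our production code) of running a server-side task under the
authenticated client subject, so that downstream permission checks see
the client's principals:

    import java.security.PrivilegedAction;
    import javax.security.auth.Subject;

    public final class ClientTaskRunner {
        // 'clientSubject' is assumed to have already been authenticated
        // over a secure (TLS) connection, e.g. via JAAS.
        static <T> T runAsClient(Subject clientSubject, PrivilegedAction<T> task) {
            // Associates the Subject with the access control context for
            // the duration of the action, so permission checks made by
            // the task (such as the deserialization check in item 3)
            // are evaluated against the client's principals.
            return Subject.doAs(clientSubject, task);
        }
    }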
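The second sketch shows the shape of the item 3 guard. The permission
class name is hypothetical; the point is only that deserialization
refuses to proceed unless the current context, including the
authenticated subject's principals, has been granted it:

    import java.io.IOException;
    import java.security.AccessController;
    import java.security.BasicPermission;

    // Hypothetical permission class, for illustration only.
    final class DeserializationPermission extends BasicPermission {
        DeserializationPermission(String name) { super(name); }
    }

    final class GuardedInput {
        static void checkDeserializationAllowed() throws IOException {
            try {
                // Throws a SecurityException unless every domain on the
                // stack, and the Subject's principals, have been granted
                // the permission by policy.
                AccessController.checkPermission(
                        new DeserializationPermission("enter"));
            } catch (SecurityException e) {
                // No one vouches for the data: don't parse it at all.
                throw new IOException("untrusted data source", e);
            }
        }
    }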
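The third sketch is the item 4 dynamic grant, computed as an
intersection; 'canGrant' stands in for the local policy's grant check
and is an assumption of the sketch:

    import java.security.Permission;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    final class DynamicGrants {
        // Keep only those advisory permissions that local policy
        // permits this client to grant to the service's ClassLoader.
        static List<Permission> intersect(List<Permission> advisory,
                                          Predicate<Permission> canGrant) {
            List<Permission> grant = new ArrayList<>();
            for (Permission p : advisory) {
                if (canGrant.test(p)) {
                    grant.add(p);
                }
            }
            return grant;
        }
    }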
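The fourth sketch expresses item 6 style constraints with the Jini
constraint API (written from memory, as an illustration; requirements
must be satisfied before a call proceeds, preferences are best-effort):

    import net.jini.core.constraint.ClientAuthentication;
    import net.jini.core.constraint.Confidentiality;
    import net.jini.core.constraint.InvocationConstraint;
    import net.jini.core.constraint.InvocationConstraints;

    final class ExampleConstraints {
        // Require encrypted, client-authenticated connections; the
        // lower communication layers refuse to proceed when these
        // requirements cannot be satisfied.
        static final InvocationConstraints REQUIRED =
                new InvocationConstraints(
                        new InvocationConstraint[] {
                            Confidentiality.YES,
                            ClientAuthentication.YES
                        },
                        null /* no preferences */);
    }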
JEP 411's successor will remove or change the functionality of Java's
access controls, and will break all our TLS connections and our ability
to have different levels of access control for different services.
We can of course just no-op on later versions of Java where the APIs
are missing, detected via reflection, which will also disable encrypted
connections. We can then allow services to communicate over trusted
networks or VPNs, and allow deserialization and jar file downloads, all
without any JVM-layer security; but we lose our ability to dynamically
discover services globally, as they will need to be known in advance
and the secure network connections established in advance. (A sketch of
the reflective probe follows.)
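The probe is nothing more elaborate than this kind of check (a sketch,
assuming detection by method lookup):

    // Probe for the SecurityManager API at runtime so the same jar can
    // fall back to a no-op authorization layer on JDKs where the API
    // has been removed.
    static boolean securityManagerAvailable() {
        try {
            System.class.getMethod("getSecurityManager");
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }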
We solved all the problems with SM mentioned in JEP 411, with the
exception of the maintenance cost for OpenJDK; my understanding is that
it is company policy around security that makes it expensive to
maintain. We have a policy generation tool (based on the principle of
least privilege), and our policy provider has a less than 1% performance
impact. We have a PermissionComparator that avoids equals calls on
Permissions, and an RFC 3986 URI implementation that also normalizes
IPv6 addresses, uses bitshift operations for case conversions, and is
extremely fast; it's used by our ClassLoader and Policy implementations.
(The case conversion trick is sketched below.)
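The case conversion trick is the usual ASCII one, roughly like this (a
simplification of whatever the real implementation does):

    // ASCII letters differ between cases only in bit 5 (0x20 == 1 << 5),
    // so scheme and host case folding never needs Character.toLowerCase.
    static char toLowerAscii(char c) {
        return (c >= 'A' && c <= 'Z') ? (char) (c | (1 << 5)) : c;
    }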
The only remaining irritations were the structures of the Permissions
themselves; e.g. SocketPermission can't constrain communications to
subnet IP address ranges. (Something like the hypothetical permission
sketched below is what it cannot express.)
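A purely illustrative sketch, not part of any real API:

    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import java.security.Permission;
    import java.util.Arrays;

    // Hypothetical: constrain connections to a CIDR range such as
    // "192.168.0.0/24", which SocketPermission cannot express.
    public final class SubnetPermission extends Permission {
        private final byte[] network;
        private final int prefixLen;

        public SubnetPermission(String cidr) throws UnknownHostException {
            super(cidr);
            String[] parts = cidr.split("/");
            network = InetAddress.getByName(parts[0]).getAddress();
            prefixLen = Integer.parseInt(parts[1]);
        }

        @Override
        public boolean implies(Permission p) {
            if (!(p instanceof SubnetPermission)) return false;
            byte[] other = ((SubnetPermission) p).network;
            if (other.length != network.length) return false;
            // Compare the first prefixLen bits of the two addresses.
            for (int bit = 0; bit < prefixLen; bit++) {
                int mask = 0x80 >>> (bit % 8);
                if ((network[bit / 8] & mask) != (other[bit / 8] & mask))
                    return false;
            }
            return true;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof SubnetPermission
                    && prefixLen == ((SubnetPermission) o).prefixLen
                    && Arrays.equals(network, ((SubnetPermission) o).network);
        }

        @Override
        public int hashCode() {
            return 31 * Arrays.hashCode(network) + prefixLen;
        }

        @Override
        public String getActions() {
            return "";
        }
    }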
What Li Gong provided was very well designed; Sun just never finished
it, and pretty much let it rot on the vine. Few people used it, because
of the amount of work required to make it work properly, and because
security is a nice-to-have feature that loses out to budget constraints
and delivery deadlines; and now it's subject to defenestration. Hand
edited policy files? Sorry, that's not a finished product.
The other mistake was that the Java trusted computing base became too
large; it needed to be restricted to core Java language features.
There's too much trusted code in the JVM. Deserialization and XML (data
parsing) never required any permissions, so they couldn't be disabled
by withholding the necessary permissions from the principals of the
authenticating subject. Serialization and XML shouldn't have been part
of the trusted code base. Even if Java deserialization was insecure, as
it was for many years, had it required authentication of the data
source before deserialization proceeded, maybe history would have been
different. Also, too many classes are Serializable.
So by removing SM, in effect we're just making the trusted code base
larger: now it will encompass all third-party libraries and their
dependencies, while also removing the only available mechanism for
determining whether data from an external source can be trusted, based
on who provided it (authenticated).
Of course there will be those of us who re-implement an authorization
layer; hopefully we'll learn from Java's mistakes and not repeat them,
but make a bunch of new mistakes instead.
Regards,
Peter.
On 23/04/2022 12:58 pm, Martin Balao wrote:
> Hi,
>
> On 4/8/22 11:13 AM, Sean Mullan wrote:
>> In general, I think authorization is best done at a higher layer within
>> the application and not via low-level SM callouts. Authorize the subject
>> first and if not acceptable, prevent the operation or API from being
>> called in the first place. Once the operation is in motion, you have
>> already taken a greater risk that something might go wrong.
> I completely agree with this vision, and also agree with the other
> arguments that both Sean Mullan and Andrew Dinn mentioned before in this
> thread. In my view, authorization decisions at a higher layer generally
> have better context, and are clearer and less risky. At a lower layer
> there is more complexity and a greater chance of subtle combinations or
> unseen paths that may lead to check bypasses. I lean towards not
> splitting authorization responsibility across different layers, which
> might create confusion or even a false sense of security in some cases.
> To illustrate with a trivial example: if a subject is not supposed to
> access some information, it's the application that has enough context
> to decide and block right away. Letting the attempt go through the call
> stack down to the network or the file system might be a recipe for
> missing a channel.
>
> I won't enter into the untrusted code case -which has been extensively
> discussed already- but I want to briefly mention something about the
> "trusted code performing risky operations" case. My first point is that
> vulnerabilities at the JVM level (e.g. memory safety compromises) are
> serious enough to potentially bypass a SecurityManager. My second point
> is that the SecurityManager is really unable to deal with tainted data
> flowing from low- to high-integrity domains; sanitization must be
> performed by the application or the library anyway, because there are
> infinite ways in which the data can be harmful. Even when the data is
> obtained in a low-integrity domain, there will be a flow towards a
> high-integrity domain to perform a legitimate action (e.g. an SQL query,
> OS command execution, etc.). The OS and the DB engine, in this example,
> have the knowledge, granularity and power to be the next level of
> enforcement after data sanitization. Again, I wouldn't split this
> responsibility or pass it to the JDK, for the same reasons as before.
>
> Best,
> Martin.-
>