[External] : Re: JEP411: Missing use-case: Monitoring / restricting libraries

Ron Pressler ron.pressler at oracle.com
Fri Apr 23 14:29:31 UTC 2021


What you’re saying is: the SM is good as a defence for trusted code — i.e. not the threat that shaped its design. I’m saying:
perhaps, but other techniques are better, namely OS-level sandboxing and deep monitoring (based on JFR, which
will need to be extended). Why? Because the SM is far too elaborate, complex, and sensitive a mechanism to use as a
defence against careless coders or difficult-to-spot vulnerabilities.

Again, the question isn’t about what’s possible in theory, but about what code people can be expected to write in
practice, and what we actually see, in the very rare cases the SM is used at all, is policy files clearly written in the spirit
of “let’s add permissions until the code runs.” Such policy files don’t really do what you intend them to do.

Your log-dir scenario highlights precisely this. You say that you can do something that OS-level sandboxing won’t
allow, and while the SM does support this use-case, practice shows that it is used incorrectly. To control access to
the log directory, *on top of guarding all access to the log dir with doPrivileged*, if the application uses CompletableFuture,
any “reactive” framework, or, really, thread pools of any kind, it must also be carefully programmed with
AccessController.getContext() and doPrivileged(action, context) anywhere a task might move among threads.
Without this, access isn’t controlled correctly. Now, remember that because the code is otherwise trusted, you
want to protect against problems caused by bugs; but to get that protection, you mustn’t have bugs in this complex
context-setup code.
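To make the ceremony concrete, here is a minimal sketch of that capture-and-restore dance, assuming a plain thread pool; the task body and the guarded path are illustrative, not taken from anyone’s actual application:

    import java.security.AccessControlContext;
    import java.security.AccessController;
    import java.security.PrivilegedAction;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ContextPropagation {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Capture the submitter's access-control context *before* the
            // task is handed to the pool; a pool thread's own context has
            // nothing to do with the code that submitted the task.
            AccessControlContext callerContext = AccessController.getContext();

            pool.submit(() -> {
                // Re-impose the captured context on the worker thread, so a
                // permission check sees the submitter's context, not the pool's.
                AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
                    // ... code that touches the guarded log directory ...
                    return null;
                }, callerContext);
            });

            pool.shutdown();
        }
    }

Every place a task crosses a thread boundary needs this pair; forget it once, and the check on the worker thread consults the wrong context.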

How many applications that you know of actually do that? Put simply, are you describing something that you envision
people doing, or something that you know people actually, regularly do? If it isn’t done — and there’s good reason why it
isn’t — then it provides no security at all.

Here’s how you should do it instead: set up an OS-level sandbox that allows access to the log directory, and use JFR to
monitor file activity. If you see activity with a suspicious stack trace, investigate it and fix the bug that caused the
vulnerability. The belief that complex code you have to write will save you from bugs in your other complex code
*before they can manifest* needs good evidence that it is actually effective in practice, not just possible in
theory.
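As a rough illustration of the monitoring half, here is a sketch using JFR event streaming; it assumes the existing jdk.FileRead event and JDK 14+, and the monitored path is a placeholder:

    import jdk.jfr.consumer.RecordingStream;

    public class LogDirMonitor {
        public static void main(String[] args) {
            // Stream file-read events from the running JVM; requests stack
            // traces so suspicious accesses can be traced to their source.
            try (RecordingStream rs = new RecordingStream()) {
                rs.enable("jdk.FileRead").withStackTrace();
                rs.onEvent("jdk.FileRead", event -> {
                    String path = event.getString("path");
                    if (path != null && path.startsWith("/var/app/logs")) {
                        // A read of the log dir: report the offending stack
                        // trace so the bug can be found and fixed.
                        System.err.println("Suspicious read of " + path);
                        System.err.println(event.getStackTrace());
                    }
                });
                rs.start(); // blocks; run on a background thread in a real app
            }
        }
    }

The point is that this observes the whole process, including code that never heard of doPrivileged, rather than relying on every code path having been instrumented correctly.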

If you want to make a compelling case, show us what people *do*, not what you think they *could* do. We already know
what the SM was designed to do and what it could do, but by now we’re largely convinced that it doesn’t actually do it.

You are absolutely right to worry about the things you mention, and because they are so worrisome they should be handled
by components that can actually get the job done better than the SM.

— Ron

On 23 Apr 2021, at 14:41, Reinier Zwitserloot <reinier at zwitserloot.com> wrote:

> Ron Pressler wrote:
> The problem is that this is not doable with the Security Manager, either, except in theory.

Security is not that simple. Yes, there are ways to beat the security manager; it is not a black-and-white scenario where a SecurityManager is a complete guarantee against any and all vulnerabilities. But a SecurityManager can stop many vulnerabilities (just not all of them; virtually no security policy can reasonably make such a claim, though!).

The paper appears to be primarily about how you can work around SecurityManager-based restrictions in code. In other words, it requires either untrusted code to run (when we're talking about applets, or running plugins supplied by untrusted third parties — but as has been covered in this thread, that's not part of the use case here), or a separate vulnerability that allows an attacker to run arbitrary bytecode (or perhaps a dev team that subverts its own mitigations, but if we are to assume a sufficiently dysfunctional team, the line between malicious attacker and authorised committer is as good as gone, and there is not much point talking about attack-mitigation strategies in the first place). That's quite the big axiom! In the vast majority of attack scenarios, the attacker does not (yet) have the ability to run arbitrary bytecode, but has gained the ability to, e.g., coerce some part of a web server into returning the contents of a file at a path chosen by the attacker.

In other words, that's not the attack surface that the use-case 'use the SM to restrict access to files' is designed to protect against. Instead, it's targeted at a combination of careless coders / strong reminding of policy rules amongst a dev team, and to fight a large class of attacks, such as coercing the server to return file contents. That class does _not_ include attacks where the attacker runs arbitrary bytecode, however.

To fight against attacks and to enforce directives within the dev team, I can restrict the process itself, using OS-level mechanisms, so that it is incapable of seeing any directories except the ones it actually has a legitimate need to read from and/or write to. Unfortunately, the log dir is one of those 'legitimate' dirs, so I can't restrict it at the OS level. With a SecurityManager, I can still restrict it: I can install a SecurityManager that blanket-denies any and all attempts to read from the log dir, and denies all attempts to list or write to the log dir unless stack-trace analysis shows the access is coming from the log system.
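To sketch what I mean (the log-dir path and the logging framework's package are placeholders, and everything not shown is left permissive for brevity; a real deployment would also defer to the installed policy for other checks):

    import java.nio.file.Path;

    // Sketch: a SecurityManager that treats the log dir as unreadable and
    // lets only the logging framework write to it.
    public class LogDirGuard extends SecurityManager {

        private static final Path LOG_DIR =
                Path.of("/var/app/logs").toAbsolutePath().normalize();

        @Override
        public void checkRead(String file) {
            if (inLogDir(file)) {
                throw new SecurityException("reading the log dir is never allowed: " + file);
            }
        }

        @Override
        public void checkWrite(String file) {
            if (inLogDir(file) && !calledFromLogFramework()) {
                throw new SecurityException("only the log framework may write here: " + file);
            }
        }

        // The stack-trace analysis: allow the write only if some frame on
        // the current call stack belongs to the (assumed) logging framework.
        private boolean calledFromLogFramework() {
            for (Class<?> frame : getClassContext()) {
                if (frame.getName().startsWith("org.example.logging.")) {
                    return true;
                }
            }
            return false;
        }

        private static boolean inLogDir(String file) {
            return Path.of(file).toAbsolutePath().normalize().startsWith(LOG_DIR);
        }
    }

Installed with System.setSecurityManager(new LogDirGuard()), this denies reads outright and gates writes on a walk of the caller's class context.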

This SecurityManager-based mitigation, as far as I am aware, even taking into account the 'Evaluating the Flexibility of the Java Sandbox' paper, fully stops:

* Misunderstandings within the dev team: it lets the design of the app encode, more or less as a guarantee, that no dev on the team, directly (by programming it) or indirectly (by including a dependency and configuring it in such a way), is ever going to independently write logs, and that no code running anywhere in the VM is going to read from the log files — at least not via the route of 'someone on the team thought doing that was an acceptable solution to some problem they had'. *1

* Misunderstandings between the dev team and third-party dep authors: a library that does end up reading the log dir (or, perhaps more likely, a misunderstanding about how an end-user-supplied string value is conveyed to the library such that it lets the end user make the library read log files).

* Any vulnerabilities that allow an attacker to coerce the server into returning the contents of files on the file system from being used to let the attacker see log files (or any other file in a directory not explicitly allowed by the SM setup).

It (probably) does not stop:

* A vulnerability that lets an attacker run arbitrary bytecode on the VM.

* Malicious libraries *1.

* A vulnerability that lets a user inject arbitrary strings into the log file by making some exploitable code run, e.g. `LOG.d(stringUnderFullHackerControl);`. But this is vastly less significant than an attacker who can read the logs, or than vulnerabilities in, e.g., a webserver view component the log system itself might expose for admins to view the logs directly.

I don't think a security measure should be mostly disregarded just because it doesn't stop _every_ imaginable attack. The ones this stops are surely worthwhile to stop (I have made these mitigations part of my security setup in server deployments and part of my ISO27k1-esque certification documentation on how to ensure that the dev team follows the security directives).

*1) The only way this guarantee can be broken is if a library intentionally works around it, e.g. using techniques as set forth in that paper. That feels like it falls into the same category, and has the same mitigations, as any third-party dependency that decides to include intentionally malicious code: code review, CVEs, and guardianship from repo owners such as Sonatype's Maven Central. In other words, that's a different class of attack and is not something that the SM, at least for this use-case, is meant to mitigate.

 --Reinier Zwitserloot


On Thu, 22 Apr 2021 at 19:43, Ron Pressler <ron.pressler at oracle.com> wrote:


On 22 Apr 2021, at 18:27, Reinier Zwitserloot <reinier at zwitserloot.com> wrote:

For example, I may want to restrict access to the 'logs' directory. I can't restrict it at the OS level (because the JVM does need to write the log files, of course); at best I can restrict it at the module / package / code-line level, allowing the log framework write-only access and denying it everywhere else.

The problem at hand ("I want to treat my log dir as unreadable and unwritable to my own process, except for logging code, which should be allowed to write") cannot be addressed with a 'configure the library' solution, unless the (new) java files API grows a whole bunch of methods to redefine such things, and/or one tries to shove into a custom FileSystem implementation some code that does stack-trace introspection to make this happen... and that still doesn't address the `java.io.File` API.

 --Reinier Zwitserloot

The problem is that this is not doable with the Security Manager, either, except in theory. That the
Security Manager can do this *in principle* (depending on correct use of doPrivileged) is not in dispute.
But experience over the years has shown that people *in practice* aren’t able to get the Security Manager
to do it correctly; all the SM does is get people to *think* they can do it
(see http://www.cs.cmu.edu/~clegoues/docs/coker15acsac.pdf).

Such an approach to security, based on a highly flexible sandbox, is, empirically, not secure, regardless of how
much we’d like it to work. The Security Manager was designed so elaborately because, back when the approach
was new, people believed it could work. But having gained a couple of decades’ worth of experience with
software security and various approaches to it, we now know that, perhaps disappointingly, it just doesn’t work.
In fact, it is worse than just not working: it is insecure while giving a false sense of security.


— Ron



