<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
What you’re saying is: the SM is good for helping with trusted code, i.e. not for the threat that shaped its design. I’m saying:
<div class="">perhaps, but other techniques are better, namely OS-level sandboxing and deep monitoring (based on JFR, which</div>
<div class="">will need to be extended). Why? Because the SM is far too elaborate, complex, and sensitive a mechanism to defend</div>
<div class="">against careless coders or hard-to-spot vulnerabilities.</div>
<div class=""><br class="">
</div>
<div class="">
<div class="">Again, the question isn’t about what’s possible in theory, but about what code people can be expected to write in</div>
<div class="">practice, and what we actually see, in the very rare cases SM is used at all, is policy files clearly written by “let’s</div>
<div class="">add permissions until the code runs.” Such policy files don’t really do what you intend them to do.</div>
</div>
<div class=""><br class="">
</div>
<div class="">Your log-dir scenario precisely highlights this. You say that you can do something that OS-level sandboxing won’t</div>
<div class="">allow, and while the SM does support this use-case, practice shows that it is used incorrectly. To control access to</div>
<div class="">the log directory, *on top of guarding all access to the log dir with doPrivileged*, if the application uses CompletableFuture, </div>
<div class="">any “reactive” framework, or thread pools of any kind, really, the application must also be carefully programmed with AccessController.getContext() and doPrivileged(access, context) that are used anywhere a task might move among </div>
<div class="">threads. Without this, access wouldn't be controlled correctly. Now, remember that because the code is otherwise</div>
<div class="">trusted, you want to protect against problems due to bugs, but to protect against them, you mustn’t have bugs in</div>
<div class="">this complex context setup process.</div>
<div class=""><br class="">
</div>
<div class="">How many applications that you know actually do that? Put simply, are you describing something that you envision </div>
<div class="">people doing or one that you know people actually regularly do? If it isn’t done — and there’s good reason why it isn’t</div>
<div class="">— then it provides no security at all.</div>
<div class=""><br class="">
</div>
<div class="">Here’s how you need to do it: set up an OS-level sandbox that allows access to the log directory, and use JFR to </div>
<div class="">monitor file activity. If you see activity with a suspicious stack trace, investigate it and fix the bug that’s caused the</div>
<div class="">vulnerability. The belief that complex code you have to write will save you from bugs you have in other complex code</div>
<div class="">*before they can manifest* is one that needs good evidence that is actually effective in practice, not just possible in </div>
<div class="">theory. </div>
<div class=""><br class="">
</div>
<div class="">If you want to make a compelling case, show us what people *do*, not what you think they *could* do. We already know </div>
<div class="">what the SM was designed to do and what it could do, but by now we’re largely convinced that it doesn’t actually do it. </div>
<div class=""><br class="">
</div>
<div class="">You are absolutely right to worry about the things you mention, and because they are so worrisome they should be handled </div>
<div class="">by components that can actually get the job done better than the SM.</div>
<div class=""><br class="">
</div>
<div class="">— Ron<br class="">
<div class="">
<div><br class="">
<blockquote type="cite" class="">
<div class="">On 23 Apr 2021, at 14:41, Reinier Zwitserloot <<a href="mailto:reinier@zwitserloot.com" class="">reinier@zwitserloot.com</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div dir="ltr" class="">
<div dir="ltr" class="">> Ron Pressler wrote:
<div class="">> The problem is that this is not doable with the Security Manager, either, except in theory.</div>
<div class=""><br class="">
</div>
<div class="">Security is not that simple. Yes, there are ways to beat the security manager - it is not a black and white scenario where a SecurityManager is a complete guarantee against any and all vulnerabilities. But a SecurityManager can stop many vulnerabilities
(just not all of them; virtually no security policies get to reasonably make such a claim, though!).<br class="">
</div>
<div class=""><br class="">
</div>
<div class="">The paper appears to be primarily about how you can work around SecurityManager based restrictions in code. In other words, it requires either untrusted code to run (when we're talking applets or the running of plugins supplied by untrusted third
parties - but as has been covered in this thread, that's not part of the use case here), or for a separate vulnerability to exist that allows an attacker to run arbitrary bytecode, (or perhaps a dev team that is exploiting their own mitigations, but if we
are to assume a sufficiently dysfunctional team, the line between malicious attacker and authorised committer is as good as gone and there is not much point talking about attack mitigation strategies in the first place). That's quite the big axiom! In the
vast majority of attack scenarios, the attacker does not (yet) have the ability to run arbitrary bytecode. But they did gain the ability to e.g. coerce some part of a web server to return the file content of a path chosen by the attacker.</div>
<div class=""><br class="">
</div>
<div class="">In other words, that's not the attack surface that the use-case 'use the SM to restrict access to files' is designed to protect against. Instead, it's targeted at a combination of careless coders / strong reminding of policy rules amongst a dev
team, and to fight a large class of attacks, such as coercing the server to return file contents. That class does _not_ include attacks where the attacker runs arbitrary bytecode, however.</div>
<div class=""><br class="">
</div>
<div class="">To fight against attacks and to enforce directives amongst the dev team, I can restrict the process itself using OS-level mechanisms as being incapable of seeing any directories, except the ones it actually has a legitimate need to read and/or
write to. Unfortunately, the log dir is one of those 'legitimate' dirs, so I can't restrict that at the OS level. With a SecurityManager, I can still restrict it. I can elect to install a SecurityManager that blanket denies any and all attempts to read from
the log dir, and denies all attempts to list or write to the log dir unless stack trace analysis shows it's coming from the log system.</div>
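<div class=""><br class="">
</div>
<div class="">A minimal sketch of that idea (the path and the log system's package name are placeholders for the real ones; a production version would delegate everything else to the configured policy rather than silently allow it):</div>
<pre class="">
import java.io.FilePermission;
import java.security.Permission;

class LogDirGuard extends SecurityManager {
    private static final String LOG_DIR = "/var/log/myapp";

    @Override
    public void checkPermission(Permission perm) {
        if (perm instanceof FilePermission &amp;&amp; perm.getName().startsWith(LOG_DIR)) {
            // Listing shows up as a "read" of the directory path itself.
            boolean isListing = perm.getName().equals(LOG_DIR);
            if (perm.getActions().contains("read") &amp;&amp; !isListing) {
                throw new SecurityException("log dir is unreadable: " + perm.getName());
            }
            // Listing and writing are allowed only when the log system
            // itself is somewhere on the call stack.
            for (Class&lt;?&gt; frame : getClassContext()) {
                if (frame.getName().startsWith("com.example.logging.")) {
                    return;
                }
            }
            throw new SecurityException("only the log system may touch the log dir");
        }
        // Everything else is permitted in this simplified sketch.
    }
}
</pre>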
<div class=""><br class="">
</div>
<div class="">This SecurityManager based mitigation, as far as I am aware, even taking into account the 'Evaluating the Flexibility of the Java Sandbox' paper, fully stops:</div>
<div class=""><br class="">
</div>
<div class="">* Misunderstandings within the dev team; it lets the design of the app encode, more or less as a guarantee, that neither any dev on the team directly (by programming it) or indirectly (by including a dependency and configuring it in such a way)
is ever going to independently write logs, or that any code running anywhere in the VM is going to read from the log files, at least via the route of 'someone on the team thought it was an acceptable solution to some problem they had to do that'. *1</div>
<div class=""><br class="">
</div>
<div class="">* Misunderstandings between dev team and third party dep authors: If library that does end up reading a log dir (or perhaps more likely: A misunderstanding about how an end-user supplied string value ends up being conveyed to the library such
that it lets the end user make the library read log files).</div>
<div class=""><br class="">
</div>
<div class="">* Any vulnerabilities that allow an attacker to coerce the server to return the contents of files on the file system from being used to let the attacker see log files (or any other file from a directory not explicitly allowed by the SM setup).</div>
<div class=""><br class="">
</div>
<div class="">It (probably) does not stop:</div>
<div class=""><br class="">
</div>
<div class="">* A vulnerability that lets an attacker run arbitrary bytecode on the VM.</div>
<div class=""><br class="">
</div>
<div class="">* Malicious libraries *1.</div>
<div class=""><br class="">
</div>
<div class="">* a vulnerability which lets a user inject arbitrary strings into the log file by making some exploitable code run e.g. `LOG.d(stringUnderFullHackerControl);`, but this is vastly less significant than an attacker that can read them, or if e.g.
the log system itself has a webserver view component for admins to directly view the logs, from vulnerabilities in this frontend system.</div>
<div class=""><br class="">
</div>
<div class="">I don't think a security measure should be mostly disregarded just because it doesn't stop _every_ imaginable attack. The ones this stops are surely worthwhile to stop (I have made these mitigations part of my security setup in server deployments
and part of my ISO27k1-esque certification documentation on how to ensure that the dev team follows the security directives).</div>
<div class=""><br class="">
</div>
<div class="">*1) The only ways this guarantee can be broken is if a library is intentionally working around it e.g. using techniques as set forth in that paper, which feels like it falls in the same category and has the same mitigations as any third party
dependency that decides to include intentionally malicious code in there: Code review, CVE, and guardianship from repo owners such as sonatype's maven-central. In other words, that's a different class of attack and is not something that the SM, at least for
this use-case, is meant to mitigate.</div>
<div class=""><br class="">
</div>
<div class="">
<div class="">
<div dir="ltr" class="gmail_signature"> --Reinier Zwitserloot<br class="">
</div>
</div>
<br class="">
</div>
</div>
</div>
<br class="">
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, 22 Apr 2021 at 19:43, Ron Pressler <<a href="mailto:ron.pressler@oracle.com" class="">ron.pressler@oracle.com</a>> wrote:<br class="">
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<div style="word-wrap:break-word;line-break:after-white-space" class=""><br class="">
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">On 22 Apr 2021, at 18:27, Reinier Zwitserloot <<a href="mailto:reinier@zwitserloot.com" target="_blank" class="">reinier@zwitserloot.com</a>> wrote:</div>
<div class="">
<div dir="ltr" class="">
<div dir="ltr" class="">
<div class=""><br class="">
</div>
<div class="">For example, I may want to restrict access to the 'logs' directory. I can't restrict it at the OS level (because the JVM does need to write the log files, of course), at best I can restrict it at the module / package / code line level, allowing
the log framework write-only access, and deny it everywhere else.</div>
<div class=""><br class="">
</div>
<div class="">The problem at hand ("I want to treat my log dir as unreadable and unwriteable to my own process, except for logging code, which should be allowed to write") cannot be address with a 'configure the library' solution, unless the java (new) files
API grows a whole bunch of methods to redefine such things, and/or to try to shove into a custom FileSystem implementation some code that does stack trace introspection to try to make this happen.... and that still doesn't address the `java.io.File` API.</div>
<div class=""><br class="">
</div>
<div class="">
<div class="">
<div dir="ltr" class=""> --Reinier Zwitserloot<br class="">
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br class="">
<div class="">The problem is that this is not doable with the Security Manager, either, except in theory. That the</div>
<div class="">Security Manager can do this *in principle* (depending on correct use of doPrivileged) is not in dispute. </div>
<div class="">But experience over they years has shown that people, *in practice* aren’t able to get Security Manager </div>
<div class="">to do that correctly; all the SM does is get people to *think* they can do it </div>
<div class="">(see <a href="https://urldefense.com/v3/__http://www.cs.cmu.edu/*clegoues/docs/coker15acsac.pdf__;fg!!GqivPVa7Brio!IRVcL32nrtah7k1tMeB1B2yKpKrRaobbCYu7QCCWxEQ6DOJl_QfnDEkRty00SqubfA$" target="_blank" class="">http://www.cs.cmu.edu/~clegoues/docs/coker15acsac.pdf</a>).</div>
<div class=""><br class="">
</div>
<div class="">Such an approach to security based on a highly-flexible sandbox is, empirically, not secure regardless of how </div>
<div class="">much we’d like it to work; Security Manager was designed so elaborately, because back then, when the approach </div>
<div class="">was new, people believed it could work. But having gained a couple of decades’ worth of experience with </div>
<div class="">software security and various approaches to it, we now that, perhaps disappointingly, it just doesn’t work.</div>
<div class="">In fact, it is worse than just not working — it’s insecure while giving a false sense of security.</div>
<div class=""><br class="">
</div>
<div class=""><br class="">
</div>
<div class="">— Ron</div>
<div class=""><br class="">
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
</div>
</body>
</html>