RFR(S): 8130910: linux/unix: hsperfdata file created in cwd and not cleaned up if /tmp/hsperfdata_<username> has wrong permissions

Daniel D. Daugherty daniel.daugherty at oracle.com
Mon Jul 20 14:18:33 UTC 2015


On 7/17/15 4:58 PM, Volker Simonis wrote:
>
>
> On Saturday, July 18, 2015, Daniel D. Daugherty 
> <daniel.daugherty at oracle.com> wrote:
>
>     On 7/16/15 9:18 AM, Langer, Christoph wrote:
>
>         Hi all,
>
>
>
>         I don't know if this mailing list is the right one for this
>         issue. Let me know if I should post it elsewhere.
>
>
>
>         I have prepared a fix for an issue with /tmp/hsperfdata files.
>
>
>
>         Please review this change.  I also need a sponsor.
>
>
>
>         Bug: https://bugs.openjdk.java.net/browse/JDK-8130910
>
>         Change:
>         http://cr.openjdk.java.net/~simonis/webrevs/2015/8130910/
>
>
>     src/os/aix/vm/perfMemory_aix.cpp
>         No comments.
>
>     src/os/bsd/vm/perfMemory_bsd.cpp
>         No comments.
>
>     src/os/linux/vm/perfMemory_linux.cpp
>         No comments.
>
>     src/os/solaris/vm/perfMemory_solaris.cpp
>         No comments.
>
>
>     I think I'm OK with the code changes above, but I need more
>     time to mull on them. I also need time to mull on the stuff
>     down below.
>
> Hi Dan,
>
> thanks for looking at this change.
>
> Maybe I can add some more context here. Christoph detected this issue 
> when somebody who ran the VM in a security context with a umask that 
> masked out all the executable bits of newly created files complained 
> about the stale files in the current working directory.

Thanks for the context.


>
> I think that Christoph's change is good

I concur that the change is good. I'm simply mulling on potential
side effects.


> and I also share his doubts about changing the current working 
> directory. I think this is questionable because it can have non-local 
> side effects, but it should be addressed in a follow-up change.

Changing the working directory can definitely have non-local side
effects, but the change of directory is a necessary part of the
secure file system object creation protocol. Jerry has a good
comment that explains what APIs we would rather use than changing
the current working directory, but those APIs don't exist on all
platforms (yet).

  348 // NOTE: The code below uses fchdir(), open() and unlink() because
  349 // fdopendir(), openat() and unlinkat() are not supported on all
  350 // versions.  Once the support for fdopendir(), openat() and unlinkat()
  351 // is available on all supported versions the code can be changed
  352 // to use these functions.
  353
  354 // Open the directory of the given path, validate it and set the
  355 // current working directory to it.
  356 // Return a DIR * of the open directory and the saved cwd fd.
  357 //
  358 static DIR *open_directory_secure_cwd(const char* dirname, int *saved_cwd_fd) {
The protocol is basically:

   open_directory_secure_cwd()
   do operations on files in the directory using relative paths
   close_directory_secure_cwd()

The alternative of using full paths is difficult to do safely because
it leaves the individual path elements exposed to attack.
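
In sketch form, with the function names from the webrev but an
otherwise made-up body (illustrative only; "dirname" and "filename"
are placeholders and the close_directory_secure_cwd() signature is
assumed):

  int saved_cwd_fd = -1;
  DIR* dirp = open_directory_secure_cwd(dirname, &saved_cwd_fd);
  if (dirp != NULL) {
    // The cwd is now the validated hsperfdata directory, so plain
    // relative-path calls cannot be redirected through a hostile
    // path component.
    int fd = open(filename, O_RDWR | O_CREAT | O_NOFOLLOW, 0600);
    if (fd != -1) {
      // ... write / mmap the performance data ...
      close(fd);
    }
    // Restore the original cwd and release the directory handles.
    close_directory_secure_cwd(dirp, saved_cwd_fd);
  }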

Dan


>
> Thanks,
> Volker
>
>     Will get back to this next week.
>
>     Dan
>
>
>
>
>
>         The problem is that for creating a file in
>         /tmp/hsperfdata_<username>, an fchdir to this directory is
>         attempted. If there is no execute permission on
>         /tmp/hsperfdata_<username>, the fchdir fails. Because its
>         return code has not been checked so far, the process stays in
>         its current working directory and creates the file there. The
>         location remembered for cleanup at VM exit is still
>         /tmp/hsperfdata..., so the file is eventually left over. There
>         are several checks for the hsperfdata location in place, but
>         none of them catches this case and prevents the fchdir attempt
>         beforehand.
>
>
>
>         I fixed it by checking the fchdir return code and, in case of
>         failure, closing the handles and returning NULL. In that case
>         we would not have shared PerfMemory, but I think that is
>         acceptable.
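
(In sketch form, the check described above amounts to something like
the following. This is only an illustration of the approach, with
placeholder names, not the actual webrev diff.)

  // If we cannot chdir into the perf directory, release what was
  // opened and report failure instead of silently continuing to work
  // in the old current working directory.
  if (fchdir(dirfd(dirp)) == -1) {
    close(saved_cwd_fd);   // drop the saved cwd fd again
    closedir(dirp);        // drop the directory handle
    return NULL;           // the caller then skips shared PerfMemory
  }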
>
>
>
>         I thought about a test case, but it would involve messing
>         around with the /tmp/hsperfdata permissions, which can also
>         affect other running JVMs, and that seems a bit dangerous.
>
>
>
>         Generally I'm not sure whether doing an fchdir at all is a
>         good idea, but I don't know much about the background here...
>         maybe someone wants to comment on that as well.
>
>
>
>         Thanks and best regards,
>
>         Christoph
>
>
>
>


