[RFC][Icedtea-web]: Changed how cache works.

Omair Majid omajid at redhat.com
Mon Apr 4 10:43:03 PDT 2011


On 04/04/2011 12:14 PM, Andrew Su wrote:
> Oops forgot to attach patch.
>
> ----- Original Message -----
>> >  Hi,
>> >
>> >  I have attached a patch for changing how cache stores its entries to
>> >  fix some problems that existed in the current cache system.
>> >
>> >  Right now the jar files may be overwritten by another plugin/javaws
>> >  when a new version is pushed while the user is using an old version
>> >  (prior update) then open another instance of the applet/javaws. With
>> >  the patch this gets fixed by creating a new location for the update to
>> >  be downloaded to instead of overwriting the current file.
>> >
>> >  [I'm going to use applet, but assume I mean applet+javaws]
>> >  Another issue with the current one is.. When loading the same applet
>> >  twice (or more), it could detect that it's not current and both applet
>> >  would download to the same place. In the patch, we will wait for the
>> >  download to finish first then check if we have already downloaded it.
>> >  If so just proceed without re-downloading.
>> >
>> >  This patch makes the cache keep track of the most recently used items
>> >  and remove items until we hit the cache limit if set.
>> >

I think this patch would be easier to review if you split it into two 
patches: one to fix the problem of files being overwritten, and another 
to enforce cache size limits.

>> >  Changelog:
>> >
>> >  2011-04-04 Andrew Su<asu at redhat.com>
>> >
>> >  * netx/net/sourceforge/jnlp/cache/CacheEntry.java:
>> >  (markForDelete, lock, unlock): New methods.
>> >  * netx/net/sourceforge/jnlp/cache/CacheUtil.java:
>> >  (getCacheFile): Changed to look for most recently used item, and when
>> >  not found create a new entry.
>> >  (makeNewCacheFile, cleanCache, lockFile, unlockFile,
>> >  lockRecentlyUsedFile, unlockRecentlyUsedFile): New methods.
>> >  * netx/net/sourceforge/jnlp/cache/ResourceTracker.java:
>> >  (downloadResource): Changed to wait if we are already downloading the
>> >  file. If file downloaded is compressed, mark compressed and extracted
>> >  files for delete at shutdown.
>> >  (initializeResource): Added checks for when the cache needs updating
>> >  to create new folder entry.
>> >  * netx/net/sourceforge/jnlp/runtime/JNLPClassLoader.java:
>> >  (JNLPClassLoader): Change order so that we get resources first then
>> >  check permissions for the jars.
>> >  * netx/net/sourceforge/jnlp/runtime/JNLPRuntime.java:
>> >  (markNetxRunning): Add call to clean up cache directory.
>> >  * netx/net/sourceforge/jnlp/util/FileUtils.java:
>> >  (getFileLock): New method.
>> >
>> >  Any questions or comments welcome.
>> >

I have noted some comments below.

>> >  Cheers,
>> >  Andrew
>
>
> 20110404_cache_v8.patch
>
>
> diff -r 30276d468815 ChangeLog
> --- a/ChangeLog	Fri Apr 01 12:57:18 2011 -0400
> +++ b/ChangeLog	Mon Apr 04 12:13:32 2011 -0400
> @@ -1,3 +1,26 @@
> +2011-04-04  Andrew Su<asu at redhat.com>
> +
> +	* netx/net/sourceforge/jnlp/cache/CacheEntry.java:
> +	(markForDelete, lock, unlock): New methods.
> +	* netx/net/sourceforge/jnlp/cache/CacheUtil.java:
> +	(getCacheFile): Changed to look for most recently used item, and when
> +	not found create a new entry.
> +	(makeNewCacheFile, cleanCache, lockFile, unlockFile,
> +	lockRecentlyUsedFile, unlockRecentlyUsedFile): New methods.
> +	* netx/net/sourceforge/jnlp/cache/ResourceTracker.java:
> +	(downloadResource): Changed to wait if we are already downloading the
> +	file. If file downloaded is compressed, mark compressed and extracted
> +	files for delete at shutdown.
> +	(initializeResource): Added checks for when the cache needs updating
> +	to create new folder entry.
> +	* netx/net/sourceforge/jnlp/runtime/JNLPClassLoader.java:
> +	(JNLPClassLoader): Change order so that we get resources first then
> +	check permissions for the jars.
> +	* netx/net/sourceforge/jnlp/runtime/JNLPRuntime.java:
> +	(markNetxRunning): Add call to clean up cache directory.
> +	* netx/net/sourceforge/jnlp/util/FileUtils.java:
> +	(getFileLock): New method.
> +
>   2011-04-01  Denis Lila<dlila at redhat.com>
>
>   	* plugin/icedteanp/java/sun/applet/PluginDebug.java:
> diff -r 30276d468815 netx/net/sourceforge/jnlp/cache/CacheEntry.java
> --- a/netx/net/sourceforge/jnlp/cache/CacheEntry.java	Fri Apr 01 12:57:18 2011 -0400
> +++ b/netx/net/sourceforge/jnlp/cache/CacheEntry.java	Mon Apr 04 12:13:32 2011 -0400
> @@ -162,4 +162,25 @@
>           properties.store();
>       }
>
> +    /**
> +     * This adds an entry to the properties file to indicate we should delete this on exit.
> +     */
> +    public void markForDelete() { // once marked it should not be unmarked.

I'm not sure if this comment is meant for someone calling markForDelete, 
or if it is an implementation detail.

> +        properties.setProperty("delete", Boolean.toString(true));
> +    }
> +
> +    /**
> +     * This locks the properties file so only one thing can access this entry at a time.
> +     */
> +    protected void lock() {
> +        CacheUtil.lockFile(properties);
> +    }
> +
> +    /**
> +     * This unlocks the properties file.
> +     */
> +    protected void unlock() {
> +        CacheUtil.unlockFile(properties);
> +    }
> +
>   }
> diff -r 30276d468815 netx/net/sourceforge/jnlp/cache/CacheUtil.java
> --- a/netx/net/sourceforge/jnlp/cache/CacheUtil.java	Fri Apr 01 12:57:18 2011 -0400
> +++ b/netx/net/sourceforge/jnlp/cache/CacheUtil.java	Mon Apr 04 12:13:32 2011 -0400
> @@ -21,6 +21,8 @@
>   import java.io.*;
>   import java.net.*;
>   import java.nio.channels.FileChannel;
> +import java.nio.channels.FileLock;
> +import java.nio.channels.OverlappingFileLockException;
>   import java.util.*;
>   import java.security.*;
>   import javax.jnlp.*;
> @@ -29,6 +31,7 @@
>   import net.sourceforge.jnlp.config.DeploymentConfiguration;
>   import net.sourceforge.jnlp.runtime.*;
>   import net.sourceforge.jnlp.util.FileUtils;
> +import net.sourceforge.jnlp.util.PropertiesFile;
>
>   /**
>    * Provides static methods to interact with the cache, download
> @@ -39,6 +42,15 @@
>    */
>   public class CacheUtil {
>
> +    public static final String cacheDir = new File(JNLPRuntime.getConfiguration().getProperty(DeploymentConfiguration.KEY_USER_CACHE_DIR)).getPath(); // Do this with file to standardize it.
> +    public static final String recentlyUsed = cacheDir + File.separator + "RecentlyUsed";
> +

Please don't make variables that contain sensitive information (like the 
user's home directory) public.

What happens if someone accidentally uses the CacheUtil class before 
JNLPRuntime.initialize() is called? I think cacheDir will be null at 
that point.

I don't understand the comment... are you referring to canonicalizing 
paths? If so, getCanonicalPath() might be more appropriate.
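A lazy holder would sidestep the class-initialization-order problem entirely. A generic sketch of the idiom (self-contained, so a system property stands in for the JNLPRuntime configuration lookup; all names here are mine, not from the patch):

```java
// Sketch of the initialization-on-demand holder idiom. In CacheUtil, the
// computeDir() body would be the JNLPRuntime configuration lookup; here a
// system property stands in so the example runs on its own.
final class LazyCacheDir {
    private LazyCacheDir() {
    }

    // The nested class is only loaded (and DIR only computed) on first
    // access, i.e. after the runtime has had a chance to initialize.
    private static class Holder {
        static final String DIR = computeDir();
    }

    static String get() {
        return Holder.DIR;
    }

    private static String computeDir() {
        // Stand-in for JNLPRuntime.getConfiguration().getProperty(...)
        return System.getProperty("java.io.tmpdir");
    }
}
```

That way nothing is resolved at CacheUtil class-load time, and the field can also stay non-public.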

> +    private static PropertiesFile cacheOrder = new PropertiesFile(new File(cacheDir + File.separator + "RecentlyUsed"));
> +    private static HashMap<String, FileLock> propertiesLockPool = new HashMap<String, FileLock>();
> +

Perhaps these should be final?

> +    private static FileLock fl = null;
> +    private static int lockCount = 0;
> +
>       /**
>        * Compares a URL using string compare of its protocol, host,
>        * port, path, query, and anchor.  This method avoids the host
> @@ -138,8 +150,7 @@
>               return;
>           }
>
> -        File cacheDir = new File(JNLPRuntime.getConfiguration()
> -                .getProperty(DeploymentConfiguration.KEY_USER_CACHE_DIR));
> +        File cacheDir = new File(CacheUtil.cacheDir);
>           if (!(cacheDir.isDirectory())) {
>               return;
>           }
> @@ -284,18 +295,86 @@
>           if (!isCacheable(source, version))
>               throw new IllegalArgumentException(R("CNotCacheable", source));
>
> -        try {
> -            String cacheDir = JNLPRuntime.getConfiguration()
> -                    .getProperty(DeploymentConfiguration.KEY_USER_CACHE_DIR);
> -            File localFile = urlToPath(source, cacheDir);
> -            FileUtils.createParentDir(localFile);
> +        File cacheFile = null;
> +        File urlPath = urlToPath(source, "");
>
> -            return localFile;
> -        } catch (Exception ex) {
> -            if (JNLPRuntime.isDebug())
> -                ex.printStackTrace();
> +        lockRecentlyUsedFile();
> +        String folder = "";
> +        synchronized (cacheOrder) { // We can only look for one cache file at a time. Since if we open same applet twice same plugin that causes similar problems.

Please try to keep lines under 80 chars, unless there is a good reason.

> +            cacheOrder.load(); // Update hashtable.

In complex code like this, please include comments that explain why you 
are doing things, not what you are doing.

> +            Set<Object> set = cacheOrder.keySet();
> +            if (set.size() > 0) { // There are items cached, we need to look through them.
> +                String[] keys = set.toArray(new String[0]);
> +                List<String> listKeys = Arrays.asList(keys);
> +                Collections.sort(listKeys);

Can you please document the format of the RecentlyUsed file?
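For reference, what I can infer from the patch is that each line maps a key of the form <millisecond timestamp><folder index> to the absolute path of the cached file, something like (timestamps and paths invented):

```
# RecentlyUsed (inferred layout, illustrative values only)
13018920001230=/home/user/.icedtea/cache/0/http/example.com/applet.jar
13018920005671=/home/user/.icedtea/cache/1/http/example.com/applet.jar
```

If that is right, please say so in a comment near the code that writes it.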

>
> -            return null;
> +                // We now want to start searching from the most recent to least recent.
> +                for (int i = listKeys.size() - 1; i >= 0; i--) {
> +                    String str = cacheOrder.getProperty(listKeys.get(i));
> +                    if (str != null) {
> +                        str = str.substring(cacheDir.length());
> +                        int index = str.indexOf(File.separatorChar, 1);
> +                        folder = str.substring(1, index);
> +                        str = str.substring(index);
> +
> +                        if (str.equals(urlPath.getPath())) { // We found a match. Don't know if it is old though.
> +                            cacheFile = new File(cacheOrder.getProperty(listKeys.get(i)));
> +                            cacheOrder.remove(listKeys.get(i)); // We can remove this since we will add it back as updated.
> +                            break; // Stop searching since we got newest one already.
> +                        }
> +                    }
> +                }
> +            }
> +
> +            if (cacheFile != null) { // We found a file above.
> +                // We requested it recently so mark it as recent.
> +                // We append folder to guarantee uniqueness. Since it might be too quick in calculations.
> +                cacheOrder.put(Long.toString(System.currentTimeMillis()) + folder, cacheFile.getPath());

Hmm... it's not obvious to me what ensures that the combination 
"currentTimeMillis() + folder" is unique. Will this assumption still 
hold on machines that are much faster?
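A quick sketch of the two failure modes I can see (illustrative code only, not from the patch): two lookups in the same millisecond collide, and bare string concatenation is ambiguous between the timestamp and the folder index:

```java
// Illustrative only: the key scheme from the patch, pulled out so the
// collision and the ambiguity are easy to see.
public class CacheKeyDemo {
    static String makeKey(long timeMillis, String folder) {
        return Long.toString(timeMillis) + folder;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Two lookups for the same folder in the same millisecond produce
        // the same key, so the second put() silently overwrites the first.
        System.out.println(makeKey(now, "3").equals(makeKey(now, "3"))); // true

        // Concatenation is also ambiguous: time 100 with folder 23 yields
        // the same key as time 1002 with folder 3 ("10023").
        System.out.println(makeKey(100L, "23").equals(makeKey(1002L, "3"))); // true
    }
}
```

A separator between the timestamp and the folder index, plus a tie-breaking counter, would remove both problems.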

> +                cacheOrder.store();
> +            } else { // We did not find a copy of it. This will also add it to list.
> +                cacheFile = makeNewCacheFile(source, version);
> +            }
> +        }

Seems to me like this method is trying to do too much. Would it be 
possible to move the policy (LRU) somewhere else? I think it would be 
better to have one class responsible for the actual policy. Perhaps 
another class or method for extracting and adding entries would make 
this code a lot clearer to read.
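For example (a rough sketch only; the names are mine and syncing to the on-disk RecentlyUsed file is deliberately left out), a LinkedHashMap in access order already does the recency bookkeeping:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;

// Rough sketch of a dedicated LRU index class. All names are illustrative
// and persistence to the RecentlyUsed file is omitted on purpose.
public class LeastRecentlyUsedIndex {
    // accessOrder=true: iteration runs from least- to most-recently-accessed
    private final LinkedHashMap<String, String> entries =
            new LinkedHashMap<String, String>(16, 0.75f, true);

    /** Record (or refresh) the cache location for a url path. */
    public synchronized void touch(String urlPath, String cacheFilePath) {
        entries.put(urlPath, cacheFilePath);
    }

    /** @return the location of the newest copy, or null if not cached. */
    public synchronized String mostRecentLocation(String urlPath) {
        return entries.get(urlPath); // get() also marks the entry as used
    }

    /** Drop least-recently-used entries until at most maxEntries remain. */
    public synchronized void evictDownTo(int maxEntries) {
        Iterator<String> it = entries.keySet().iterator();
        while (entries.size() > maxEntries && it.hasNext()) {
            it.next();
            it.remove(); // iteration order starts at the LRU entry
        }
    }
}
```

Then getCacheFile reduces to "look up, create if missing, touch", and the size enforcement in cleanCache becomes a call to the eviction method (by total bytes rather than entry count in the real thing).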

> +        unlockRecentlyUsedFile();
> +        return cacheFile;
> +    }
> +
> +    /**
> +     * This will create a new entry for the cache item. It is however
> +     * not initialized but any future calls to getCacheFile with the
> +     * source and version given to here, will cause it to return this item.
> +     * @param source the source URL
> +     * @param version the version id of the local file
> +     * @return the file location in the cache.
> +     */
> +    public static File makeNewCacheFile(URL source, Version version) {
> +        synchronized (cacheOrder) {
> +            lockRecentlyUsedFile();
> +            cacheOrder.load();
> +
> +            File cacheFile = null;
> +            for (long i = 0; i < Long.MAX_VALUE; i++) {
> +                String path = cacheDir + File.separator + i;
> +                File cDir = new File(path);
> +                if (!cDir.exists()) {

What cleans up this directory?

> +                    // We can use this directory.
> +                    try {
> +                        cacheFile = urlToPath(source, path);
> +                        FileUtils.createParentDir(cacheFile);
> +                        File pf = new File(cacheFile.getPath() + ".info");
> +                        FileUtils.createRestrictedFile(pf, true); // Create the info file for marking later.
> +                        cacheOrder.setProperty(Long.toString(System.currentTimeMillis()) + i, cacheFile.getPath());
> +                        cacheOrder.store();
> +                    } catch (IOException ioe) {
> +                        ioe.printStackTrace();
> +                    }
> +
> +                    break;
> +                }
> +            }
> +
> +            unlockRecentlyUsedFile();
> +            return cacheFile;
>           }
>       }
>
> @@ -436,4 +515,151 @@
>           }
>       }
>
> +    public static void cleanCache() {
> +        if (okToClearCache()) {
> +            long maxSize = -1;
> +            try {
> +                maxSize = Long.parseLong(JNLPRuntime.getConfiguration().getProperty("deployment.cache.max.size"));
> +                maxSize = maxSize * 1024 * 1024;
> +            } catch (NumberFormatException nfe) {
> +                // This is fine, we will just default to infinite cache.
> +            }
> +            long totalSize = 0;
> +
> +            // First we want to figure out which stuff we need to delete.
> +            // There should not be any duplicate entries in cacheOrder, since we always clear old when setting new.
> +            // Thus we don't need to search through that.
> +            // We now go through all the entries and pick out the ones that aren't marked for delete starting from the back.
> +            HashSet<String> keep = new HashSet<String>();
> +            cacheOrder.load();
> +            Set<Object> set = cacheOrder.keySet();
> +            String[] keys = set.toArray(new String[0]);
> +            List<String> listKeys = Arrays.asList(keys);
> +            Collections.sort(listKeys);

This code looks identical to the one above. I am qutie convinced this 
should be abstracted away.
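Something like the following shared helper (an illustrative sketch; like the patch, it assumes a string sort of the timestamp-prefixed keys approximates recency order) would let both call sites iterate forward from the newest entry:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Illustrative sketch of a shared helper for the duplicated
// "collect keys, sort, walk newest first" code in getCacheFile and
// cleanCache. All names are mine.
final class CacheIndexKeys {
    static List<String> newestFirst(Set<Object> keySet) {
        List<String> keys = new ArrayList<String>();
        for (Object key : keySet) {
            keys.add(String.valueOf(key));
        }
        // Descending sort, so callers iterate forward from the newest key
        Collections.sort(keys, Collections.reverseOrder());
        return keys;
    }
}
```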

> +            // We now want to start searching from the most recent to least recent.
> +            for (int i = listKeys.size() - 1; i>= 0; i--) {
> +                // Check if the item is contained in cacheOrder.
> +                String str = cacheOrder.getProperty(listKeys.get(i));
> +
> +                if (str != null) {
> +                    File temp = new File(str);
> +                    PropertiesFile pf = new PropertiesFile(new File(str + ".info"));
> +                    boolean delete = Boolean.parseBoolean(pf.getProperty("delete"));
> +
> +                    // This will get me the root directory specific to this cache item.
> +                    String rStr = str.substring(cacheDir.length());
> +                    rStr = cacheDir + rStr.substring(0, rStr.indexOf(File.separatorChar, 1));
> +
> +                    if (temp.isFile() && !delete && (maxSize < -1 || totalSize + temp.length() < maxSize) && !keep.contains(str.substring(rStr.length()))) { // File exist, might want to keep.

Please split this across multiple lines. Perhaps even better, can you 
write a method (with a more descriptive name) that evaluates this 
conditional?
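For example (parameter names are mine; note also that the patch's `maxSize < -1` looks like it was meant to be `maxSize < 0`, since maxSize defaults to -1 when no limit is configured, and -1 < -1 is never true):

```java
import java.io.File;

// Illustrative extraction of the keep/delete conditional into a named
// method. Parameter names are mine; maxSize < 0 (rather than the patch's
// maxSize < -1) treats the default of -1 as "no size limit".
final class CacheCleanupPolicy {
    static boolean shouldKeep(File cached, boolean markedForDelete,
                              long totalSize, long maxSize,
                              boolean alreadyKept) {
        if (!cached.isFile() || markedForDelete || alreadyKept) {
            return false;
        }
        boolean unlimited = maxSize < 0;
        return unlimited || totalSize + cached.length() < maxSize;
    }
}
```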

> +                        totalSize += temp.length();
> +                        // Since we are keeping it, we will delete any extra files extracted by JNLPClassLoader.activateJars().
> +                        for (File f : temp.getParentFile().listFiles()) {
> +                            if (!f.getPath().equals(temp.getPath()) && !f.getPath().equals(pf.getStoreFile().getPath())) {
> +                                try {
> +                                    FileUtils.recursiveDelete(f, f);
> +                                } catch (IOException e) {
> +                                    e.printStackTrace();
> +                                }
> +                            }
> +                        }
> +
> +                        keep.add(str.substring(rStr.length()));
> +                        keep.add(rStr); // We can just use the same map, since these two things are disjoint with each other.
> +                    } else {
> +                        temp = new File(rStr);
> +                        try {
> +                            FileUtils.recursiveDelete(temp, temp);
> +                        } catch (IOException e) { // It's ok if we can't delete the actual directory as long as files are deleted. It would just look messy.
> +                            e.printStackTrace();
> +                        }
> +                        cacheOrder.remove(listKeys.get(i));
> +                    }
> +                }
> +            }
> +            cacheOrder.store();
> +
> +            // Not all items will exist in cacheOrder, thus we will need to remove them manually, but a majority of them should be removed by the above segment.
> +            File temp = new File(cacheDir);
> +            // Remove all folder not listed in keep.
> +            for (File f : temp.listFiles()) {
> +                String path = f.getPath();
> +                temp = new File(path);
> +                if (temp.isDirectory() && !keep.contains(temp.getPath())) {
> +                    try {
> +                        FileUtils.recursiveDelete(temp, temp);
> +                    } catch (IOException e) {
> +                        e.printStackTrace();
> +                    }
> +                }
> +            }

Can you elaborate a little more on this? What files are we deleting here?

> +        }
> +    }
> +
> +    /**
> +     * Lock the property file and add it to our pool of locks.
> +     * @param properties Property file to lock.
> +     */
> +    public static void lockFile(PropertiesFile properties) {
> +        File storeFile = properties.getStoreFile();
> +        try {
> +            propertiesLockPool.put(storeFile.getPath(), FileUtils.getFileLock(storeFile.getPath(), false, true));
> +        } catch (OverlappingFileLockException e) {
> +        } catch (FileNotFoundException e) {
> +            e.printStackTrace();
> +        }
> +    }
> +
> +    /**
> +     * Unlock the property file and remove it from our pool.
> +     * Nothing happens if it wasn't locked.
> +     * @param properties Property file to unlock.
> +     */
> +    public static void unlockFile(PropertiesFile properties) {
> +        File storeFile = properties.getStoreFile();
> +        FileLock fl = propertiesLockPool.get(storeFile.getPath());
> +        try {
> +            if (fl == null) return;
> +            fl.release();
> +            fl.channel().close();
> +            propertiesLockPool.remove(storeFile.getPath());
> +        } catch (IOException e) {
> +            e.printStackTrace();
> +        }
> +    }
> +
> +    private static void lockRecentlyUsedFile() {
> +        synchronized (recentlyUsed) {
> +            try {
> +                File f = new File(recentlyUsed);
> +                if (!f.exists()) {
> +                    FileUtils.createParentDir(f);
> +                    FileUtils.createRestrictedFile(f, true);
> +                }
> +                fl = FileUtils.getFileLock(f.getPath(), false, true);
> +                lockCount++;
> +            } catch (OverlappingFileLockException e) { // if overlap we just increase the count.
> +                lockCount++;
> +            } catch (Exception e) { // We didn't get a lock..
> +                e.printStackTrace();
> +            }
> +        }
> +    }
> +
> +    private static void unlockRecentlyUsedFile() {
> +        synchronized (recentlyUsed) {
> +            if (fl != null) {
> +                lockCount--;
> +                try {
> +                    if (lockCount == 0) {
> +                        fl.release();
> +                        fl.channel().close();
> +                        fl = null;
> +                    }
> +                } catch (IOException e) {
> +                    e.printStackTrace();
> +                }
> +            }
> +        }
> +    }
> +
>   }
> diff -r 30276d468815 netx/net/sourceforge/jnlp/cache/ResourceTracker.java
> --- a/netx/net/sourceforge/jnlp/cache/ResourceTracker.java	Fri Apr 01 12:57:18 2011 -0400
> +++ b/netx/net/sourceforge/jnlp/cache/ResourceTracker.java	Mon Apr 04 12:13:32 2011 -0400
> @@ -630,6 +630,9 @@
>       private void downloadResource(Resource resource) {
>           resource.fireDownloadEvent(); // fire DOWNLOADING
>
> +        CacheEntry origEntry = new CacheEntry(resource.location, resource.downloadVersion); // This is where the jar file will be.
> +        origEntry.lock();
> +
>           try {
>               // create out second in case in does not exist
>               URL realLocation = resource.getDownloadLocation();
> @@ -666,58 +669,73 @@
>               } else if (gzip) {
>                   downloadLocation = new URL(downloadLocation.toString() + ".gz");
>               }
> +
> +            CacheEntry finalEntry = new CacheEntry(downloadLocation, resource.downloadVersion); // This is where the jar/compressed file will be.
> +            File downloadLocationFile = CacheUtil.getCacheFile(downloadLocation, resource.downloadVersion);
>
> -            InputStream in = new BufferedInputStream(con.getInputStream());
> -            OutputStream out = CacheUtil.getOutputStream(downloadLocation, resource.downloadVersion);
> -            byte buf[] = new byte[1024];
> -            int rlen;
> -
> -            while (-1 != (rlen = in.read(buf))) {
> -                resource.transferred += rlen;
> -                out.write(buf, 0, rlen);
> -            }
> -
> -            in.close();
> -            out.close();
> -
> -            // explicitly close the URLConnection.
> -            if (con instanceof HttpURLConnection)
> -                ((HttpURLConnection) con).disconnect();
> -
> -            /*
> -             * If the file was compressed, uncompress it.
> -             */
> -
> -            if (packgz) {
> -                GZIPInputStream gzInputStream = new GZIPInputStream(new FileInputStream(
> -                        CacheUtil.getCacheFile(downloadLocation, resource.downloadVersion)));
> -                InputStream inputStream = new BufferedInputStream(gzInputStream);
> -
> -                JarOutputStream outputStream = new JarOutputStream(new FileOutputStream(
> -                        CacheUtil.getCacheFile(resource.location, resource.downloadVersion)));
> -
> -                Unpacker unpacker = Pack200.newUnpacker();
> -                unpacker.unpack(inputStream, outputStream);
> -
> -                outputStream.close();
> -                inputStream.close();
> -                gzInputStream.close();
> -            } else if (gzip) {
> -                GZIPInputStream gzInputStream = new GZIPInputStream(new FileInputStream(CacheUtil
> -                        .getCacheFile(downloadLocation, resource.downloadVersion)));
> -                InputStream inputStream = new BufferedInputStream(gzInputStream);
> -
> -                BufferedOutputStream outputStream = new BufferedOutputStream(
> -                        new FileOutputStream(CacheUtil.getCacheFile(resource.location,
> -                                resource.downloadVersion)));
> -
> -                while (-1 != (rlen = inputStream.read(buf))) {
> -                    outputStream.write(buf, 0, rlen);
> +            if (!downloadLocationFile.exists()) {

How well does this interact with the download progress indicators? How 
are they updated?

> +                byte buf[] = new byte[1024];
> +                int rlen;
> +                // Make sure we don't re-download the file. however it will wait as if it was downloading.
> +                // (This is fine because file is not ready yet anyways)
> +                InputStream in = new BufferedInputStream(con.getInputStream());
> +                OutputStream out = CacheUtil.getOutputStream(downloadLocation, resource.downloadVersion);
> +
> +                while (-1 != (rlen = in.read(buf))) {
> +                    resource.transferred += rlen;
> +                    out.write(buf, 0, rlen);
>                   }
> -
> -                outputStream.close();
> -                inputStream.close();
> -                gzInputStream.close();
> +
> +                in.close();
> +                out.close();
> +
> +                // explicitly close the URLConnection.
> +                if (con instanceof HttpURLConnection)
> +                    ((HttpURLConnection) con).disconnect();
> +
> +                /*
> +                 * If the file was compressed, uncompress it.
> +                 */
> +
> +                if (packgz) {
> +                    GZIPInputStream gzInputStream = new GZIPInputStream(new FileInputStream(
> +                            CacheUtil.getCacheFile(downloadLocation, resource.downloadVersion)));
> +                    InputStream inputStream = new BufferedInputStream(gzInputStream);
> +                    origEntry.markForDelete();
> +                    origEntry.store();
> +                    finalEntry.markForDelete();
> +                    finalEntry.store();
> +
> +                    JarOutputStream outputStream = new JarOutputStream(new FileOutputStream(
> +                            CacheUtil.getCacheFile(resource.location, resource.downloadVersion)));
> +
> +                    Unpacker unpacker = Pack200.newUnpacker();
> +                    unpacker.unpack(inputStream, outputStream);
> +
> +                    outputStream.close();
> +                    inputStream.close();
> +                    gzInputStream.close();
> +                } else if (gzip) {
> +                    GZIPInputStream gzInputStream = new GZIPInputStream(new FileInputStream(CacheUtil
> +                            .getCacheFile(downloadLocation, resource.downloadVersion)));
> +                    InputStream inputStream = new BufferedInputStream(gzInputStream);
> +                    origEntry.markForDelete();
> +                    origEntry.store();
> +                    finalEntry.markForDelete();
> +                    finalEntry.store();
> +
> +                    BufferedOutputStream outputStream = new BufferedOutputStream(
> +                            new FileOutputStream(CacheUtil.getCacheFile(resource.location,
> +                                    resource.downloadVersion)));
> +
> +                    while (-1 != (rlen = inputStream.read(buf))) {
> +                        outputStream.write(buf, 0, rlen);
> +                    }
> +
> +                    outputStream.close();
> +                    inputStream.close();
> +                    gzInputStream.close();
> +                }
>               }
>
>               resource.changeStatus(DOWNLOADING, DOWNLOADED);
> @@ -734,6 +752,8 @@
>                   lock.notifyAll(); // wake up wait's to check for completion
>               }
>               resource.fireDownloadEvent(); // fire ERROR
> +        } finally {
> +            origEntry.unlock();
>           }
>       }
>
> @@ -744,6 +764,9 @@
>       private void initializeResource(Resource resource) {
>           resource.fireDownloadEvent(); // fire CONNECTING
>
> +        CacheEntry entry = new CacheEntry(resource.location, resource.requestVersion);
> +        entry.lock();
> +
>           try {
>               File localFile = CacheUtil.getCacheFile(resource.location, resource.downloadVersion);
>
> @@ -755,6 +778,17 @@
>
>               int size = connection.getContentLength();
> +            boolean current = CacheUtil.isCurrent(resource.location, resource.requestVersion, connection) && resource.getUpdatePolicy() != UpdatePolicy.FORCE;
> +            if (!current) {
> +                if (entry.isCached()) {
> +                    entry.markForDelete();
> +                    entry.store();
> +                    localFile = CacheUtil.makeNewCacheFile(resource.location, resource.downloadVersion);
> +                    CacheEntry newEntry = new CacheEntry(resource.location, resource.requestVersion);
> +                    newEntry.lock();
> +                    entry.unlock();
> +                    entry = newEntry;
> +                }
> +            }
>
>               synchronized (resource) {
>                   resource.localFile = localFile;
> @@ -768,7 +802,6 @@
>               }
>
>               // update cache entry
> -            CacheEntry entry = new CacheEntry(resource.location, resource.requestVersion);
>               if (!current)
>                   entry.initialize(connection);
>
> @@ -792,6 +825,8 @@
>                   lock.notifyAll(); // wake up wait's to check for completion
>               }
>               resource.fireDownloadEvent(); // fire ERROR
> +        } finally {
> +            entry.unlock();
>           }
>       }
>
> diff -r 30276d468815 netx/net/sourceforge/jnlp/runtime/JNLPClassLoader.java
> --- a/netx/net/sourceforge/jnlp/runtime/JNLPClassLoader.java	Fri Apr 01 12:57:18 2011 -0400
> +++ b/netx/net/sourceforge/jnlp/runtime/JNLPClassLoader.java	Mon Apr 04 12:13:32 2011 -0400
> @@ -166,11 +166,11 @@
>           // initialize extensions
>           initializeExtensions();
>
> +        initializeResources();
> +
>           // initialize permissions
>           initializePermissions();
>
> -        initializeResources();
> -
>           setSecurity();
>
>           installShutdownHooks();
> diff -r 30276d468815 netx/net/sourceforge/jnlp/runtime/JNLPRuntime.java
> --- a/netx/net/sourceforge/jnlp/runtime/JNLPRuntime.java	Fri Apr 01 12:57:18 2011 -0400
> +++ b/netx/net/sourceforge/jnlp/runtime/JNLPRuntime.java	Mon Apr 04 12:13:32 2011 -0400
> @@ -701,6 +701,7 @@
>           Runtime.getRuntime().addShutdownHook(new Thread() {
>               public void run() {
>                   markNetxStopped();
> +                CacheUtil.cleanCache();
>               }
>           });
>       }
> diff -r 30276d468815 netx/net/sourceforge/jnlp/util/FileUtils.java
> --- a/netx/net/sourceforge/jnlp/util/FileUtils.java	Fri Apr 01 12:57:18 2011 -0400
> +++ b/netx/net/sourceforge/jnlp/util/FileUtils.java	Mon Apr 04 12:13:32 2011 -0400
> @@ -19,7 +19,11 @@
>   import static net.sourceforge.jnlp.runtime.Translator.R;
>
>   import java.io.File;
> +import java.io.FileNotFoundException;
>   import java.io.IOException;
> +import java.io.RandomAccessFile;
> +import java.nio.channels.FileChannel;
> +import java.nio.channels.FileLock;
>
>   import net.sourceforge.jnlp.runtime.JNLPRuntime;
>
> @@ -292,4 +296,42 @@
>
>       }
>
> +    /**
> +     * This will return a lock to the file specified.
> +     *
> +     * @param name Name of the file to get lock from.

Is this the file name or the file path?

> +     * @param shared Specify if the lock will be a shared lock.
> +     * @param allowBlock Specify if we should block when we can not get the
> +     *            lock. No effect when trying to get shared lock.

What does "no effect" mean? Will it always block?

> +     * @return FileLock if we were successful in getting a lock, otherwise null.
> +     * @throws FileNotFoundException If the file does not exist.
> +     */
> +    public static FileLock getFileLock(String name, boolean shared, boolean allowBlock) throws FileNotFoundException {
> +        RandomAccessFile rafFile = new RandomAccessFile(name, "rw");
> +        FileChannel fc = rafFile.getChannel();
> +        FileLock lock = null;
> +        try {
> +            if (!shared) {
> +                if (allowBlock) {
> +                    lock = fc.lock(0, Long.MAX_VALUE, false);
> +                } else {
> +                    lock = fc.tryLock(0, Long.MAX_VALUE, false);
> +                }
> +            } else { // We want shared lock. This will block regardless if allowBlock is true or not.
> +                // Test to see if we can get a shared lock.
> +                lock = fc.lock(0, 1, true); // Block if a non exclusive lock is being held.
> +                if (!lock.isShared()) { // This lock is an exclusive lock. Use alternate solution.
> +                    FileLock tempLock = null;
> +                    for (long pos = 1; tempLock == null && pos < Long.MAX_VALUE - 1; pos++) {
> +                        tempLock = fc.tryLock(pos, 1, false);
> +                    }
> +                    lock.release();
> +                    lock = tempLock; // Get the unique exclusive lock.
> +                }

Since we are emulating a shared lock using exclusive locks in a number 
of places (I recall seeing another patch that did the same), can we 
please implement this in a separate class and reuse it rather than 
reimplementing it in a bunch of places?
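Even a thin wrapper would help. A minimal sketch of the shape (exclusive path only; the emulated-shared-lock byte-range trick would go behind the same interface, and all names here are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Minimal sketch of a reusable lock wrapper. Only the exclusive path is
// shown; the emulated-shared-lock logic (falling back to a private
// one-byte exclusive region when the whole-file lock turns out to be
// exclusive) would live behind this same interface instead of being
// re-implemented at every call site.
public final class ProcessFileLock {
    private final RandomAccessFile raf;
    private final FileChannel channel;
    private FileLock lock;

    public ProcessFileLock(File file) throws IOException {
        this.raf = new RandomAccessFile(file, "rw");
        this.channel = raf.getChannel();
    }

    /** Block until an exclusive lock on the whole file is held. */
    public void lockExclusive() throws IOException {
        lock = channel.lock(0, Long.MAX_VALUE, false);
    }

    public boolean isLocked() {
        return lock != null && lock.isValid();
    }

    /** Release the lock and close the underlying channel. */
    public void unlock() throws IOException {
        if (lock != null) {
            lock.release();
            lock = null;
        }
        channel.close();
        raf.close();
    }
}
```

Callers in CacheUtil and FileUtils would then share one tested implementation instead of juggling raw FileLock objects and lock counts.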

> +            }
> +        } catch (IOException e) {
> +            e.printStackTrace();
> +        }
> +        return lock;
> +    }
>   }


Cheers,
Omair



More information about the distro-pkg-dev mailing list