
I'm trying to add a directory of files to a zip. The directory contains around 150 files. Somewhere between 5 and 75 files in, the program crashes with the error message "The process cannot access the file because it is being used by another process."

I tried adding a delay, which may be helping, but it certainly isn't fixing the bug.

Using code from: Is it possible to create a NEW zip file using the java FileSystem?

final File folder = new File("C:/myDir/img");
for (final File fileEntry : folder.listFiles()) {
    if (fileEntry.isDirectory()) {
        continue;
    }
    String filename = fileEntry.getName();
    String toBeAddedName = "C:/myDir/img/" + filename;
    Path toBeAdded = FileSystems.getDefault().getPath(toBeAddedName).toAbsolutePath();
    createZip(zipLocation, toBeAdded, "./" + filename);
    System.out.println("Added file " + ++count);
    // Delay as a workaround for the 'file in use' error
    try { Thread.sleep(1000); } // 1 second
    catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}

public static void createZip(Path zipLocation, Path toBeAdded, String internalPath) throws Throwable {
    Map<String, String> env = new HashMap<String, String>();
    // Ask the provider to create the archive only if it doesn't exist yet
    env.put("create", String.valueOf(Files.notExists(zipLocation)));
    // Build a zip file system URI, e.g. jar:file:/C:/myDir/out.zip
    URI fileUri = zipLocation.toUri();
    URI zipUri = new URI("jar:" + fileUri.getScheme(), fileUri.getPath(), null);
    System.out.println(zipUri);
    try (FileSystem zipfs = FileSystems.newFileSystem(zipUri, env)) {
        //Create internal path in the zipfs
        Path internalTargetPath = zipfs.getPath(internalPath);
        //Create parent dir
        Files.createDirectories(internalTargetPath.getParent());
        //Copy a file into the zip file
        Files.copy(toBeAdded, internalTargetPath, StandardCopyOption.REPLACE_EXISTING);
    }
}
ƒrequency
  • If the file is locked because it's in use, I don't see what else you can do other than perhaps display a message to the user and ask them to correct it. – markspace Jun 18 '19 at 18:49
  • @markspace it's the app itself which is locking the files. That's why I tried adding a delay. – ƒrequency Jun 18 '19 at 19:13
  • I'm now using a two second delay between files which IMHO is huge yet it is working. – ƒrequency Jun 18 '19 at 19:47
  • I'd find it strange if a process could be prevented from deleting a file it _itself_ has locked (but I could be wrong). Are you sure no other process is locking the file? The error message would indicate that's the case. You can check this; see, for instance, [this question (windows)](https://superuser.com/questions/117902/find-out-which-process-is-locking-a-file-or-folder-in-windows/643312) or [this question (linux)](https://superuser.com/questions/97844/how-can-i-determine-what-process-has-a-file-open-in-linux). – Slaw Jun 18 '19 at 21:49
  • Is it possible that you're adding the zip to itself? – J Banana Jun 18 '19 at 23:57
  • @JBanana Nope. Like I said the 2-secs delay solves the bug. – ƒrequency Jun 19 '19 at 11:10
  • @Slaw Thanks for the links but I've now solved the bug by adding a huge delay between files. – ƒrequency Jun 19 '19 at 11:16
  • Yes, but keep in mind that adding a delay is not a "proper" fix and is likely very fragile. If you got it to work and you're happy with it then I suppose you can leave it at that, but it might be good to investigate the fundamental issue and see if you can implement a more appropriate fix. – Slaw Jun 19 '19 at 13:40
  • @Slaw You're right. The delay isn't even helping much. It does need recoding. – ƒrequency Jun 19 '19 at 16:31

1 Answer


I can't promise this is the cause of your problem, but your code compresses files into a ZIP file in a strange, or at least inefficient, manner: you open a new FileSystem for each individual file you want to compress. I'm assuming you do it this way because that's what the Q&A you linked to does. However, that answer compresses only one file, whereas you want to compress multiple files into the same archive. You should keep the FileSystem open for the entire duration of compressing your directory.

import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystems;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Map;
import java.util.Set;

import static java.nio.file.LinkOption.NOFOLLOW_LINKS;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;

public static void compress(Path directory, int depth, Path zipArchiveFile) throws IOException {
    var uri = URI.create("jar:" + zipArchiveFile.toUri());
    var env = Map.of("create", Boolean.toString(Files.notExists(zipArchiveFile, NOFOLLOW_LINKS)));

    try (var fs = FileSystems.newFileSystem(uri, env)) {
        Files.walkFileTree(directory, Set.of(), depth, new SimpleFileVisitor<>() {

            private final Path archiveRoot = fs.getRootDirectories().iterator().next();

            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                // Don't include the directory itself
                if (!directory.equals(dir)) {
                    Files.createDirectory(resolveDestination(dir));
                }
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.copy(file, resolveDestination(file), REPLACE_EXISTING);
                return FileVisitResult.CONTINUE;
            }

            private Path resolveDestination(Path path) {
                /*
                 * Use Path#resolve(String) instead of Path#resolve(Path). I couldn't find where the
                 * documentation mentions this, but at least three implementations will throw a 
                 * ProviderMismatchException if #resolve(Path) is invoked with a Path argument that 
                 * belongs to a different provider (i.e. if the implementation types don't match).
                 *
                 * Note: Those three implementations, at least in OpenJDK 12.0.1, are the JRT, ZIP/JAR,
                 * and Windows file system providers (I don't have access to Linux's or Mac's provider
                 * source currently).
                 */
                return archiveRoot.resolve(directory.relativize(path).toString());
            }

        });
    }
}

Note: The depth parameter is used in exactly the same way as maxDepth is in Files#walkFileTree.
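To see how that maxDepth behaves in practice, here is a minimal, self-contained sketch (the DepthDemo class and countFiles helper are mine, made up for illustration, not part of the answer above). One subtlety worth knowing: Files#walkFileTree passes directories encountered at maxDepth to visitFile rather than preVisitDirectory, which is why the counter filters on attrs.isRegularFile().

```java
import java.io.IOException;
import java.nio.file.FileVisitOption;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

public class DepthDemo {

    // Count the regular files visited when walking 'root' with the given maxDepth.
    static int countFiles(Path root, int maxDepth) throws IOException {
        AtomicInteger count = new AtomicInteger();
        Files.walkFileTree(root, Set.<FileVisitOption>of(), maxDepth, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                // Directories sitting exactly at maxDepth are also passed to
                // visitFile, so only count regular files.
                if (attrs.isRegularFile()) {
                    count.incrementAndGet();
                }
                return FileVisitResult.CONTINUE;
            }
        });
        return count.get();
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny tree in a temp dir: root/a.txt and root/sub/b.txt
        Path root = Files.createTempDirectory("depth-demo");
        Files.createFile(root.resolve("a.txt"));
        Path sub = Files.createDirectories(root.resolve("sub"));
        Files.createFile(sub.resolve("b.txt"));

        System.out.println(countFiles(root, 1));                 // direct children only
        System.out.println(countFiles(root, Integer.MAX_VALUE)); // full recursive walk
    }
}
```

With maxDepth 1 only a.txt is counted; with Integer.MAX_VALUE the walk also reaches sub/b.txt.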

Note: If you only ever care about the files in the directory itself (i.e. don't want to recursively traverse the file tree), then you can use Files#list(Path). Don't forget to close the Stream when finished with it.
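As a rough sketch of that flat, non-recursive variant (the FlatZip class and zipFlat helper are invented names for illustration), Files#list pairs with a single zip FileSystem like this, using the same "create" env flag as the compress method above:

```java
import java.io.IOException;
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlatZip {

    // Copy only the regular files directly inside 'directory' into a zip,
    // opening the zip file system once for the whole batch.
    static void zipFlat(Path directory, Path zipFile) throws IOException {
        URI uri = URI.create("jar:" + zipFile.toUri());
        Map<String, String> env = Map.of("create", Boolean.toString(Files.notExists(zipFile)));

        try (FileSystem zipfs = FileSystems.newFileSystem(uri, env);
             Stream<Path> entries = Files.list(directory)) { // the Stream must be closed too
            for (Path file : entries.filter(Files::isRegularFile).collect(Collectors.toList())) {
                Files.copy(file, zipfs.getPath("/" + file.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("zip-demo");
        Files.writeString(dir.resolve("a.txt"), "hello");
        Files.writeString(dir.resolve("b.txt"), "world");

        // Keep the archive outside the directory being zipped, or the walk
        // may end up trying to add the archive to itself.
        Path zip = Files.createTempDirectory("zip-out").resolve("out.zip");
        zipFlat(dir, zip);

        // Re-open the finished archive and list its entries.
        try (FileSystem zipfs = FileSystems.newFileSystem(URI.create("jar:" + zip.toUri()), Map.of());
             Stream<Path> s = Files.list(zipfs.getPath("/"))) {
            s.sorted().forEach(p -> System.out.println(p.getFileName()));
        }
    }
}
```

Note that the entries are only written out when the zip FileSystem is closed, which is why the archive is re-opened before listing it.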

It's possible that repeatedly opening and closing the FileSystem is what's causing your problem, in which case the above should solve the issue.

Slaw