
I'm trying to remove all files found in a directory. The accepted answer to Delete Folder Contents in Python suggests getting a list of all files and calling "unlink" on them in a loop.

Suppose I have thousands of files on a network share, and want to tie up the directory for as short a time as possible.

Is it more efficient to delete them all using a shell command like rm -f /path/* or by using shutil.rmtree or some such?

Dave

2 Answers


If you actually want to delete the whole directory tree, shutil.rmtree should be faster than calling os.remove (which is the same as os.unlink) on each file. It also allows you to specify a callback function to handle errors.
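As a sketch of that error callback: shutil.rmtree accepts an onerror function that is invoked with the failing function, the path, and the exception info (in Python 3.12+ an onexc parameter is preferred, but onerror still works). The handler name and the read-only-file recovery strategy below are illustrative, not part of the original answer:

```python
import os
import shutil
import stat
import tempfile

def handle_remove_error(func, path, exc_info):
    # Called by rmtree when a removal fails: clear a possible
    # read-only bit on the path and retry the failed operation.
    os.chmod(path, stat.S_IWRITE)
    func(path)

# Build a throwaway tree so the example is self-contained.
target = tempfile.mkdtemp()
open(os.path.join(target, "example.txt"), "w").close()

shutil.rmtree(target, onerror=handle_remove_error)
```

For an ordinary tree the handler is never called; it only runs when a removal raises, which lets the deletion continue instead of aborting partway through.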

The suggestion in the comment by @nmichaels is also good: you can os.rename the directory, make a new one in its place, and then use shutil.rmtree on the original, renamed directory.
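A minimal sketch of that rename trick, assuming the old and new paths are on the same filesystem (a rename across filesystems would copy instead of being near-instant); the directory names are hypothetical:

```python
import os
import shutil
import tempfile

# Self-contained setup: a throwaway directory standing in for the busy share.
parent = tempfile.mkdtemp()
busy_dir = os.path.join(parent, "incoming")
os.mkdir(busy_dir)
open(os.path.join(busy_dir, "data.bin"), "w").close()

# Move the directory aside (near-instant on the same filesystem),
# recreate an empty one in its place, then delete the old tree at leisure.
doomed = busy_dir + ".deleting"
os.rename(busy_dir, doomed)
os.mkdir(busy_dir)       # fresh, empty directory is available immediately
shutil.rmtree(doomed)    # the slow part happens out of the way
```

The directory is only "tied up" for the duration of the rename; the thousands of per-file deletions happen afterwards on a path nobody else is using.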

agf
  • The question is about comparing Python (rmtree is the best, I think) to the Unix shell (rm -Rf). – orzel Jun 11 '17 at 21:43
  • @orzel That's what he said, but I thought what he really meant was how to do it quickly, which `rmtree` does. He accepted my answer, so we have to assume I addressed his intent. – agf Jun 12 '17 at 21:59
  • Might be. I am interested in the original question: which one is faster, Python/rmtree or forking rm -Rf? I was looking for tests/figures and arguments about this. – orzel Jun 13 '17 at 22:52
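The comparison the comments ask about can be measured directly rather than argued. A rough timing sketch, assuming a POSIX system where rm is available (file counts and results will vary with filesystem and hardware, so no figures are claimed here):

```python
import os
import shutil
import subprocess
import tempfile
import time

def make_tree(n_files):
    # Create a temporary directory holding n_files empty files.
    d = tempfile.mkdtemp()
    for i in range(n_files):
        open(os.path.join(d, f"file_{i}"), "w").close()
    return d

# Time shutil.rmtree on one tree.
d1 = make_tree(1000)
t0 = time.perf_counter()
shutil.rmtree(d1)
py_time = time.perf_counter() - t0

# Time forking rm -rf on an identical tree (POSIX only).
d2 = make_tree(1000)
t0 = time.perf_counter()
subprocess.run(["rm", "-rf", d2], check=True)
sh_time = time.perf_counter() - t0

print(f"shutil.rmtree: {py_time:.4f}s  rm -rf: {sh_time:.4f}s")
```

Note that the rm variant also pays a process-fork cost, so for small trees the in-process shutil.rmtree can come out ahead even if rm deletes individual entries faster.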

I tried this solution and it seems to work well:

import os

# file_to_delete is assumed to already hold the path of the file to remove.
while os.path.exists(file_to_delete):
    os.remove(file_to_delete)
Misantorp