331

I have a C++/Obj-C background and I am just discovering Python (been writing it for about an hour). I am writing a script to recursively read the contents of text files in a folder structure.

The problem I have is the code I have written will only work for one folder deep. I can see why in the code (see #hardcoded path), I just don't know how I can move forward with Python since my experience with it is only brand new.

Python Code:

import os
import sys

rootdir = sys.argv[1]

for root, subFolders, files in os.walk(rootdir):

    for folder in subFolders:
        outfileName = rootdir + "/" + folder + "/py-outfile.txt" # hardcoded path
        folderOut = open( outfileName, 'w' )
        print "outfileName is " + outfileName

        for file in files:
            filePath = rootdir + '/' + file
            f = open( filePath, 'r' )
            toWrite = f.read()
            print "Writing '" + toWrite + "' to" + filePath
            folderOut.write( toWrite )
            f.close()

        folderOut.close()
Bhavesh G
Brock Woolf

17 Answers

462

Make sure you understand the three return values of os.walk:

for root, subdirs, files in os.walk(rootdir):

has the following meaning:

  • root: the path of the directory currently being walked
  • subdirs: the names of the subdirectories directly inside root
  • files: the names of the non-directory entries directly inside root (not in subdirs)

And please use os.path.join instead of concatenating with a slash! Your problem is filePath = rootdir + '/' + file - you must join with the currently "walked" folder instead of the topmost one, i.e. filePath = os.path.join(root, file). BTW, "file" is a builtin, so you don't normally use it as a variable name.
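A minimal sketch of the difference, with hypothetical paths standing in for what os.walk yields:

```python
import os

# Hypothetical values standing in for what os.walk yields.
root = "/top/sub"   # the folder os.walk is currently visiting
name = "notes.txt"  # an entry from the files list for that folder

wrong = "/top" + "/" + name        # builds /top/notes.txt - the file is not there
right = os.path.join(root, name)   # builds /top/sub/notes.txt - correct at any depth
print(wrong)
print(right)
```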

Another problem is your loops, which should look like this, for example:

import os
import sys

walk_dir = sys.argv[1]

print('walk_dir = ' + walk_dir)

# If your current working directory may change during script execution, it's recommended to
# immediately convert program arguments to an absolute path. Then the variable root below will
# be an absolute path as well. Example:
# walk_dir = os.path.abspath(walk_dir)
print('walk_dir (absolute) = ' + os.path.abspath(walk_dir))

for root, subdirs, files in os.walk(walk_dir):
    print('--\nroot = ' + root)
    list_file_path = os.path.join(root, 'my-directory-list.txt')
    print('list_file_path = ' + list_file_path)

    with open(list_file_path, 'wb') as list_file:
        for subdir in subdirs:
            print('\t- subdirectory ' + subdir)

        for filename in files:
            file_path = os.path.join(root, filename)

            print('\t- file %s (full path: %s)' % (filename, file_path))

            with open(file_path, 'rb') as f:
                f_content = f.read()
                list_file.write(('The file %s contains:\n' % filename).encode('utf-8'))
                list_file.write(f_content)
                list_file.write(b'\n')

If you didn't know, the with statement for files is a shorthand:

with open('filename', 'rb') as f:
    dosomething()

# is effectively the same as

f = open('filename', 'rb')
try:
    dosomething()
finally:
    f.close()
AndiDog
  • Superb, lots of prints to understand what's going on and it works perfectly. Thanks! +1 – Brock Woolf Feb 06 '10 at 09:52
  • Heads up to anyone as dumb/oblivious as me... this code sample writes a txt file to each directory. Glad I tested it in a version controlled folder, though everything I need to write a cleanup script is here too :) – Steazy Sep 24 '14 at 23:56
  • that second (longest) code snippet worked very well, saved me a lot of boring work – amphibient Apr 24 '18 at 21:01
  • Since speed is obviously the most important aspect, `os.walk` is not bad, though I came up with an even faster way via `os.scandir`. All `glob` solutions are a lot slower than `walk` & `scandir`. My function, as well as a complete speed analysis, can be found here: https://stackoverflow.com/a/59803793/2441026 – user136036 Jan 18 '20 at 18:57
  • This is a great SO answer, not only drilling into the issue but stuff like this "BTW "file" is a builtin, so you don't normally use it as variable name." is golden for someone new to a language – steeveeet Oct 19 '22 at 14:06
264

If you are using Python 3.5 or above, you can get this done in 1 line.

import glob

# root_dir needs a trailing slash (i.e. /root/dir/)
for filename in glob.iglob(root_dir + '**/*.txt', recursive=True):
     print(filename)

As mentioned in the documentation:

If recursive is true, the pattern '**' will match any files and zero or more directories and subdirectories.

If you want every file, you can use

import glob

for filename in glob.iglob(root_dir + '**/**', recursive=True):
     print(filename)
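If you'd rather not depend on the trailing slash, a sketch of an alternative is to build the pattern with os.path.join, which inserts the separator itself (root_dir below is a placeholder):

```python
import glob
import os

root_dir = '.'  # placeholder starting folder

# os.path.join inserts the separator, so the pattern is correct
# whether or not root_dir ends with a slash.
pattern = os.path.join(root_dir, '**', '*.txt')
for filename in glob.iglob(pattern, recursive=True):
    print(filename)
```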
AnaS Kayed
Chillar Anand
  • TypeError: iglob() got an unexpected keyword argument 'recursive' – Jewenile Sep 01 '17 at 07:19
  • As mentioned in the beginning, it is only for Python 3.5+ – Chillar Anand Sep 01 '17 at 09:43
  • root_dir must have a trailing slash (otherwise you get something like 'folder**/*' instead of 'folder/**/*' as the first argument). You can use os.path.join(root_dir, '**/*'), but I don't know if it's acceptable to use os.path.join with wildcard paths (it works for my application though). – drojf Apr 06 '19 at 08:45
  • @ChillarAnand Can you please add a comment to the code in this answer that `root_dir` needs a trailing slash? This will save people time (or at least it would have saved me time). Thanks. – Dan Nissenbaum Jun 24 '19 at 06:13
  • If I ran this as in the answer it didn't work recursively. To make this work recursively I had to change it to: `glob.iglob(root_dir + '**/**', recursive=True)`. I'm working in Python 3.8.2 – mikey May 12 '20 at 10:55
  • Like mikey said, the command didn't work for me on Python 3.8; it did not find any files. Using `**/**` instead of `**/*` solved it and shows all files. But I'm still unsure how to filter for certain file types. Eg, neither `**/*.csv` nor `**/**.csv` showed any of the CSV files that I have in child folders. – stefanbschneider Jul 20 '20 at 06:35
  • Be aware that glob.glob does not match dotfiles. You may use pathlib.glob instead – Thomas Aug 20 '20 at 09:34
44

Agree with Dave Webb: os.walk will yield an item for each directory in the tree. The fact is, you just don't have to care about subFolders.

Code like this should work:

import os
import sys

rootdir = sys.argv[1]

for folder, subs, files in os.walk(rootdir):
    with open(os.path.join(folder, 'python-outfile.txt'), 'w') as dest:
        for filename in files:
            with open(os.path.join(folder, filename), 'r') as src:
                dest.write(src.read())
the Tin Man
Clément
  • Nice one. This works as well. I do however prefer AndiDog's version even though it's longer, because it's clearer to understand as a beginner to Python. +1 – Brock Woolf Feb 06 '10 at 10:08
40

TL;DR: This is the equivalent of find -type f - it visits all files in all folders below and including the current one:

for currentpath, folders, files in os.walk('.'):
    for file in files:
        print(os.path.join(currentpath, file))

As already mentioned in other answers, os.walk() is the answer, but it could be explained better. It's quite simple! Let's walk through this tree:

docs/
└── doc1.odt
pics/
todo.txt

With this code:

for currentpath, folders, files in os.walk('.'):
    print(currentpath)

The currentpath is the current folder it is looking at. This will output:

.
./docs
./pics

So it loops three times, because there are three folders: the current one, docs, and pics. In every loop, it fills the variables folders and files with all folders and files. Let's show them:

for currentpath, folders, files in os.walk('.'):
    print(currentpath, folders, files)

This shows us:

# currentpath  folders           files
.              ['pics', 'docs']  ['todo.txt']
./pics         []                []
./docs         []                ['doc1.odt']

So in the first line, we see that we are in folder ., that it contains two folders namely pics and docs, and that there is one file, namely todo.txt. You don't have to do anything to recurse into those folders, because as you see, it recurses automatically and just gives you the files in any subfolders. And any subfolders of that (though we don't have those in the example).

If you just want to loop through all files, the equivalent of find -type f, you can do this:

for currentpath, folders, files in os.walk('.'):
    for file in files:
        print(os.path.join(currentpath, file))

This outputs:

./todo.txt
./docs/doc1.odt
Luc
19

The pathlib library is really great for working with files. You can do a recursive glob on a Path object like so.

from pathlib import Path

for elem in Path('/path/to/my/files').rglob('*.*'):
    print(elem)
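One caveat worth sketching: the pattern '*.*' only matches names that contain a dot, so files without an extension are skipped. Using rglob('*') plus is_file() includes them (base below is a placeholder path):

```python
from pathlib import Path

base = Path('.')  # placeholder base directory

# '*' matches every entry; is_file() drops the directories, so files
# without an extension (which '*.*' would miss) are included too.
all_files = [p for p in base.rglob('*') if p.is_file()]
print(all_files[:10])
```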
dstandish
10
import glob
import os

root_dir = <root_dir_here>  # needs a trailing slash (e.g. 'folder/')

for filename in glob.iglob(root_dir + '**/**', recursive=True):
    if os.path.isfile(filename):
        with open(filename,'r') as file:
            print(file.read())

**/** matches all entries recursively, including directories.

os.path.isfile(filename) checks whether filename refers to a regular file rather than a directory; only files are opened and read. Here the contents are simply printed.

Neeraj Sonaniya
10

I've found the following to be the easiest:

from glob import glob
import os

files = [f for f in glob('rootdir/**', recursive=True) if os.path.isfile(f)]

Using glob('some/path/**', recursive=True) gets all files, but also includes directory names. Adding the if os.path.isfile(f) condition filters the list down to existing files only.

Michael Silverstein
8

If you want a flat list of all paths under a given dir (like find . in the shell):

   files = [ 
       os.path.join(parent, name)
       for (parent, subdirs, files) in os.walk(YOUR_DIRECTORY)
       for name in files + subdirs
   ]

To only include full paths to files under the base dir, leave out + subdirs.
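For example, the files-only variant looks like this (a sketch, with '.' standing in for YOUR_DIRECTORY):

```python
import os

start = '.'  # placeholder for YOUR_DIRECTORY

# Same comprehension without "+ subdirs": only regular files remain.
file_paths = [
    os.path.join(parent, name)
    for (parent, subdirs, files) in os.walk(start)
    for name in files
]
print(file_paths[:10])
```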

Scott Smith
5

For my taste, os.walk() is a little too complicated and verbose. You can do what the accepted answer does more cleanly with:

import pathlib

all_files = [str(f) for f in pathlib.Path(dir_path).glob("**/*") if f.is_file()]

with open(outfile, 'wb') as fout:
    for f in all_files:
        with open(f, 'rb') as fin:
            fout.write(fin.read())
            fout.write(b'\n')
Gwang-Jin Kim
3

Use os.path.join() to construct your paths - it's neater:

import os
import sys
rootdir = sys.argv[1]
for root, subFolders, files in os.walk(rootdir):
    for folder in subFolders:
        outfileName = os.path.join(root,folder,"py-outfile.txt")
        folderOut = open( outfileName, 'w' )
        print "outfileName is " + outfileName
        for file in files:
            filePath = os.path.join(root,file)
            toWrite = open( filePath).read()
            print "Writing '" + toWrite + "' to " + filePath
            folderOut.write( toWrite )
        folderOut.close()
the Tin Man
ghostdog74
2

If just the file names are not enough, it's easy to implement a depth-first search on top of os.scandir():

import os

stack = ['.']
files = []
total_size = 0
while stack:
    dirname = stack.pop()
    with os.scandir(dirname) as it:
        for e in it:
            if e.is_dir(): 
                stack.append(e.path)
            else:
                size = e.stat().st_size
                files.append((e.path, size))
                total_size += size

The docs have this to say:

The scandir() function returns directory entries along with file attribute information, giving better performance for many common use cases.
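The same idea can be packaged as a generator (a sketch; iter_files is a hypothetical name). Passing follow_symlinks=False to is_dir() also keeps symlinked directories from sending the traversal into a loop:

```python
import os

def iter_files(top):
    """Depth-first generator of (path, size) for every file below top."""
    stack = [top]
    while stack:
        with os.scandir(stack.pop()) as it:
            for entry in it:
                # follow_symlinks=False keeps symlinked directories from
                # creating traversal loops.
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                else:
                    yield entry.path, entry.stat().st_size

total = sum(size for _path, size in iter_files('.'))
print(total)
```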

neuviemeporte
  • Unfortunately when I run this, I get locked in an infinite loop if there are any subdirectories. It seems like there is a need to keep track of which directories have already been visited, or the loop will get stuck in re-doing them until all the computer memory is used up. – leerssej May 05 '23 at 18:00
1

If you prefer an (almost) one-liner:

from pathlib import Path

lookuppath = '.' #use your path
filelist = [str(item) for item in Path(lookuppath).glob("**/*") if Path(item).is_file()]

In this case you get a list of just the paths of all files located recursively under lookuppath. Without str(), each entry would be a PosixPath object rather than a plain string.

knall0
1

os.walk does a recursive walk by default. For each directory, starting from root, it yields a 3-tuple (dirpath, dirnames, filenames).

from os import walk
from os.path import splitext, join

def select_files(root, files):
    """
    simple logic here to filter out interesting files
    .py files in this example
    """

    selected_files = []

    for file in files:
        #do concatenation here to get full path 
        full_path = join(root, file)
        ext = splitext(file)[1]

        if ext == ".py":
            selected_files.append(full_path)

    return selected_files

def build_recursive_dir_tree(path):
    """
    path    -    where to begin folder scan
    """
    selected_files = []

    for root, dirs, files in walk(path):
        selected_files += select_files(root, files)

    return selected_files
the Tin Man
b1r3k
  • In Python 2.6 `walk()` **does** return a recursive list. I tried your code and got a list with many repeats... If you just remove the lines under the comment "# recursive calls on subfolders" - it works fine – borisbn Sep 28 '12 at 05:20
0

I think the problem is that you're not processing the output of os.walk correctly.

Firstly, change:

filePath = rootdir + '/' + file

to:

filePath = root + '/' + file

rootdir is your fixed starting directory; root is a directory returned by os.walk.

Secondly, you don't need to indent your file processing loop, as it makes no sense to run this for each subdirectory. You'll get root set to each subdirectory. You don't need to process the subdirectories by hand unless you want to do something with the directories themselves.
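Putting both fixes together, a sketch of the corrected script might look like this (keeping the question's per-folder output file, here written as one py-outfile.txt per visited directory; the function name is hypothetical):

```python
import os
import sys

def write_folder_outfiles(rootdir):
    # "root" is the directory currently being visited, so paths built
    # from it are correct at any depth; no per-subFolders loop is needed.
    for root, subFolders, files in os.walk(rootdir):
        out_name = os.path.join(root, 'py-outfile.txt')
        with open(out_name, 'w') as folder_out:
            for name in files:
                with open(os.path.join(root, name), 'r') as f:
                    folder_out.write(f.read())

if __name__ == '__main__' and len(sys.argv) > 1:
    write_folder_outfiles(sys.argv[1])
```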

the Tin Man
David Webb
  • I have data in each sub directory, so I need to have a separate text file for the contents of each directory. – Brock Woolf Feb 06 '10 at 09:36
  • @Brock: the files part is the list of files in the current directory. So the indentation is indeed wrong. You are writing to `filePath = rootdir + '/' + file`, that doesn't sound right: file is from the list of current files, so you are writing to a lot of existing files? – Alok Singhal Feb 06 '10 at 09:52
0

Try this:

import os
import sys

path = sys.argv[1]

for root, subdirs, files in os.walk(path):
    for file in os.listdir(root):
        filePath = os.path.join(root, file)
        if os.path.isdir(filePath):
            pass
        else:
            f = open(filePath, 'r')
            # Do Stuff
Diego
  • Why would you do another listdir() and then isdir() when you already have the directory listing split into files and directories from walk()? This looks like it would be rather slow in large trees (do three syscalls instead of one: 1=walk, 2=listdir, 3=isdir, instead of just walk and loop through the 'subdirs' and 'files'). – Luc Jul 26 '19 at 15:13
0

This worked for me:

import glob

root_dir = "C:\\Users\\Scott\\" # Don't forget trailing (last) slashes    
for filename in glob.iglob(root_dir + '**/*.jpg', recursive=True):
     print(filename)
     # do stuff
Scott
0

Starting from Python 3.12, you can also use walk() from pathlib which is similar to os.walk(), but yields tuples of (dirpath, dirnames, filenames) where dirpath is a Path. For example:

from pathlib import Path

for root, dirs, files in Path("cpython/Lib/concurrent").walk(on_error=print):
    print(
        root,
        "consumes",
        sum((root / file).stat().st_size for file in files),
        "bytes in",
        len(files),
        "non-directory files"
    )
    if '__pycache__' in dirs:
        dirs.remove('__pycache__')
Eugene Yarmash