
I have a text file made up of numbers and words, for example: `09807754 18 n 03 aristocrat 0 blue_blood 0 patrician`. I want to split it so that each word or number comes up on a new line.

A whitespace separator would be ideal as I would like the words with the dashes to stay connected.

This is what I have so far:

f = open('words.txt', 'r')
for word in f:
    print(word)

I'm not really sure how to go on from here. I would like this to be the output:

09807754
18
n
03
aristocrat
...
Jonas Stein
  • Does that data literally have quotes around it? Is it `"09807754 18 n 03 aristocrat 0 blue_blood 0 patrician"` or `09807754 18 n 03 aristocrat 0 blue_blood 0 patrician` in the file? – dawg Jun 04 '13 at 15:52

6 Answers


Given this file:

$ cat words.txt
line1 word1 word2
line2 word3 word4
line3 word5 word6

If you just want one word at a time (ignoring the meaning of spaces vs line breaks in the file):

with open('words.txt','r') as f:
    for line in f:
        for word in line.split():
            print(word)

Prints:

line1
word1
word2
line2
...
word6 

Similarly, if you want to flatten the file into a single flat list of words, you might do something like this:

with open('words.txt') as f:
    flat_list=[word for line in f for word in line.split()]

>>> flat_list
['line1', 'word1', 'word2', 'line2', 'word3', 'word4', 'line3', 'word5', 'word6']

Which can create the same output as the first example with `print('\n'.join(flat_list))`...

Or, if you want a nested list of the words in each line of the file (for example, to create a matrix of rows and columns from a file):

with open('words.txt') as f:
    matrix=[line.split() for line in f]

>>> matrix
[['line1', 'word1', 'word2'], ['line2', 'word3', 'word4'], ['line3', 'word5', 'word6']]
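Once that nested list exists, rows and columns can be addressed directly; a small sketch using the sample data above (the names `first_column` and `columns` are just illustrative):

```python
# the nested list built by line.split() above
matrix = [['line1', 'word1', 'word2'],
          ['line2', 'word3', 'word4'],
          ['line3', 'word5', 'word6']]

# pick out a single column with a comprehension
first_column = [row[0] for row in matrix]   # ['line1', 'line2', 'line3']

# zip(*matrix) transposes the matrix, turning rows into columns
columns = list(zip(*matrix))                # columns[2] == ('word2', 'word4', 'word6')
```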

If you want a regex solution, which would allow you to filter wordN vs lineN type words in the example file:

import re
with open("words.txt") as f:
    for line in f:
        for word in re.findall(r'\bword\d+', line):
            print(word)  # prints only the wordN tokens, no lineN

Or, if you want that to be a line by line generator with a regex:

with open("words.txt") as f:
    # the generator must be consumed while the file is still open
    words = (word for line in f for word in re.findall(r'\w+', line))
dawg
  • How is a file object iterable (`for line in f:`)? – haccks Feb 09 '15 at 14:16
  • @haccks: It is the [suggested idiom](https://docs.python.org/2/tutorial/inputoutput.html#methods-of-file-objects) for looping line-by-line over a file. See also [this SO post](http://stackoverflow.com/a/8010133/298607) – dawg Feb 09 '15 at 17:51
  • I just wanted to know the mechanism behind this; how does it work? – haccks Feb 09 '15 at 17:56
  • The `open` creates a file object. Python file objects support line-by-line iteration for text files (binary files are read in one gulp...), so each pass of the `for` loop yields one line of a text file. At the end of the file, the file object raises `StopIteration` and we are done with the file. More understanding of the mechanism is beyond what I can do in a comment. – dawg Feb 09 '15 at 18:01
  • You can also load the file into main memory and use the "re" library, as here: http://stackoverflow.com/questions/7633274/extracting-words-from-a-string-removing-punctuation-and-returning-a-list-with-s – torina Feb 20 '17 at 22:28
  • I love the different ways and the discussion of when each might be used. Very clear, concise, and thorough. – bballdave025 May 31 '18 at 02:13
  • Maybe we should care about closing the file? @dawg – Cryckx Aug 23 '18 at 14:22
  • @FlorentJousse: When you use `with` to open the file, the file is closed at the end of the `with` block. No need to manually close it. If you use a bare `open` it is indeed good practice to close that file when finished. All the examples here use `with` and therefore there is no close to worry about. – dawg Aug 23 '18 at 14:27
  • Okay, thank you dawg. I was using your code in a loop and was wondering if the close() was missing. That's perfect! – Cryckx Aug 23 '18 at 14:31
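The iteration mechanism discussed in the comments can be seen directly with the iterator protocol; a sketch, with `io.StringIO` standing in for a real file object:

```python
import io

# io.StringIO stands in for an open text file; both follow the iterator protocol
f = io.StringIO("line1 word1\nline2 word2\n")
it = iter(f)          # file objects are their own iterators
print(next(it))       # first line: 'line1 word1\n'
print(next(it))       # second line: 'line2 word2\n'
# one more next(it) would raise StopIteration, which is how `for` knows to stop
```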
with open('words.txt') as f:
    for word in f.read().split():
        print(word)
dugres

As a supplement: if you are reading a very large file and don't want to read all of the content into memory at once, you might consider using a buffer and returning each word via yield:

def read_words(inputfile):
    with open(inputfile, 'r') as f:
        while True:
            buf = f.read(10240)
            if not buf:
                break

            # extend the buffer until it ends on whitespace (a word boundary)
            while not buf[-1].isspace():
                ch = f.read(1)
                if not ch:
                    break
                buf += ch

            for word in buf.split():
                yield word

if __name__ == "__main__":
    for word in read_words('./very_large_file.txt'):
        process(word)  # process() is a placeholder for your own handling
pambda
  • For those interested in performance, this is an order of magnitude faster than the itertools answer. – Featherlegs May 25 '17 at 20:14
  • Why 10240? I'm assuming that's bytes, so around 10 KB? How big can the buffer be, and if I am interested in performance, is a smaller or larger buf better? – Duxa Jan 12 '19 at 04:38
  • I'm confused, what does process do? It isn't defined... – Kromydas Oct 01 '20 at 03:29

You can use nltk to tokenize the words and then store all of them in a list; here's what I did. If you don't know nltk: it stands for Natural Language Toolkit and is used to process natural language. Here's a resource if you want to get started: [http://www.nltk.org/book/]

import nltk
from nltk.tokenize import word_tokenize

# nltk.download('punkt') may be needed the first time word_tokenize is used
file = open("abc.txt", newline='')
result = file.read()
words = word_tokenize(result)
for i in words:
    print(i)

The output will be this:

09807754
18
n
03
aristocrat
0
blue_blood
0
patrician
Gaurav
with open(filename) as file:
    words = file.read().split()

It's a list of all the words in your file.

Or, to keep only tokens made of letters and hyphens (dropping the numbers), use a regex:
import re
with open(filename) as file:
    words = re.findall(r"([a-zA-Z\-]+)", file.read())
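Note that `[a-zA-Z\-]+` matches neither digits nor underscores, so on the question's sample line it would split `blue_blood` in two; a quick sketch:

```python
import re

line = "09807754 18 n 03 aristocrat 0 blue_blood 0 patrician"
print(re.findall(r"([a-zA-Z\-]+)", line))
# -> ['n', 'aristocrat', 'blue', 'blood', 'patrician']

# adding an underscore to the character class keeps blue_blood together
print(re.findall(r"([a-zA-Z_\-]+)", line))
# -> ['n', 'aristocrat', 'blue_blood', 'patrician']
```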
mujad

Here is my totally functional approach, which avoids having to read and split lines. It makes use of the itertools module:

Note: in Python 3, replace itertools.imap with the built-in map.

import itertools

def readwords(mfile):
    byte_stream = itertools.groupby(
        itertools.takewhile(lambda c: bool(c),
            itertools.imap(mfile.read,
                itertools.repeat(1))), str.isspace)

    return ("".join(group) for pred, group in byte_stream if not pred)

Sample usage:

>>> import sys
>>> for w in readwords(sys.stdin):
...     print (w)
... 
I really love this new method of reading words in python
I
really
love
this
new
method
of
reading
words
in
python
           
It's soo very Functional!
It's
soo
very
Functional!
>>>

I guess in your case, this would be the way to use the function:

with open('words.txt', 'r') as f:
    for word in readwords(f):
        print(word)
smac89