
I am trying to delete lines that contain decimal numbers. For instance:

82.45 76.16 21.49 -2.775
5 24 13 6 9 0 3 2 4 9 7 11 54 11 1 1 18 5 0 0
1 1 0 2 2 0 0 0 0 0 0 0 14 90 21 5 24 26 73 13
20 33 23 59 158 85 17 6 158 66 15 13 13 10 2 37 81 0 0 0
1 3 0 19 8 158 75 7 10 8 5 1 23 58 148 77 120 78 6 7
158 80 15 10 16 21 6 37 100 25 0 0 0 0 0 3 1 10 9 1
0 0 0 0 11 16 57 15 0 0 0 0 158 76 9 1 0 0 0 0
22 17 0 0 0 0 0 0
50.04 143.84 18.52 -1.792
3 0 0 0 0 0 0 0 36 0 0 0 2 4 0 1 23 2 0 0
8 24 4 12 21 9 5 2 0 0 0 4 40 0 0 0 0 0 0 12
150 11 2 7 12 16 4 59 72 8 30 88 68 83 15 27 21 11 49 94
6 1 1 8 17 8 0 0 0 0 0 5 150 150 33 46 9 0 0 20
28 49 81 150 76 5 8 17 36 23 41 48 7 1 16 88 0 3 0 0
0 0 0 0 36 108 13 9 2 0 3 61 19 26 14 34 27 8 98 150
14 2 0 1 1 0 115 150
114.27 171.37 10.74 -2.245

... and this pattern continues for thousands of lines; likewise, I have about 3000 files with a similar pattern of data.

So, I want to delete the lines that contain decimal numbers. In most cases, every 8th line has decimal numbers, so I tried awk 'NR % 8 != 0' < file_name. But the problem is that not all files in the database have decimal numbers on every 8th line. So, is there a way to delete the lines that contain decimal numbers? I am coding in Python 2.7 on Ubuntu.


4 Answers


You can just look for lines containing a decimal point:

with open('filename_without_decimals.txt', 'wb') as of:
    with open('filename.txt') as fp:
        for line in fp:
            # str.find returns -1 when '.' is absent (str.index would raise ValueError)
            if line.find('.') == -1:
                of.write(line)

If you prefer to use sed, it would be cleaner:

sed -i '/\./d' file.txt
TuTTe
  • But I have a huge database of 3000 files. I want to apply the same process to all of them and get the corresponding (decimal-free) files. – Sanathana Jan 11 '15 at 00:32
  • Then I would use sed: sed -i '/\./d' file.txt (Updated answer) – TuTTe Jan 11 '15 at 00:38
  • @Krupa_Code This is a nice solution, and mine is horrible in contrast, but this creates a new file with the new contents, and I am not sure if this is what the OP wants... – nbro Jan 11 '15 at 09:53
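Following up on the comments above, here is a minimal Python 2.7 sketch for applying the same filter to every file in a batch. The '*.txt' pattern and in-place rewrite are assumptions; adjust them to your own layout.

import glob

# process every matching data file in the current directory; the '*.txt' pattern is an assumption
for path in glob.glob('*.txt'):
    with open(path) as fp:
        kept = [line for line in fp if '.' not in line]
    # overwrite the original file, keeping only the lines without a decimal point
    with open(path, 'w') as fp:
        fp.writelines(kept)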

The solution would be something like:

infile = open('textfile.txt')
text = ""

# collect every line that does not contain a decimal point
for line in infile.readlines():
    if '.' not in line:
        text += line

print text
  • Hello Sir, this works perfectly fine for a single file. Sorry to ask, but I have 3000 files to put through the same process. Is there a way I can use the same code for all of them, since I can't feed in each of the 3000 files by hand and collect the corresponding output? :( I am new to programming, hence these questions. Sorry! – Sanathana Jan 11 '15 at 00:39
  • Then you can include this at the top of the script if all the files are in the same folder: `import os` and `for filename in os.listdir('dir/name/here'):` – Xavi Magrinyà Jan 11 '15 at 15:06
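Putting that suggestion together, a rough Python 2.7 sketch (the directory name and output suffix are placeholders, not part of the original answer) that walks a folder with os.listdir and writes a filtered copy of each file:

import os

src_dir = 'dir/name/here'  # placeholder: folder holding the 3000 data files

for filename in os.listdir(src_dir):
    in_path = os.path.join(src_dir, filename)
    out_path = in_path + '.nodecimals'  # placeholder output name
    with open(in_path) as infile, open(out_path, 'w') as outfile:
        for line in infile:
            # keep only lines without a decimal point
            if '.' not in line:
                outfile.write(line)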

Have you tried this, using awk:

awk '!/\./{print}' your_file
Hackaholic
deci = open('with.txt')
no_deci = open('without.txt', 'w')

# copy only the lines without a decimal point into the new file
for line in deci.readlines():
    if '.' not in line:
        no_deci.write(line)

deci.close()
no_deci.close()

readlines returns a list of all the lines in the file.

Loupi