Originally asked on "Are there alternative and portable algorithm implementations for reading lines from a file on Windows (Visual Studio Compiler) and Linux?", but closed as too broad. So here I am trying to reduce its scope to a more concise usage case.
My goal is to implement my own file reading module for Python with Python C Extensions, with a line caching policy. The pure Python implementation, without any line caching policy, is this:
    # This takes 1 second to parse 100MB of log data
    with open('myfile', 'r', errors='replace') as myfile:
        for line in myfile:
            if 'word' in line:
                pass
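For context, here is a minimal pure-Python sketch of the line caching idea I am aiming for. The class name `CachedLineFile`, the cache size, and the call-the-object-to-get-the-current-line API are my own illustrative design, mirroring how `iterable()` is used further below, not a standard interface:

```python
import collections

class CachedLineFile:
    """Hypothetical sketch: iterate over a file while keeping the
    most recently read lines in a bounded cache."""

    def __init__(self, filepath, cachesize=100):
        self.cache = collections.deque(maxlen=cachesize)
        self.file = open(filepath, 'r', errors='replace')

    def __iter__(self):
        return self

    def __next__(self):
        line = self.file.readline()
        if not line:
            self.file.close()
            raise StopIteration
        self.cache.append(line)
        return line

    def __call__(self):
        # Return the most recently read line, mimicking iterable()
        return self.cache[-1]
```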
Summarizing the Python C Extensions implementation (see here the full code with the line caching policy):
    // other code to open the file on the std::ifstream object and create the iterator
    ...

    static PyObject* PyFastFile_iternext(PyFastFile* self, PyObject* args)
    {
        std::string newline;

        if( std::getline( self->fileifstream, newline ) ) {
            return PyUnicode_DecodeUTF8( newline.c_str(), newline.size(), "replace" );
        }

        PyErr_SetNone( PyExc_StopIteration );
        return NULL;
    }

    static PyTypeObject PyFastFileType =
    {
        PyVarObject_HEAD_INIT( NULL, 0 )
        "fastfilepackage.FastFile" /* tp_name */
    };

    // create the module
    PyMODINIT_FUNC PyInit_fastfilepackage(void)
    {
        PyFastFileType.tp_iternext = (iternextfunc) PyFastFile_iternext;
        Py_INCREF( &PyFastFileType );

        PyObject* thismodule;

        // other module code creating the iterator and context manager
        ...

        PyModule_AddObject( thismodule, "FastFile", (PyObject *) &PyFastFileType );
        return thismodule;
    }
And this is the Python code which uses the Python C Extension to open a file and read its lines one by one:
    from fastfilepackage import FastFile

    # This takes 3 seconds to parse 100MB of log data
    iterable = FastFile( 'myfile' )

    for item in iterable:
        if 'word' in iterable():
            pass
Right now the Python C Extensions code fastfilepackage.FastFile with C++11 std::ifstream takes 3 seconds to parse 100MB of log data, while the Python implementation presented takes 1 second.

The content of the file myfile is just log lines with around 100~300 characters on each line. The characters are just ASCII (modulo % 256), but due to bugs in the logger engine, it can write invalid ASCII or Unicode characters. Hence, this is why I use the errors='replace' policy while opening the file.
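As a quick illustration of what the errors='replace' policy does with such corrupted bytes (the byte values here are my own example, not actual log data):

```python
# A line whose bytes are not valid UTF-8 (0xFF and 0xFE can never
# appear in well-formed UTF-8); errors='replace' substitutes U+FFFD
# for each invalid byte instead of raising UnicodeDecodeError
corrupted = b'valid text \xff\xfe more text'
decoded = corrupted.decode('utf-8', errors='replace')
print(decoded)  # 'valid text \ufffd\ufffd more text'
```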
I just wonder whether I can replace or improve this Python C Extension implementation, reducing the 3 seconds it takes to run the Python program.
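One direction I have been considering, as a sketch rather than a measured solution: reading the file in large chunks and splitting lines in Python, so the per-line cost of crossing the C/Python boundary (one std::getline plus one PyUnicode_DecodeUTF8 per line) is amortized over many lines. The function name and chunk size below are my own choices for illustration:

```python
def iterate_lines(filepath, chunksize=1 << 20):
    """Sketch: read in 1 MiB chunks and split into lines in Python,
    carrying any partial trailing line over to the next chunk."""
    remainder = ''
    with open(filepath, 'r', errors='replace') as stream:
        while True:
            chunk = stream.read(chunksize)
            if not chunk:
                break
            lines = (remainder + chunk).split('\n')
            remainder = lines.pop()  # possibly incomplete last line
            for line in lines:
                yield line
    if remainder:
        yield remainder
```

Note the yielded lines do not include the trailing '\n', unlike file iteration, so the timings would not be directly comparable without accounting for that difference.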
I used this to do the benchmark:
    import time
    import datetime
    import fastfilepackage

    # usually a file with 100MB
    testfile = './myfile.log'

    timenow = time.time()
    with open( testfile, 'r', errors='replace' ) as myfile:
        for item in myfile:
            if None:
                var = item

    python_time = time.time() - timenow
    timedifference = datetime.timedelta( seconds=python_time )
    print( 'Python timedifference', timedifference, flush=True )
    # prints about 1 second

    timenow = time.time()
    iterable = fastfilepackage.FastFile( testfile )
    for item in iterable:
        if None:
            var = iterable()

    fastfile_time = time.time() - timenow
    timedifference = datetime.timedelta( seconds=fastfile_time )
    print( 'FastFile timedifference', timedifference, flush=True )
    # prints about 3 seconds

    print( 'fastfile_time %.2f%%, python_time %.2f%%' % (
            fastfile_time/python_time * 100, python_time/fastfile_time * 100 ), flush=True )
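As a side note on the benchmark itself, time.perf_counter() is generally a better clock for this than time.time(), since it is monotonic and has higher resolution. A small helper along these lines (the helper name is my own) could replace the repeated timing code:

```python
import time

def benchmark(label, iterable):
    """Time one full pass over an iterable using a monotonic clock."""
    start = time.perf_counter()
    count = sum(1 for _ in iterable)
    elapsed = time.perf_counter() - start
    print('%s: %d lines in %.3f seconds' % (label, count, elapsed))
    return elapsed
```

For example: benchmark('Python', open(testfile, 'r', errors='replace')).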