19

Python's trace module will let you run a script while printing each line of code as it is executed, in both the script itself and all imported modules, like so:

 python -m trace --trace myscript.py

Is there a way to do the same thing, but only print the top-level calls, i.e. only print the lines in myscript.py as they are run?

I am trying to debug an abort trap failure and I can't figure out where it's dying. Unfortunately, using the full --trace takes forever - the script normally takes 2-3 minutes to run, and the full trace has been going for hours.

keflavich
  • try `python -m trace --listfuncs --trackcalls myscript.py`? – luoluo Aug 12 '15 at 12:33
  • luoluo - that at least completed in finite time, but it still left me with a mess of a traceback to dig through. And, strangely, for the script itself, it didn't print all lines, just the imports. – keflavich Aug 12 '15 at 13:07
  • Take a look at this: [http://pymotw.com/2/trace/index.html#module-trace](http://pymotw.com/2/trace/index.html#module-trace) – luoluo Aug 12 '15 at 13:09
  • Don't you get a traceback from the exception? – Roland Smith Aug 17 '15 at 21:01
  • Roland Smith - no, abort traps do not result in tracebacks. The interpreter crashes, as far as I can tell. – keflavich Aug 18 '15 at 05:24
  • I believe the original question could also be phrased as "is there an equivalent flag for the python interpreter to the -x flag in bash". – spieden Aug 16 '16 at 22:47

6 Answers

10

I stumbled into this problem and found grep to be a quick and dirty solution:

python -m trace --trace my_script.py | grep my_script.py

My script runs in finite time though. This probably won't work well for more complex scripts.
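If your script takes a long time before it dies, a line-buffered variant (this is a sketch assuming GNU grep) lets the matching lines appear while the script is still running; the abort message itself still reaches the terminal, since only stdout is piped into grep:

python -m trace --trace my_script.py | grep --line-buffered my_script.py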

rizard
2

Maybe lptrace can be useful for you.
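lptrace attaches to an already-running Python process, strace-style, and prints the functions it is executing, which helps when the failure only appears after the script has been running for a while. A rough sketch of an invocation, with a placeholder PID and the -p flag taken from the project's README (verify against your installed version):

sudo lptrace -p 12345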

amirouche
2

The currently accepted answer is good enough when a single file runs all the code, but it is insufficient when a whole project of files is involved. One way to trace multiple files is to list them all by hand, in the following manner:

python -m trace --trace src/main.py | grep "main.py\|file2.py"

For small projects this may suffice, but for big projects with many files, listing every necessary file name by hand becomes tedious and error-prone. Ideally, you would instead construct a regex for your file names and use find to list all the files you need to monitor.

Following this idea, I constructed a regex that matches all file names that do not start with an underscore (to avoid matching __init__.py files), contain no upper-case letters in either the file name or the folder names, and have the .py extension. To list all the files under an src folder that match those criteria, the command is:

find src/ -type f -regextype sed -regex '\([a-z_]\+/\)\+[a-z][a-z_]\+.py'

Now, to use those files in grep, we just need to print them with a backslash and a pipe between them. To print the names in that structure, use the following command:

find src/ -type f -regextype sed -regex '\([a-z_]\+/\)\+[a-z][a-z_]\+.py' -printf '%f\\|' | awk '{ print substr($0, 1, length($0)-2) }'

Lastly, we combine everything into a single command that traces the script and shows only the lines from the matched files:

python -m trace --trace src/main.py | grep "`find src/ -type f -regextype sed -regex '\([a-z_]\+/\)\+[a-z][a-z_]\+.py' -printf '%f\\|' | awk '{ print substr($0, 1, length($0)-2) }'`"

This can be used to match filenames in specific folders of the project, and the regex can be modified to account for any other naming conventions among the files you want to monitor.
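If assembling the grep pattern gets unwieldy, another option (a minimal sketch using the standard library's trace API rather than the command line, with a hypothetical src.main entry point) is to ignore the interpreter's own directories so that only your project's files are traced:

import sys
import trace

# Hypothetical entry point; replace src.main/main with your project's own.
from src.main import main

# trace=1, count=0: print lines as they execute, skip coverage counting.
# ignoredirs skips everything under the interpreter prefixes (the standard
# library and, usually, site-packages), so only the project's files appear.
tracer = trace.Trace(trace=1, count=0,
                     ignoredirs=[sys.prefix, sys.exec_prefix])
tracer.runfunc(main)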

0

It's not exactly what you want, but you may consider using py.test with "-s", which prevents py.test from capturing the output of your print statements... So you can put some print statements here and there for each function you have in your script, and create a dummy test which just executes your script as usual... You may then see where it failed.
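A minimal sketch of such a dummy test, assuming your script is importable as a module named myscript with a main() entry point (both names are hypothetical; adjust to your script):

# test_myscript.py -- run with:  py.test -s test_myscript.py
import myscript  # hypothetical module name: your script, importable

def test_run_script():
    # -s stops py.test from capturing stdout, so the print statements you
    # sprinkled through the script show up as it runs.
    myscript.main()  # hypothetical entry point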

Richard
0

If you don't get a traceback, you can use the technique called bisection.

Edit your main function or script body and put an exit(1) call approximately in the middle. This is the first bisection.

Execute your script. If it reaches your exit, you know the fault is in the second half. If not, it's in the first half.

Move the exit to the middle of the first half, or the middle of the second half, and try again.

With each cycle, you can narrow the location of the fault down by half of the remaining code.

If you've narrowed it down to one of your own functions, bisect that.
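For illustration, one bisection step might look like the sketch below; the step_* functions are hypothetical stand-ins for your script's real code:

import sys

def step_one(): pass   # hypothetical stand-ins for the real code
def step_two(): pass
def step_three(): pass
def step_four(): pass

def main():
    step_one()    # first half
    step_two()
    # If this message appears, the crash is somewhere below this line;
    # if the script aborts before printing it, the crash is in the first half.
    sys.exit("bisection checkpoint reached")
    step_three()  # second half
    step_four()

if __name__ == "__main__":
    main()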

Roland Smith
  • While this technique is generally useful, it is extremely tedious and fails completely if there's a for loop and the failure does not happen in the first iteration of the loop. In my view, `trace` is supposed to be the easy way to do this *without* dropping `print` or `exit` statements everywhere. – keflavich Aug 18 '15 at 05:23
0

You may use a decorator to decorate all the functions of your module dynamically (no need to edit each function). You just need to paste these lines at the end of your script:

def dump_args(func):
    """This decorator dumps out the arguments passed to a function before calling it"""
    # Python 2 attributes: func_code/func_name (use __code__/__name__ on Python 3)
    argnames = func.func_code.co_varnames[:func.func_code.co_argcount]
    fname = func.func_name
    def echo_func(*args, **kwargs):
        # Print the named, extra positional and keyword arguments of the call
        print fname, "(", ', '.join(
            '%s=%r' % entry
            for entry in zip(argnames, args[:len(argnames)])
                         + [("args", list(args[len(argnames):]))]
                         + [("kwargs", kwargs)]) + ")"
        # Call the original function so the script still runs normally
        return func(*args, **kwargs)
    return echo_func

### Decorate all the module-level functions defined above
import types
for k, v in globals().items():
    if isinstance(v, types.FunctionType):
        globals()[k] = dump_args(v)
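For illustration, assuming the script defines a function like the hypothetical add below somewhere before the pasted snippet, a call made after the decoration loop has run prints the call before executing it:

def add(a, b, scale=1):
    return (a + b) * scale

# ...the snippet above is pasted here, at the end of the script...

result = add(2, 3, scale=10)
# prints something like:  add ( a=2, b=3, args=[], kwargs={'scale': 10})
# and result is still 50, because the decorator calls the original function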


Reference: the code comes from these two answers:
http://stackoverflow.com/questions/8951787/defining-python-decorators-for-a-complete-module
http://stackoverflow.com/questions/6200270/decorator-to-print-function-call-details-parameters-names-and-effective-values
Richard
  • Interesting idea, but doesn't this necessitate that the script runs to completion before it can work, since it's using `globals` to acquire the function names? – keflavich Aug 21 '15 at 07:35
  • I don't know, we should test it; I don't see why it would. In fact, the test I made included a numpy call that was crashing Python, and I got both the print of the function call that executed and the trace of the numpy error that crashed Python. This crashes numpy: `import numpy` then `def crash_python_with_numpy(): txtInputs = numpy.empty((7, 12), dtype=object); txtInputs[:, :] = ''` (Ref: http://stackoverflow.com/questions/5340739/filling-numpy-array-using-crashes-python) – Richard Aug 21 '15 at 13:27