I write a lot of CLI tools in Python. Most of my tools have something like this to get their arguments:
import argparse

parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
                    help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
                    const=sum, default=max,
                    help='sum the integers (default: find the max)')
args = parser.parse_args()
I write code that is half data science (we're bioinformaticians). This args bit normally lives in "main.py" and gets passed to some sort of "run experiment" function/method, which will often use a multiprocessing Pool to break the task apart (by handing work off to other functions/classes). So most of the arguments from the command line need to be passed to the run function and then on to the new processes. This architecture is not up for debate, for various reasons.
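For concreteness, a minimal sketch of the kind of layout I mean (run_experiment, process_chunk and the specific parameters are placeholders, not my real code):

import argparse
from functools import partial
from multiprocessing import Pool

def process_chunk(chunk, threshold, window_size):
    # Worker that runs in a separate process; it needs several of the CLI arguments.
    return [x for x in chunk if x >= threshold][:window_size]

def run_experiment(data, threshold, window_size, n_procs):
    # Bind the CLI-derived parameters so each worker process receives them.
    worker = partial(process_chunk, threshold=threshold, window_size=window_size)
    with Pool(n_procs) as pool:
        return pool.map(worker, data)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--threshold', type=int, default=0)
    parser.add_argument('--window-size', type=int, default=10)
    parser.add_argument('--n-procs', type=int, default=2)
    args = parser.parse_args()

    data = [[1, 5, 3], [7, 2, 9]]  # stand-in for the real input
    print(run_experiment(data, args.threshold, args.window_size, args.n_procs))

if __name__ == '__main__':
    main()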
I'm cognizant of this guidance: https://python-docs.readthedocs.io/en/latest/writing/style.html
BAD

def make_complex(*args):
    x, y = args
    return dict(**locals())

GOOD

def make_complex(x, y):
    return {'x': x, 'y': y}
My tools often have 10-20 parameters stored in args from argparse. So the question is: should I pass them around packaged up as args, or unpack them and pass them individually? If I explicitly pass each of them to each function/class where it is meant to end up, I end up with a lot of redundant code (at least one function will have a massive parameter list that is identical to args). Conversely, if I just pass args around, it goes against the zen...
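To make the trade-off concrete, a hedged sketch of the two options (the function and parameter names are invented for illustration):

import argparse

# Option 1: hand the whole Namespace down the call chain.
def run_experiment_packed(args):
    align_packed(args)  # every callee digs the values it needs out of args

def align_packed(args):
    print(args.threshold, args.window_size)

# Option 2: unpack once in main() and pass explicit parameters.
def run_experiment_unpacked(threshold, window_size):
    align_unpacked(threshold, window_size)  # signatures restate much of args

def align_unpacked(threshold, window_size):
    print(threshold, window_size)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--threshold', type=int, default=0)
    parser.add_argument('--window-size', type=int, default=10)
    args = parser.parse_args()

    run_experiment_packed(args)
    run_experiment_unpacked(args.threshold, args.window_size)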