So - I love JupyterLab ... it's great that you can build up a large, complicated process step by step.
However, once I get it working, I find I always have to either
- add and parse arguments
- put it in a loop
because I almost always step through and get it working with one example, but then need to run it against a million instances - or turn it into a tool where you can point it at a database or directory or whatever.
And while you can export it as a Python file, wrap it all in a function, and then add your argument parsing or your loop - suddenly you've lost the notebook element of it, and you can't easily go back.
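For concreteness, this is roughly the boilerplate the export turns into (a minimal sketch - `process_one` and the argument names are made up for illustration):

```python
# Roughly what my exported notebooks end up looking like.
# process_one() and the argument names are made up for illustration.
import argparse
from pathlib import Path


def process_one(path: Path) -> None:
    # ...all the cells from the notebook, flattened into one function...
    print(f"processing {path}")


def main() -> None:
    parser = argparse.ArgumentParser(description="Run the pipeline over many inputs")
    parser.add_argument("input_dir", type=Path, help="directory of instances to process")
    args = parser.parse_args()

    # the loop that never existed in the notebook
    for path in sorted(args.input_dir.iterdir()):
        process_one(path)


if __name__ == "__main__":
    main()
```

And at that point it's a script, not a notebook.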
I'm just wondering if anyone's come up with a technique to achieve both - i.e. keep a large notebook split into steps, but then somehow run the whole thing with different sets of arguments, possibly millions of times - ideally without losing JupyterLab. Sort of like putting a for loop across the whole thing, or having some kind of "go to cell" mechanism...
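To make the "for loop across the whole notebook" idea concrete, here's roughly what I'm picturing - sketched against papermill's `execute_notebook`, since that's the closest API shape I can think of (I haven't verified it's the right tool for this; the notebook would need a cell tagged "parameters" for the injected values to land anywhere, and `pipeline.ipynb` / `instance_path` are made up for illustration):

```python
# Sketch of "a for loop across the whole notebook", using papermill.
# The notebook filename and the instance_path parameter are hypothetical;
# papermill injects parameters into a cell tagged "parameters".
import papermill as pm

instances = ["a.csv", "b.csv", "c.csv"]  # imagine a million of these

for i, instance in enumerate(instances):
    pm.execute_notebook(
        "pipeline.ipynb",                # the notebook I debugged cell by cell
        f"runs/pipeline_{i:06d}.ipynb",  # each run keeps its own executed copy
        parameters={"instance_path": instance},
    )
```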
Or maybe there's some completely different way of solving the underlying problem that I've never thought of.