I would like to run a Kedro pipeline with different inputs and save the results to an output folder, where both the input and output paths are provided on the command line.
I saw that kedro.config.TemplatedConfigLoader can be used to pass new variables into a templated catalog, but that way I can only hard-code the globals_dict values in the hooks, as shown in the Kedro documentation.
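For reference, this is roughly what that documented approach looks like in my hooks.py (I am assuming a 0.17-era project where the config loader is registered through the register_config_loader hook; the concrete paths are just placeholders):

```python
# hooks.py -- the documented approach: globals_dict is fixed at registration time
from typing import Iterable

from kedro.config import TemplatedConfigLoader
from kedro.framework.hooks import hook_impl


class ProjectHooks:
    @hook_impl
    def register_config_loader(self, conf_paths: Iterable[str]) -> TemplatedConfigLoader:
        return TemplatedConfigLoader(
            conf_paths,
            globals_pattern="*globals.yml",
            globals_dict={
                # baked in at registration time -- I cannot change these per run
                "input_path": "data/01_raw/some_input.csv",
                "output_path": "data/07_model_output/some_output.csv",
            },
        )
```

This works, but the paths are fixed before every run.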
Ideally I would like to be able to run something like this:
kedro run --pipeline="my_pipeline" --input="path_to_input_1" --output="path_to_output_1"
kedro run --pipeline="my_pipeline" --input="path_to_input_2" --output="path_to_output_2"
with a catalog like this:
input_df:
  type: pandas.CSVDataSet
  filepath: "${ input_path }"
  load_args:
    sep: "\t"
    index_col: 0
  save_args:
    index: True
    encoding: "utf-8"

output_df:
  type: pandas.CSVDataSet
  filepath: "${ output_path }"
  load_args:
    sep: "\t"
    index_col: 0
  save_args:
    index: True
    encoding: "utf-8"
so that the correct inputs are analysed and the results are stored under the correct output paths.
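The closest mechanism I have found is the --params flag of kedro run, whose values reach the register_config_loader hook as extra_params (in recent 0.17.x releases, as far as I can tell), so they could be forwarded into globals_dict. This is only a sketch of what I have in mind, not something I know to be idiomatic:

```python
# hooks.py -- sketch: forward runtime --params into the templated catalog
from typing import Any, Dict, Iterable

from kedro.config import TemplatedConfigLoader
from kedro.framework.hooks import hook_impl


class ProjectHooks:
    @hook_impl
    def register_config_loader(
        self, conf_paths: Iterable[str], env: str, extra_params: Dict[str, Any]
    ) -> TemplatedConfigLoader:
        # extra_params holds whatever was passed via `kedro run --params ...`;
        # using it as globals_dict should make those values available to ${ ... }
        return TemplatedConfigLoader(
            conf_paths,
            globals_pattern="*globals.yml",
            globals_dict=extra_params or {},
        )
```

which would be invoked as:

```bash
kedro run --pipeline=my_pipeline --params input_path:path_to_input_1,output_path:path_to_output_1
```

But --params is documented for pipeline parameters rather than catalog entries, so I am not sure this is the intended usage.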
What would be the Kedro way to achieve this?