[01/17/2022, 9:55 PM] czix:

[01/17/2022, 10:02 PM] datajoely:
You can handle this with `Pipeline` outputs:

```python
my_pipelines = Pipeline(
    [
        node(
            func=some_func_that_returns_dict,
            inputs=...,
            outputs="my_output",
        ),
        node(
            func=some_func_that_accepts_a_dict,
            inputs="my_output",
            ...
        ),
    ]
)
```

In this situation the `my_output` object will be a dictionary and will map to a single catalog entry, i.e. a single 'input' addressable by downstream nodes.
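A rough sketch of what this single-dict output means at runtime, in plain Python rather than Kedro's actual runner (the function bodies and data values here are made up for illustration):

```python
def some_func_that_returns_dict():
    # Illustrative node function: returns a plain dict.
    return {"key_1": [1, 2], "key_2": [3, 4]}

# With outputs="my_output", the entire dict is stored under a single
# catalog name, and downstream nodes receive the dict as one object.
catalog = {"my_output": some_func_that_returns_dict()}

def some_func_that_accepts_a_dict(d):
    # Downstream node: gets the whole dictionary and must unpack it itself.
    return sorted(d)

result = some_func_that_accepts_a_dict(catalog["my_output"])
# result == ["key_1", "key_2"]
```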
Alternatively:

```python
my_pipelines = Pipeline(
    [
        node(
            func=some_func_that_returns_dict,
            inputs=...,
            outputs={'key_1': 'catalog_1', 'key_2': 'catalog_2'},
        ),
        node(
            func=some_other_func,
            inputs='catalog_1',
            outputs=...,
        ),
    ]
)
```

In this example we map the keys of the returned dictionary to individual catalog entries or downstream inputs.

[01/17/2022, 10:13 PM] czix:
`list expected at most 1 argument, got 2`
[01/17/2022, 10:14 PM] datajoely:

[01/17/2022, 10:16 PM] czix:
It is in my `save` method, so it is my fault. But it is very hard to see from the error message:

```
kedro.io.core.DataSetError: Failed while saving data to data set CustomDataSet(filepath=/my_path/filepath.csv, load_args={}, protocol=file, save_args={'index': False}, version=Version(load=None, save='2022-01-17T21.51.04.634Z')).
list expected at most 1 argument, got 2
```
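For context, `list expected at most 1 argument, got 2` is CPython's message when `list()` is called with two positional arguments. The hard-to-read wrapping czix describes can be sketched like this (a stand-in for Kedro's `DataSetError` handling, not Kedro's actual code; `buggy_save` and `description` are hypothetical):

```python
class DataSetError(Exception):
    """Stand-in for kedro.io.core.DataSetError (sketch only)."""

def buggy_save(data):
    # list() accepts at most one positional argument, so this raises:
    # TypeError: list expected at most 1 argument, got 2
    return list(data, data)

def save(data, description="CustomDataSet(...)"):
    # Framework-style wrapping: the original TypeError text ends up
    # appended after a long dataset description, which is why it is
    # easy to miss in the traceback.
    try:
        return buggy_save(data)
    except Exception as exc:
        raise DataSetError(
            f"Failed while saving data to data set {description}.\n{exc}"
        ) from exc

try:
    save([1, 2])
except DataSetError as err:
    print(err)            # the real TypeError text is on the last line
    print(err.__cause__)  # the original exception object
```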
[01/17/2022, 10:18 PM] datajoely:
Is it in your `CustomDataSet.save()` method?

[01/17/2022, 10:22 PM] czix:

[01/17/2022, 10:23 PM] datajoely:

[01/17/2022, 10:23 PM] czix:

[01/17/2022, 10:24 PM] datajoely:
Put a `breakpoint()` in your save method.

[01/17/2022, 10:26 PM] czix:

[01/17/2022, 10:26 PM] datajoely:

[01/17/2022, 10:27 PM] czix:

[01/17/2022, 10:28 PM] datajoely:
It's a `TypeError`, and it tells you the error occurred during saving. Would that be helpful? You can look at the `exc` object at runtime.

[01/17/2022, 10:31 PM] czix:

[01/17/2022, 10:31 PM] datajoely:
The `exc` object.

[01/17/2022, 10:31 PM] czix:
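datajoely's two suggestions, a `breakpoint()` inside the save method and looking at the `exc` object at runtime, can be sketched together like this (the `CustomDataSet` body here is hypothetical, and the debugger is disabled so the sketch runs non-interactively):

```python
import os

# Disable the interactive debugger for this demonstration; remove this
# line (or leave PYTHONBREAKPOINT unset) to actually drop into pdb.
os.environ["PYTHONBREAKPOINT"] = "0"

class CustomDataSet:
    """Illustrative stand-in for the user's custom dataset."""

    def save(self, data):
        breakpoint()  # with pdb enabled, execution pauses here
        # The buggy line: list() takes at most one positional argument.
        return list(data, data)

try:
    CustomDataSet().save([1, 2])
except TypeError as exc:
    # Inspecting exc directly shows the underlying error without the
    # long DataSetError wrapper around it.
    print(type(exc).__name__, "-", exc)
```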