Do you absolutely need to use `PartitionedDataSet`...
# advanced-need-help
Do you absolutely need to use `PartitionedDataSet`? One area where Kedro could be a lot better IMO is in how it handles `PartitionedDataSet`, and how that can be read/consumed in unpartitioned form. As it stands, you can do this unpacking, as you describe, but:

1. It requires some sort of dynamic behavior to get the list of partitions.
2. An unpacking pipeline really doesn't do anything, given the data already exists in split form; you're just making another set of catalog entries pointing to the same data.

I would probably leave the dynamic behavior to when you're constructing the pipeline, like:
```python
from kedro.pipeline import Pipeline, pipeline

# NUM_ENTITIES and single_entity_pipeline come from elsewhere in your project
all_entities_pipeline = Pipeline([])
for i in range(NUM_ENTITIES):
    all_entities_pipeline += pipeline(single_entity_pipeline, namespace=f"entity{i}")
```
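For reference, here's a minimal sketch of what `single_entity_pipeline` and the namespacing could look like; the node function and dataset names are placeholders, not anything from your project:

```python
from kedro.pipeline import Pipeline, node, pipeline


def build_features(raw_data):
    # placeholder per-entity transformation
    return raw_data


# illustrative per-entity pipeline with a single node
single_entity_pipeline = Pipeline(
    [node(build_features, inputs="raw_data", outputs="features", name="build_features")]
)

# namespacing prefixes the dataset names, e.g. "raw_data" becomes "entity0.raw_data"
namespaced = pipeline(single_entity_pipeline, namespace="entity0")
```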
Still not perfect if you want a lot of catalog entries for each entity; you'd probably need to look at using templating in that case.
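If you do end up needing per-entity catalog entries, one option in more recent Kedro versions is the dataset factory syntax, which lets a single templated pattern cover every namespace. A hedged sketch, with the dataset type and filepath purely illustrative:

```yaml
# catalog.yml -- one pattern matches entity0.features, entity1.features, ...
"{namespace}.features":
  type: pandas.ParquetDataSet  # or pandas.ParquetDataset, depending on your kedro-datasets version
  filepath: data/03_primary/{namespace}/features.parquet
```

Each `entity{i}.features` output produced by the namespaced pipelines would then resolve against this one pattern instead of needing its own catalog entry.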