06/17/2022, 3:46 PM
hi there, i have a question about pipeline design. i'm working on an NLP project where i built several text processing pipelines for English text from different data sources, for example:

- [env: preprocess] fetch data from source 1
- [env: preprocess] fetch data from source 2
- [env: base] preprocess
- [env: base] NER
- [env: base] text summarization
- ...

now i'd like to scale to more languages and more data sources. my initial thought is that i may need to duplicate my base env for each language i support and manually update all the catalog/params entries myself (although they are just conventions, like prefixing with "en", "ja", "fr", etc.). is there a more "Kedro" way to accomplish this?
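to make the prefixing convention concrete, here is a minimal sketch (dataset names and filepaths are hypothetical, not from my actual catalog) of the per-language duplication i'd otherwise be doing by hand:

```python
# Sketch of the naming convention described above: each supported language
# gets its own prefixed copy of the same base catalog entries.
# Names like "raw_text" / "ner_results" are illustrative placeholders.
LANGUAGES = ["en", "ja", "fr"]
BASE_DATASETS = ["raw_text", "preprocessed_text", "ner_results", "summaries"]


def prefixed_catalog(languages: list[str], base_datasets: list[str]) -> dict:
    """Build the per-language dataset entries that would otherwise be
    duplicated manually, one set per language."""
    return {
        f"{lang}.{name}": {"filepath": f"data/{lang}/{name}.parquet"}
        for lang in languages
        for name in base_datasets
    }


catalog = prefixed_catalog(LANGUAGES, BASE_DATASETS)
# 3 languages x 4 base datasets -> 12 entries, e.g. "en.raw_text", "ja.summaries"
```

this is exactly the mechanical duplication i'm hoping Kedro has a built-in pattern for.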