06/08/2022, 6:36 AM
Hi Everyone! I'm an engineer at Weights and Biases and I'm working to integrate kedro with wandb. I'm blocked on a particular problem and wondering if there is a known solution/workaround: for a given node in a pipeline, the hooks fire in the following order:
1) before_dataset_loaded
2) after_dataset_loaded
3) before_node_run
4) after_node_run
5) before_dataset_saved
6) after_dataset_saved
I'm wondering if there is any way to change it to work in the following order:
1) before_node_run
2) before_dataset_loaded
3) after_dataset_loaded
4) before_dataset_saved
5) after_dataset_saved
6) after_node_run
Essentially, I need to encapsulate all Dataset operations such that they happen within a given node's lifecycle, not the other way around. Any tips would be greatly appreciated 🙂
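(Not from the thread: a self-contained sketch of one possible workaround under the current ordering, with no kedro dependency. Since dataset hooks fire outside the node hooks, a hooks object could buffer load events and attach them when `before_node_run` fires, and attribute saves to the node that just finished. All names here are hypothetical, not Kedro API.)

```python
# Sketch: buffer dataset events so they can be reported within a node's
# lifecycle, even though Kedro fires the dataset hooks outside the node hooks.
# Hypothetical illustration only -- real Kedro hooks are @hook_impl methods
# on a hooks class registered in settings.py.

class BufferingHooks:
    """Collects dataset events and attributes them to a node's span."""

    def __init__(self):
        self.pending_loads = []  # loads seen before before_node_run fires
        self.node_events = []    # events attributed to the current node
        self.current_node = None

    # -- dataset hooks (fire outside the node hooks in Kedro) --
    def after_dataset_loaded(self, dataset_name):
        self.pending_loads.append(dataset_name)

    def after_dataset_saved(self, dataset_name):
        # Saves fire after after_node_run, so attribute them to the
        # most recently finished node.
        self.node_events.append(("saved", dataset_name))

    # -- node hooks --
    def before_node_run(self, node_name):
        # Flush buffered loads into this node's span.
        self.node_events = [("loaded", d) for d in self.pending_loads]
        self.pending_loads = []
        self.current_node = node_name

    def after_node_run(self, node_name):
        # A wandb run opened in before_node_run could be closed here.
        pass


# Drive the hooks in Kedro's actual order for a single node:
hooks = BufferingHooks()
hooks.after_dataset_loaded("input_table")
hooks.before_node_run("train_model")
hooks.after_node_run("train_model")
hooks.after_dataset_saved("model")
print(hooks.node_events)  # [('loaded', 'input_table'), ('saved', 'model')]
```

This doesn't change the hook order, but it lets all dataset operations be *reported* inside the node's lifecycle, which may be enough for attaching them to a per-node wandb run.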


06/08/2022, 8:36 AM
Hello @Ramit! The basic answer is no, there's no way of changing this ordering. What exactly are you trying to do? There might be some way of achieving it still. Alternatively, if we added the node as an argument to the dataset loaded/saved hooks (but with the current ordering), would that help? It's something I've thought in the past would be useful, and if this is another example of a use case then there would be a bigger push for adding it.
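(Not from the thread: a tiny sketch of what the suggested change might look like. The `node` parameter is an assumption, not the dataset-hook signature at the time of this thread, and the class/method names are hypothetical; a real implementation would use `@hook_impl` and registration in `settings.py`.)

```python
# Hypothetical dataset hook that also receives the node, so a load can be
# attributed to its consuming node even with the current hook ordering.

class DatasetHooksWithNode:
    def __init__(self):
        self.events = []

    def before_dataset_loaded(self, dataset_name, node):
        # With `node` available, the event can be tagged with the node
        # that will consume this dataset.
        self.events.append((dataset_name, node))


hooks = DatasetHooksWithNode()
hooks.before_dataset_loaded("input_table", "train_model")
print(hooks.events)  # [('input_table', 'train_model')]
```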


06/08/2022, 8:54 AM
+1 for adding the node as an argument


06/08/2022, 4:01 PM
Yes! Absolutely would. DMing you some more information about this, we can continue the conversation over there 😄