# beginners-need-help
Hi All. Please help with a couple of questions I have: 1. When I create a Kedro session and run a Kedro pipeline, my Databricks job shows success even if the pipeline fails. Kedro reports all errors and halts the pipeline where it fails; however, the Databricks job does not catch the exception and still shows the job as a success. Is there some exception handling required on my end to report the failure to Databricks so that it shows the job as failed?
from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project
from pathlib import Path

metadata = bootstrap_project(Path.cwd())
with KedroSession.create(metadata.package_name) as session:
    session.run()
2. Does Kedro have pipeline templates? For example, a pipeline template for a regression or classification use case? Or do we just use
kedro pipeline create data_processing
to create a blank pipeline scaffold and add our processing code to it?
1) It absolutely should raise an exception; something else is going on here. 2) This isn't possible out of the box, but I'd be supportive of building it, so please raise a feature request!
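To expand on point 1: a Databricks job is only marked as failed when the driver process raises (or exits non-zero), so whatever wraps the Kedro run must let the exception propagate rather than swallow it. A minimal sketch of that pattern, using a generic `run` callable standing in for `session.run` (the helper name is illustrative, not a Kedro API):

```python
import logging

def run_pipeline_and_report(run):
    """Invoke a pipeline entry point (e.g. KedroSession.run) and make sure
    failures propagate to the caller. If the exception is caught and
    swallowed here, the hosting Databricks job reports success."""
    try:
        return run()
    except Exception:
        # Log for the driver output, then re-raise (or call sys.exit(1))
        # so the job is marked as failed.
        logging.exception("Pipeline run failed")
        raise

# Hypothetical usage inside a Databricks job script:
#
#     metadata = bootstrap_project(Path.cwd())
#     with KedroSession.create(metadata.package_name) as session:
#         run_pipeline_and_report(session.run)
```

If the job still shows success, the exception is likely being caught somewhere between the Kedro run and the driver exit (for example, by a surrounding try/except or a plugin hook).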
1. I'll fail the pipeline at different steps to see whether it's a problem with the mlflow plugin not reporting exceptions, or something else, and get back to you.
2. Thanks, I'll think about it and discuss internally with the team