rohan_ahire [09/08/2022, 12:33 AM]

avan-sh [09/08/2022, 12:37 AM]
conf/base/logging.yml entirely. Docs on Kedro logging might be useful: https://kedro.readthedocs.io/en/stable/logging/logging.html

rohan_ahire [09/08/2022, 12:57 AM]

rohan_ahire [09/08/2022, 5:05 PM]

rohan_ahire [09/08/2022, 5:08 PM]

rohan_ahire [09/08/2022, 6:51 PM]
endpoint       = '/api/2.0/mlflow/experiments/create'
host_creds     = <mlflow.utils.rest_utils.MlflowHostCreds object at 0x7f2203b6bbe0>
json_body      = {'name': '/mnt/files/rohan/kedro_data_science/'}
method         = 'POST'
response       = <Response [404]>
response_proto = <class 'rich.pretty.Node'>.__repr__ returned empty string
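For context on the 404 above: on a Databricks-hosted tracking server, MLflow experiment names are workspace paths, and experiments/create can come back 404 (RESOURCE_DOES_NOT_EXIST) when the name is not a valid workspace location. A hedged sketch of a sanity check; the prefix list below is an assumption, not an exhaustive rule:

```python
# Hypothetical sanity check, assuming Databricks workspace experiment names
# normally live under /Users/ or /Shared/. '/mnt/...' is a DBFS path, not a
# workspace path, which could explain the 404 from experiments/create above.
def looks_like_workspace_path(name: str) -> bool:
    return name.startswith(('/Users/', '/Shared/'))

print(looks_like_workspace_path('/mnt/files/rohan/kedro_data_science/'))  # False
print(looks_like_workspace_path('/Users/someone@example.com/kedro_exp'))  # True
```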
datajoely [09/08/2022, 6:58 PM]

Galileo-Galilei [09/08/2022, 7:00 PM]

Eliãn [09/09/2022, 12:36 PM]

noklam [09/09/2022, 12:38 PM]
kedro-airflow, which helps you create an Airflow DAG. However, you don't usually want a 1-to-1 mapping between Kedro nodes and orchestrator DAG tasks, since an orchestrator task is usually conceptually larger than a single node.

noklam [09/09/2022, 12:39 PM]

noklam [09/09/2022, 12:40 PM]

noklam
[09/09/2022, 12:41 PM]
"kedro run --pipeline a", or optionally the Python API (which kedro-airflow helps you to do).

Eliãn [09/09/2022, 12:44 PM]

Eliãn [09/09/2022, 12:44 PM]

Eliãn
[09/09/2022, 12:56 PM]
configs = {
    'products': {'schedule_interval': '@weekly'},
    'customers': {'schedule_interval': '@daily'},
}

def generate_dag(dag_id, start_date, schedule_interval, details):
    with DAG(dag_id, start_date=start_date, schedule_interval=schedule_interval) as dag:
        @task
        ...

for name, detail in configs.items():
    dag_id = f'dag_{name}'
    globals()[dag_id] = generate_dag(dag_id, ...)

@noklam something like this
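The sketch above can be run without an Airflow install by letting a plain dict stand in for the DAG object; the pattern is the same: a config dict drives a factory, and the results are registered in the module's globals() so Airflow's DAG discovery finds them. The generate_dag below is a stand-in, not Airflow's API:

```python
# Runnable sketch of the dynamic-DAG pattern, with a plain dict standing in
# for an Airflow DAG object (so no Airflow install is needed).
configs = {
    'products': {'schedule_interval': '@weekly'},
    'customers': {'schedule_interval': '@daily'},
}

def generate_dag(dag_id, schedule_interval):
    # In real Airflow code this would build a DAG via
    # `with DAG(...) as dag: ...` and return `dag`.
    return {'dag_id': dag_id, 'schedule_interval': schedule_interval}

for name, detail in configs.items():
    dag_id = f'dag_{name}'
    # Airflow discovers DAGs by scanning a module's globals, hence the
    # registration under a generated name.
    globals()[dag_id] = generate_dag(dag_id, detail['schedule_interval'])

print(dag_products)  # {'dag_id': 'dag_products', 'schedule_interval': '@weekly'}
```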
noklam [09/09/2022, 2:16 PM]
kedro run --params=<config>, or just use the Python API with KedroSession.create(extra_params=<params>) and then do a session.run(pipeline_name=<some_pipeline>)
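Both options reduce to parametrising one entry point per orchestrator task. A hypothetical helper (build_kedro_command is not part of Kedro) that builds the CLI invocation an Airflow task could shell out to; the `key:value` params format is an assumption, so check `kedro run --help` for your version's exact syntax:

```python
# Hypothetical helper: build the `kedro run` invocation an orchestrator task
# could execute, e.g. via subprocess or a BashOperator.
def build_kedro_command(pipeline, params=None):
    cmd = ['kedro', 'run', '--pipeline', pipeline]
    if params:
        # Assumed `key:value` comma-separated format; verify against
        # `kedro run --help` for the Kedro version in use.
        cmd += ['--params', ','.join(f'{k}:{v}' for k, v in params.items())]
    return cmd

print(build_kedro_command('a', {'run_date': '2022-09-09'}))
# ['kedro', 'run', '--pipeline', 'a', '--params', 'run_date:2022-09-09']
```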
Eliãn [09/09/2022, 2:22 PM]

Eliãn [09/09/2022, 2:22 PM]

rohan_ahire
[09/09/2022, 7:07 PM]
from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project
from pathlib import Path

metadata = bootstrap_project(Path.cwd())
with KedroSession.create(metadata.package_name) as session:
    session.run()

2. Does Kedro have pipeline templates? For example, a pipeline template for a regression or classification use case? Or do we just use kedro pipeline create data_processing to create a sample template and add processing code to it?

sri
[09/09/2022, 7:10 PM]

datajoely [09/09/2022, 7:13 PM]
before_pipeline_run hook!
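The suggestion refers to Kedro's before_pipeline_run hook spec. A minimal sketch of the shape such a hook takes; the real @hook_impl decorator comes from kedro.framework.hooks, and here a no-op stand-in is used so the snippet runs without Kedro installed:

```python
# No-op stand-in for kedro.framework.hooks.hook_impl, so this runs without
# Kedro installed; in a real project, import hook_impl from Kedro instead.
def hook_impl(func):
    return func

class RunLoggingHooks:
    @hook_impl
    def before_pipeline_run(self, run_params, pipeline, catalog):
        # Kedro calls this just before a pipeline run; run_params carries
        # run metadata such as the pipeline name and any extra params.
        print(f"About to run pipeline with run_params: {run_params}")

# Direct call for illustration; Kedro invokes this via its hook manager.
RunLoggingHooks().before_pipeline_run({'pipeline_name': 'a'}, None, None)
```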
sri [09/09/2022, 7:59 PM]

datajoely [09/09/2022, 8:00 PM]

datajoely [09/09/2022, 8:01 PM]

rohan_ahire [09/09/2022, 8:19 PM]

datajoely [09/09/2022, 8:22 PM]

rohan_ahire [09/09/2022, 8:24 PM]

datajoely [09/09/2022, 8:26 PM]

waylonwalker [09/13/2022, 9:36 PM]