Kedro hooks do not work inside packaged pipeline for serving #404
Hi @Lasica, you are right that hooks do not run when the pipeline is stored as an mlflow model, but this is intended. In fact, I think most hooks should not be packaged with the pipeline, e.g.:
That said, I understand some hooks may be useful (e.g. a custom hook which validates your data between nodes). I think the solution may be to enable passing hooks manually to the CLI, so you can specify the ones you want to store with the model. Can you tell me more about your use case?
Hmm, I store configuration as params in kedro and they are packaged together with the other dependencies, so I'm not sure the first point is entirely valid, since such configuration is saved together with the model. My use case: I'm trying to add model monitoring (in particular, data drift detection) to my inference pipeline used for serving, exposing those metrics to Prometheus to be scraped (or pushed to a pushgateway). The monitoring class needs some initialization with a reference dataset, the pushgateway address, and other values. In the inference pipeline I've added a node at the end that passes the data to the monitoring metrics calculation. I wanted to use a hook for the initialization, but since hooks are not called, I needed a workaround: adding the initialization arguments to the monitoring node and checking on every call whether the monitoring class singleton is already initialized. I want this hook to run only for the inference pipeline.
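The workaround described above, a node that checks on every call whether a monitoring singleton is already initialized, might look like this minimal sketch. All names here (`DriftMonitor`, `monitoring_node`, the init arguments) are hypothetical, not from the actual project:

```python
class DriftMonitor:
    """Hypothetical monitoring singleton, lazily initialized on first use."""

    _instance = None

    def __init__(self, reference_data, pushgateway_url):
        self.reference_data = reference_data
        self.pushgateway_url = pushgateway_url

    @classmethod
    def get(cls, reference_data=None, pushgateway_url=None):
        # Create the singleton on the first call; later calls ignore the args.
        if cls._instance is None:
            cls._instance = cls(reference_data, pushgateway_url)
        return cls._instance


def monitoring_node(predictions, reference_data, pushgateway_url):
    """Final inference-pipeline node: ensures the monitor exists, then records metrics."""
    monitor = DriftMonitor.get(reference_data, pushgateway_url)
    # ... compute drift metrics against monitor.reference_data and push them
    # to monitor.pushgateway_url here ...
    return predictions
```

This illustrates the awkwardness being discussed: the init arguments must be threaded through the node signature instead of being handled once by a `before_pipeline_run` hook.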
I understand the use case, but I think it is very bad design to hide such processing inside the mlflow model. The mlflow model is supposed to be a pure data pipeline which takes a single input (a dataframe / an array) and returns a single output (a series / dataframe / array with predictions). The main drawback of hiding your monitoring node inside the model is that it makes your model very dependent on the environment where you deploy it. If your model runs in an environment that has no access to Prometheus, has credentials issues, has timeout issues, should run on a cluster with restricted permissions... you will need to recreate an mlflow model for each situation, whereas the model is supposed to be completely independent. IMHO, the correct way to do this is the following:
This is described in this tutorial and I think it makes sense for your use case. What do you think?
You make valid points about the flexibility of this inference-pipeline model with co-located monitoring. I wasn't aware it was that easy to re-use a pipeline as a node in that way. Thanks for that! I think this monitoring solution has some pros and cons: what you suggest does not remove the environment dependency, you will just have two pipelines, and one of them will still be environment-dependent. Whether that dependency is an issue depends on how fail-safe the monitoring is written. This is, however, a bit off the subject of including hooks in the pipeline, as I still think there should be a way to include some of them in the model. I see that you look at this functionality of packaging pipelines as models in light of an ideal model abstraction that does just calculations. However, this functionality gives us extra flexibility to add some utilities around the model and an easy way to set up serving, if one wishes. I think hooks are part of that flexibility and it would be nice to have a way to whitelist them.
It's true, but actually this is still more flexible and robust to change. Say you want to change your monitoring node:
➡️ If I understand correctly, it is quite clear that the modelify command is not really what you need. You just want to leverage mlflow capabilities to create an API automatically from a kedro pipeline. I understand this, and AFAIK there is no real better solution for now. I hope that kedro-serving will close the gap one day, but it's not even close for now. In an ideal world, you'd be able to
My opinion here is the following:
If you want to help through a PR:
Thanks for the well thought response.
That's all I was looking for as confirmation. I'll look at the code and try to start a PR later this/next week.
I'm not sure I understand this. Wouldn't the monitoring pipeline also need to be packaged as a model to be served (for example by MLflow)?
I never answered this last question, sorry. My idea is that an mlflow model should only be used to serve the inference pipeline of an ML model, not all pipelines. There are many limitations, and it does not make sense to retrain a full model each time you change the code of a pipeline. There are better plugins to serve other pipelines, like monitoring pipelines, e.g. kedro-boot. I'll document this.
Description
When I package a kedro pipeline for serving with modelify, my kedro hooks no longer work.
Context
I was using hooks to initialize some structures and had to make workarounds because the hooks are not called. I'd like hooks such as `before_pipeline_run` or `after_catalog_creation` to be called once when the server starts serving.
https://kedro.readthedocs.io/en/stable/hooks/introduction.html
Steps to Reproduce
1. Run `kedro mlflow modelify ...` to package the pipeline for serving.
2. Run `mlflow models serve ...`.
I may publish a minimal code example later to check it out.
Expected Result
Hooks should run.
Actual Result
They do not.
Your Environment
Include as many relevant details about the environment in which you experienced the bug:
- `kedro` and `kedro-mlflow` versions used (`pip show kedro` and `pip show kedro-mlflow`): 0.18.4, 0.11.7
- Python version used (`python -V`): 3.10.9
- Does the bug also happen with the last version on master? Yes.