
Add registered custom objects inside pickled model file #19867

Open · wants to merge 2 commits into master

Conversation

@mthiboust commented Jun 17, 2024

See the discussion in #19832

Context

Currently, Keras serialization does not store the definition of custom objects (see here). As a result, pickled models that contain such objects are not self-sufficient: you cannot load them in a separate vanilla session and call model.predict().

In practice, if you have two isolated codebases for training and inference, you always need to update the inference codebase along with the training one whenever you use a custom object. This prevents quick experimentation with new models when the inference codebase has long CI/CD pipelines. Storing the definition inside the pickle file would make it possible to decouple training from inference.
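
For illustration, a minimal sketch of the failure mode; the MyScale layer and the package name are hypothetical:

    import pickle

    import numpy as np

    import keras


    @keras.saving.register_keras_serializable(package="my_package")
    class MyScale(keras.layers.Layer):
        # Trivial custom layer: multiplies its inputs by two.
        def call(self, inputs):
            return inputs * 2.0


    model = keras.Sequential([keras.Input(shape=(4,)), MyScale()])
    blob = pickle.dumps(model)

    # In the same session this round-trips fine, because MyScale is
    # registered here:
    restored = pickle.loads(blob)
    restored.predict(np.zeros((1, 4)))

    # In a fresh "vanilla" session where MyScale was never defined or
    # registered, pickle.loads(blob) fails: the pickle stores only a
    # reference to the custom object, not its definition.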

Suggested change

Modify the __reduce__() method of KerasSaveable to store the definition of registered custom Keras objects inside the pickle file when a Keras model is serialized via pickle-like libraries (e.g. pickle, cloudpickle, dill). Registered custom Keras objects are pickled with the standard pickle module to avoid extra dependencies, but this inherits pickle's limitations (e.g. lambda functions cannot be pickled); it is a trade-off.

Add a simple test with a registered custom layer. Ideally, this test would load the pickled model in a new subprocess to fully verify the desired behavior, but that would complicate the test. Or maybe there is another way to test this? I can investigate this point if you want.
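
A rough sketch of what such a subprocess-based test might look like (names are illustrative, and it assumes the custom object's definition actually travels with the pickle, as this PR intends):

    import pickle
    import subprocess
    import sys

    import keras


    def test_unpickle_in_clean_subprocess(tmp_path):
        @keras.saving.register_keras_serializable(package="tests")
        class MyScale(keras.layers.Layer):
            def call(self, inputs):
                return inputs * 2.0

        model = keras.Sequential([keras.Input(shape=(4,)), MyScale()])
        path = tmp_path / "model.pkl"
        with open(path, "wb") as f:
            pickle.dump(model, f)

        # Unpickle in a fresh interpreter that never defined or registered
        # MyScale; this only passes if the custom object's definition is
        # stored inside the pickle itself.
        script = f"import pickle; pickle.load(open({str(path)!r}, 'rb'))"
        result = subprocess.run(
            [sys.executable, "-c", script], capture_output=True, text=True
        )
        assert result.returncode == 0, result.stderr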

The review comments below are anchored on this diff:

    return (
        self._unpickle_model,
    -   (buf,),
    +   (model_buf, custom_objects_buf),
    )

Member commented:

This would break existing pickles, no?

@mthiboust (Author) commented Jun 17, 2024:

Good catch, I had not seen this because I use cloudpickle instead of pickle.

I confirm that it breaks compatibility for users who dumped their previous models with the standard pickle library. Models previously dumped with cloudpickle do not have this compatibility issue, because the definition of the unpickling callable is stored directly inside the file (not just a reference).

Handling the compatibility issue will probably not be straightforward. I'm not sure it is worth the effort.

@mthiboust (Author) commented Jun 18, 2024:

The compatibility issue could be handled by doing something like:

    @classmethod
    def _unpickle_model(cls, model_buf, *args):
        import pickle

        import keras.src.saving.saving_lib as saving_lib

        # pickle is not safe regardless of what you do.

        if len(args) == 0:
            # Old pickles: only the model buffer was stored.
            return saving_lib._load_model_from_fileobj(
                model_buf,
                custom_objects=None,
                compile=True,
                safe_mode=False,
            )

        else:
            # New pickles: a second buffer holds the pickled custom objects.
            custom_objects_buf = args[0]
            custom_objects = pickle.load(custom_objects_buf)
            return saving_lib._load_model_from_fileobj(
                model_buf,
                custom_objects=custom_objects,
                compile=True,
                safe_mode=False,
            )

But after experimenting with pickle instead of cloudpickle, I do not think this PR adds value if we stick with the standard pickle: pickling the custom object only stores a reference to it, which means the object definition still needs to be available at inference time.

Two options:

  • We pickle the custom objects with cloudpickle only if cloudpickle is importable (without explicitly adding it as a dependency); otherwise, we do not pickle the custom objects (see the sketch after this list).
  • We close the PR now.
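
A minimal sketch of what the first option could look like inside KerasSaveable.__reduce__(); this is not the PR's actual code, and the buffer names and the call to keras.saving.get_custom_objects() are assumptions:

    import io


    # Method of KerasSaveable (shown standalone for brevity).
    def __reduce__(self):
        import keras
        import keras.src.saving.saving_lib as saving_lib

        model_buf = io.BytesIO()
        saving_lib._save_model_to_fileobj(self, model_buf, "h5")

        try:
            # Optional dependency: only used if it is importable.
            import cloudpickle

            custom_objects_buf = io.BytesIO()
            # cloudpickle serializes definitions by value, so registered
            # custom objects become loadable in a vanilla session.
            cloudpickle.dump(
                keras.saving.get_custom_objects(), custom_objects_buf
            )
            return (self._unpickle_model, (model_buf, custom_objects_buf))
        except ImportError:
            # Fall back to the old single-argument form, so pickles stay
            # loadable without cloudpickle.
            return (self._unpickle_model, (model_buf,))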

Member commented:

> We pickle the custom objects with cloudpickle only if cloudpickle is importable (without explicitly adding it as a dependency). Otherwise, we do not pickle the custom objects.

Maybe let's try that? For backwards-compat support, if we have multiple arguments, we might want to use a dict instead of a tuple for better readability and maintainability.

@mthiboust (Author) commented Jun 18, 2024:

New version with the following changes:

  • Use cloudpickle in the __reduce__() method but keep pickle in the unpickling class method, so that cloudpickle is only needed in the training environment (possible because cloudpickle.load is just an alias of pickle.load).
  • Use a dict instead of a tuple for the arguments of the unpickling function (see the sketch after this list).
  • Better isolate tests by adding a pytest fixture that cleans the custom objects global dict before each test runs (the failing tests in the previous commit were due to custom objects registered, but not accessible, by other tests).
  • Add a test running in a fake __main__ module (so that cloudpickle can serialize the objects by value) and delete the custom object variable before loading the pickle, to verify that its definition is fully stored inside the dumped pickle.
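
For illustration, the dict-based unpickling entry point with backward compatibility might look roughly like this (a sketch with assumed key names, not the committed code):

    import pickle


    # Method of KerasSaveable (shown standalone for brevity).
    @classmethod
    def _unpickle_model(cls, data, *args):
        import keras.src.saving.saving_lib as saving_lib

        # Backwards compat: pre-change pickles pass a single BytesIO
        # buffer; new pickles pass a dict of named buffers.
        if not isinstance(data, dict):
            data = {"model_buf": data}

        custom_objects = None
        if "custom_objects_buf" in data:
            # Plain pickle.load suffices here: cloudpickle.load is just an
            # alias of pickle.load, so cloudpickle is only needed at dump
            # time in the training environment.
            custom_objects = pickle.load(data["custom_objects_buf"])

        return saving_lib._load_model_from_fileobj(
            data["model_buf"],
            custom_objects=custom_objects,
            compile=True,
            safe_mode=False,
        )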

Todo:

@gbaned (Collaborator) commented Jul 26, 2024:

Hi @mthiboust Can you please resolve the conflicts? Thank you!

@mthiboust (Author) commented:

> Hi @mthiboust Can you please resolve the conflicts? Thank you!

Hello @gbaned, sure, I can dedicate some time to finalize this in the coming days. Adding complete tests would require cloudpickle as a dependency. But from what I understand, dev dependencies are not separated from main dependencies, am I right?

Which of these two options should I choose?

  • A/ Only provide partial test coverage without adding the cloudpickle dependency
  • B/ Add the cloudpickle dependency for better test coverage (do you confirm that I should modify this requirements file?)

@gbaned (Collaborator) commented Aug 7, 2024:

> Hi @mthiboust Can you please resolve the conflicts? Thank you!
>
> Hello @gbaned, sure, I can dedicate some time to finalize this in the coming days. Adding complete tests would require cloudpickle as a dependency. But from what I understand, dev dependencies are not separated from main dependencies, am I right?
>
> Which of these two options should I choose?
>
>   • A/ Only provide partial test coverage without adding the cloudpickle dependency
>   • B/ Add the cloudpickle dependency for better test coverage (do you confirm that I should modify this requirements file?)

Hi @fchollet Can you please assist with the above comment from @mthiboust? Thank you!
