
Can I load and use trained tfdf model in Java? #81

Closed
YujieW0201 opened this issue Feb 16, 2022 · 10 comments
Labels
question Further information is requested

Comments

@YujieW0201

Hi, I trained my TF-DF model in Python and want to use it in Java for production.
For a conventional NN model, we can load the model from a SavedModelBundle and get predictions:

import java.nio.FloatBuffer;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

final int NUM_PREDICTIONS = 1;  // number of output values to fetch

try (SavedModelBundle b = SavedModelBundle.load("/tmp/model", "serve")) {
    // Create the session from the bundle.
    Session sess = b.session();
    // Create an input tensor with a single value, 2.0f.
    Tensor x = Tensor.create(
        new long[] {NUM_PREDICTIONS},
        FloatBuffer.wrap(new float[] {2.0f})
    );

    // Run the model and copy the output into a float array.
    float[] y = sess.runner()
        .feed("x", x)
        .fetch("y")
        .run()
        .get(0)
        .copyTo(new float[NUM_PREDICTIONS]);

    // Print out the result.
    System.out.println(y[0]);
}

I'm currently trying to use my TF-DF model and wondering whether the current TF-DF supports loading and inference in Java. Will the model's graph and other useful info be loaded? I'm still trying to load it and wondering if anyone has a clue. Thank you so much!

@achoum
Collaborator

achoum commented Feb 19, 2022

Hi YujieW0201,

We have not yet experimented with using TF-DF in Java, so take what I say with a grain of salt.

It seems that TF-Java relies on the core TF runtime; in other words, TF-Java uses the same implementation as TF-Python. This means that TF-DF models can likely run in TF-Java seamlessly.

The most likely issue (if any) will be configuring TF-Java to use the TF-DF custom ops (which are compatible with the TF runtime). In TF-Python, this is done automatically when importing the TF-DF library. In TF-Java, you might have to read the documentation or ask the TF-Java people.

If you hit an error, don't hesitate to post it here. Maybe we can help figure it out.

Alternatively, depending on your needs and those of others, it could be interesting to implement the TF-DF inference code directly in Java. A simple version of this code would be small (probably less than 20 lines of code; see the C++ example), along the lines of the sketch below.
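To give a sense of scale, here is a hypothetical sketch of such hand-rolled inference in Java, restricted to forests whose internal nodes only test a numerical feature against a threshold. The Node layout and Forest class are illustrative assumptions, not the actual Yggdrasil/TF-DF model format:

// Hypothetical flat representation of one decision tree: each node either
// tests a numerical feature against a threshold or is a leaf with a value.
final class Node {
    final int featureIndex;  // -1 marks a leaf
    final float threshold;   // go left when features[featureIndex] < threshold
    final int left, right;   // child indices into the tree's node array
    final float value;       // prediction stored at a leaf

    Node(int featureIndex, float threshold, int left, int right, float value) {
        this.featureIndex = featureIndex;
        this.threshold = threshold;
        this.left = left;
        this.right = right;
        this.value = value;
    }
}

final class Forest {
    private final Node[][] trees;  // one node array per tree, root at index 0

    Forest(Node[][] trees) { this.trees = trees; }

    // Sum the leaf values over all trees, as in gradient boosted trees regression.
    float predict(float[] features) {
        float sum = 0f;
        for (Node[] tree : trees) {
            Node node = tree[0];
            while (node.featureIndex >= 0) {
                node = (features[node.featureIndex] < node.threshold)
                        ? tree[node.left]
                        : tree[node.right];
            }
            sum += node.value;
        }
        return sum;
    }
}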

@achoum added the question (Further information is requested) label on Feb 19, 2022
@YujieW0201
Author

Thanks @achoum for your reply! I tried to use Java's SavedModelBundle.load("model_path") to load the model but got the error message: Op type not registered 'SimpleMLCreateModelResource' in binary. Make sure the Op and Kernel are registered in the binary running in this process.

I also contacted the TF-Java people and tried the method suggested in tensorflow/java#419 (comment), but it doesn't work.

Regarding your second suggestion: if I cannot load the model successfully in Java, how can I run inference on it?
Thank you!

@achoum
Collaborator

achoum commented Feb 24, 2022

Hi YujieW0201,

@Craigacp suggested using TensorFlow.loadLibrary. Could you share what happens when it does not work?

I think the solution should look like this:

1. Download the TF-DF pip package corresponding to your OS from https://pypi.org/project/tensorflow-decision-forests/#files. For example, if you are working on Linux, download any version that ends in manylinux_2_12_x86_64.manylinux2010_x86_64.whl. Alternatively, the file should already be on your computer if you installed TF-DF in Python.

2. Open the .whl file. It is a classical Zip archive; for example, use 7zip.

3. Extract the file /tensorflow_decision_forests/tensorflow/ops/inference/inference.so from the archive. This is the "custom op library" that needs to be loaded in Java.

4. In your Java code, before calling SavedModelBundle.load, run the following, where path_to_library_so is the path to the .so file you extracted earlier (a combined sketch follows below):

   byte[] opList = TensorFlow.loadLibrary(path_to_library_so);
   assertTrue(opList.length > 0);
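Putting these steps together, a minimal sketch of the load sequence might look like the following. The paths are placeholders, and the return value of TensorFlow.loadLibrary is ignored here since its type differs between the legacy and current TF-Java APIs:

import org.tensorflow.SavedModelBundle;
import org.tensorflow.TensorFlow;

public class TfdfLoadExample {
    public static void main(String[] args) {
        // Register the TF-DF custom ops (SimpleML*) before touching the model.
        // The path points at the inference.so extracted from the TF-DF wheel.
        TensorFlow.loadLibrary("/path/to/inference.so");

        // With the custom ops registered, the TF-DF SavedModel should load
        // like any other SavedModel.
        try (SavedModelBundle bundle = SavedModelBundle.load("/path/to/tfdf_model", "serve")) {
            System.out.println("TF-DF model loaded.");
        }
    }
}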

For the second solution, you (or a contributor) would have to port the inference code to Java. It is likely more work, though.

@YujieW0201
Author

Thank you @achoum and @Craigacp for your detailed instructions!
I'm able to load the custom op library now.

However, when I try to load my model using SavedModelBundle.load, it throws an exception:
TFInvalidArgumentException: No shape inference function exists for op 'SimpleMLLoadModelFromPathWithHandle', did you forget to define it?
Should I load another library or other ops before loading the model?
A more detailed error log:

external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:107] Reading meta graph with tags { serve }
external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:148] Reading SavedModel debug info (if present) from: /Users/
external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:210] Restoring SavedModel bundle.
external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:194] Running initialization op on SavedModel bundle at path: /Users/
[INFO kernel.cc:1153] Loading model from path
[INFO quick_scorer_extended.cc:824] The binary was compiled without AVX2 support, but your CPU supports it. Enable it for faster model inference.
[INFO abstract_model.cc:1063] Engine "GradientBoostedTreesQuickScorerExtended" built
[INFO kernel.cc:1001] Use fast generic engine
 I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 1266572 microseconds.
Exception in thread "main" org.tensorflow.exceptions.TFInvalidArgumentException: No shape inference function exists for op 'SimpleMLLoadModelFromPathWithHandle', did you forget to define it?
	at org.tensorflow.internal.c_api.AbstractTF_Status.throwExceptionIfNotOK(AbstractTF_Status.java:87)
	at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:623)
	at org.tensorflow.SavedModelBundle.access$000(SavedModelBundle.java:67)
	at org.tensorflow.SavedModelBundle$Loader.load(SavedModelBundle.java:97)
	at org.tensorflow.SavedModelBundle.load(SavedModelBundle.java:357)

Thanks!

@nicolas-kim-reddit

Hey @YujieW0201, just following up to see if you've found a solution or workaround for the above issue?

@Craigacp

This looks like an issue in TF-DF. The TF C API requires that ops have shape inference functions, but this requirement is disabled by the TF Python API (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/ops.py#L3211). As that call isn't part of the public TF C API, we can't use it in TF-Java, and so we need full shape inference for all ops.

@achoum
Collaborator

achoum commented Jun 22, 2022

All the TF-DF ops have been augmented with shape inference functions (example). Could you try again? :)

@MatthewZholud

Thank you, @achoum!

#81 (comment) helped me!

@achoum
Collaborator

achoum commented Jul 12, 2022

Awesome :)

@binalj-spotify

If I were running this on Dataflow, how would I do the same?
