Deeplab/Segmentation: Error: The dtype of dict['ImageTensor'] provided in model.execute(dict) must be int32, but was float32 #3723

Closed
un1crom opened this issue Aug 4, 2020 · 7 comments

un1crom commented Aug 4, 2020

TensorFlow.js version

"@tensorflow/tfjs-node": "^2.0.1"
"@tensorflow/tfjs": "^2.0.1"

Browser version

Node.js and Express.js, no browser

Describe the problem or feature request

When using a locally downloaded and served model, the index.js in the npm/node module requires me to cast the tensor to int32, or it errors out about float32 no matter what I pass it.

Code to reproduce the bug / link to feature request

model loading

const loadModelDeepLab = async () => {
  const modelName = 'pascal';   // set to your preferred model, either `pascal`, `cityscapes` or `ade20k`
  const quantizationBytes = 2;  // either 1, 2 or 4
  const url = 'https://tfhub.dev/tensorflow/tfjs-model/deeplab/pascal/1/default/1/model.json?tfjs-format=file';
  return await deeplab.load({modelUrl: url, base: modelName, quantizationBytes});
};

info on the tensor

Tensor {
  kept: false,
  isDisposedInternal: false,
  shape: [ 1000, 1000, 3 ],
  dtype: 'int32',
  size: 3000000,
  strides: [ 3000, 3 ],
  dataId: {},
  id: 2,
  rankType: '3',
  scopeId: 0 }

the error

"(node:35252) UnhandledPromiseRejectionWarning: Error: The dtype of dict['ImageTensor'] provided in model.execute(dict) must be int32, but was float32
"

I manually changed the offending code to the following, and everything runs:

    SemanticSegmentation.prototype.predict = function (input) {
        var _this = this;
        return tf.tidy(function () {
            var data = utils_1.toInputTensor(input);
            // cast the preprocessed (float32) tensor back to int32 before execute
            return tf.squeeze(_this.model.execute(tf.cast(data, "int32")));
        });
    };
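
An alternative that avoids editing node_modules is to wrap execute on the loaded graph model from application code. A minimal sketch, assuming (as the patched code above suggests) that the segmenter exposes its underlying graph model as .model and takes a single input tensor:

const tf = require('@tensorflow/tfjs-node');

const loadPatchedModel = async () => {
  const model = await loadModelDeepLab();  // from the snippet above
  // Wrap the graph model's execute so every input is cast to int32 first;
  // the package's own predict/segment path then goes through this wrapper.
  const rawExecute = model.model.execute.bind(model.model);
  model.model.execute = (input, outputs) => rawExecute(tf.cast(input, 'int32'), outputs);
  return model;
};
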
@rthadur self-assigned this Aug 4, 2020
@rthadur added the type:support label Aug 4, 2020

rthadur commented Aug 4, 2020

@un1crom please check the related Stack Overflow issue here

@google-ml-butler

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.

@google-ml-butler

Closing as stale. Please @mention us if this needs more attention.

@parthlathiya2697

I got the same error.

My scenario:
I fine-tuned a pre-trained model from the TensorFlow model zoo using transfer learning with the TensorFlow API, exported it as a SavedModel (model.pb file), and converted it into TF.js format (model.json and sharded .bin files).

When I tried running this model.json in JavaScript (web), it gave the error below:

Uncaught (in promise) Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32

When I tried someone else's working converted model (model.json and sharded .bin files) in my JavaScript (web) code, it worked.

Conclusion:
There is something wrong with my converted model. I converted it using tensorflowjs_converter, and my original model (model.pb) works accurately in Python.

I'm still trying to convert my model.pb file with different tensorflowjs_converter versions, as it seems to be a converter versioning issue.
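
A quick way to check what the converted model actually declares, before digging into converter versions, is to load it and log its inputs; a minimal sketch, with MODEL_URL standing in for wherever the converted model.json is served:

const tf = require('@tensorflow/tfjs');

const inspectInputs = async () => {
  const model = await tf.loadGraphModel(MODEL_URL);  // MODEL_URL is a placeholder
  // Each entry reports the name, shape, and dtype the graph expects,
  // e.g. 'input_tensor: int32'.
  console.log(model.inputs.map((i) => `${i.name}: ${i.dtype}`));
};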

@maxbauer

Same problem for me when trying to run the TensorFlow SavedModel Import Demo with a custom SSD MobileNet v2 model.

@parthlathiya42 it worked for me with the following cast:

const integerTensor = imageToPredict.toInt();
return this.model.executeAsync(integerTensor); // execute() doesn't work for me here
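
For context: execute() throws on graphs that contain dynamic control-flow ops, and the SSD post-processing graph (NonMaxSuppression and friends) falls into that category, which is why executeAsync() is needed here.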

@dubrovin-sudo

System information

  • Used a stock example script provided in TensorFlow.js;

  • TensorFlow.js installed from (npm or script link):

!pip install tensorflowjs

  • TensorFlow.js version 4.0.0;

  • Google Chrome Version 107.0.5304.87 (Official Build) (64-bit);

Describe the current behavior
After the conversion, the input data type changed from float32 to int32.

Describe the expected behavior
The input data type should remain float32.

Standalone code to reproduce the issue

!tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --signature_name=serving_default \
    --saved_model_tags=serve \
    /content/gdrive/MyDrive/customTF2/data/inference_graph/saved_model \

Other info / logs

TFmodel.signatures['serving_default'].output_dtypes
{'detection_anchor_indices': tf.float32, 'raw_detection_scores': tf.float32, 'detection_classes': tf.float32, 
'num_detections': tf.float32, 'raw_detection_boxes': tf.float32, 'detection_boxes': tf.float32, 
'detection_multiclass_scores': tf.float32, 'detection_scores': tf.float32}

Colab link
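
Note that the dict above lists only output dtypes; the error is about the input side of the signature. As far as I can tell, TF2 Object Detection API exports declare input_tensor as an integer image tensor, which TF.js surfaces as int32, so casting at inference time sidesteps the mismatch. A rough sketch, with MODEL_URL as a placeholder for the converted model.json:

const tf = require('@tensorflow/tfjs');

const detect = async (imageTensor) => {
  const model = await tf.loadGraphModel(MODEL_URL);  // MODEL_URL is a placeholder
  // Add a batch dimension and cast to the int32 dtype the signature declares.
  const batched = tf.expandDims(tf.cast(imageTensor, 'int32'), 0);
  // The detection post-processing graph contains control-flow ops, so use executeAsync.
  return model.executeAsync(batched);
};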

hozeis commented May 27, 2023

Has there been a fix or workaround for this issue? I created a model using transfer learning from the model zoo Mask R-CNN model.
