
Output object shape mismatch, interpreter returned output of shape: [1, 10] while shape of output provided as argument in run is: [1, 10, 4] #134

Open
iammohit1311 opened this issue Aug 21, 2023 · 18 comments

Comments

@iammohit1311

iammohit1311 commented Aug 21, 2023

I have trained a custom model on SSD MobileNet V2 FPNLite 320 x 320. This is the error I'm getting:

Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 10] while shape of output provided as argument in run is: [1, 10, 4]
E/flutter (27268): #0 Tensor._duplicateList (package:tflite_flutter/src/tensor.dart:232:7)
E/flutter (27268): #1 Tensor.copyTo (package:tflite_flutter/src/tensor.dart:202:7)
E/flutter (27268): #2 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:183:24)
E/flutter (27268): #3 _DetectorServer._runInference (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:363:19)
E/flutter (27268): #4 _DetectorServer.analyseImage (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:284:20)
E/flutter (27268): #5 _DetectorServer._convertCameraImage.<anonymous closure> (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:246:25)
E/flutter (27268): <asynchronous suspension>

@PaulTR
Collaborator

PaulTR commented Aug 21, 2023

Hey! So this is for image classification, yeah? Since it's a custom model, would you mind running it through this tool that we have to see if it works there? If it does, can you share a link to your code on GitHub (unless you're using an unmodified version of one of the examples), as well as a link to the model, so we can investigate further?

Thanks!

@iammohit1311
Author

Hi! Thank you for replying. This is for live object detection. I have tried running my model on Android (Kotlin) and it works perfectly. I have tested it in a Jupyter Notebook as well, and it works well. I used the unmodified version of this repository's live_object_detection_ssd_mobilenet example. The model is not supposed to be open source, so please drop your email and I will mail it to you instead!

@iammohit1311 iammohit1311 changed the title Output object shape mismatch, interpreter returned output of shape: [1, 10] while shape of output provided as argument in run is: [1, 1, 4] Output object shape mismatch, interpreter returned output of shape: [1, 10] while shape of output provided as argument in run is: [1, 10, 4] Aug 22, 2023
@arbile26

arbile26 commented Aug 22, 2023

@PaulTR I used a model trained with Google Teachable Machine in the live_object_detection_ssd_mobilenet example and got the same error!
Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 2] while shape of output provided as argument in run is: [1, 10, 4]
How do I configure the example to run a Google Teachable Machine trained model?

@PaulTR
Collaborator

PaulTR commented Aug 22, 2023

Ah. I have no idea, I've never used Google Teachable Machine, so I don't know what it's outputting that's different. When you say you've run this on Android, are you using the Task Library or direct TensorFlow Lite inference? Task Library does some things under the hood to figure out shapes and work correctly that might not translate as well here without knowing exactly what your model does. Unfortunately that might fall a bit outside of the scope of what we can help with here.

@iammohit1311
Author

@PaulTR I am using direct TensorFlow Lite inference. Also, my model was simply trained using TensorFlow 2.x.

@Somaru-chan

I have this issue as well, trained a custom model using Teachable Machine and applied it to my Flutter app using the live object detection model provided in this repo. When I run the app on my physical iPhone, this error pops up:
"[VERBOSE-2:dart_isolate.cc(1098)] Unhandled exception:
Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 36] while shape of output provided as argument in run is: [1, 10, 4]"

Any idea around this? Perhaps I should change the output arguments to match the custom-trained model?

@mohitsriv23

I have this issue as well, trained a custom model using Teachable Machine and applied it to my Flutter app using the live object detection model provided in this repo. When I run the app on my physical iPhone, this error pops up: "[VERBOSE-2:dart_isolate.cc(1098)] Unhandled exception: Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 36] while shape of output provided as argument in run is: [1, 10, 4]"

Any idea around this? Perhaps I should change the output arguments to match the custom-trained model?

Did you find anything? I would recommend testing your model in an Android (Kotlin) project just to ensure it runs properly there. If it does, then it's an issue with this repository, since it is still under development.

@lurongshuang

I have this issue as well.
Attachment: 归档.zip ("Archive.zip")

@ysumiit005

Same error when I used my coin model made from a YouTube tutorial.

@UsamaHameed1

Does anyone know how to fix this problem? I am using the EfficientDet-Lite0 model. I have tested the model with the MediaPipe example project code and it works there, but with the Flutter plugin I get this error:

[ERROR:flutter/runtime/dart_isolate.cc(1097)] Unhandled exception:
E/flutter (10232): Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 25] while shape of output provided as argument in run is: [1, 25, 4]
E/flutter (10232): #0 Tensor._duplicateList (package:tflite_flutter/src/tensor.dart:233:7)
E/flutter (10232): #1 Tensor.copyTo (package:tflite_flutter/src/tensor.dart:203:7)
E/flutter (10232): #2 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:183:24)
E/flutter (10232): #3 _DetectorServer._runInference (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:363:19)
E/flutter (10232): #4 _DetectorServer.analyseImage (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:280:20)
E/flutter (10232): #5 _DetectorServer._convertCameraImage.<anonymous closure> (package:live_object_detection_ssd_mobilenet/service/detector_service.dart:242:25)
E/flutter (10232): <asynchronous suspension>

The input tensor to EfficientDet is 320x320.

@Somaru-chan

I have this issue as well, trained a custom model using Teachable Machine and applied it to my Flutter app using the live object detection model provided in this repo. When I run the app on my physical iPhone, this error pops up: "[VERBOSE-2:dart_isolate.cc(1098)] Unhandled exception: Invalid argument(s): Output object shape mismatch, interpreter returned output of shape: [1, 36] while shape of output provided as argument in run is: [1, 10, 4]"
Any idea around this? Perhaps I should change the output arguments to match the custom-trained model?

Did you find anything? I would recommend testing your model in an Android (Kotlin) project just to ensure it runs properly there. If it does, then it's an issue with this repository, since it is still under development.

Sorry, I hadn't realized there was a comment on my comment. I don't have access to a physical Android device at the moment ^^", but I appreciate the suggestion.

Any updates regarding this issue or its root cause?

@rickgrotavi

rickgrotavi commented Dec 20, 2023

Same problem. I use a custom model based on SSD MobileNet V2. I checked it in MediaPipe Studio and it works there.

@ysumiit005

ysumiit005 commented Dec 20, 2023

I solved it. In your code you will have the output object (the tensors) in the form of arrays.

The order of those arrays should be the same as the model's output tensor order. Try and test various orders.
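
For reference, a minimal Dart sketch (assuming the tflite_flutter Interpreter API; the asset path is illustrative, not from the example) that prints each output tensor's index, name, and shape, so the output map can be ordered to match the model instead of guessing:

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

/// Prints the shape of every output tensor so the output map passed to
/// runForMultipleInputs can be built in the same order as the model.
Future<void> inspectModelOutputs() async {
  // NOTE: asset path is an assumption; point this at your own .tflite file.
  final interpreter =
      await Interpreter.fromAsset('assets/models/ssd_mobilenet.tflite');

  final outputTensors = interpreter.getOutputTensors();
  for (var i = 0; i < outputTensors.length; i++) {
    final t = outputTensors[i];
    // e.g. "output 0: StatefulPartitionedCall:1 -> [1, 10, 4]"
    print('output $i: ${t.name} -> ${t.shape}');
  }

  interpreter.close();
}
```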

@occasionalcode

@ysumiit005 hello, what did you do to solve it?

@Rikerz08

Hi @ysumiit005! We are experiencing the same problem, is it okay if you share how you solved it? Thank you!

@Rikerz08

Hi @iammohit1311! I am currently experiencing the same problem, did you already manage to solve it?

@rickgrotavi

In the live object detection example, in detector_service.dart, we have:

final output = {
  0: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
  1: [List<num>.filled(10, 0)],
  2: [List<num>.filled(10, 0)],
  3: [0.0],
};

It doesn't always fit. In my case, it worked with:

final output = {
  0: [List<num>.filled(10, 0)],
  1: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
  2: [0.0],
  3: [List<num>.filled(10, 0)],
};

If you look at the model's page on Kaggle, under Outputs, you will find the right order.
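
Building on that, here is a hedged sketch that allocates the output buffers directly from the interpreter's reported shapes, so the map order always matches the model regardless of which tensor (boxes, classes, scores, count) comes first. The helpers buildBuffer and buildOutputMap are illustrative names, not part of the example; verify behavior against your tflite_flutter version.

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

/// Builds a zero-filled nested list matching a tensor shape,
/// e.g. [1, 10, 4] -> one list of 10 lists of 4 zeros.
Object buildBuffer(List<int> shape) {
  if (shape.length == 1) return List<num>.filled(shape[0], 0);
  return List.generate(shape[0], (_) => buildBuffer(shape.sublist(1)));
}

/// Allocates the output map from the model's own output tensor shapes,
/// so the structure passed to runForMultipleInputs matches the model.
Map<int, Object> buildOutputMap(Interpreter interpreter) {
  final tensors = interpreter.getOutputTensors();
  return {
    for (var i = 0; i < tensors.length; i++) i: buildBuffer(tensors[i].shape),
  };
}
```

With that, the hard-coded map could in principle be replaced by `final output = buildOutputMap(interpreter);` before calling `runForMultipleInputs`, though this is a sketch rather than a drop-in fix for detector_service.dart.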

@occasionalcode

@rickgrotavi Hi, I tried your code and it returns this error. Do you know how to fix this problem?

E/FlutterJNI(  837): android.graphics.ImageDecoder$DecodeException: Failed to create image decoder with message 'unimplemented'Input contained an error.
E/FlutterJNI(  837): 	at android.graphics.ImageDecoder.nCreate(Native Method)
E/FlutterJNI(  837): 	at android.graphics.ImageDecoder.-$$Nest$smnCreate(Unknown Source:0)
E/FlutterJNI(  837): 	at android.graphics.ImageDecoder$ByteBufferSource.createImageDecoder(ImageDecoder.java:242)
E/FlutterJNI(  837): 	at android.graphics.ImageDecoder.decodeBitmapImpl(ImageDecoder.java:2015)
E/FlutterJNI(  837): 	at android.graphics.ImageDecoder.decodeBitmap(ImageDecoder.java:2008)
E/FlutterJNI(  837): 	at io.flutter.embedding.engine.FlutterJNI.decodeImage(FlutterJNI.java:558)
D/BLASTBufferQueue(  837): [SurfaceView[com.example.object_detection_ssd_mobilenet/com.example.object_detection_ssd_mobilenet.MainActivity]@0#5](f:0,a:0) onFrameAvailable the first frame is available
