import frozen graph with error "Input 0 of node X was passed float from Y:0 incompatible with expected float_ref." #77
Hi, thanks for your solution to the issue! I got the same error during freeze_graph; after changing AssignSub to Sub and AssignAdd to Add, it works!
@xwyf05 I assume there should be no difference in the result between AssignSub and Sub. You might want to validate whether the difference is caused by that change; for example, you can pick nodes before and after AssignSub and compare their outputs to see whether the difference is introduced by this op.
Hi: Thanks for your suggestion. But according to the results, there is a difference between AssignSub and Sub. I could not pin down exactly what the difference is, but it seems Sub can't compute the moving_mean and moving_variance correctly. Based on that assumption, I rewrote the parameters, and it works! Thanks for your reply.
@xwyf05 This is interesting. There is a difference in behavior between Sub and AssignSub: after an AssignSub node runs, its ref input tensor gets updated in place. I guess the input of AssignSub in your graph might be used by multiple consumer nodes (which could execute in parallel); once AssignSub is triggered to run by the first consumer node, the input ref data is changed, so the other consumer nodes then see the updated value.
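To make that behavioral difference concrete, here is a minimal TF 1.x sketch (the variable name and values are made up for illustration): tf.assign_sub mutates the variable it reads, while tf.subtract only produces a new tensor.

```python
import tensorflow as tf

# Minimal sketch (TF 1.x): assign_sub updates the variable in place,
# subtract only returns a new tensor and leaves the variable untouched.
v = tf.Variable(10.0, name="v")
plain_sub = tf.subtract(v, 3.0)     # evaluates to 7.0, v stays 10.0
assign_sub = tf.assign_sub(v, 3.0)  # evaluates to 7.0 and sets v to 7.0 as a side effect

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(plain_sub), sess.run(v))   # 7.0 10.0 -> v unchanged
    print(sess.run(assign_sub))               # 7.0
    print(sess.run(v))                        # 7.0 -> v was mutated
```

The runs are issued separately on purpose: fetching the variable and the assign op in the same sess.run would make the read/update order undefined, which is exactly the kind of race described above.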
I was wondering, how would you replace the 'Assign' op in this case? So far I'm doing the conversion as follows, but I run into an error: for node in graph_def.node:
@blagodurov Is there still an Assign op in your frozen graph? Which node is "ref"ed, a const or a placeholder? (I assume there are no variables left, right?) If that's the case, is it possible to use "Identity" to replace it?
Hi @blagodurov: I also ran into the same problem as you. Have you solved it yet? When I do the replacement I get:
ValueError: NodeDef mentions attr 'validate_shape' not in Op<name=Identity; signature=input:T -> output:T; attr=T:type>; NodeDef: InceptionV3/Conv2d_1a_3x3/weights/Assign = Identity[T=DT_FLOAT, _class=["loc:@InceptionV3/Conv2d_1a_3x3/weights"], validate_shape=true, _device="/device:CPU:0"](InceptionV3/Conv2d_1a_3x3/weights, InceptionV3/Conv2d_1a_3x3/weights/Initializer/truncated_normal). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)
Thanks.
Please note the error message "Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary." This error is usually caused by different TF versions being used to generate the graph and to load the graph.
Thanks for your quick reply. I will try.
@OneDirection9 Thanks for your reply.
Today I revisited this problem with the error "Input 0 of node bilm/Assign was passed float from bilm/Variable:0 incompatible with expected float_ref." This time, "bilm/Assign"'s first input is a ref. So @blagodurov's code nicely changed the Assign op to Identity and removed the useless input. I think this would be needed as part of the pre-processing before conversion.
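For reference, a minimal sketch of that kind of pre-processing (not the exact code from this thread; the file path is a placeholder, and which of the two inputs to keep can vary with your graph): rewrite Assign nodes as Identity, drop the attrs Identity does not accept, and keep a single input.

```python
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:  # path is a placeholder
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Assign':
        node.op = 'Identity'
        # Identity does not accept these attrs, so drop them to avoid the
        # "NodeDef mentions attr 'validate_shape' not in Op" error mentioned above.
        if 'use_locking' in node.attr:
            del node.attr['use_locking']
        if 'validate_shape' in node.attr:
            del node.attr['validate_shape']
        # Identity takes a single input; Assign has two (ref, value).
        # Keeping the value input matches the recipe shared in this thread,
        # but verify against your own graph which input carries the frozen data.
        if len(node.input) == 2:
            node.input[0] = node.input[1]
            del node.input[1]
```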
Hi guys, the following simple graph compiles correctly on my computer: graph_unique = tf.Graph() with graph_unique.as_default(): When I try to write it as a stack of two graphs, though, I get an error. adr_big = '' # please add a valid address graph_small = tf.Graph() with graph_small.as_default(): with tf.Session(graph=graph_small) as sess: graph_big = tf.Graph() with graph_big.as_default(): with tf.Session(graph=graph_big) as sess: graph_together = tf.Graph() with graph_together.as_default(): TensorFlow says: Please note: everything works correctly if the definition of graph_small above is replaced with the following one: with graph_small.as_default(): Thanks a lot!
@TanCari are you having an issue using the tf2onnx tool? If not, I think you should ask in the TF repo.
Hello guys,
I've found a weird error:
So, any advice?
It seems to be a TF issue actually, so you may want to ask in the TF repo to get better help. :-)
Hi! I posted the question in the TensorFlow repository as a bug, but it got closed immediately: tensorflow/tensorflow#26346 (comment). A new, more concise version of my question is available here: https://stackoverflow.com/questions/55176530/addin-variables-upstream-to-compute-hessian-matrix
Hi @TanCari, I think it's better to list your environment, e.g. Ubuntu 16.04, TensorFlow 1.13. This can help people reproduce your issue and solve it. Thanks.
How can I modify the graph_def in a session? If I do it this way, the model saved by sess doesn't change from RefSwitch to Switch.
You might need to re-import the graph_def via tf.import_graph_def in a new session.
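Something along these lines (a sketch only; file names are placeholders): load the edited GraphDef, import it into a fresh graph, and serialize that graph, rather than expecting the original session to pick up the node changes.

```python
import tensorflow as tf

# Load the GraphDef you already edited (e.g. RefSwitch -> Switch).
graph_def = tf.GraphDef()
with tf.gfile.GFile('edited_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Import it into a brand-new graph and write that graph out.
with tf.Graph().as_default() as patched_graph:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=patched_graph) as sess:
        tf.train.write_graph(sess.graph_def, '.', 'patched_model.pb', as_text=False)
```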
In my experience, this is caused by using tf.cond in batch normalization, so using the BN from tf.layers or tc.layers is a good choice. Note that you have to use it correctly; see https://towardsdatascience.com/pitfalls-of-batch-norm-in-tensorflow-and-sanity-checks-for-training-networks-e86c207548c8
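A sketch of the pattern that article recommends (TF 1.x; the placeholder shapes, layer sizes, and optimizer here are made up): use tf.layers.batch_normalization with a training flag and make sure the UPDATE_OPS collection runs with the train step.

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 64], name='inputs')
is_training = tf.placeholder(tf.bool, name='is_training')

net = tf.layers.dense(inputs, 32)
net = tf.layers.batch_normalization(net, training=is_training)
net = tf.nn.relu(net)
loss = tf.reduce_mean(tf.square(net))

# The moving mean/variance updates are registered in UPDATE_OPS and only run
# if they are tied to the train step via a control dependency.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```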
Hi @blagodurov, I did as you said, and the .pb file was saved successfully with batch size 128.
Awesome! Thanks for your excellent and amazing code!
@blagodurov I'm getting a similar error:
What's the magic bugfix for that op? :)
ValueError: Input 0 of node Rbn1a/cond/AssignMovingAvg was passed float from Rbn1a/cond/AssignMovingAvg/Switch:1 incompatible with expected float_ref. I got the above error when I try to load the .pb file in my TensorFlow code. Thank you.
float_ref means the input is a variable. Basically, freezing the graph did not complete. We rarely see models implemented like this; an example would be a dropout op whose keep_prob comes from a variable that is never initialized.
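For completeness, a small TF 1.x sketch of fully freezing a graph so that no variable/float_ref inputs survive (the tiny example graph, node names, and output path are made up for illustration):

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Tiny example graph: a variable-backed op we want to freeze into constants.
x = tf.placeholder(tf.float32, [None, 4], name='x')
w = tf.Variable(tf.ones([4, 2]), name='w')
y = tf.matmul(x, w, name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # every variable must be initialized
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['output'])
    with tf.gfile.GFile('frozen_model.pb', 'wb') as f:  # output path is a placeholder
        f.write(frozen_graph_def.SerializeToString())
```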
@OneDirection9 @carlosgalvezp @blagodurov Hi all, I hit "fold_constants: Ignoring error Input 7 of node model/rnn/while was passed float from rnn/bias:0 incompatible with expected resource." when I use the graph transform tool on the .pb model. Can you give me some advice? Thanks.
@OneDirection9 @carlosgalvezp @blagodurov Hi all,
Thanks, I solved this problem.
Hi @ashnaeldho,
I copied the code above, but I hit an error when I run it.
Hey guys, how do I fix the same problem using Java?
There are plenty of answers already. None of them fixed it for me, so I'd like to share how I fixed it. Hopefully I save someone else's time, even though my solution isn't that fancy. My error:
The command I used:
The code and model I used: https://github.com/swook/GazeML (ELG model)
My system:
Solution:
I know it's supposed to work with TensorFlow 1.15, but in my situation this is an acceptable solution. (I tried removing the batch norm layer entirely before realizing the solution above, but then I ran into another problem. If you are in the same situation as me but must use TF 1.15, you're better off following the other people's instructions.)
@flyzyh Could you share the code you used and the error message you got?
Note: I'm creating this issue for anybody who might come across a similar issue in the future.
When I tried to convert a frozen DCGAN inference model (trained with https://github.com/carpedm20/DCGAN-tensorflow), the error below was thrown:
This is caused by the fact that the AssignSub node's first input is expected to be a float_ref, but after freeze_graph.py processing it is a plain float. There are discussions at davidsandberg/facenet#161 and https://www.bountysource.com/issues/36614355-unable-to-import-frozen-graph-with-batchnorm.
To fix this, we need to do extra work on the frozen graph; at a minimum, change AssignSub to Sub in the graph. See the code below as an example:
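The original snippet is not reproduced here; the following is a minimal sketch of the kind of rewrite described (the file path is a placeholder, and the exact node/attr handling may need adjusting for your graph): rename AssignSub/AssignAdd to Sub/Add and RefSwitch to Switch before converting.

```python
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:  # path is a placeholder
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'RefSwitch':
        node.op = 'Switch'
        for i, name in enumerate(node.input):
            if 'moving_' in name:
                # Read the frozen value instead of the variable ref.
                node.input[i] = name + '/read'
    elif node.op == 'AssignSub':
        node.op = 'Sub'
        if 'use_locking' in node.attr:
            del node.attr['use_locking']  # Sub has no use_locking attr
    elif node.op == 'AssignAdd':
        node.op = 'Add'
        if 'use_locking' in node.attr:
            del node.attr['use_locking']

# The patched graph should now import without the float_ref error.
tf.import_graph_def(graph_def, name='')
```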