
InvalidArgumentError (see above for traceback): TypeError: can't pickle dict_values objects #4856

Closed
programowalny opened this issue Jul 21, 2018 · 13 comments

@programowalny

System information

What is the top-level directory of the model you are using: object_detection
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Window 10
TensorFlow installed from (source or binary): conda installation
TensorFlow version (use command below): 1.8
Bazel version (if compiling from source): None
CUDA/cuDNN version: 9.0
GPU model and memory: GeForce GTX 1050TI (Laptop)
Exact command to reproduce: None

Describe the problem:

Following the instructions, I want to train on my images in 25 categories with the Faster-RCNN-Inception-V2-COCO model. After installation and configuration I typed

python model_main.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
in my conda virtual environment. After a couple of minutes I get the error presented above, and I don't know how to deal with this problem. In addition, before the error I get these warnings:

WARNING:tensorflow:Ignoring ground truth with image id 1389737172 since it was previously added
WARNING:tensorflow:Ignoring detection with image id 1389737172 since it was previously added

But I don't know whether they are important.



Caused by op 'PyFunc_1', defined at:
  File "model_main.py", line 101, in <module>
    tf.app.run()
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "model_main.py", line 97, in main
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 447, in train_and_evaluate
    return executor.run()
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 531, in run
    return self.run_local()
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 681, in run_local
    eval_result, export_results = evaluator.evaluate_and_export()
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 886, in evaluate_and_export
    hooks=self._eval_spec.hooks)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 453, in evaluate
    input_fn, hooks, checkpoint_path)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 1348, in _evaluate_build_graph
    features, labels, model_fn_lib.ModeKeys.EVAL, self.config)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 1107, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "C:\Users\damia\Desktop\Magisterka\SSD\models\research\object_detection\model_lib.py", line 383, in model_fn
    include_metrics_per_category=eval_config.include_metrics_per_category)
  File "C:\Users\damia\Desktop\Magisterka\SSD\models\research\object_detection\eval_util.py", line 629, in get_eval_metric_ops_for_evaluators
    input_data_fields.groundtruth_is_crowd)))
  File "C:\Users\damia\Desktop\Magisterka\SSD\models\research\object_detection\metrics\coco_evaluation.py", line 349, in get_estimator_eval_metric_ops
    first_value_op = tf.py_func(first_value_func, [], tf.float32)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\script_ops.py", line 384, in py_func
    func=func, inp=inp, Tout=Tout, stateful=stateful, eager=False, name=name)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\script_ops.py", line 227, in _internal_py_func
    input=inp, token=token, Tout=Tout, name=name)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_script_ops.py", line 130, in py_func
    "PyFunc", input=input, token=token, Tout=Tout, name=name)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 3414, in create_op
    op_def=op_def)
  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1740, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): TypeError: can't pickle dict_values objects
Traceback (most recent call last):

  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\script_ops.py", line 158, in __call__
    ret = func(*args)

  File "C:\Users\damia\Desktop\Magisterka\SSD\models\research\object_detection\metrics\coco_evaluation.py", line 339, in first_value_func
    self._metrics = self.evaluate()

  File "C:\Users\damia\Desktop\Magisterka\SSD\models\research\object_detection\metrics\coco_evaluation.py", line 193, in evaluate
    self._detection_boxes_list)

  File "C:\Users\damia\Desktop\Magisterka\SSD\models\research\object_detection\metrics\coco_tools.py", line 118, in LoadAnnotations
    results.dataset['categories'] = copy.deepcopy(self.dataset['categories'])

  File "C:\Users\damia\Anaconda3\envs\tensorflow\lib\copy.py", line 174, in deepcopy
    rv = reductor(4)

TypeError: can't pickle dict_values objects


         [[Node: PyFunc_1 = PyFunc[Tin=[], Tout=[DT_FLOAT], token="pyfunc_3", _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
         [[Node: SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_4/NonMaxSuppressionV3/_3221 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_4536_...pressionV3", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]


@manipopopo

manipopopo commented Jul 22, 2018

You could change the call to eval_util.get_eval_metric_ops_for_evaluators in model_lib.py from

eval_metric_ops = eval_util.get_eval_metric_ops_for_evaluators(
          eval_metrics,
          category_index.values(),
          eval_dict,
          include_metrics_per_category=eval_config.include_metrics_per_category)

to

eval_metric_ops = eval_util.get_eval_metric_ops_for_evaluators(
          eval_metrics,
          list(category_index.values()),
          eval_dict,
          include_metrics_per_category=eval_config.include_metrics_per_category)

duplicate: #4780
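The underlying issue is that in Python 3, dict.values() returns a view object that cannot be pickled, and copy.deepcopy relies on the pickle protocol for such objects, so the deepcopy inside coco_tools.LoadAnnotations fails. A minimal sketch reproducing the failure and the fix (the category_index contents here are illustrative stand-ins):

```python
import copy

# Illustrative stand-in for the Object Detection API's category_index mapping.
category_index = {1: {"id": 1, "name": "cat"}, 2: {"id": 2, "name": "dog"}}

# deepcopy of a dict_values view fails because views don't support pickling.
try:
    copy.deepcopy(category_index.values())
except TypeError as e:
    print("deepcopy failed:", e)

# Wrapping the view in list() yields a plain list, which deepcopies fine --
# this is exactly what the list(category_index.values()) change does.
categories = copy.deepcopy(list(category_index.values()))
print(categories)
```

In Python 2, dict.values() returned a plain list, which is why the original code only breaks under Python 3.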

@programowalny
Author

Thank you, it works.

@swg209

swg209 commented Sep 18, 2018

@programowalny
Hi, I'd like to know whether the warning is resolved by the fix above. I applied it, but I still get this warning, although without the TypeError: can't pickle dict_values objects:
WARNING:tensorflow:Ignoring ground truth with image id 65457264 since it was previously added
WARNING:tensorflow:Ignoring detection with image id 65457264 since it was previously added

@programowalny
Author

I still had this warning, but I read that you can ignore it.

@swg209

swg209 commented Sep 18, 2018

@programowalny But with this warning my model can't get an evaluation result. Do you have any idea how to solve it?

@programowalny
Author

programowalny commented Sep 18, 2018

What do you mean by "model can't get evaluation result"? After the training process, do you get errors, or is your loss too high? I still have a problem with my loss: it decreases for the first five hundred steps, but after that it grows, and my final model can't recognize any object. Do you think these warnings could be the cause of it? I'm now changing my environment, including the OS. Tomorrow I'll let you know about the results.

@swg209

swg209 commented Sep 18, 2018

I use model_main.py to train a model on my dataset, and I set it to save a checkpoint every 5 minutes and trigger an evaluation of the model, so I can choose the best model. But the screen keeps printing this warning and doesn't show the evaluation result.

@programowalny
Author

You mean this kind of evaluation result?

[screenshot of evaluation output]

@swg209

swg209 commented Sep 18, 2018

Do you use the legacy train.py to train the model? The evaluation result looks like this:
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /home/swg/tfmodels/research/object_detection/demo/trains/model.ckpt-9168
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [1/5]
INFO:tensorflow:Evaluation [2/5]
INFO:tensorflow:Evaluation [3/5]
INFO:tensorflow:Evaluation [4/5]
INFO:tensorflow:Evaluation [5/5]
creating index...
index created!
INFO:tensorflow:Loading and preparing annotation results...
INFO:tensorflow:DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.01s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.452
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.203
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.452
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.500
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.500
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.500
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.500

@programowalny
Author

OK, I tried both model_main.py and train.py, and I must say I always get output: with model_main.py I get average precision, and with train.py a loss value. If you didn't get average precision, it is not caused by warnings like this:
WARNING:tensorflow:Ignoring ground truth with image id 65457264 since it was previously added

@Victorsoukhov

I have the same problem, and it was solved by setting
num_examples: 1
in the eval_config section of the pipeline config file, as pointed out in the documentation
(https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md) and as you can see in the proto file
(https://github.com/tensorflow/models/blob/master/research/object_detection/protos/eval.proto):
num_examples is the batch size of the evaluation.
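For reference, the relevant fragment of the pipeline config would look something like this (a minimal sketch; the surrounding settings and the exact num_examples value depend on your own dataset and config):

```
eval_config: {
  # Number of examples to process during each evaluation run.
  num_examples: 1
  # ...other eval settings from your pipeline config stay unchanged...
}
```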

@swg209

swg209 commented Sep 19, 2018

@Victorsoukhov
Thanks a lot. Your way does solve my issue. And I find that num_examples can be set larger, e.g. 10 also works.

@wt-huang

wt-huang commented Nov 3, 2018

Closing as this is resolved
