
Commit

Merge master
bedapisl committed Sep 29, 2020
2 parents f1e7ff5 + b51df2f commit f100430
Showing 51 changed files with 1,313 additions and 238 deletions.
18 changes: 13 additions & 5 deletions README.md
@@ -2,8 +2,8 @@

| Build Type | OS | Python | Tensorflow | Onnx opset | Status |
| --- | --- | --- | --- | --- | --- |
-| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7 | 1.12-1.15, 2.1-2.2 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
-| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7 | 1.12-1.15, 2.1-2.2 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |
+| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.3 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=master) |
+| Unit Test - Full | Linux, MacOS, Windows | 3.6, 3.7, 3.8 | 1.12-1.15, 2.1-2.3 | 7-12 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=master)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=master) | |

## Supported Versions

@@ -20,7 +20,7 @@ If you want the graph to be generated with a specific opset, use ```--opset``` i

We support all ```tf-1.x graphs```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-1.12 and up```. tf2onnx-1.5.4 was the last version that was tested all the way back to tf-1.4.

-There is now ```experimental support for tf-2.x```.
+There is now ```support for tf-2.x```.
With the exception of LSTM unit tests, all unit tests are enabled and passing.
Unit tests that we still need to fix are marked with ```@skip_tf2```.
GRU/LSTMs convert but are not runnable due to type/shape inference issues at runtime (we are working on this).
@@ -193,6 +193,12 @@ Only valid with parameter `--saved_model`. Specifies which signature to use with

Only valid with parameter `--saved_model`. If a model contains a list of concrete functions, under the function name `__call__` (as can be viewed using the command `saved_model_cli show --all`), this parameter is a 0-based integer specifying which function in that list should be converted. This parameter takes priority over `--signature_def`, which will be ignored.
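For illustration only, a rough sketch of how one might inspect and then pick a concrete function, using the same subprocess pattern as the examples added in this commit (the SavedModel directory `my_model`, the index `1`, and the `--concrete_function` spelling are assumptions, not taken from a real model):

```
import subprocess

# List the concrete functions stored under __call__ (assumed SavedModel directory).
subprocess.run('saved_model_cli show --dir my_model --all'.split())

# Convert the second concrete function in that list (0-based index 1).
# A --signature_def given at the same time would be ignored.
subprocess.run(
    'python -m tf2onnx.convert --saved-model my_model --concrete_function 1 '
    '--output my_model.onnx --opset 12'.split())
```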

#### --large_model

(This is experimental, valid only for TF2.x models)

Only valid with parameter `--saved_model`. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows for converting models that exceed the 2 GB protobuf limit.
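As a rough sketch of a conversion with this flag (the SavedModel directory `big_model` and the `.zip` output name are placeholders, and the exact output naming is an assumption):

```
import subprocess

# The output is a zip archive holding the ONNX protobuf plus the externally
# stored tensor values, so the model is no longer limited to 2 GB.
subprocess.run(
    'python -m tf2onnx.convert --saved-model big_model --large_model '
    '--output big_model.zip --opset 12'.split())
```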

#### --target

Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
@@ -274,7 +280,8 @@ tf2onnx.tfonnx.process_tf_graph(tf_graph,
opset=None, custom_op_handlers=None,
custom_rewriter=None, extra_opset=None,
shape_override=None, inputs_as_nchw=None,
-input_names=None, output_names=None):
+input_names=None, output_names=None,
+const_node_values=None):
"""Convert tensorflow graph to onnx graph.
Args:
tf_graph: tensorflow graph
@@ -289,11 +296,12 @@ tf2onnx.tfonnx.process_tf_graph(tf_graph,
inputs_as_nchw: transpose inputs in list from nhwc to nchw
input_names: list of input node names in graph, input name format as node_name:port_id
output_names: list of output node names in graph, output name format as node_name:port_id
+const_node_values: an optional dict mapping node names to tensor values
Return:
onnx graph
"""
```
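As a rough sketch of how the new `const_node_values` argument can be filled, modelled on the updated tests in this commit (which pair it with `compress_graph_def` from `tf2onnx.tf_utils`); the frozen-graph path and the input/output names below are placeholders:

```
import tensorflow as tf
from tf2onnx import tfonnx, optimizer
from tf2onnx.tf_utils import compress_graph_def

# Load a frozen GraphDef (placeholder path).
graph_def = tf.compat.v1.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Pull large constant values out of the GraphDef; the returned dict maps
# node names to tensor values and is passed to process_tf_graph separately.
const_node_values = compress_graph_def(graph_def)

with tf.compat.v1.Session() as sess:
    tf.compat.v1.import_graph_def(graph_def, name='')
    g = tfonnx.process_tf_graph(sess.graph,
                                input_names=["input:0"],
                                output_names=["output:0"],
                                const_node_values=const_node_values)

onnx_graph = optimizer.optimize_graph(g)
model_proto = onnx_graph.make_model("converted model")
```
For a model whose tensors should also be stored externally in the output, the updated tests additionally pass an `ExternalTensorStorage` instance to `make_model` (see `tests/backend_test_base.py` in this commit).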
-For example in [examples/call_coverter_via_python.py]():
+For example in [examples/call_converter_via_python.py]():
```
import tensorflow as tf
import tf2onnx
47 changes: 47 additions & 0 deletions examples/benchmark_tfmodel_ort.py
@@ -0,0 +1,47 @@
"""
The following code compares the speed of tensorflow against onnxruntime
with a model downloaded from TensorFlow Hub.
"""
import time
import numpy
from tqdm import tqdm
import tensorflow_hub as hub
import onnxruntime as ort


def generate_random_images(shape=(100, 100), n=10):
    # Create n random images with values in [0, 255], shaped (1, height, width, 3).
    imgs = []
    for _ in range(n):
        sh = (1,) + shape + (3,)
        img = numpy.clip(numpy.abs(numpy.random.randn(*sh)), 0, 1) * 255
        img = img.astype(numpy.float32)
        imgs.append(img)
    return imgs


def measure_time(fct, imgs):
    # Call fct on every image and record the wall-clock duration of each call.
    results = []
    times = []
    for img in tqdm(imgs):
        begin = time.perf_counter()
        result = fct(img)
        end = time.perf_counter()
        results.append(result)
        times.append(end - begin)
    return results, times


imgs = generate_random_images()

# Download model from https://tfhub.dev/captain-pool/esrgan-tf2/1
# python -m tf2onnx.convert --saved-model esrgan --output "esrgan-tf2.onnx" --opset 12
# Bind the session to its own name so the 'ort' module is not shadowed.
ort_session = ort.InferenceSession('esrgan-tf2.onnx')
fct_ort = lambda img: ort_session.run(None, {'input_0:0': img})
results_ort, duration_ort = measure_time(fct_ort, imgs)
print(len(imgs), duration_ort)

model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")
results_tf, duration_tf = measure_time(model, imgs)
print(len(imgs), duration_tf)

print("ratio ORT / TF", sum(duration_ort) / sum(duration_tf))
File renamed without changes.
75 changes: 75 additions & 0 deletions examples/end2end_tfhub.py
@@ -0,0 +1,75 @@
"""
This example retrieves a model from TensorFlow Hub.
It is converted into ONNX. Predictions are compared to
the predictions from tensorflow to check there are no
discrepancies. Inference time is also compared between
*onnxruntime* and *tensorflow*.
"""
from onnxruntime import InferenceSession
import os
import sys
import subprocess
import timeit
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Input
try:
    import tensorflow_hub as tfhub
except ImportError:
    # no tensorflow_hub
    print("tensorflow_hub not installed.")
    sys.exit(0)

########################################
# Downloads the model.
hub_layer = tfhub.KerasLayer(
    "https://tfhub.dev/google/efficientnet/b0/classification/1")
model = keras.Sequential()
model.add(Input(shape=(224, 224, 3), dtype=tf.float32))
model.add(hub_layer)
model.summary()

########################################
# Saves the model.
if not os.path.exists("efficientnetb0clas"):
os.mkdir("efficientnetb0clas")
tf.keras.models.save_model(model, "efficientnetb0clas")

input_names = [n.name for n in model.inputs]
output_names = [n.name for n in model.outputs]
print('inputs:', input_names)
print('outputs:', output_names)

########################################
# Testing the model.
input = np.random.randn(2, 224, 224, 3).astype(np.float32)
expected = model.predict(input)
print(expected)

########################################
# Run the command line.
proc = subprocess.run(
    'python -m tf2onnx.convert --saved-model efficientnetb0clas '
    '--output efficientnetb0clas.onnx --opset 12'.split(),
    capture_output=True)
print(proc.returncode)
print(proc.stdout.decode('ascii'))
print(proc.stderr.decode('ascii'))

########################################
# Runs onnxruntime.
session = InferenceSession("efficientnetb0clas.onnx")
got = session.run(None, {'input_1:0': input})
print(got[0])

########################################
# Measures the differences.
print(np.abs(got[0] - expected).max())

########################################
# Measures processing time.
print('tf:', timeit.timeit('model.predict(input)',
                           number=10, globals=globals()))
print('ort:', timeit.timeit("session.run(None, {'input_1:0': input})",
                            number=10, globals=globals()))
70 changes: 70 additions & 0 deletions examples/end2end_tfkeras.py
@@ -0,0 +1,70 @@
"""
This example builds a simple model without training.
It is converted into ONNX. Predictions are compared to
the predictions from tensorflow to check there are no
discrepancies. Inference time is also compared between
*onnxruntime* and *tensorflow*.
"""
from onnxruntime import InferenceSession
import os
import subprocess
import timeit
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, Input

########################################
# Creates the model.
model = keras.Sequential()
model.add(Input((4, 4)))
model.add(layers.SimpleRNN(8))
model.add(layers.Dense(2))
model.summary()
input_names = [n.name for n in model.inputs]
output_names = [n.name for n in model.outputs]
print('inputs:', input_names)
print('outputs:', output_names)

########################################
# Training
# ....
# Skipped.

########################################
# Testing the model.
input = np.random.randn(2, 4, 4).astype(np.float32)
expected = model.predict(input)
print(expected)

########################################
# Saves the model.
if not os.path.exists("simple_rnn"):
os.mkdir("simple_rnn")
tf.keras.models.save_model(model, "simple_rnn")

########################################
# Run the command line.
proc = subprocess.run('python -m tf2onnx.convert --saved-model simple_rnn '
                      '--output simple_rnn.onnx --opset 12'.split(),
                      capture_output=True)
print(proc.returncode)
print(proc.stdout.decode('ascii'))
print(proc.stderr.decode('ascii'))

########################################
# Runs onnxruntime.
session = InferenceSession("simple_rnn.onnx")
got = session.run(None, {'input_1:0': input})
print(got[0])

########################################
# Measures the differences.
print(np.abs(got[0] - expected).max())

########################################
# Measures processing time.
print('tf:', timeit.timeit('model.predict(input)',
                           number=100, globals=globals()))
print('ort:', timeit.timeit("session.run(None, {'input_1:0': input})",
                            number=100, globals=globals()))
30 changes: 21 additions & 9 deletions tests/backend_test_base.py
@@ -26,6 +26,8 @@
from tf2onnx import optimizer
from tf2onnx.tf_loader import tf_reset_default_graph, tf_session, tf_placeholder, from_function, freeze_session
from tf2onnx.tf_loader import tf_optimize, is_tf2
+from tf2onnx.tf_utils import compress_graph_def
+from tf2onnx.graph import ExternalTensorStorage


class Tf2OnnxBackendTestBase(unittest.TestCase):
@@ -72,9 +74,10 @@ def run_onnxruntime(self, model_path, inputs, output_names):
results = m.run(output_names, inputs)
return results

-def run_backend(self, g, outputs, input_dict):
-model_proto = g.make_model("test")
-model_path = self.save_onnx_model(model_proto, input_dict)
+def run_backend(self, g, outputs, input_dict, large_model=False):
+tensor_storage = ExternalTensorStorage() if large_model else None
+model_proto = g.make_model("test", external_tensor_storage=tensor_storage)
+model_path = self.save_onnx_model(model_proto, input_dict, external_tensor_storage=tensor_storage)

if self.config.backend == "onnxruntime":
y = self.run_onnxruntime(model_path, input_dict, outputs)
@@ -86,7 +89,8 @@ def run_backend(self, g, outputs, input_dict):

def run_test_case(self, func, feed_dict, input_names_with_port, output_names_with_port, rtol=1e-07, atol=1e-5,
convert_var_to_const=True, constant_fold=True, check_value=True, check_shape=True,
-check_dtype=True, process_args=None, onnx_feed_dict=None, graph_validator=None, as_session=False):
+check_dtype=True, process_args=None, onnx_feed_dict=None, graph_validator=None, as_session=False,
+large_model=False):
# optional - passed to process_tf_graph
if process_args is None:
process_args = {}
@@ -121,7 +125,9 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
concrete_func = tf.function(func, input_signature=tuple(input_tensors))
concrete_func = concrete_func.get_concrete_function()
graph_def = from_function(concrete_func,
-input_names=list(feed_dict.keys()), output_names=output_names_with_port)
+input_names=list(feed_dict.keys()),
+output_names=output_names_with_port,
+large_model=large_model)
else:
#
# use graph to execute the tensorflow func
@@ -151,6 +157,9 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit

tf_reset_default_graph()
with tf_session() as sess:
+const_node_values = None
+if large_model:
+const_node_values = compress_graph_def(graph_def)
tf.import_graph_def(graph_def, name='')

if self.config.is_debug_mode:
@@ -161,9 +170,11 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit
g = process_tf_graph(sess.graph, opset=self.config.opset,
input_names=list(feed_dict.keys()),
output_names=output_names_with_port,
-target=self.config.target, **process_args)
+target=self.config.target,
+const_node_values=const_node_values,
+**process_args)
g = optimizer.optimize_graph(g)
-actual = self.run_backend(g, output_names_with_port, onnx_feed_dict)
+actual = self.run_backend(g, output_names_with_port, onnx_feed_dict, large_model)

for expected_val, actual_val in zip(expected, actual):
if check_value:
@@ -180,10 +191,11 @@ def run_test_case(self, func, feed_dict, input_names_with_port, output_names_wit

return g

-def save_onnx_model(self, model_proto, feed_dict, postfix=""):
+def save_onnx_model(self, model_proto, feed_dict, postfix="", external_tensor_storage=None):
target_path = utils.save_onnx_model(self.test_data_directory, self._testMethodName + postfix, feed_dict,
model_proto, include_test_data=self.config.is_debug_mode,
-as_text=self.config.is_debug_mode)
+as_text=self.config.is_debug_mode,
+external_tensor_storage=external_tensor_storage)

self.logger.debug("create model file: %s", target_path)
return target_path
