Using TensorRT in TensorFlow
============================

This module provides the necessary bindings and introduces the
TRT_engine_op operator, which wraps a subgraph in TensorRT.

Compilation
-----------

To compile the module, you need a local TensorRT installation
(libnvinfer.so and the corresponding include files). During the
configuration step, TensorRT must be enabled and its installation path
set. If TensorRT was installed through a package manager (deb, rpm),
the configure script should find the necessary components on the
system automatically. If it was installed from a tar package, you have
to supply the path to the library's install location during
configuration.

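A minimal sketch of that configuration step, assuming a tar-package
install under /usr/local/tensorrt (the path and the prompt answers are
illustrative, not authoritative):

```
./configure
# Answer "y" when asked whether to build with TensorRT support, and
# supply the installation path (e.g. /usr/local/tensorrt) when prompted.
```
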
```
bazel build --config=cuda --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/
```

After the tensorflow package is installed, the TensorRT transformation
is available. An example use is shown below.

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt
# ... create and train or load model
gdef = sess.graph.as_graph_def()
trt_gdef = trt.create_inference_graph(
    gdef,                      # original graph_def
    ["output"],                # names of the output node(s)
    max_batch_size,            # maximum batch size to run inference with
    max_workspace_size_bytes)  # maximum memory for TensorRT to use
tf.reset_default_graph()
tf.import_graph_def(graph_def=trt_gdef)
# ... run inference
```
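
The two size arguments in the call above are placeholders. As a purely
illustrative sketch (the concrete values are assumptions, not
recommendations), they might be set like this:

```python
# Hypothetical values for the two placeholders used in the example above.
max_batch_size = 16                 # largest batch size the engine must serve
max_workspace_size_bytes = 1 << 30  # 1 GiB of scratch space for TensorRT
print(max_workspace_size_bytes)
```

Larger workspace values give TensorRT more room to choose fast kernels,
at the cost of GPU memory that is no longer available to TensorFlow.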