TensorFlow Serving with Slim Inception-V4
Prerequisite
To use the model definitions in ./tf_models/research/slim, we first need to make the slim nets publicly visible, and then add slim as a local repository to the TensorFlow Serving Bazel workspace.
- make slim nets publicly visible
# edit the file ./tf_models/research/slim/BUILD with the following changes
# add visibility = ["//visibility:public"] into each py_library()
- ./tf_models/research/slim/BUILD
...
py_library(
    name = "dataset_factory",
    srcs = ["datasets/dataset_factory.py"],
    deps = [
        ":cifar10",
        ":flowers",
        ":imagenet",
        ":mnist",
    ],
    visibility = ["//visibility:public"],
)
py_library(
    name = "preprocessing_factory",
    srcs = ["preprocessing/preprocessing_factory.py"],
    deps = [
        ":cifarnet_preprocessing",
        ":inception_preprocessing",
        ":lenet_preprocessing",
        ":vgg_preprocessing",
    ],
    visibility = ["//visibility:public"],
)
py_library(
    name = "nets",
    deps = [
        ":alexnet",
        ":cifarnet",
        ":inception",
        ":lenet",
        ":mobilenet_v1",
        ":overfeat",
        ":resnet_v1",
        ":resnet_v2",
        ":vgg",
    ],
    visibility = ["//visibility:public"],
)
...
# also comment out all deps on //tensorflow in this BUILD file
- ./tf_models/research/slim/BUILD
...
py_library(
    name = "dataset_utils",
    srcs = ["datasets/dataset_utils.py"],
    deps = [
        # "//tensorflow",
    ],
)
...
- add slim as a local repository to the TensorFlow Serving Bazel workspace
- ./tensorflow_serving/workspace.bzl
...
def tf_serving_workspace():
    native.new_local_repository(
        name = "inception_model",
        path = "tf_models/inception",
        build_file = "tf_models/inception/inception/BUILD",
    )
    native.new_local_repository(
        name = "slim_model",
        path = "tf_models/research/slim",
        build_file = "tf_models/research/slim/BUILD",
    )
    tf_workspace(path_prefix = "", tf_repo_name = "org_tensorflow")
...
In this way, the scripts in ./tensorflow_serving can use the datasets, preprocessing and nets defined in ./tf_models/research/slim.
slim_inception_v4_saved_model.py
Just like ./tensorflow_serving/example/mnist_saved_model.py and ./tensorflow_serving/example/inception_saved_model.py, we want to add our own ./tensorflow_serving/example/slim_inception_v4_saved_model.py. This script should load a pre-trained slim-inception-v4 checkpoint and create a model servable from it, in a similar way to inception_saved_model.py.
Of course, the slim_inception_v4_saved_model.py script depends on the datasets, preprocessing and nets defined in ./tf_models/research/slim. These are now available in the ./tensorflow_serving workspace as a result of the prerequisite step above. However, we still need to declare the dependencies in ./tensorflow_serving/example/BUILD, so that ./tensorflow_serving/example/slim_inception_v4_saved_model.py can import the dataset, preprocessing and nets modules.
- add slim_inception_v4_saved_model deps in example's BUILD
- ./tensorflow_serving/example/BUILD
...
py_binary(
    name = "slim_inception_v4_saved_model",
    srcs = [
        "slim_inception_v4_saved_model.py",
    ],
    srcs_version = "PY2AND3",
    deps = [
        "@slim_model//:dataset_factory",
        "@slim_model//:preprocessing_factory",
        "@slim_model//:nets",
        "@org_tensorflow//tensorflow:tensorflow_py",
    ],
)
...
- load slim modules in slim_inception_v4_saved_model.py
- ./tensorflow_serving/example/slim_inception_v4_saved_model.py
...
import tensorflow as tf
from datasets import imagenet
from preprocessing import inception_preprocessing
from nets import inception
...
- create slim inception v4 saved model target
- ./tensorflow_serving/example/slim_inception_v4_saved_model.py
# note: we output not only the top 5 classes and scores, but also the last
# layer before the output layer, which can be regarded as the features the
# model extracted from the image to make its prediction.
...
# flexible way to retrieve any tensor on the graph
prelogits = sess.graph.get_tensor_by_name(
    'InceptionV4/Logits/PreLogitsFlatten/flatten/Reshape:0'
)
# an optional alternative to get predefined outputs
# prelogits = end_points['PreLogitsFlatten']
...
...
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        'predict_images': prediction_signature,
    },
    legacy_init_op=legacy_init_op
)
...
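- for reference, a minimal end-to-end sketch of how the fragments above could fit together in slim_inception_v4_saved_model.py; the export() helper, NUM_CLASSES = 1001 and the in-graph label lookup are assumptions modeled on inception_saved_model.py and the slim API, not the definitive script
import os

import tensorflow as tf

from datasets import imagenet
from preprocessing import inception_preprocessing
from nets import inception

slim = tf.contrib.slim

NUM_CLASSES = 1001    # assumption: 1000 imagenet classes plus background
NUM_TOP_CLASSES = 5   # assumption: report top 5 classes and scores


def export(checkpoint_dir, output_dir, model_version, image_size):
    with tf.Graph().as_default():
        # input contract matches the client: a batch of raw jpeg bytes
        jpegs = tf.placeholder(tf.string, shape=(None,), name='images')

        def preprocess(encoded):
            image = tf.image.decode_jpeg(encoded, channels=3)
            return inception_preprocessing.preprocess_image(
                image, image_size, image_size, is_training=False)

        images = tf.map_fn(preprocess, jpegs, dtype=tf.float32)

        with slim.arg_scope(inception.inception_v4_arg_scope()):
            _, end_points = inception.inception_v4(
                images, num_classes=NUM_CLASSES, is_training=False)

        # top 5 softmax scores and their class indices
        scores, indices = tf.nn.top_k(
            end_points['Predictions'], NUM_TOP_CLASSES)

        # map class indices to human readable labels, in-graph
        names = imagenet.create_readable_names_for_imagenet_labels()
        table = tf.contrib.lookup.index_to_string_table_from_tensor(
            tf.constant([names[i] for i in range(NUM_CLASSES)]))
        classes = table.lookup(tf.to_int64(indices))

        # the 1536-dim feature vector below the classification layer
        prelogits = end_points['PreLogitsFlatten']

        with tf.Session() as sess:
            tf.train.Saver().restore(
                sess, os.path.join(checkpoint_dir, 'inception_v4.ckpt'))

            builder = tf.saved_model.builder.SavedModelBuilder(
                os.path.join(output_dir, str(model_version)))
            prediction_signature = (
                tf.saved_model.signature_def_utils.build_signature_def(
                    inputs={
                        'images':
                            tf.saved_model.utils.build_tensor_info(jpegs),
                    },
                    outputs={
                        'classes':
                            tf.saved_model.utils.build_tensor_info(classes),
                        'scores':
                            tf.saved_model.utils.build_tensor_info(scores),
                        'prelogits':
                            tf.saved_model.utils.build_tensor_info(prelogits),
                    },
                    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
            # the label lookup table needs an init op when the model is loaded
            legacy_init_op = tf.group(
                tf.tables_initializer(), name='legacy_init_op')
            builder.add_meta_graph_and_variables(
                sess, [tf.saved_model.tag_constants.SERVING],
                signature_def_map={'predict_images': prediction_signature},
                legacy_init_op=legacy_init_op)
            builder.save()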
- download the pre-trained slim-inception-v4 weights
$ mkdir -p tf_checkpoints/slim/inception-v4
$ wget -O tf_checkpoints/slim/inception-v4/inception_v4_2016_09_09.tar.gz http://download.tensorflow.org/models/inception_v4_2016_09_09.tar.gz
$ tar -tvzf tf_checkpoints/slim/inception-v4/inception_v4_2016_09_09.tar.gz
$ tar -xvzf tf_checkpoints/slim/inception-v4/inception_v4_2016_09_09.tar.gz -C tf_checkpoints/slim/inception-v4
$ ls tf_checkpoints/slim/inception-v4
> inception_v4_2016_09_09.tar.gz inception_v4.ckpt
$ rm tf_checkpoints/slim/inception-v4/inception_v4_2016_09_09.tar.gz
# inspect the checkpoint
$ cd tensorflow
$ bazel --output_user_root=../tf_bazel_cache build -c opt //tensorflow/python/tools:inspect_checkpoint
$ cd ..
$ ./tensorflow/bazel-bin/tensorflow/python/tools/inspect_checkpoint --file_name=./tf_checkpoints/slim/inception-v4/inception_v4.ckpt
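- optionally, inspect the checkpoint directly from Python instead of building inspect_checkpoint; a minimal sketch using tf.train.NewCheckpointReader, assuming the checkpoint path extracted above
import tensorflow as tf

# read the variable name -> shape map without building a graph
reader = tf.train.NewCheckpointReader(
    './tf_checkpoints/slim/inception-v4/inception_v4.ckpt')
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)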
- setup saved model as servable
$ bazel --output_user_root=./tf_bazel_cache build -c opt //tensorflow_serving/example:slim_inception_v4_saved_model
$ bazel-bin/tensorflow_serving/example/slim_inception_v4_saved_model \
    --checkpoint_dir=tf_checkpoints/slim/inception-v4 \
    --output_dir=tf_servables/slim/inception-v4 \
    --model_version=1 \
    --image_size=299
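- optionally, sanity-check the export by loading it back in Python before serving; a minimal sketch, assuming model_version=1 as above
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # load the servable exported above and list its signature names;
    # 'predict_images' should appear
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        './tf_servables/slim/inception-v4/1')
    print(list(meta_graph_def.signature_def.keys()))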
- serve
$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
    --model_name=slim_inception_v4 \
    --model_base_path=$PWD/tf_servables/slim/inception-v4 \
    --port=9000
slim_inception_v4_client.py
As slim inception v4 is served via TensorFlow Model Server, we need a client that calls the model and then receives and presents the response.
Just like ./tensorflow_serving/example/mnist_client.py and ./tensorflow_serving/example/inception_client.py, we want to add our own ./tensorflow_serving/example/slim_inception_v4_client.py.
This script should expect an image_url from the user, instead of a local file path. It will then read the image_url into image_bytes, send them to the served slim inception v4 model, and receive and present the response.
- add slim_inception_v4_client deps in example's BUILD
- ./tensorflow_serving/example/BUILD
...
py_binary(
    name = "slim_inception_v4_client",
    srcs = [
        "slim_inception_v4_client.py",
    ],
    srcs_version = "PY2AND3",
    deps = [
        "//tensorflow_serving/apis:predict_proto_py_pb2",
        "//tensorflow_serving/apis:prediction_service_proto_py_pb2",
        "@org_tensorflow//tensorflow:tensorflow_py",
    ],
)
...
- create slim inception v4 client target
- ./tensorflow_serving/example/slim_inception_v4_client.py
...
def main(_):
    ...
    image_bytes = urllib2.urlopen(FLAGS.image_url).read()
    ...
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(
            image_bytes, shape=[1]
        )
    )
...
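- for reference, a minimal sketch of the full client, modeled on inception_client.py; the 10-second timeout and the flag defaults are assumptions
from grpc.beta import implementations
import tensorflow as tf
import urllib2

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

tf.app.flags.DEFINE_string('server', 'localhost:9000',
                           'PredictionService host:port')
tf.app.flags.DEFINE_string('image_url', '', 'URL of the image to classify')
FLAGS = tf.app.flags.FLAGS


def main(_):
    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    # fetch the image over the network instead of reading a local file
    image_bytes = urllib2.urlopen(FLAGS.image_url).read()
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'slim_inception_v4'
    request.model_spec.signature_name = 'predict_images'
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(image_bytes, shape=[1]))
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)


if __name__ == '__main__':
    tf.app.run()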
- client
$ bazel --output_user_root=./tf_bazel_cache build -c opt //tensorflow_serving/example:slim_inception_v4_client
$ bazel-bin/tensorflow_serving/example/slim_inception_v4_client \
    --server=localhost:9000 \
    --image_url=/url/to/my_favorite_image.jpg
$ bazel-bin/tensorflow_serving/example/slim_inception_v4_client \
    --server=localhost:9000 \
    --image_url=https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg
...
outputs {
  key: "classes"
  value {
    dtype: DT_STRING
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 5
      }
    }
    string_val: "school bus"
    string_val: "wooden spoon"
    string_val: "burrito"
    string_val: "stupa, tope"
    string_val: "amphibian, amphibious vehicle"
  }
}
outputs {
  key: "prelogits"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 1536
      }
    }
    float_val: 0.229630336165
    float_val: 0.012188008055
    float_val: 0.890766382217
    ...
    float_val: 0.010527072474
    float_val: 0.118053443730
    float_val: 0.0
  }
}
outputs {
  key: "scores"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 5
      }
    }
    float_val: 0.99850153923
    float_val: 0.00031825833139
    float_val: 8.37764491735e-06
    float_val: 8.29482996778e-06
    float_val: 7.34377454137e-06
  }
}
...
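- instead of printing the raw proto, the client can unpack the response into a readable top-5 list; a minimal sketch with a hypothetical print_top_k helper, applied to the result from main() above
def print_top_k(result):
    # result is the PredictResponse shown above; pair each class label
    # with its score and print them in rank order
    classes = result.outputs['classes'].string_val
    scores = result.outputs['scores'].float_val
    for name, score in zip(classes, scores):
        print('%s: %f' % (name, score))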