# TensorNets
High level network definitions with pre-trained weights in TensorFlow (tested with `2.1.0 >= TF >= 1.4.0`).
## Guiding principles
- **Applicability.** Many people already have their own ML workflows and want to plug a new model into them. TensorNets can be easily plugged in because it is designed as simple functional interfaces without custom classes.
- **Manageability.** Models are written in `tf.contrib.layers`, which is lightweight like PyTorch and Keras, and provides easy access to every weight and end-point. It is also easy to deploy and expand the collection of pre-processing functions and pre-trained weights.
- **Readability.** With recent TensorFlow APIs, more factoring and less indenting are possible. For example, all the Inception variants are implemented in about 500 lines of code in TensorNets, compared with 2000+ lines in the official TensorFlow models.
- **Reproducibility.** You can always reproduce the original results with simple APIs, including feature extractions. Furthermore, you don't need to worry about the version of TensorFlow, because compatibility with various releases of TensorFlow has been checked with Travis.
## Installation
You can install TensorNets from PyPI (`pip install tensornets`) or directly from GitHub (`pip install git+https://github.com/taehoonlee/tensornets.git`).
## A quick example
Each network (see the full list) is not a custom class but a function that takes and returns `tf.Tensor` as its input and output. Here is an example of `ResNet50`:
```python
import tensorflow as tf
# import tensorflow.compat.v1 as tf  # for TF 2
import tensornets as nets
# tf.disable_v2_behavior()  # for TF 2

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
model = nets.ResNet50(inputs)

assert isinstance(model, tf.Tensor)
```
You can load an example image with `utils.load_img`, which returns a `np.ndarray` in NHWC format:
```python
img = nets.utils.load_img('cat.png', target_size=256, crop_size=224)
assert img.shape == (1, 224, 224, 3)
```
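For reference, the NHWC layout above (batch, height, width, channels) can be sketched with plain NumPy. This only mimics the output shape of `utils.load_img`, not its actual resizing and cropping logic:

```python
import numpy as np

# A single HWC (height, width, channels) image; the values are dummies.
img = np.zeros((224, 224, 3), dtype=np.float32)

# Add a leading batch axis to get the NHWC layout the networks expect.
batch = img[np.newaxis]
assert batch.shape == (1, 224, 224, 3)
```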
Once your network is created, you can run it with regular TensorFlow APIs because all the networks in TensorNets always return `tf.Tensor`. Using pre-trained weights and pre-processing is as easy as calling `pretrained()` and `preprocess()` to reproduce the original results:
```python
with tf.Session() as sess:
    img = model.preprocess(img)  # equivalent to img = nets.preprocess(model, img)
    sess.run(model.pretrained())  # equivalent to nets.pretrained(model)
    preds = sess.run(model, {inputs: img})
```
You can see the most probable classes:
```python
print(nets.utils.decode_predictions(preds, top=2)[0])
[(u'n02124075', u'Egyptian_cat', 0.28067636), (u'n02127052', u'lynx', 0.16826575)]
```
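Under the hood, this kind of decoding amounts to a top-k selection over the softmax output followed by a lookup into the ImageNet synset table. A minimal NumPy sketch of the top-k part (the probabilities here are dummy placeholders, and the real class-name lookup ships with the library):

```python
import numpy as np

# A dummy 3-class softmax output standing in for `preds`.
probs = np.array([[0.1, 0.6, 0.3]])

# Indices of the top-2 classes, best first.
top = 2
idxs = np.argsort(probs[0])[::-1][:top]
print([(int(i), float(probs[0, i])) for i in idxs])
# [(1, 0.6), (2, 0.3)]
```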
You can also easily obtain values of intermediate layers with `middles()` and `outputs()`:
```python
with tf.Session() as sess:
    img = model.preprocess(img)
    sess.run(model.pretrained())
    middles = sess.run(model.middles(), {inputs: img})
    outputs = sess.run(model.outputs(), {inputs: img})

model.print_middles()
assert middles[0].shape == (1, 56, 56, 256)
assert middles[-1].shape == (1, 7, 7, 2048)

model.print_outputs()
assert sum(sum((outputs[-1] - preds) ** 2)) < 1e-8
```