Amongst other things, v1.13 will see TensorFlow Lite and the Nvidia Collective Communications Library (NCCL) rehomed to the project’s core. TensorFlow Lite’s source code can then be found in tensorflow/lite rather than tensorflow/contrib/lite, with Python modules accessible via tf.lite.
The move is part of the planned dissolution of the increasingly hard-to-maintain contrib module. Come v2.0 it will no longer be distributed; its contents will be split into smaller projects that are either integrated into the core, moved into other repositories, or removed entirely.
The TensorFlow team have also used the spring clean to move endpoints in versions.py to corresponding endpoints in tf.sysconfig and tf.version, move constants under tf.saved_model submodules up to tf.saved_model, deprecate a couple of elements, and update sklearn imports for deprecated packages.
Since consistent behavior is one of the things helpful to anyone trying to get started with TensorFlow, v1.13 disallows converting Python floating types to uint32/64 in tf.constant, matching the behavior of the other integer types. It also slightly changes the implementation of the gain argument of convolutional orthogonal initialisers such as convolutional_orthogonal_2D: the output l2-norm is now scaled by gain instead of by sqrt(gain), aligning their behavior with that of the tf.initializers.orthogonal initialiser.
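In concrete terms: an orthogonal weight matrix preserves the l2-norm of its input, so scaling the weights scales the output norm by the same factor. A minimal NumPy sketch of the old versus new scaling behaviour (variable names are illustrative, not TensorFlow internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random orthogonal matrix via QR decomposition;
# orthogonal matrices preserve the l2-norm of their input.
q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

gain = 2.0
x = rng.normal(size=4)

w_old = np.sqrt(gain) * q  # pre-1.13 sketch: output norm grows by sqrt(gain)
w_new = gain * q           # 1.13 sketch: output norm grows by gain,
                           # matching tf.initializers.orthogonal

print(np.linalg.norm(w_old @ x) / np.linalg.norm(x))  # ≈ sqrt(2) ≈ 1.414
print(np.linalg.norm(w_new @ x) / np.linalg.norm(x))  # ≈ 2.0
```

With the new behaviour, a gain of 2 really does double the output norm, which is what users of the non-convolutional orthogonal initialiser already expect.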
Other changes include some new operations, for example one for nearest-neighbour resizing and a set to encode, decode, and transcode between a number of text encoding formats and the main Unicode encodings such as UTF-8, as well as some performance improvements for GPU and TPU usage. GPU binaries are now built against version 10 of the CUDA platform.
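The transcoding idea itself can be illustrated in plain Python — this is only a conceptual sketch, as the new TensorFlow ops operate on string tensors rather than Python bytes:

```python
# Conceptual sketch: re-encode text from UTF-16 to UTF-8,
# which is what a transcode operation does for each string.
source = "Grüße".encode("utf-16")               # bytes in a source encoding
utf8 = source.decode("utf-16").encode("utf-8")  # transcoded to UTF-8

print(utf8)  # b'Gr\xc3\xbc\xc3\x9fe'
```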
V1.13 will be the first release to include the promised command line tool to help users upgrade to TensorFlow 2.0. A first release candidate for v2.0 itself is supposed to see the light of day some time in early 2019.
TensorFlow is licensed under the Apache License 2.0. The library for numerical computation originally stems from the Google Brain team and is especially popular amongst machine learning professionals.