TensorFlow 1.14 hits with promise of write once, run on 1.x/2.x

The TensorFlow team have released v1.14, finally delivering the compatibility module needed to write code that runs on both 1.x and the soon-to-arrive 2.x versions of the framework.
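As a rough sketch of what that looks like in practice (assuming TensorFlow 1.14 is installed), code that sticks to the tf.compat.v1 and tf.compat.v2 namespaces is meant to run unchanged on either major version:

```python
import tensorflow as tf

# v1.14 ships both compatibility namespaces: tf.compat.v1 exposes the
# classic 1.x API, while tf.compat.v2 previews the 2.x API. Code written
# against these namespaces is intended to work on both release lines.
tf1 = tf.compat.v1

x = tf1.placeholder(tf.float32, shape=[None, 3])
y = tf1.reduce_sum(x, axis=1)

with tf1.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [6.]
```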

Version 1.14 of the numerical computation library, which is especially popular in machine learning circles, comes with MKL-DNN enabled by default, which is meant to dispatch “the best kernel implementation based on CPU vector architecture”. It also sets loss reduction to AUTO out of the box, meaning the reduction option will from now on be determined by the usage context, a change intended to improve reliability.
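A minimal sketch of the new default, assuming the class-based Keras losses shipped with this release: leaving the reduction argument untouched now means AUTO, so the aggregation strategy is picked from how the loss is actually used.

```python
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # so the loss value can be printed directly

# With v1.14 the reduction argument of Keras losses defaults to AUTO;
# the framework decides how to aggregate based on the usage context
# (compile/fit vs. a custom training loop under a distribution strategy).
loss_fn = tf.keras.losses.MeanSquaredError()

loss = loss_fn(tf.constant([[0.0], [1.0]]), tf.constant([[0.1], [0.8]]))
print(float(loss))  # a single scalar, reduced on our behalf
```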

Since DType will no longer be convertible to int with this release, you’ll have to replace your int(dtype) statements with dtype.as_datatype_enum to keep them working. Something similar is true for those referencing :pooling_ops operators: since transitive dependencies on that library were removed, you might have to add explicit dependencies.
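The DType migration itself is a one-liner; a hedged sketch of the replacement:

```python
import tensorflow as tf

dtype = tf.float32

# Pre-1.14 code often read the enum via int(dtype); that conversion is gone,
# so the value is taken from the explicit property instead.
enum_value = dtype.as_datatype_enum  # the DataType proto enum (DT_FLOAT here)
print(enum_value)
```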

Apart from that, v1.14 has seen quite a lot of additions to the Keras and Python APIs, as well as a slew of new operators and improved op functionality. For example, raw TensorFlow functions can now be used together with the Keras Functional API without developers having to create Lambda layers in most cases.
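A minimal sketch of what that means in practice (the model below is hypothetical, just to illustrate the pattern): a raw op such as tf.nn.relu is applied to a Keras symbolic tensor directly, with no Lambda wrapper in sight.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(8)(inputs)
x = tf.nn.relu(x)  # raw TF function used inside the Functional API, no Lambda layer
outputs = tf.keras.layers.Dense(1)(x)

model = tf.keras.Model(inputs, outputs)
model.summary()
```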

Layer and Model now accept a dynamic constructor argument, and users are given a way to implement RNN cells with custom behaviour. A complete list of new functionalities and operators, including strings.byte_split, RaggedTensor.placeholder(), tf.random.binomial, and support for add_metric in the graph function mode, can be found in the release notes.
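The dynamic argument marks a layer as eager-only, which lets its call() use plain Python control flow. A hypothetical layer as a sketch (the name is made up, and eager execution has to be enabled):

```python
import tensorflow as tf

tf.compat.v1.enable_eager_execution()  # dynamic layers only run eagerly

# Hypothetical layer: dynamic=True means it is never traced into a graph,
# so ordinary Python branching on tensor values is allowed in call().
class SignSwitch(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(SignSwitch, self).__init__(dynamic=True, **kwargs)

    def call(self, inputs):
        if float(tf.reduce_sum(inputs)) >= 0:
            return inputs
        return -inputs

layer = SignSwitch()
print(layer(tf.constant([[1.0, -2.0]])))  # sum is negative, so the signs flip
```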

System package maintainers and users building TensorFlow extensions should be aware that non-Windows system libraries are versioned starting with the current release.

TensorFlow is an open source library available on GitHub under the Apache License 2.0. The project was initially released by Google and is steadily heading towards its second major release.

To make sure the next version will facilitate wider adoption, the TF team has worked on simplifying the project’s API and added features to make the deployment of machine learning models easier and more robust. Version 2.0 reached beta status last week.