TensorFlow 2.0 arrives with more power, greater simplicity


The much anticipated final release of TensorFlow 2.0 is now available, bringing with it a greater focus on simplicity and ease of use for developers working with machine learning and neural network projects.

TensorFlow was originally created by the Google Brain team, but is now an open source project available under the Apache License 2.0. This latest release is a major update for the platform, featuring changes driven by feedback from the community seeking greater ease of use without sacrificing power or flexibility.

Accordingly, one of the changes is tighter integration of the Keras neural-network library to make the experience of developing applications as familiar as possible for Python developers. It also features eager execution by default, where operations are executed immediately without the need to define a graph. This is claimed to make it easier to get started with TensorFlow, and make research and development more intuitive.
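What eager execution by default looks like in practice can be sketched in a few lines (assuming TensorFlow 2.0 is installed as the `tensorflow` package; the values here are purely illustrative):

```python
import tensorflow as tf

# With eager execution on by default, operations run immediately and
# return concrete values -- no tf.Session or explicit graph construction.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)

print(y.numpy())  # the result is available right away as a NumPy array
```

The same immediacy carries over to the integrated Keras API, so a model can be built, inspected, and debugged interactively rather than compiled into a graph first.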

However, for researchers pushing the boundaries of machine learning, the team says it has greatly enhanced the low-level APIs. The platform now exports all ops that are used internally, and provides inheritable interfaces for crucial concepts such as variables and checkpoints, allowing developers to build on TensorFlow's internals without having to rebuild the framework itself.
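One of those inheritable interfaces is `tf.Module`, which automatically tracks the variables created on it. A minimal sketch (the `DenseLayer` class and its field names are illustrative, not part of TensorFlow):

```python
import tensorflow as tf

# tf.Module is one of the inheritable base classes the low-level API exposes:
# it tracks the tf.Variables created on the instance, so a custom layer can
# be checkpointed or exported without any framework-internal plumbing.
class DenseLayer(tf.Module):  # illustrative name, not a TensorFlow class
    def __init__(self, in_features, out_features, name=None):
        super().__init__(name=name)
        self.w = tf.Variable(tf.random.normal([in_features, out_features]), name="w")
        self.b = tf.Variable(tf.zeros([out_features]), name="b")

    def __call__(self, x):
        return tf.nn.relu(x @ self.w + self.b)

layer = DenseLayer(in_features=3, out_features=2)
out = layer(tf.ones([1, 3]))

# The module knows about its own state: both variables are tracked and
# can be saved via tf.train.Checkpoint.
print([v.name for v in layer.trainable_variables])
```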

François Chollet, creator of Keras and a software engineer at Google, commented on Twitter that one of the best things about TensorFlow 2.0 is that it fluently brings together a high-level user experience and low-level flexibility.

“You no longer have on one hand, a high-level API that’s easy to use but inflexible, and on the other hand a low-level API that’s flexible but only approachable by experts. Instead, you have a spectrum of workflows, from the high-level to the low-level. And they’re all compatible,” he said.

TensorFlow 2.0 has now standardised on the SavedModel file format, which enables models to run on a wide variety of runtimes covering everything from browsers to Node.js and mobile and embedded systems.
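A minimal sketch of the SavedModel round trip (the `Adder` module and the export path are illustrative; `@tf.function` with an input signature gives the exported model a concrete serving API):

```python
import os
import tempfile

import tensorflow as tf

# An illustrative module with a single exported function.
class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
    def add_one(self, x):
        return x + 1.0

export_dir = os.path.join(tempfile.mkdtemp(), "adder")
tf.saved_model.save(Adder(), export_dir)    # serialize to the SavedModel format

restored = tf.saved_model.load(export_dir)  # reload the serialized model
print(restored.add_one(tf.constant(2.0)).numpy())  # 3.0
```

The same directory written here is what TensorFlow Serving, TensorFlow Lite's converter, and TensorFlow.js's converter consume, which is what makes the single format portable across those runtimes.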

“This allows you to not just run your models with TensorFlow, but deploy them to web and cloud with TensorFlow Extended. You can use them on mobile and embedded systems with TensorFlow Lite, and you can train and run them in the browser or Node.js with TensorFlow.js,” said Laurence Moroney, Artificial Intelligence Advocate at Google.

Other enhancements include a Distribution Strategy API that allows developers to boost training performance by distributing the workload with minimal code changes. TensorFlow 2.0 is also claimed to deliver up to 3x faster training performance using mixed precision on Volta and Turing GPUs, and uses an updated API to provide improved usability and high performance during inference on Nvidia T4 Cloud GPUs on Google Cloud.
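The "minimal code changes" claim for the Distribution Strategy API can be sketched as follows (on a machine without multiple GPUs, the strategy simply runs with a single replica):

```python
import tensorflow as tf

# MirroredStrategy replicates state across all visible GPUs and keeps it
# in sync; with no extra GPUs it falls back to one replica.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

# The only structural change for data-parallel training: create your
# model's variables (or a whole Keras model and optimizer) inside
# strategy.scope() -- everything created here is mirrored automatically.
with strategy.scope():
    weights = tf.Variable(tf.zeros([4, 1]))
```

The training loop itself stays essentially unchanged, which is the point: distribution is expressed as a scope around variable creation rather than a rewrite of the model code.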

The TensorFlow team also hinted that Cloud TPU support is coming in a future release, which would enable developers to access Google’s Tensor Processing Unit accelerator hardware.

For developers who have been using TensorFlow 1.x, the good news is that existing code will still run under TensorFlow 2.0 (via the tf.compat.v1 compatibility module), and a migration guide is available online to help with the transition.