TensorFlow 2.3 aims for program ‘understanding’, resource economy

TensorFlow

A good two months after its last big release, the TensorFlow team has bestowed version 2.3 upon followers of the self-proclaimed machine learning framework for everyone.

TensorFlow 2.3 seems to have put a special focus on understanding and reducing resource usage, with new mechanisms in the data library and fresh profiler tools being among the most highlighted additions. An experimental snapshot API in tf.data, for example, is meant to store the output of a preprocessing pipeline to disk, so that already processed data can be reused, saving the CPU resources that would otherwise be spent computing it again in later runs.
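In practice, the snapshot is applied as a regular tf.data transformation. A minimal sketch, assuming the map step stands in for some expensive CPU-bound preprocessing and using a temporary directory as the snapshot location:

```python
import tempfile

import tensorflow as tf

# Hypothetical preprocessing pipeline; the map step stands in for
# expensive CPU-bound work such as decoding or augmentation.
dataset = tf.data.Dataset.range(5).map(lambda x: x * 2)

# Persist the preprocessed elements to disk. On subsequent runs, the
# snapshot is read back instead of re-running the map step.
snapshot_dir = tempfile.mkdtemp()
dataset = dataset.apply(tf.data.experimental.snapshot(snapshot_dir))

print(list(dataset.as_numpy_iterator()))  # [0, 2, 4, 6, 8]
```

In a real pipeline the snapshot directory would be a stable path shared between runs, so a second training job can skip the preprocessing entirely.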

Moreover, the tf.data service aims to speed up the training process in cases where the attached host isn’t able to “keep up with the data consumption of the model”. If a model, for example, can process more images than the host can generate, the service can take over, leveraging a cluster of workers to prepare the needed amount of training data.

It has to be noted, though, that the data service currently only supports a processing mode in which all workers process the entire input dataset independently. Options to let each worker handle a different shard, for example, are still in development, so at this point the service is best suited for cases in which input data doesn’t have to arrive in a particular order to get good results.
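Wiring a pipeline up to the service boils down to one transformation. A hedged sketch of how this might look, where `distribute_pipeline` and the dispatcher address are illustrative names, not part of the TensorFlow API, and actually iterating the result would require a live dispatcher and workers:

```python
import tensorflow as tf

def distribute_pipeline(dataset, dispatcher_address):
    """Hand a preprocessing pipeline off to a tf.data service cluster.

    `dispatcher_address` (e.g. "grpc://dispatcher.example:5000") is a
    placeholder for the address of a running dispatcher.
    """
    return dataset.apply(
        tf.data.experimental.service.distribute(
            # "parallel_epochs": every worker processes the full input
            # dataset independently -- the only mode supported in 2.3.
            processing_mode="parallel_epochs",
            service=dispatcher_address))

# Usage (requires a live dispatcher plus workers; the map function here
# would be whatever expensive preprocessing the host can't keep up with):
# ds = tf.data.Dataset.list_files("images/*.jpg").map(preprocess_image)
# ds = distribute_pipeline(ds, "grpc://dispatcher.example:5000")
```

Because every worker runs the whole pipeline, elements arrive interleaved across workers in no guaranteed order, which is exactly the caveat noted above.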


Interesting new tools can also be found in the TensorFlow profiler, which now comes with a memory profiler and a Python tracer. The former grants users insight into how their model uses memory over time, which can be very useful for optimisation purposes or to simply get a better idea of what it is doing. The Python tracer meanwhile pretty much does what it says on the tin, allowing devs to trace the Python function calls in their TensorFlow programs.
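Both tools are driven through the profiler's programmatic API. A small sketch, assuming the toy matmul stands in for a real training step and using a temporary directory as the log location:

```python
import os
import tempfile

import tensorflow as tf

logdir = tempfile.mkdtemp()

# ProfilerOptions (new in 2.3) control what gets traced;
# python_tracer_level=1 switches the Python tracer on.
options = tf.profiler.experimental.ProfilerOptions(
    host_tracer_level=2, python_tracer_level=1, device_tracer_level=1)

tf.profiler.experimental.start(logdir, options=options)
# Stand-in workload -- in practice this would be a training step.
x = tf.random.uniform((256, 256))
y = tf.matmul(x, x)
tf.profiler.experimental.stop()

print(os.listdir(logdir))  # trace data written under plugins/profile/
```

The resulting trace, including the memory profile, can then be inspected in TensorBoard's Profile tab pointed at `logdir`.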

Speaking of Python, the tf.debugging.experimental.enable_dump_debug_info() API for the language can now write debugging information to a directory. This can then be read and visualised through the new Debugger V2 dashboard, which also provides a more in-depth look at a program, showcasing graph structures, history of operation executions, tensor composition and code locations amongst other things.
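Enabling the dumps is a one-liner at the top of a program. A minimal sketch, where the tf.function and its input are illustrative and the dump goes to a temporary directory:

```python
import os
import tempfile

import tensorflow as tf

dump_root = tempfile.mkdtemp()

# Write debug events (graph structures, op execution history, tensor
# values) under dump_root; FULL_HEALTH also records NaN/Inf statistics.
tf.debugging.experimental.enable_dump_debug_info(
    dump_root, tensor_debug_mode="FULL_HEALTH", circular_buffer_size=-1)

@tf.function
def f(x):
    return tf.reduce_mean(x * x)

f(tf.constant([1.0, 2.0, 3.0]))
tf.debugging.experimental.disable_dump_debug_info()

print(os.listdir(dump_root))  # debug event files for Debugger V2
```

Pointing TensorBoard at `dump_root` then surfaces the data in the Debugger V2 dashboard.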

As in the last couple of releases, the Keras team – which now works under the TensorFlow umbrella – was especially busy moving its module forward. This time, they came up with a replacement for the feature column API called preprocessing layers. As the name suggests, the API takes care of data preprocessing operations such as normalisation, random image transformations, and text vectorisation. The feature is still experimental, but definitely worth taking for a spin – especially since it’s supposed to work with composite tensor inputs.
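A short sketch of the idea, using the experimental namespace the layers live under in 2.3 (later releases promote them to tf.keras.layers directly); the toy data is illustrative:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

# Normalisation: learn mean and variance from data via adapt(), then
# standardise inputs as part of the model itself.
norm = preprocessing.Normalization()
data = np.array([[1.0], [2.0], [3.0]], dtype="float32")
norm.adapt(data)
print(norm(data).numpy())  # roughly [[-1.22], [0.], [1.22]]

# Text vectorisation: map raw strings to integer token sequences,
# with the vocabulary likewise learned from example data.
vectorize = preprocessing.TextVectorization(output_mode="int")
vectorize.adapt(["the cat sat", "the dog ran"])
print(vectorize(["the cat ran"]))
```

Because the layers are part of the model, the same preprocessing travels with it at serving time instead of being re-implemented in the serving stack.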

Keras also saw the inclusion of a couple of processing layers for categorical data, that is, data made up of categorical variables such as educational level or age group. The new layers are supposed to help developers build an index of categorical feature values, turn continuous numerical features into categorical ones by merging some values (see age groups), or create features “representing co-occurrences of previous categorical feature values”, for instance.
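The first two of those use cases can be sketched with the StringLookup and Discretization layers from the same experimental namespace; the vocabulary and the age-group boundaries below are made up for illustration:

```python
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

# Build an index of categorical feature values: each known string
# is mapped to an integer id, unknown strings to an out-of-vocabulary id.
lookup = preprocessing.StringLookup(
    vocabulary=["primary", "secondary", "tertiary"])
print(lookup(tf.constant([["secondary"], ["tertiary"]])))

# Merge a continuous feature into categories -- here ages are bucketed
# into <18, 18-34, 35-64 and 65+ (boundaries are illustrative):
buckets = preprocessing.Discretization([18.0, 35.0, 65.0])
print(buckets(tf.constant([[12.0], [40.0], [70.0]])))  # buckets 0, 2, 3
```

The co-occurrence case is covered by a crossing layer in the same family, which combines several categorical inputs into joint features.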

Besides those, Keras now comes with new dataset generation utilities, so that users can transform structured image or text file directories into labelled datasets, or create a time series dataset from an appropriate data array. Other areas of improvement were text vectorisation, image preprocessing and augmentation.
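The time series utility is the easiest to demonstrate without files on disk. A sketch on a toy series, where the window length, the "label is the next value" setup, and the series itself are all illustrative choices:

```python
import numpy as np
import tensorflow as tf

# Toy series: sliding windows of length 3 become model inputs, and the
# value right after each window becomes its label.
series = np.arange(10, dtype="float32")
inputs = series[:-3]
targets = series[3:]  # label for the window starting at i is series[i + 3]

ds = tf.keras.preprocessing.timeseries_dataset_from_array(
    inputs, targets, sequence_length=3, batch_size=2)

x, y = next(iter(ds))
print(x.numpy())  # first batch: windows [0, 1, 2] and [1, 2, 3]
print(y.numpy())  # their labels: 3 and 4
```

The directory-based siblings, image_dataset_from_directory and text_dataset_from_directory, work analogously, deriving the labels from the subdirectory names.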

A quick glance at the release notes is recommended before updating, since TensorFlow 2.3 includes quite a few breaking changes concerning components like the C++ API of tf.data, the latter’s DatasetBase::IsStateful method, and tf.image.extract_glimpse endpoints.
