Another one bites the dust? Keras team steps away from multi-backends, refocuses on tf.keras

The team behind deep learning library Keras has pushed out version 2.3 of the open source project. It is the last major release of multi-backend Keras.

The decision to step away from classic Keras and instead focus development efforts on the TensorFlow module tf.keras was made in a SIG Keras meeting earlier this month. This is in slight contrast to what Keras creator François Chollet said after Google announced it was going to move the library into the TensorFlow core in 2017. Back then he stated that “Theano support will continue for as long as Keras exists, because Keras is meant as an interface rather than as an end-to-end framework”.

To be fair, times have changed and neither Theano nor CNTK is under active development anymore. The new direction might therefore help to steer contributions to where they are of most use, given that the TensorFlow module also seems to be better maintained than the regular Keras project.

Multi-backend Keras will receive bug fixes for another six months until maintenance ceases. The team apparently still has to decide what will happen to the issues that remain open at that point, but chances are they’ll simply focus on the notably smaller number of requests filed against tf.keras.

Users are encouraged to consider switching to tf.keras in the soon-to-be-finished TensorFlow 2.0. Its API corresponds to the one in Keras 2.3, which should facilitate migrations, while also providing access to additional TensorFlow functionality such as eager execution. This is worth knowing, since the new multi-backend version supports TensorFlow 2.0 as a backend but can’t make use of these features.
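For many projects the switch largely amounts to changing imports. A minimal sketch of what a migrated script might look like (the toy model and its settings are made up for illustration):

```python
# Before: multi-backend Keras
# from keras.models import Sequential
# from keras.layers import Dense

# After: tf.keras, bundled with TensorFlow 2.0
from tensorflow import keras
from tensorflow.keras.layers import Dense

model = keras.Sequential([
    Dense(64, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```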

The last big Keras release comes with a number of breaking changes: the loss aggregation mechanism now sums over batch sizes, which may lead to changes in reported loss values; metrics and losses are reported under the name the user specified; and the default recurrent activation for all recurrent neural network layers has been changed to sigmoid.
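The naming change is easy to see with a class-based metric that is given an explicit name; a minimal sketch (the tiny model and random data are placeholders):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
# The metric appears in the logs and in the training history under the
# user-specified name "my_mae" rather than an auto-generated one.
model.compile(optimizer="sgd",
              loss="mse",
              metrics=[keras.metrics.MeanAbsoluteError(name="my_mae")])

history = model.fit(np.random.rand(32, 4), np.random.rand(32, 1),
                    epochs=1, verbose=0)
print(history.history.keys())  # e.g. dict_keys(['loss', 'my_mae'])
```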

Apart from that, the Keras API saw the introduction of class-based losses and metrics, which means both can be parameterized via constructor arguments and metrics can be stateful. The team also added size(x) to the backend API, as well as an add_metric method and a metrics property to Layer/Model.
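A rough sketch of how these pieces fit together, written against tf.keras (the custom layer, loss parameters and threshold are invented for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class ActivityLogger(keras.layers.Layer):
    """Passes inputs through unchanged and logs their mean as a metric."""
    def call(self, inputs):
        # add_metric attaches a scalar that is tracked alongside the
        # compile-time metrics and shown in the training output.
        self.add_metric(tf.reduce_mean(inputs),
                        name="mean_activation",
                        aggregation="mean")
        return inputs

model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    ActivityLogger(),
    keras.layers.Dense(1, activation="sigmoid"),
])
# Class-based loss and metric objects, parameterized via constructor
# arguments; the Precision metric keeps state across batches.
model.compile(optimizer="adam",
              loss=keras.losses.BinaryCrossentropy(label_smoothing=0.1),
              metrics=[keras.metrics.Precision(thresholds=0.7)])

x = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 2, size=(16, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print([m.name for m in model.metrics])  # includes 'precision' and 'mean_activation'
```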

Speaking of which, variables and layers set as attributes of a Layer are now tracked, a behaviour some might recognise from the way Models already handle them. Models in turn were fitted with a model.reset_metrics method to clear metric state. The complete list of changes can be found in the Keras repository.
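A brief sketch of the new reset_metrics method, assuming the accompanying reset_metrics flag on train_on_batch (the toy model and random data are, again, just placeholders):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# With reset_metrics=False, the metric state accumulates across batches...
for _ in range(3):
    print(model.train_on_batch(x, y, reset_metrics=False))

# ...until model.reset_metrics() clears it explicitly.
model.reset_metrics()
```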

Keras is a Python deep learning library which was presented to the public in 2015.