Over the last couple of months, the community has added support for more distributed training strategies in tf.distribute, as well as for custom training loops and Keras subclassed models. It has also deprecated tf.contrib – its functionality has either been migrated into the core TensorFlow API or the addons module, or removed completely.
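To illustrate how the pieces mentioned above fit together, here is a minimal sketch (assuming TensorFlow 2.x is installed) of building a Keras model under a tf.distribute strategy scope, so that its variables are mirrored across the available devices; layer sizes and the input shape are arbitrary choices for the example:

```python
import tensorflow as tf

# Create a distribution strategy; on a single-device machine this
# simply runs with one replica.
strategy = tf.distribute.MirroredStrategy()

# Variables created inside the scope are distributed by the strategy.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

print(strategy.num_replicas_in_sync)
```

The same strategy object can also drive a custom training loop via strategy.run, which is part of what this release cycle expanded.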
Another breaking change: the tf.estimator.DNN/Linear/DNNLinearCombined estimators now use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer. Since the Keras optimizers keep separate variables, a checkpoint converter is part of the current release to help with updating.
The transition into beta goes along with the stabilisation of the TensorFlow 2.0 API, which means that its current state is final. A compatibility module will be part of v1.14, the upcoming release of the 1.x series.
With over 100 issues closed, the alpha phase has proven rather helpful in getting rid of bugs for the big 2.0. Before the first release candidate hits the repositories, however, there are still some issues to tackle: the team still aims to improve performance and to complete Keras model support on Cloud TPUs and TPU pods.
Work on TensorFlow 2.0 has been going on in parallel to the regular development of the 1.x series, which just saw its first v1.14 release candidate. In v2.0, the TensorFlow team focused mainly on simplifying the library and making it more comfortable to use, since high entry barriers for those new to machine learning were often cited as slowing down adoption.
Aside from the simplified API, the new major release will include features to make model deployment more robust, enable easy model building with Keras, and facilitate research experimentation.
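The deployment story centres on the SavedModel format. As a minimal sketch (assuming TensorFlow 2.x is installed; the export path is a temporary directory chosen purely for illustration), a Keras model can be exported and reloaded like this:

```python
import tempfile

import tensorflow as tf

# Build a trivial Keras model; the architecture is arbitrary here.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# Export in the SavedModel format used by serving and deployment tools,
# then load it back as a standalone object.
export_dir = tempfile.mkdtemp()  # hypothetical export location
tf.saved_model.save(model, export_dir)
loaded = tf.saved_model.load(export_dir)
```

The exported directory is self-contained, which is what lets serving infrastructure pick a model up without the original Python code.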