PyTorch has debuted a slew of experimental features in its just-released version 1.3, as support for the TensorFlow competitor broadens and new tools to tackle challenges such as privacy appear.
PyTorch 1.3 seems right on trend with its new capabilities, adding, for example, previews of implementations for model quantisation and on-device machine learning. The latter is an area of intense research these days, as interest in privacy-focused approaches soars. Mobile support is one of the building blocks for realising, for example, federated learning, a technique in which training data stays distributed across client devices: data no longer has to leave a device to contribute to the training of a centralised model.
In its first iteration, mobile support comes down to prebuilt LibTorch libraries for Android and iOS, optimised implementations of certain operators, and modules ensuring that TorchScript inference is possible and that forward operations can be executed on mobile CPUs. Future releases are planned to resolve performance and size issues, add a high-level API, and support backward operations. Tutorials and demos can be found on the PyTorch website.
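The mobile workflow starts on the desktop side: a model is converted to TorchScript and serialised so the LibTorch runtime can load it on-device. A minimal sketch, with TinyNet as a hypothetical stand-in for a real model:

```python
import torch

# Hypothetical toy model standing in for a real network.
class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x).sum(dim=1)

# Compile to TorchScript, then serialise; the saved file is what
# gets bundled into the Android/iOS app for LibTorch to load.
scripted = torch.jit.script(TinyNet())
scripted.save("tiny_net.pt")

# On-device, LibTorch performs the equivalent of this load-and-run step.
loaded = torch.jit.load("tiny_net.pt")
out = loaded(torch.randn(2, 3))
print(out.shape)
```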
Quantisation is another popular topic right now, since computational resources can be scarce and reducing precision can yield good results with fewer resources in some scenarios. On a very basic level, PyTorch is now able to convert float tensors to quantised ones and offers 8-bit-quantised implementations of common convolutional neural network operations.
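The tensor-level conversion looks roughly like this; the scale and zero point below are illustrative values, not ones obtained from calibration:

```python
import torch

x = torch.randn(4, 4)

# Map float values onto 8-bit integers using an (illustrative)
# affine scheme: q = round(x / scale) + zero_point.
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=128,
                              dtype=torch.quint8)
print(q.dtype)

# dequantize() recovers an approximate float tensor.
print(q.dequantize())
```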
This groundwork is what enables the experimental new addition to offer post-training and dynamic quantisation as well as quantisation-aware training. The latter is realised by mimicking quantisation during training, while dynamic quantisation uses quantised weights but keeps floating-point activations. Another preview feature lets developers explicitly name tensor dimensions, which can help with rearranging dimensions and checking that APIs are used correctly.
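The named-tensor preview can be sketched as follows; the batch/channel/height/width names are a conventional example, not required ones:

```python
import torch

# Dimensions carry explicit names, so intent is visible in the code.
imgs = torch.randn(2, 3, 4, 4, names=('N', 'C', 'H', 'W'))

# align_to rearranges dimensions by name rather than by position,
# avoiding error-prone permute(0, 2, 3, 1)-style index juggling.
rearranged = imgs.align_to('N', 'H', 'W', 'C')
print(rearranged.names)
print(rearranged.shape)
```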
Breaking changes introduced in v1.3 include NumPy-style type promotion, which lets arithmetic and comparison operations on mixed dtypes promote their operands to a common dtype. This can lead to programs returning different dtypes and values than in previous versions. Other behavioural changes include the different handling of 0-dimensional inputs in torch.flatten and reworkings of nn.functional.affine_grid when align_corners is set to True. Details can be found in the project's release notes.
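The promotion rules in action, on a small made-up example:

```python
import torch

a = torch.tensor([1, 2], dtype=torch.int32)
b = torch.tensor([0.5, 0.5], dtype=torch.float32)

# Mixed-dtype arithmetic now promotes to a common dtype (float32 here)
# instead of requiring the operands to match.
c = a + b
print(c.dtype)

# Comparison operations likewise accept mixed dtypes.
print(a > b)
```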
PyTorch is a machine learning library mainly developed by Facebook and a major competitor to Google's TensorFlow. It is based on the Torch project, an ML framework programmed in Lua which is no longer in active development. Recently Alibaba Cloud added support for PyTorch, joining the likes of AWS, Microsoft Azure, and Google Cloud.
With interest in the library on the up, as rising contributions and citations show, the project is now seemingly trying to make even more of an impact by releasing additional helpers to the community.
How a model makes a certain decision, for example, is a question currently troubling those looking into using ML in enterprise scenarios. Captum, a tool introduced at this year's PyTorch Developer Conference, is a first attempt at addressing it. It offers insight into the importance of specific neurons or layers in a neural network and is a first step in the direction of better interpretability.
Privacy, as already mentioned, is another concern bubbling up in the research community, which is why Facebook’s AI team has put CrypTen forward. The project is meant to let users work with encrypted data and models to preserve privacy. It currently uses an implementation of secure multiparty computation, which in the future should be joined by homomorphic encryption and secure enclaves. Details are available in a separate blog post.
Apart from that, Facebook AI Research keeps improving its more traditional avenues and just announced Detectron2, a rewrite of its object detection and segmentation framework. The new version uses PyTorch and is designed in a modular way, making it more extensible than its predecessor.