Facebook’s PyTorch to light the way to speedy Machine Learning workflows


Facebook’s development department has finished a first release candidate for v1 of its PyTorch project – just in time for the first conference dedicated to the Python package.

For those not familiar with the tool, its main features are NumPy-like tensor computation with GPU acceleration and a special deep neural network implementation.
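A minimal sketch of what that looks like in practice (the variable names are illustrative, not from the release notes): tensors behave much like NumPy arrays, can be moved to a GPU when one is available, and autograd tracks operations for training neural networks.

```python
import torch

# NumPy-like tensor computation
a = torch.rand(3, 3)
b = torch.rand(3, 3)
c = a @ b  # matrix multiply on the CPU

# The same operation, GPU-accelerated when CUDA is available
if torch.cuda.is_available():
    c = (a.cuda() @ b.cuda()).cpu()

# Autograd: the basis of the deep neural network implementation
x = torch.ones(3, requires_grad=True)
y = (x * x).sum()
y.backward()
print(x.grad)  # gradient of y = sum(x^2) is 2x, i.e. tensor([2., 2., 2.])
```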

The preview contains a new set of compiler tools that rewrite PyTorch models at runtime to make them more efficient. The just-in-time compiler should also be able to export models so that they can run in a C++-only runtime. Optimisation is optional and can be done either by tracing native Python code with torch.jit.trace or by using a Python subset called Torch Script.
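To illustrate the tracing path, here is a hedged sketch: `scale_and_shift` is a made-up example function, but `torch.jit.trace` is the documented entry point. Tracing runs the function once on an example input, records the executed operations, and returns a compiled artifact that can be saved for the C++-only runtime.

```python
import torch

def scale_and_shift(x):
    # An ordinary, native Python function operating on tensors
    return x * 2 + 1

# Record the operations for an example input; the result is an
# optimised graph that no longer needs the Python interpreter.
traced = torch.jit.trace(scale_and_shift, torch.rand(3))

print(traced(torch.ones(3)))  # tensor([3., 3., 3.])

# The traced artifact can be serialised for loading from C++:
# traced.save("scale_and_shift.pt")
```

Tracing only records the operations actually executed for the example input, so data-dependent control flow is the case where the Torch Script subset is needed instead.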

Another addition is the C++ frontend, whose API is not yet marked as stable and should therefore be used for research purposes only. It contains C++ equivalents to torch.nn, torch.optim, and torch.data, as well as other components of the Python frontend. The release notes state that the purpose of the new interface is research into high-performance, low-latency, and bare-metal C++ applications.


To help speed up distributed computations, torch.distributed and torch.nn.parallel.DistributedDataParallel are now backed by an asynchronous library called C10D. This should especially improve parallel computations over slower networks, such as Ethernet-connected hosts. The release also adds support for send and recv in the Gloo backend.
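The wrapper itself is straightforward to use; the sketch below sets up a single-process Gloo group purely so the example is self-contained (in real deployments, each host or process would be launched with its own RANK and WORLD_SIZE, and the address/port values here are placeholders).

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

# Single-process setup so the example runs standalone; in practice
# these values come from the launcher / cluster environment.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", init_method="env://",
                        world_size=1, rank=0)

# Gradients are averaged across processes behind the scenes (via C10D)
model = DistributedDataParallel(nn.Linear(10, 2))
out = model(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 2])

dist.destroy_process_group()
```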

A complete list of newly available operators, distributions, and bug fixes can be found on GitHub.

Not yet ready, but already integrated

Other big players in the field of Machine Learning used the first PyTorch Developer Conference this week to make their support for the project known. Engineers from the Google Cloud Platform, for example, are now collaborating with Facebook’s team to give PyTorch users access to the company’s Cloud TPUs. They have apparently already finished a prototype that uses a linear algebra compiler called XLA, which is planned to be open sourced as well. AWS, meanwhile, includes PyTorch in its Amazon SageMaker offering, a platform for training and deploying machine learning models.

Last year Microsoft partnered with Facebook on the open neural network exchange format ONNX and has now refreshed Azure Machine Learning to keep its “first-class” PyTorch support up to date. The Azure Data Science Virtual Machine and Azure Notebooks also come with PyTorch preinstalled. Microsoft now plans contributions to make data loading and processing run faster for PyTorch; one focus is the formats defined for speech datasets in the Hidden Markov Model Toolkit.
