Open source AI collab project ONNX turns 1.7, takes first steps towards multi-framework training

Open machine learning model representation ONNX has hit version 1.7, previewing training support and adding new operators, along with model checker and documentation enhancements.

Since the last update, the ONNX team has been busy adding features to represent the various stages of training, with the aim of letting future releases distribute the process across different frameworks.

At this stage, the preview contains a protobuf message storing information such as the algorithm and initialisers used, function definitions for the most commonly used loss functions and optimisers, and Gradient and GraphCall operators.
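For illustration, here is a minimal sketch of the new Gradient operator put together with the onnx.helper Python API; the tensor names and the small Y = X * W graph are made up for the example, and the domain string reflects the operator's preview status:

```python
from onnx import helper

# Forward computation (illustrative): Y = X * W
mul = helper.make_node("Mul", inputs=["X", "W"], outputs=["Y"])

# A Gradient node requesting dY/dX and dY/dW. Following the operator
# schema, the xs attribute lists the tensors to differentiate with
# respect to, and y names the value being differentiated.
grad = helper.make_node(
    "Gradient",
    inputs=["X", "W"],
    outputs=["dY_dX", "dY_dW"],          # one gradient per entry in xs
    domain="ai.onnx.preview.training",   # preview training domain
    xs=["X", "W"],
    y="Y",
)
```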

Meanwhile, ONNX’s model checker was improved to verify that the typing constraints specified by an op’s schema are satisfied, and to infer a node’s output type from those constraints. The checker now also calls “shape-inference to do the extra-checking performed by the type-and-shape-inference methods of ops”.
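In practice, the two steps look roughly like this; the one-node model below is made up for the example, while check_model and infer_shapes are the checker and shape-inference entry points of the onnx Python package:

```python
from onnx import helper, TensorProto, checker, shape_inference

# A tiny one-node model (names and shapes are illustrative).
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "tiny_model",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)

# The checker validates the model against the op schemas...
checker.check_model(model)
# ...and shape inference derives types and shapes for intermediate values.
inferred = shape_inference.infer_shapes(model)
```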

Version 1.7 also provides some new operators, such as Einsum, GreaterOrEqual, LessOrEqual, and SoftmaxCrossEntropyLoss, as well as updates to often-used favourites such as Min, Max, and Constant.
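As a quick example of one of the additions, an Einsum node expressing a plain matrix multiplication could be built like this; the equation string follows numpy.einsum conventions, and the tensor names are illustrative:

```python
from onnx import helper

# Einsum contracting over the shared index j,
# i.e. C[i,k] = sum_j A[i,j] * B[j,k]
matmul = helper.make_node(
    "Einsum",
    inputs=["A", "B"],
    outputs=["C"],
    equation="ij,jk->ik",
)
```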

Other than that, the body graph of functions can now make use of multiple external operator sets (sketched below), and the operator registration APIs have learned to handle subgraph registration. The project documentation now also describes functions and external tensor data for clarity. A list of all changes can be found in the release notes.
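A rough sketch of a function body drawing on two operator sets might look as follows, assuming a recent onnx Python package; the com.example domain and the MyOp operator are hypothetical:

```python
from onnx import helper

nodes = [
    helper.make_node("Relu", ["x"], ["h"]),                        # default ai.onnx domain
    helper.make_node("MyOp", ["h"], ["y"], domain="com.example"),  # hypothetical external domain
]
fn = helper.make_function(
    domain="com.example",
    fname="ReluThenMyOp",
    inputs=["x"],
    outputs=["y"],
    nodes=nodes,
    opset_imports=[
        helper.make_opsetid("", 12),            # default operator set (opset 12 ships with ONNX 1.7)
        helper.make_opsetid("com.example", 1),  # hypothetical external operator set
    ],
    attributes=[],
)
```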

ONNX is a Linux Foundation AI project, introduced by Facebook and Microsoft in 2017. The project started with the goal of giving developers of machine learning applications an easy way to switch between frameworks, so that they could make use of each framework’s different strengths.

ONNX wasn’t the first to tackle the issue of what could be called “framework lock-in”. The Khronos Group, known for standards such as OpenGL, for example, proposed its neural network exchange format NNEF in 2015 to make deep learning models more portable. The list of participants involved in its creation includes companies such as Huawei, Intel, and Arm, which also count themselves among the supporters of the ONNX initiative. Converters for both formats exist.

One company that seems to be missing from both lists, however, is TensorFlow propagator Google. This suggests that ONNX is mainly meant to facilitate model portability between TensorFlow and Facebook’s PyTorch, which, to be fair, is a more probable scenario than teams jumping back and forth between frameworks during development. It may also be something Google doesn’t see as added value, given that its own project seemingly knows how to draw a crowd.