Go big or go home: ONNX 1.8 enhances big model and unit test support


ONNX, the Facebook- and Microsoft-initiated machine learning model representation, is now available in version 1.8 and sports enhancements such as serialisation for sequence and map data types in model inputs and outputs.

The latter was largely added to make the project more stable, as it allows sequence and map operator unit tests to be enabled. A size check in make_tensor is meant to ensure that the dimensions and elements of a tensor are consistent, preventing users from running into difficulties later on. Meanwhile, Windows users may be pleased to learn that there's a new conda package available as part of the release.
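The idea behind such a consistency check is simple: the number of values supplied must match the product of the declared dimensions. A minimal, purely illustrative sketch of this kind of validation (the function name and error message are assumptions, not ONNX's actual code):

```python
from math import prod

# Illustrative sketch of a make_tensor-style size check: the number of
# supplied values must equal the product of the declared dimensions.
# This is NOT ONNX's implementation, just the principle behind it.
def make_tensor_checked(name, dims, vals):
    """Build a simple tensor record, rejecting inconsistent shapes."""
    expected = prod(dims) if dims else 1
    if len(vals) != expected:
        raise ValueError(
            f"Tensor '{name}': got {len(vals)} values for dims {dims} "
            f"(expected {expected})."
        )
    return {"name": name, "dims": list(dims), "vals": list(vals)}

# A 2x3 tensor holds exactly 6 values, so this passes:
t = make_tensor_checked("weights", [2, 3], [1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(t["dims"])
```

Catching the mismatch at construction time, rather than when the model is later loaded or executed, is exactly the "difficulties later on" the release note refers to.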

Researchers who work with make_model without explicitly telling the system which version of the ONNX intermediate representation (IR) should be used can now adopt an extended variant of the tool, which derives the minimum required IR version from the specified opsets. This was made possible by opening the version table for programmatic Python access, a change that might be helpful in other contexts as well.
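Conceptually, such a lookup scans the version table for each imported opset and takes the largest minimum IR version among them. A hedged sketch with a made-up excerpt of such a table (the entries and function name below are illustrative assumptions, not ONNX's real version table):

```python
# Illustrative sketch: derive the minimum required IR version from the
# opsets a model imports. The table below is a made-up excerpt for
# demonstration purposes, NOT ONNX's actual version table.
VERSION_TABLE = {
    # (domain, opset_version) -> minimum IR version
    ("", 11): 6,
    ("", 12): 7,
    ("", 13): 7,
}

def find_min_ir_version(opset_imports):
    """Return the smallest IR version that supports every imported opset."""
    return max(VERSION_TABLE[(domain, version)]
               for domain, version in opset_imports)

# A model importing the default-domain opsets 12 and 13:
print(find_min_ir_version([("", 12), ("", 13)]))
```

Exposing the table itself, rather than only a lookup function, is what makes the "other contexts" mentioned above possible: any tool can now answer version-compatibility questions programmatically.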

The team behind the open standard also focused on improving the training and shape inference modules. The training module, for example, underwent some reworking of its IR and graph representation, and the rarely used GraphCall was removed. It also gained Differentiable tags for a better definition of the Gradient operator, as well as a tool to help developers create the TrainingInfoProto protobuf message introduced for storing training information.

Shape inference received some fixes at the node and graph level, so users should notice fewer hiccups in the current iteration. The module can now also be used with models larger than 2GB, which should help the standard find more adopters in vision-related machine learning scenarios. To support that change the ONNX API had to be slightly modified as well. 
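The 2GB ceiling comes from protobuf, whose serialised messages cannot exceed that size, so large models keep their weights in external files that the model merely references. A self-contained sketch of that external-data idea (threshold, file layout, and function name are illustrative assumptions, not ONNX's implementation):

```python
import os
import tempfile

# Illustrative sketch of the external-data approach that lets tooling
# handle models beyond protobuf's 2 GB message limit: large tensors are
# written to sidecar files and the model keeps only a reference.
# This is NOT ONNX's actual code; the threshold is artificially small.
THRESHOLD = 1024  # bytes; real tooling uses a far larger cutoff

def externalize(tensors, out_dir):
    """Move large raw tensors to files; return a lightweight model dict."""
    model = {"tensors": []}
    for name, raw in tensors.items():
        if len(raw) > THRESHOLD:
            path = os.path.join(out_dir, name + ".bin")
            with open(path, "wb") as f:
                f.write(raw)
            # Only the reference stays in the (small) model structure.
            model["tensors"].append({"name": name, "location": path})
        else:
            model["tensors"].append({"name": name, "raw": raw})
    return model

with tempfile.TemporaryDirectory() as d:
    m = externalize({"big": b"\x00" * 4096, "small": b"\x01" * 8}, d)
    print([("location" in t) for t in m["tensors"]])  # big is external
```

Because the in-memory model stays small, tools such as shape inference can operate on it without ever loading the multi-gigabyte weight data, which is what enables the over-2GB support mentioned above.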

Those who previously had trouble with loop outputs or scalar ConstantOfShape in shape inference are encouraged to update and try again, as some bug fixes in the latest module should have mitigated these issues. More details on bug fixes and changes to the project's infrastructure can be found in the ONNX release notes.

ONNX was started in 2017 to lay the groundwork needed to let practitioners switch between machine learning frameworks. The project aims to promote model interoperability, as the lack of a standard makes it hard for developers to use their models anywhere other than in the context they were trained in, leading to a bit of a lock-in situation. 

With the potential to tap into new user groups, it didn't take long for companies including IBM and Intel to jump in and support the initiative. To make the project more enticing to those who are sceptical about open source driven by large established tech vendors, ONNX was moved under the umbrella of the Linux Foundation's LF AI Foundation in November 2019. Since then, the project has been updated twice.
