Machine learning interoperability project ONNX is now available in version 1.10, a release that expands the model representation's type system and API.
ONNX was initially released in 2017 as a cooperative project between Facebook and Microsoft. It consists of an intermediate representation (IR) comprising definitions of standard data types, an extensible computation graph model, and descriptions of built-in operators.
Developed as a standard to make sure models can be transferred between frameworks, it soon drew support from additional companies and found a new home in the LF AI Foundation in 2019. The project is currently backed by 43 organisations, though TensorFlow developer Google is notably missing from that list. A backend for using ONNX models in TensorFlow can, however, be found under the ONNX GitHub project.
In version 1.10, the ONNX IR gains an Optional and a SparseTensor type, and model protos, the format for bundling models, graphs, and metadata, can now include a list of model-local functions.
The update contains new operators such as OptionalHasElement, as well as new function operators such as CastLike. Changes to the type constraints of BatchNormalization, bfloat16 support in Pow, and the ability to return a slice of a tensor's shape via an optional end attribute on Shape are meant to give developers additional flexibility.
The ONNX team also improved the project's API, exporting the parser methods to Python so that developers can use them to construct models, and introducing symbolic shape inference. The latter keeps the shape inference process from stopping when confronted with symbolic dimensions or dynamic scenarios. Shape inference should now also work with DynamicQuantizeLinear and pick up shape input from partial data propagation.
Teams that had trouble using ONNX on aarch64 should struggle less thanks to newly added wheel build support for the architecture. Other changes worth noting include the update of protobuf to v3.16, clearer specs for QLinearMatMul, fixed compilation warnings on Linux, and reworked BatchNorm outputs for training mode.
Details of the update are available via the project’s release notes.