Microsoft throws open ONNX Runtime – now roll up your sleeves

Microsoft has open sourced the Open Neural Network Exchange (ONNX) Runtime, its inference engine for Machine Learning models in the ONNX format, which it developed with AWS and Facebook.

Now residing on GitHub under an MIT License, Microsoft called the release a “significant step” towards an open and interoperable ecosystem for Artificial Intelligence.

“We hope this makes it easier to drive product innovation in AI,” Microsoft posted here.

Translated: the founding fathers now expect open sourcers to get on with the job of customising and integrating the engine with their products.

Builds of ONNX Runtime are initially available for Python on CPUs running Windows, Linux and macOS, on GPUs running Windows and Linux, and for C# on CPUs running Windows.
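For a flavour of what those Python builds give you, here is a minimal sketch of a single inference pass with the `onnxruntime` package. The model path and inputs are placeholders of our own, not anything shipped with the release:

```python
def run_onnx_model(model_path, inputs):
    """Run one inference pass with ONNX Runtime's Python API.

    A hedged sketch: `model_path` and `inputs` are illustrative
    placeholders, since no model ships with the announcement.
    """
    # Deferred import so the sketch reads even without the package
    # installed (pip install onnxruntime).
    import onnxruntime as ort

    # An InferenceSession loads the model and selects an execution
    # provider (CPU by default; the GPU builds add CUDA support).
    session = ort.InferenceSession(model_path)

    # Feed a dict mapping the model's input names to arrays; passing
    # None for the output names returns every model output.
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: inputs})
```

The session object can be created once and reused across requests, which is where the scoring-latency gains Microsoft talks about come from.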

Microsoft would dearly love you to adopt ONNX Runtime as it means greater indirect support from the AI community for Windows. Redmond already employs the runtime to improve scoring latency and bring greater efficiency to models used in its Bing Search, Bing Ads and Office services.

It’s also a component in Windows ML for Windows 10. Windows ML lets you run trained Machine Learning models in Windows apps locally, on the device, via CPU and GPU optimisation.

Of course, you have other Machine Learning framework choices, but the ONNX camp would prefer you backed their horse in this race. Microsoft highlighted that models built in TensorFlow, Keras, scikit-learn or Core ML can be converted using its ONNXMLTools and TF2ONNX open-source converters.
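As a sketch of what such a conversion looks like in practice, ONNXMLTools exposes per-framework convert functions such as `convert_sklearn`. The model, feature count and output path below are our own illustrative placeholders:

```python
def sklearn_to_onnx(model, n_features, output_path="model.onnx"):
    """Convert a fitted scikit-learn model to ONNX via ONNXMLTools.

    A hedged sketch: the converter API is real, but the model and
    file name here are placeholders, not from the announcement.
    """
    # Deferred imports so the sketch reads even without the packages
    # installed (pip install onnxmltools skl2onnx).
    import onnxmltools
    from skl2onnx.common.data_types import FloatTensorType

    # The converter needs the input tensor's name and shape up front;
    # None in the batch dimension leaves it dynamic.
    initial_types = [("input", FloatTensorType([None, n_features]))]
    onnx_model = onnxmltools.convert_sklearn(model,
                                             initial_types=initial_types)

    # The result is an ONNX protobuf, serialised like any other.
    with open(output_path, "wb") as f:
        f.write(onnx_model.SerializeToString())
    return output_path
```

The resulting `.onnx` file is exactly what ONNX Runtime's `InferenceSession` expects to load.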

Microsoft also claimed performance and efficiency gains would be yours. It reckoned models it had converted to ONNX had seen performance double, while the runtime consumed just a few megabytes of memory on the CPU. The upshot, it said, is lower latency and higher efficiency: a smoother end-user experience and reduced costs through lower machine utilisation.