Welcome to the machine learning: Microsoft plays to ML devs with slew of AI announcements

Microsoft’s developer conference, Build, is in full swing, boasting machine learning-related announcements ranging from Visual Studio additions to responsible AI tools and a new “AI supercomputer”. Meanwhile, Amazon has flung out a whitepaper instructing devs on how to build cloud-native machine learning systems.

A new AI supercomputer may sound exciting, but not much is known about the system, which Microsoft built in cooperation with OpenAI, an AI research laboratory it teamed up with (and invested in) in summer 2019. The Azure-hosted behemoth apparently reflects OpenAI’s “dream system” sporting “285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.”

In a canned statement, OpenAI CEO Sam Altman explained the project with the observation “that larger-scale systems are an important component in training more powerful models”. Multitasking appears to be the main goal, with Microsoft CTO Kevin Scott describing the project’s aim as follows: “This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”

After thinking up something worth the money one surely would have to spend to use Azure for such data-intensive tasks, developers will apparently soon be able to investigate Redmond’s approach to language understanding. At Build, the company promised it “soon” would be “open-sourcing its Microsoft Turing models”, giving devs the tools to train large AI models “in a distributed and optimised way”…in Azure Machine Learning.

Those who don’t want to wait that long can also take a second look at DeepSpeed, the deep learning optimisation library Microsoft introduced earlier this year. According to a Build announcement, the library’s optimiser now tackles a wider range of memory consumption during training, which is said to give DeepSpeed users order-of-magnitude improvements in both model scale and training speed.
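For the curious, getting a PyTorch model under DeepSpeed’s wing is mostly a matter of handing it to the library’s engine along with a config describing batch size, precision and memory optimisations. The sketch below is our own illustration of that pattern; the toy model, synthetic data and config values are assumptions, not anything lifted from Microsoft’s announcement, and the script would normally be launched via the deepspeed launcher rather than plain python.

```python
# Minimal sketch of training a PyTorch model with DeepSpeed.
# The model, data and config values are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
import deepspeed

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# Dummy stand-in for a real dataset.
data = TensorDataset(torch.randn(1024, 784), torch.randint(0, 10, (1024,)))
loader = DataLoader(data, batch_size=32)

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},               # half precision to cut memory use
    "zero_optimization": {"stage": 2},       # partition optimiser state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# DeepSpeed wraps the model in an engine that owns the optimiser, fp16 and ZeRO logic.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for inputs, labels in loader:
    inputs = inputs.to(model_engine.device).half()
    labels = labels.to(model_engine.device)
    loss = torch.nn.functional.cross_entropy(model_engine(inputs), labels)
    model_engine.backward(loss)  # engine handles loss scaling and gradient partitioning
    model_engine.step()
```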

Speed isn’t the only limiting factor when getting some sort of machine learning or artificial intelligence into a company’s products and workflows, though. Depending on where the technology is used, satisfying strict transparency requirements, that is, making decisions explainable and therefore understandable, can be a top priority before a project gets the go-ahead in the first place. Microsoft jumps in here with some responsible machine learning capabilities, including interpretability and fairness helpers, which are meant to make their way into Azure ML.
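The fairness side of that tooling is generally associated with the open-source Fairlearn toolkit, and the kind of check involved is easy to sketch: train a model, then slice its metrics by a sensitive attribute to see whether any group gets short-changed. The snippet below is a minimal illustration using synthetic data and an assumed 0/1 “sex” attribute, not the Azure ML integration itself.

```python
# Rough sketch of a fairness check with the open-source Fairlearn package.
# Synthetic data and the binary "sex" attribute are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
sex = rng.integers(0, 2, size=1000)  # sensitive attribute
y = (X[:, 0] + 0.5 * sex + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

# Slice accuracy and selection rate by the sensitive attribute.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y, y_pred=pred, sensitive_features=sex,
)
print(frame.overall)       # metrics over the whole dataset
print(frame.by_group)      # the same metrics per group, exposing disparities
print(frame.difference())  # largest gap between groups for each metric
```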

Data privacy is another important factor when training models and getting buy-in from any compliance team – and of course Azure got a new tool for that too. WhiteNoise, a differential privacy toolkit, was developed in cooperation with Harvard’s Institute for Quantitative Social Science (IQSS) and School of Engineering. It injects statistical noise into data to “prevent disclosure of private information” and helps users formulate privacy-preserving queries.
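The toolkit’s own API aside, the principle at work is the classic Laplace mechanism: clip the data, work out how much a single record could swing the answer, and add noise calibrated to that sensitivity and a privacy budget epsilon. The sketch below shows the idea in plain NumPy; it is not the WhiteNoise API, and the salary figures and parameters are made up.

```python
# Generic illustration of the Laplace mechanism behind differentially private
# queries. This is plain NumPy, not WhiteNoise; data and parameters are made up.
import numpy as np

rng = np.random.default_rng(42)
salaries = rng.uniform(20_000, 120_000, size=500)  # pretend private records

def dp_mean(values, lower, upper, epsilon):
    """Return a differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # One record can shift the mean of n clipped values by at most (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Smaller epsilon means more noise and stronger privacy.
print(dp_mean(salaries, lower=20_000, upper=120_000, epsilon=0.5))
```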

Just recently, Microsoft quietly lifted the lid off its own experimental AI programming language, Bosque. The repo describes the language as simultaneously supporting “a high productivity development experience expected by modern cloud developers, coming from say a TypeScript/Node stack, while also providing a resource efficient and predictable runtime with a performance profile similar to a native C++ application”. Whether it catches on remains to be seen, but it goes to show that a lot seems to be going on behind the AI curtain.

Even Microsoft’s IDE Visual Studio has been roped into the machine learning business, with the just-released version 16.6 introducing Model Builder as a preview feature. The former VS extension has been folded in to facilitate the use of machine learning models in .NET applications “without prior ML experience”. Devs can activate it by adding Machine Learning to a project, which is done by right-clicking the project in the Solution Explorer. The UI then presents a number of scenarios the tool can support, ranging from text classification to anomaly and object detection.

While those interested in the latter will have to make do with example walkthroughs to get them into their applications, devs who are content with text and image classification, value prediction, and recommendation tasks can fall back on automated machine learning. AutoML and the ML.NET platform are Model Builder’s tooling of choice, and while most training comes in local flavours only, VS 16.6 users willing to invest have the option of improving performance on computationally heavy image classification by using Azure ML.

Microsoft of course isn’t the only one trying to flog its cloud services to the machine learning masses. AWS is another contender in the field and tried to divert some attention to its own platform yesterday by publishing a whitepaper on designing “machine learning workloads following cloud best practices”.

The document is meant as an addition to the “well-architected framework” paper the company released a while ago, and provides system architects with some pointers for setting up ML workloads on AWS – or indeed reviewing old ones. One of the aspects the paper takes into account is cost optimisation, which could make it worth a read if you’re thinking about the company’s sometimes cloudy pricing model.