Intel has released version 5.0 of its Deep Learning Reference Stack, adding natural language processing capabilities and new example projects to the offering.
The company introduced the collection of open source deep learning tools, optimised for its Xeon processor range, in December 2018. It was meant to facilitate deep learning projects on Intel platforms by providing the components needed to start and deploy a project in one neat package.
Back then the stack contained Intel’s Linux distribution Clear Linux, Kata Containers for additional workload protection, the machine learning platform TensorFlow, and a number of other, more specialist libraries.
To make deployment easier, the stack is available as a Docker image and uses the Kubeflow Pipelines platform for orchestration purposes. Since preferences vary, it is offered as either a TensorFlow- or a PyTorch-based version in different configurations, an overview of which can be found on the project’s website.
With v5.0 of the reference stack, Intel provides updated versions of the Intel OpenVINO model server, Intel Deep Learning Boost, TensorFlow, PyTorch, and PyTorch Lightning. It also made sure to add natural language processing tools to the package, since the associated approaches, used for tasks like machine translation or as part of transfer learning efforts, have seen a lot of interest in the past year.
Users will therefore now find the NLP library Flair and the Transformers project incorporated into the stack. The latter supplies application builders with general-purpose architectures for natural language understanding and natural language generation, along with pretrained models in a large variety of languages.
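To give an idea of what working with Transformers looks like, a sentiment classification using one of its pretrained models might be sketched as follows. This is a generic illustration of the library's `pipeline` API, not code from Intel's stack, and the default model the pipeline downloads is chosen by the library, not by the reference stack:

```python
# Minimal sketch of the Transformers pipeline API.
# Assumption: the transformers package is installed, as it would be
# inside the reference stack's NLP-enabled images.
from transformers import pipeline

# Build a sentiment-analysis pipeline backed by a pretrained model;
# the model weights are downloaded on first use.
classifier = pipeline("sentiment-analysis")

result = classifier("The new reference stack made setup much easier.")
# result is a list of dicts with a predicted label and a confidence score,
# e.g. [{"label": "POSITIVE", "score": 0.99}]
print(result[0]["label"], round(result[0]["score"], 3))
```

The same `pipeline` entry point covers other tasks the article's NLP additions target, such as translation and text generation, by swapping the task name.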
On the infrastructure side, the company added integrations for function-as-a-service scenarios. How these can be used is demonstrated in the new example projects: at the moment there is one for image-to-image translation within a serverless architecture and one for automatically classifying GitHub issues. Additional use cases are promised for the coming weeks.