Google has found another way to lure machine learning aficionados to its cloud, offering so-called Deep Learning Containers to get ML projects up and running quicker.
The product consists of a set of performance-optimised Docker containers that come with a variety of tools necessary for deep learning tasks already installed. In the introductory blog entry, Google software engineer Mike Cheng illustrates the idea behind the project with a workflow spanning local prototyping, the integration of cloud tools, and eventual deployment to a cloud service.
In this scenario, Deep Learning Containers are meant to ensure that all necessary dependencies are available in each of these environments. They are also supposed to improve the scalability of a project and facilitate the use of additional hardware, since, for example, Nvidia's GPU tools are already part of the package.
Each of the available container images includes a Python 3 environment (with packages such as numpy, sklearn, scipy, pandas, and nltk), a Jupyter environment, conda, the Nvidia tools CUDA, cuDNN, and NCCL2, and a few other supporting packages. They differ, however, in the framework installed, giving customers the choice between TensorFlow 1.x and 2.x, PyTorch, scikit-learn, and R.
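For local use, the images are pulled from Google's public container registry like any other Docker image. A minimal sketch of what that might look like follows; the registry path `gcr.io/deeplearning-platform-release` matches Google's documentation, but the specific image tag (`tf-cpu.1-13`) and the notebook directory are illustrative assumptions, so check the image list for the framework and version you actually need:

```shell
# List the available Deep Learning Container images
# (requires the gcloud CLI to be installed and authenticated).
gcloud container images list --repository="gcr.io/deeplearning-platform-release"

# Pull and run a TensorFlow CPU image locally; the tag is an example,
# pick one from the listing above. The containers serve JupyterLab,
# here exposed on localhost:8080, with a local notebook directory
# mounted into the container's home.
docker run -d -p 8080:8080 \
  -v /path/to/notebooks:/home \
  gcr.io/deeplearning-platform-release/tf-cpu.1-13
```

Because the same image runs unchanged on a laptop, on GKE, or on AI Platform, the dependency set stays identical across the prototyping-to-deployment workflow described above.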
Deep Learning Containers can be used locally as well as on Google Kubernetes Engine, Compute Engine, or AI Platform Training. According to the documentation, Deep Learning Container instances are available free of charge. However, if they are combined with other Google services such as GKE, which makes sense if scalability or more computational power is what you're after, the pricing of those offerings applies.