AWS tunes TensorFlow for latest Deep Learning images

Amazon has unwrapped a pile of updated machine learning integrations for its Deep Learning machine images, including its own tuning of the latest version of TensorFlow.

The books and bits giant has announced that customers can use its AWS Deep Learning and Deep Learning Base AMIs on Amazon Linux 2, the data beast's own flavour of the open source operating system. The distribution is tuned for “optimal performance on AWS” and has long-term support until June 2023.

AWS said its Deep Learning AMI now comes with an optimised build of TensorFlow 1.13.1, released in February, to “accelerate performance on Intel Xeon Platinum processors that power EC2 C5 Instances”. The firm claimed that training a ResNet-50 model with synthetic ImageNet data using the Deep Learning AMI delivers 9.4 times faster throughput than stock TensorFlow 1.13 binaries.

GPU instances also get an optimised build of TensorFlow 1.13 that is configured with Nvidia CUDA 10 and cuDNN 7.4 “to take advantage of mixed precision training on Volta V100 GPUs powering EC2 P3 instances”. Apparently, the AMI will automatically deploy the most performant build of TensorFlow “for the EC2 instance of your choice when you activate the TensorFlow virtual environment for the first time.”
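For the curious: the trick behind mixed precision training is doing the heavy arithmetic in float16 while keeping a float32 master copy of the weights, with the loss multiplied by a scale factor so small gradients don't vanish in float16's limited range. A toy NumPy sketch (our own illustration, not AWS's build, and the scale factor is a hypothetical value) shows the underflow problem and the fix:

```python
import numpy as np

LOSS_SCALE = 1024.0  # hypothetical scale factor; real frameworks tune this dynamically

grad_fp32 = np.float32(1e-8)  # a tiny gradient, below float16's smallest subnormal

# Without scaling: casting straight to float16 underflows to zero,
# and the update is silently lost.
naive = np.float16(grad_fp32)

# With loss scaling: scale up before the float16 cast (as if the loss had been
# scaled before backprop), then divide back down in float32 before the update.
scaled = np.float16(grad_fp32 * LOSS_SCALE)
recovered = np.float32(scaled) / LOSS_SCALE

print(naive, recovered)  # naive is 0.0; recovered is close to the true 1e-8
```

Volta's tensor cores make the float16 multiplies fast; the loss scale is what keeps them numerically honest.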


The AMIs also include the Horovod distributed training framework, which should help with scaling TensorFlow training to multiple GPUs. The framework is again optimised for AWS, in this case for its Nvidia Tesla-powered EC2 P3 instances. AWS claimed a 27 per cent increase in throughput when training a ResNet-50 model compared to “stock TensorFlow 1.13 on 8 nodes”.
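Horovod's scaling model is data parallelism: each GPU computes gradients on its own slice of the batch, then an allreduce averages them so every worker applies the same update. A toy pure-Python sketch of that averaging step (our simplification; real Horovod does this across processes over MPI or NCCL with a ring allreduce, which changes only how the sum is communicated):

```python
def allreduce_average(worker_grads):
    """Average gradients element-wise across workers.

    Conceptually what Horovod's allreduce produces: each inner list is one
    worker's gradients, and every worker ends up with the same averaged values.
    """
    num_workers = len(worker_grads)
    summed = [sum(vals) for vals in zip(*worker_grads)]
    return [s / num_workers for s in summed]

# Four hypothetical workers, each holding gradients for two parameters.
grads = [[0.1, 0.4], [0.3, 0.0], [0.1, 0.4], [0.1, 0.0]]
avg = allreduce_average(grads)  # every worker applies this same update
```

Because each worker now sees an effective batch that is `num_workers` times larger, Horovod's documentation suggests scaling the learning rate up accordingly.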

The Deep Learning AMIs now also support Chainer 5.3, the Python-based deep learning framework. Chainer comes fully configured to take advantage of CuPy with Nvidia CUDA 9 and cuDNN 7 drivers, accelerating computations on the Nvidia Volta GPUs powering Amazon EC2 P3 instances.
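The reason CuPy matters here is that it mirrors the NumPy API, so the same array code runs on CPU or GPU depending on which module it is handed. A minimal sketch of that dispatch pattern, with the CuPy import guarded so it falls back to NumPy when no GPU stack is present (our illustration of the idiom, not AWS's configuration):

```python
import numpy as np

try:
    import cupy  # present on GPU instances such as EC2 P3
    xp = cupy
except ImportError:
    xp = np  # CPU fallback: CuPy mirrors the NumPy API, so the code is unchanged

def relu(x):
    # Works identically whether x is a NumPy or CuPy array.
    return xp.maximum(x, 0)

out = relu(xp.asarray([-1.0, 0.5, 2.0]))
```

Chainer wraps the same idea in `chainer.backends.cuda.get_array_module`, which picks the right module based on where an array already lives.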

Lastly, the AMIs feature the latest release of MXNet, 1.4, which adds Java bindings for inference, Julia bindings, and JVM memory management improvements, amongst other enhancements.
