What’s the point: CLion, Rancher, Apache TVM, and AWS Lambda

CLion 2020.3 lends C/C++ devs a debugging hand

The final CLion release of the year aims to lend C/C++ developers a hand with debugging. To make the experience smoother, it now allows devs to debug with core dumps, and to run debug configurations and unit tests with root/admin privileges. CLion also provides expandable inline variable views and inline watches, so users can follow complex expressions in the editor instead of having to switch to the Watches panel.

Other enhancements in version 2020.3 include better integration with the testing tools CTest and Google Test, MISRA C 2012 and MISRA C++ 2008 checks, the option to disable CMake profiles, and some additional help for working with Makefile projects.
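CLion's CTest integration picks up tests that are registered through CMake. As a minimal sketch (project and file names here are illustrative), a test binary becomes visible to CTest once it is declared with `enable_testing()` and `add_test()`:

```cmake
# Illustrative CMakeLists.txt fragment: register a test executable with CTest
cmake_minimum_required(VERSION 3.17)
project(demo_tests)

enable_testing()

# A hypothetical test binary; CLion's test runner can then list and run it via CTest
add_executable(math_test math_test.cpp)
add_test(NAME math_test COMMAND math_test)
```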

Rancher shuffles under the SUSE CaaS hood

A good five months after announcing its plan to buy enterprise Kubernetes distributor Rancher, SUSE has now declared the acquisition finalised. In a blog post on the topic, Rancher co-founder Sheng Liang provided some insight into the future of Rancher, especially in relation to SUSE's CaaS Platform, which onlookers had been wondering about for a while.

“SUSE and Rancher customers can expect their existing investments and product subscriptions to remain in full force and effect according to their terms. Additionally, the delivery of future versions of SUSE’s CaaS Platform will be based on the innovative capabilities provided by Rancher. We will work with CaaS customers to ensure a smooth migration.” 

He also underlined both companies’ commitment to open source, promising to “continue contributing to upstream projects”.

Deep Learning project Apache TVM joins ASF top level

Apache TVM, the open source machine learning compiler stack for CPUs, GPUs, and specialised accelerators, has graduated to become a top-level project of the Apache Software Foundation.

TVM promises its users, who include AMD, Arm, AWS, Intel, Microsoft, and Nvidia, a high degree of flexibility and performance, offering the functionality needed to deploy deep learning applications across a wide range of hardware backends.

AWS Lambda onboards containers, ups function memory limits

Even if you haven’t heard about it for a while, serverless computing is still a thing, and so AWS used its stage at re:Invent to announce some changes to its AWS Lambda service. The company has decided to start “rounding up duration to the nearest millisecond with no minimum execution time”, which should make things a bit cheaper for short-running functions.
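To see why finer-grained billing saves money, compare the old behaviour (duration rounded up to the next 100ms increment) with the new one (billed to the nearest millisecond). The sketch below is illustrative only; the rounding functions are hypothetical helpers, not an AWS API:

```python
import math

def old_billed_ms(duration_ms: float) -> int:
    """Pre-change behaviour: round duration up to the next 100 ms increment."""
    return math.ceil(duration_ms / 100) * 100

def new_billed_ms(duration_ms: float) -> int:
    """Post-change behaviour: bill to the nearest millisecond (rounded up)."""
    return math.ceil(duration_ms)

# A short invocation of 42.3 ms used to be billed as a full 100 ms;
# under the new scheme it is billed as 43 ms.
print(old_billed_ms(42.3))  # 100
print(new_billed_ms(42.3))  # 43
```

For functions that routinely finish well under the old 100ms boundary, billed duration can drop by more than half.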

It now also allows memory allocation of up to 10GB (about three times the previous limit) for Lambda functions, and lets developers package their functions as container images, or deploy arbitrary base images to Lambda, provided they implement the Lambda service API.
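Packaging a function as a container image can be as simple as building on one of AWS's provided base images, which already implement the required runtime interface. A minimal sketch, assuming a Python function in `app.py` with a handler named `handler` (file and handler names are illustrative):

```dockerfile
# Illustrative only: package a Python Lambda function as a container image,
# starting from an AWS-provided base image that implements the runtime interface.
FROM public.ecr.aws/lambda/python:3.8

# Copy the function code into the image's task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Install any dependencies alongside the function code
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Tell Lambda which handler to invoke (module.function)
CMD ["app.handler"]
```

The resulting image is pushed to a container registry and referenced when creating the function, rather than uploading a zip archive.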