DeepMind says RLax… or try Haiku(s)

Artificial intelligence company DeepMind has open-sourced new libraries for neural networks and reinforcement learning, making the most of mothership Google’s JAX.

The news was spread via Twitter, informing machine learning aficionados about the release of the (still experimental) projects Haiku and RLax on GitHub.

As the name Haiku might suggest to those familiar with DeepMind’s open source activities, the library is a riff on the neural network library Sonnet. However, instead of building on TensorFlow, Haiku draws on Google’s numerical computation library JAX. In the project’s repo, the library’s aim is described as enabling “users to use familiar object-oriented programming models while allowing full access to JAX’s pure function transformations”.

JAX was introduced by Google in 2018 as a way to automatically differentiate native Python and NumPy functions and speed up scientific computing, which is almost always useful in machine learning-related tasks.
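
To get a feel for what that means in practice, the snippet below (a toy example of our own, not taken from either DeepMind project) differentiates and JIT-compiles an ordinary NumPy-style loss function:

```python
# Toy example of JAX's transformations; the loss function is illustrative only.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Plain NumPy-style code: squared error of a linear prediction
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# jax.grad differentiates the Python function, jax.jit compiles it via XLA
grad_fn = jax.jit(jax.grad(loss))

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(grad_fn(w, x, y))  # gradient of the loss with respect to w
```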

While JAX already serves as the basis for a number of libraries, Flax being just one of them, Haiku aims to facilitate rather specific use cases, such as managing model parameters and model state, without interfering too much in other areas.
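
In broad strokes, that workflow looks something like the sketch below, loosely based on the examples in the repo (layer sizes and inputs are arbitrary placeholders): modules are written in an object-oriented style, and hk.transform turns them into a pure init/apply pair that hands the parameters back explicitly.

```python
# Rough sketch of Haiku's workflow, loosely following the repo's examples;
# layer sizes and inputs are arbitrary placeholders.
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    # Inside the function, modules look like ordinary objects...
    return hk.Linear(1)(x)

# ...but hk.transform converts the whole thing into pure functions.
net = hk.transform(forward)

rng = jax.random.PRNGKey(42)
x = jnp.ones((4, 8))

params = net.init(rng, x)        # parameters are created and returned explicitly
out = net.apply(params, rng, x)  # and passed back in, keeping apply side-effect free
```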

Other advantages DeepMind points out are that Haiku has been heavily tested in large-scale settings such as language and image processing, and that it stays close to Sonnet. The latter is meant to make the library easier to pick up for those already familiar with that tool and to open them up to JAX’s promised performance and productivity improvements.

RLax is also based on JAX and “exposes useful building blocks for implementing reinforcement learning agents”. In reinforcement learning, systems called agents – these can be robots, for example – learn to interact with the world they are embedded in, often guided by reward signals.

RLax doesn’t come with complete algorithms but rather “implementations of reinforcement learning specific mathematical operations that are needed when building fully-functional agents”, so a bit of brain power is needed to build a functioning system. 
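
As an illustration of what such a building block might look like in use, here is a hedged sketch of a one-step temporal-difference error computed with the library’s td_learning op; the values are made-up placeholders, and exact function names should be checked against the repo.

```python
# Hedged sketch: wiring one RLax building block into an agent's own update code.
# All numbers are made-up placeholders; check function names against the repo.
import jax.numpy as jnp
import rlax

v_tm1 = jnp.array(0.5)        # value estimate for the previous state
r_t = jnp.array(1.0)          # reward received for the transition
discount_t = jnp.array(0.99)  # discount factor
v_t = jnp.array(0.8)          # value estimate for the current state

# RLax supplies the maths; the surrounding agent logic is left to the user.
td_error = rlax.td_learning(v_tm1, r_t, discount_t, v_t)
loss = 0.5 * td_error ** 2    # a squared loss the agent could differentiate with JAX
```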

Those who aren’t daunted by that fact can find an implementation of an agent able to play a game of catch in the project’s repository.