LinkedIn debuts Java machine learning framework Dagli

Java isn’t exactly the first programming language that springs to mind when discussing machine learning, but that doesn’t mean it can’t be used for such purposes. To help make it more of an option, LinkedIn’s research team has released the ML framework Dagli, which claims to “make it easy to write bug-resistant, readable, efficient, maintainable and trivially deployable models” for JVM languages.

According to LinkedIn research scientist Jeff Pasternack, the machine learning community has gained plenty of “excellent tools” in the last couple of years, yet deploying models as part of an integrated pipeline “remains more cumbersome than it should be”.

Dagli tackles this by representing machine learning pipelines as directed acyclic graphs (DAGs, hence the name), a structure that opens up extensive optimisation opportunities.

The roots of a DAG are either placeholders (representations of training and inference data) or generators, which provide values to child nodes such as transformers (data transformations and learned models) and views. The directed edges of the graph specify that data flows only from parent to child, while the acyclic property ensures that no node can be its own ancestor, preventing infinite loops.
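What that looks like can be sketched in plain Java. The types below are invented for illustration and do not correspond to Dagli’s actual API; they only mirror the structure described above, with placeholder roots, transformer children, and edges that carry data downwards:

```java
import java.util.List;
import java.util.function.Function;

// Conceptual sketch only: these types are invented for illustration
// and do not correspond to Dagli's real API.
public class DagSketch {

  // Every node in the graph produces a value of type R.
  interface Node<R> {
    List<Node<?>> parents(); // directed edges run from parent to child
  }

  // A root node: a stand-in for training or inference data supplied later.
  static final class Placeholder<R> implements Node<R> {
    @Override public List<Node<?>> parents() { return List.of(); } // roots have no parents
  }

  // An inner node: derives its value from a single parent's output.
  static final class Transformer<T, R> implements Node<R> {
    private final Node<T> input;     // final fields: a wired graph is immutable
    private final Function<T, R> fn;

    Transformer(Node<T> input, Function<T, R> fn) {
      this.input = input;
      this.fn = fn;
    }

    R applyTo(T parentValue) { return fn.apply(parentValue); }
    @Override public List<Node<?>> parents() { return List.<Node<?>>of(input); }
  }

  public static void main(String[] args) {
    // Data flows strictly from the placeholder down through the transformers.
    Placeholder<String> text = new Placeholder<>();
    Transformer<String, String[]> tokens = new Transformer<>(text, s -> s.split("\\s+"));
    Transformer<String[], Integer> tokenCount = new Transformer<>(tokens, t -> t.length);

    System.out.println(tokenCount.applyTo(tokens.applyTo("a directed acyclic graph"))); // 4
  }
}
```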

Unlike many other projects, Dagli handles training and inference in a single pipeline, which the LinkedIn team argues cuts down on technical debt: it reduces duplicated and glue code, making systems easier to maintain in the long run. Another upside of this approach is that pipelines can be serialised as a single object, which makes them easier to deploy than multi-part setups.
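As a rough illustration of what single-object deployment looks like (the `TrainedPipeline` class here is a hypothetical stand-in, not a Dagli type), standard Java serialisation is all it takes to ship and restore such a pipeline:

```java
import java.io.*;

// Illustrative only: `TrainedPipeline` stands in for whatever serialisable
// model object a framework like Dagli would produce after training.
class TrainedPipeline implements Serializable {
  private static final long serialVersionUID = 1L;
  // ... learned parameters and the transformer graph would live here ...
}

public class DeploySketch {
  public static void main(String[] args) throws IOException, ClassNotFoundException {
    TrainedPipeline pipeline = new TrainedPipeline();

    // The whole pipeline -- feature transformations plus model -- is one
    // object, so shipping it is a single write...
    try (ObjectOutputStream out =
             new ObjectOutputStream(new FileOutputStream("pipeline.bin"))) {
      out.writeObject(pipeline);
    }

    // ...and serving is a single read, with no glue code to reassemble parts.
    try (ObjectInputStream in =
             new ObjectInputStream(new FileInputStream("pipeline.bin"))) {
      TrainedPipeline restored = (TrainedPipeline) in.readObject();
      System.out.println("Loaded: " + restored);
    }
  }
}
```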

For those fearing that the single-pipeline approach could lead to overfitting, Pasternack pointed out that the project’s architecture allows transformers to produce different outputs during training and inference, which enables cross-training and similar strategies.
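Conceptually, that might look like the following interface, again an invented sketch rather than Dagli’s actual API: the training-time method can emit, for example, out-of-fold predictions, so downstream learners never see values fitted on their own training rows.

```java
import java.util.List;

// Conceptual sketch, not Dagli's API: a transformer may emit different
// values during training than during inference, which is what makes
// strategies like cross-training possible within a single pipeline.
interface PreparableTransformer<T, R> {
  // During training, the node may produce out-of-fold predictions (each
  // training example scored by a model fitted on the *other* folds).
  List<R> applyDuringTraining(List<T> trainingValues);

  // During inference, the fully prepared node is applied normally.
  R apply(T value);
}
```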

Pipeline definitions are meant to be “easy-to-read” and make use of static typing and immutability to help prevent logic errors. Meanwhile, parallel execution, graph optimisation, and minibatching should keep training and inference fast.
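To see what the static typing buys, here is a fragment reusing the invented types from the earlier DAG sketch: a mis-wired node is rejected by the compiler rather than surfacing as a runtime failure mid-training.

```java
// Reusing the invented Placeholder/Transformer types from the DAG sketch above.
Placeholder<String> text = new Placeholder<>();

// Fine: the transformer's input type (String) matches its parent's output type.
Transformer<String, Integer> length = new Transformer<>(text, String::length);

// Rejected at compile time: an Integer-consuming transformer cannot be
// wired to a String-producing parent.
// Transformer<Integer, Integer> doubled = new Transformer<>(text, i -> i * 2);
```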

To facilitate uptake, Dagli comes loaded with pipeline components such as statistical models, neural networks, and feature transformers, so users don’t have to write everything from scratch to see if the tool works for them. Since everything is written in Java, programmers can make use of common IDE features such as code completion and inline documentation, and integration with modern JVM languages like Kotlin is straightforward, which could help push them further into the machine learning realm.

Dagli is mostly suited to everyday use cases involving neural networks or more basic machine learning approaches. The Dagli team, however, was quick to acknowledge that there is no one-size-fits-all framework in machine learning, directing those looking for more cutting-edge work to TensorFlow, PyTorch, and DeepLearning4J.

Example code along with more explanations can be found in the project’s repository.