OpenAI fires up on PyTorch

OpenAI has opted to standardise its development on PyTorch, saying the move should make it easier for its developers “to create and share optimised implementations of our models”.

The AI non-profit turned profit-making concern with a non-profit arm said the move would help it increase its research productivity at scale on GPUs.

“It is very easy to try and execute new research ideas in PyTorch,” it said. “For example, switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days. We’re also excited to be joining a rapidly-growing developer community, including organizations like Facebook and Microsoft, in pushing scale and performance on GPUs.”

“Going forward we’ll primarily use PyTorch as our deep learning framework but sometimes use other ones when there’s a specific technical reason to do so,” said OpenAI, before adding, “Many of our teams have already made the switch, and we look forward to contributing to the PyTorch community in upcoming months.”

So technically, other frameworks – say Google’s TensorFlow – aren’t completely out the window at OpenAI.

At the same time, the move, and the namechecks, do suggest a shift in the group’s centre of gravity. PyTorch was spawned at Facebook, and the social media giant worked with Microsoft on the Open Neural Network Exchange (ONNX) project.

Moreover, Microsoft invested $1bn into OpenAI last July, shortly after the research outfit said it needed to attract investors to bankroll its original mission to create AI that would benefit mankind as a whole. OpenAI also announced that it would work with Microsoft on “Azure AI super computing technologies” and port OpenAI services to Microsoft Azure.

So those “specific technical reasons” to use other frameworks will probably have to be pretty compelling.