Facebook releases DeepFocus to open source


Facebook has open sourced DeepFocus, the AI-powered framework inside its Half Dome varifocal headset unveiled earlier this year.

DeepFocus works with the Half Dome hardware to track your eyes and adjust focus as you shift your gaze, recreating the natural blur you see in real life on objects in front of or behind whatever you are looking at.

Facebook said it’s open sourcing DeepFocus to help others working in VR research. It reckoned DeepFocus demonstrates that AI can “solve the challenges of highly compute-intensive visuals in VR.”

“DeepFocus provides a foundation to overcome practical rendering and optimisation limitations for future novel display systems,” Facebook wrote in its announcement.

The source code, network models (implemented in TensorFlow) and datasets are available on GitHub.

Facebook employed deep learning to render defocus blur, claiming conventional approaches struggle to deliver results in real time and place too much demand on processors. It claims to have built a “novel, end-to-end convolutional neural network that produces the image.”

The network produces the retinal blur in real time, as soon as the eye looks at a different part of the scene.
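To make the idea concrete, here is a minimal TensorFlow sketch of an end-to-end convolutional network that maps an RGB-D frame plus a focal distance to a defocused image. Every layer count, width and name below is an illustrative assumption, not the DeepFocus architecture Facebook released.

```python
import tensorflow as tf

def build_defocus_net(height=256, width=256):
    """Toy end-to-end CNN: RGB-D frame + focal distance -> defocused RGB image.

    Purely illustrative; layer sizes and structure are assumptions,
    not Facebook's released DeepFocus model.
    """
    # 4-channel input: RGB colour plus a depth channel (RGB-D).
    rgbd = tf.keras.Input(shape=(height, width, 4), name="rgbd")
    # Scalar focal distance (where the eye is looking), broadcast to a plane.
    focus = tf.keras.Input(shape=(1,), name="focal_distance")
    focus_plane = tf.keras.layers.Dense(height * width)(focus)
    focus_plane = tf.keras.layers.Reshape((height, width, 1))(focus_plane)

    x = tf.keras.layers.Concatenate()([rgbd, focus_plane])
    for filters in (32, 64, 64, 32):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    # Predict the blurred RGB image directly; no explicit blur kernel is computed.
    blurred = tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs=[rgbd, focus], outputs=blurred)

model = build_defocus_net()
model.summary()
```

The point of the end-to-end formulation is that the network outputs the blurred image directly from colour, depth and gaze, rather than simulating the optics each frame, which is what makes real-time performance plausible.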

The system works with all existing VR games and applications, Facebook said, because it uses standard RGB-D colour and depth input. It’s also compatible with multifocal displays and light-field displays in addition to varifocal systems such as Half Dome.
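Because the input is just a colour buffer and a depth buffer, any engine that can expose those two images could, in principle, feed such a network without modification. A hypothetical sketch of assembling that RGB-D tensor (buffer and function names are assumptions for illustration):

```python
import numpy as np

def make_rgbd(colour_buffer: np.ndarray, depth_buffer: np.ndarray) -> np.ndarray:
    """Stack a renderer's colour and depth buffers into one RGB-D tensor.

    colour_buffer: (H, W, 3) float32 RGB in [0, 1]
    depth_buffer:  (H, W) float32 linear depth in metres
    Returns an (H, W, 4) array ready to pass to a defocus network.
    """
    depth = depth_buffer / max(float(depth_buffer.max()), 1e-6)  # normalise depth to [0, 1]
    return np.concatenate([colour_buffer, depth[..., None]], axis=-1)
```

Since nothing scene-specific is required beyond these two buffers, the same pipeline can drive varifocal, multifocal or light-field displays alike.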

You can read more about DeepFocus, and find the accompanying research paper, via Facebook’s announcement.