Late last year, Google announced a new library for running TensorFlow on mobile devices: TensorFlow Lite.
If you didn’t know by now, TensorFlow is Google’s framework for developing machine-learning models at scale, and it is proving popular with the community – even if many developers choose to use a higher-level abstraction on top of it, such as Keras.
One reason for this is TensorFlow’s ability to deliver machine learning that scales across large clusters of servers, using GPUs on each server for even more speed. These clusters are used to train machine-learning models that can then make inferences when presented with new data.
The problem is that it takes a lot of data to train these models. For instance, one of the image-recognition models used in the TensorFlow Lite sample applications (MobileNet_v1_1.0_224) was trained on over 2.5 million images – and that doesn’t include the validation and test sets.
Obviously, you are not going to run this sort of large-scale model training on a mobile phone any time soon. Instead, TensorFlow Lite provides a library that imports pre-trained, mobile-optimised models into a mobile app for use on Android or iOS.
I had a look at one of the test apps for iOS, which lets you load an image and has the app guess its main subject. A more advanced app hooks up to the camera and performs the same trick on images you’ve just taken.
It’s quite impressive. Lite confidently stated that the image below is a sports car, although – given that it’s an MX-5 – some might disagree.
Is it a bird, a plane or sports car? Photo: Andy Cobley
Presented with harder images, such as the one below, the model took a brave stab but didn’t quite manage it. This time it gave a list of possible subjects, with the most confident at the top. Given the number of wires, lights and what look like displays, TensorFlow Lite decided it was an oscilloscope. Good try, but wrong: it’s a Eurorack synthesizer.
To be honest, downloading the kit and sending different images to this simple example is fun enough to justify the extremely long time it takes to install.
The oscilloscope that wasn’t: a Eurorack synthesizer. Photo: Andy Cobley
Parlour tricks aside, why would you want machine learning on a mobile device at all? Currently, if your app wants this sort of “intelligence”, it relies on a server-based setup: the app must contact the server and wait for a response. This raises a number of problems around privacy and reliance on data services.
If your data (in this case a photo of something you want identified) has to go back to the server, it can be intercepted along the way, probably through the weakest link of the channel.
Even if it’s not intercepted, you have effectively given your data to some behemoth to do with as they wish. Some users may be uncomfortable with this idea.

Sending the data back to a server also means relying on some sort of mobile or Wi-Fi signal. This may not seem a problem if you’re hanging around central London, but in mobile black spots – and these exist as much in the highlands of Scotland as in the metropolitan South East around London – it’s a different matter. For scientists or engineers working in the field, it can be a real issue.

Having the model on the phone solves both of these problems – although the speed of response will depend on the speed of your phone.
I found the simple image-recognition model a lot of fun to play with. Running it against different images is just a case of importing the image into the project and changing one line of code to load the new image (line 155 of RunModelViewController.mm in the iOS app).
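For a feel of what the demo app does under the hood, here is a rough Python equivalent using tf.lite.Interpreter – a sketch only, since the iOS app itself is not Python, and the tiny one-op model below is a stand-in for the bundled MobileNet file so the example runs on its own:

```python
import numpy as np
import tensorflow as tf


# Stand-in for the bundled MobileNet .tflite file, so the sketch is
# self-contained: a one-op "classifier" converted to TFLite in memory.
class StandIn(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return tf.nn.softmax(x)


module = StandIn()
tflite_bytes = tf.lite.TFLiteConverter.from_concrete_functions(
    [module.__call__.get_concrete_function()], module).convert()

# The part the demo app does: load the model, feed an image, read scores.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])  # one confidence per class
```

The pattern is the same whatever the model: allocate tensors, set the input, invoke, and read the confidence scores back out.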
Getting to this point may not be so simple. Besides the usual hoops Apple has you jump through to get an iOS app registered, you will need to download TensorFlow Lite from GitHub, make sure your installation of Xcode is up to date, have brew installed, install automake and libtool, install the dependencies for TensorFlow and – finally – build the TensorFlow Lite library for iOS.
If you want to run the demo with the camera, you’ll need CocoaPods – which relies on Ruby – as this will let you run the pod install command that, you’ve guessed it, installs yet more files. Building for Android is similarly long: it requires Bazel and the Android NDK along with the usual Android SDK. Oh, and Windows users, there is a little note in the docs: “Bazel does not fully support building Android on Windows yet.”
Just getting started is probably an afternoon’s work.
Of course, this is just the demo. It would be great to start developing your own app using machine learning. This is where the problems really start for mobile developers, though: suddenly you are deep in the field of machine learning – not an easy subject to just pick up. If you’re lucky, you can find some TensorFlow code out there that does what you want, but even then you’ll probably need to get your hands dirty in the code to make it do what you need.
I was lucky: I had the code for a film-recommendation engine left over from another piece of work, and it had already been altered to save a trained model to the file system. I thought this would be simple: the model just needs to go through a converter to create a .tflite file that can be loaded into the iOS app.
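On paper, that conversion looks like only a few lines. The sketch below assumes the current tf.lite converter API and swaps in a tiny stand-in tf.Module for my recommendation model, so it is self-contained:

```python
import os
import tempfile

import tensorflow as tf


# Stand-in for a trained model saved to the file system.
class TinyModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return tf.reduce_sum(x, axis=1)


export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
tf.saved_model.save(TinyModel(), export_dir)

# The conversion step itself: SavedModel in, .tflite bytes out.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_bytes = converter.convert()
with open(os.path.join(tempfile.mkdtemp(), "model.tflite"), "wb") as f:
    f.write(tflite_bytes)
```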
I was wrong. Hurdle number one for iOS developers is that there is no Swift API, so at this point you are going to have to drop down to Objective-C++. Hurdle number two comes when converting the model into a frozen graph, which requires not only the model in the correct format, but also a folder with the checkpoints for the model and – crucially – the output node names. The tutorial admits that these might not be obvious outside the code that built the model, but they can be found by looking at a visualisation of the graph.
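As for those output node names, another way to track them down – besides staring at a graph visualisation – is simply to list the operations in the graph. A sketch using a made-up TF1-style toy graph (the node names here are illustrative, not from my model):

```python
import tensorflow as tf

# Build a toy TF1-style graph; in practice you would load your own
# GraphDef from the checkpoint folder instead.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 3], name="input")
    scores = tf.identity(tf.reduce_sum(x, axis=1), name="output_scores")

# Every node in the GraphDef has a name; scan the list for the
# likely output nodes to pass to the converter.
node_names = [n.name for n in g.as_graph_def().node]
print(node_names)
```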
Needless to say, my recommendation model is still not integrated into an app.
I’ve no doubt TensorFlow Lite is no more difficult than any other emerging technology. Here, though, you must accept that developing with this framework requires not only knowledge of mobile app development but also quite deep knowledge of how TensorFlow works.
I suspect it’s just a matter of time before job ads start appearing seeking mobile development skills mixed with TensorFlow and machine-learning capabilities. Given the opportunity machine learning offers when built into mobile apps, this is a bandwagon it would seem sensible to jump on. Time to straddle the divide by getting up to speed on whichever side of the equation you are currently coming up short.