This week’s Google I/O saw a slew of machine learning features coming to the company’s mobile development platform, Firebase. The new capabilities are still in beta and include an on-device translation API, an interface for object detection and tracking, as well as AutoML Vision Edge.
Bundled into ML Kit, the new capabilities let application developers translate text with the same offline models that power Google Translate, track objects in a live camera feed, and train custom image classification models.
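As a rough sketch, using the on-device translation API from a Kotlin Android app could look something like the following (class and method names reflect the Firebase ML Kit beta as announced and may change; this would run inside a Firebase-configured app, not standalone):

```kotlin
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

// Configure an English -> German translator backed by offline models.
val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.DE)
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// Download the language model once, then translate without a network connection.
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        translator.translate("Hello, world!")
            .addOnSuccessListener { translated ->
                // Use the translated string, e.g. show it in the UI.
                println(translated)
            }
    }
    .addOnFailureListener { e ->
        // Model download failed, e.g. no network on first run.
        e.printStackTrace()
    }
```

Since the models live on the device after the initial download, subsequent translations don't require connectivity.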
Cloud TPU pods switch to beta
Speaking of Google and machine learning, Google Cloud TPU v2 and v3 Pods are now available in beta as well. The company’s so-called machine learning supercomputers are built from its custom tensor processing units and should help researchers and data scientists build complex machine learning models more quickly, or train several models in parallel for comparison purposes.
To get started, Google has compiled a Cloud TPU Quickstart and a couple of reference models. With them, engineers can put the computational power of Google’s chips to work on natural language processing, speech recognition, and image classification tasks, amongst other things.
And some more cloudy betas…
The second major version of time series database platform InfluxDB Cloud can now be taken for a spin as well. The beta comes with a single unified API and wizards that help with setting up and connecting to data.
Other than that, it includes a rate-limited free tier that is meant to remain free, and there’s support for the Flux query language, which should help with analytics, amongst other things.
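To give a flavour of Flux, here is a small query that averages CPU usage over the last hour (the bucket and measurement names are made up for illustration):

```flux
from(bucket: "telemetry")                       // hypothetical bucket name
  |> range(start: -1h)                          // only the last hour of points
  |> filter(fn: (r) => r._measurement == "cpu") // hypothetical measurement
  |> mean()                                     // average the remaining values
```

The pipe-forward operator (`|>`) chains transformations, which is what makes Flux better suited to analytics workflows than the SQL-like InfluxQL it complements.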