Dave Burke, VP of engineering at Google, announced a new version of TensorFlow optimised for mobile phones.
This new library, called TensorFlow Lite, will enable developers to run their artificial intelligence applications in real time on users' phones. According to Burke, the library is designed to be fast and small while still enabling state-of-the-art techniques. It will be released later this year as part of the open-source TensorFlow project.
At the moment, most artificial intelligence processing happens on the servers of software-as-a-service providers. By making this library available, Google hopes to offload some of this processing to the user's phone. This would save both processing power and data, keep the user's data private, and remove the need for an internet connection.
TensorFlow Lite is the second deep learning framework to target mobile phones. In November 2016, Facebook announced its own framework, Caffe2Go, which has been used for real-time style transfer: applying art-like filters to video directly on the phone.
Burke also announced that he and his team are adding functionality to communicate with processors designed for neural networks. Neural networks rely heavily on the same kinds of arithmetic operations, which can be accelerated with GPUs and Google's own Tensor Processing Unit (TPU). Phones already run neural networks today, for example in the Google Translate app, but training these networks is still too computationally heavy for mobile hardware.
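As a rough illustration of what on-device inference can look like, the sketch below loads a pre-trained, converted model and runs a single prediction using the TensorFlow Lite Python interpreter API as it eventually shipped; the model file name, input shape, and random input data are placeholders, since the announcement names no concrete model.

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite model (hypothetical file name) and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape and dtype.
input_shape = input_details[0]["shape"]
dummy_input = np.random.random_sample(input_shape).astype(input_details[0]["dtype"])

# Run one inference pass entirely on-device: no server round trip required.
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```

Only inference happens on the phone in this scenario; the model is still trained elsewhere and shipped to the device as a compact file.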