At WWDC 2017, Apple announced how it uses machine learning itself and how developers can add machine learning to their own applications.
Its machine learning framework, Core ML, lets developers integrate machine learning models into apps running on iOS, macOS, watchOS, and tvOS. Models reside on the device itself, so data never leaves the device.
Several API calls are already available that developers can use without adding any models to their app. Computer vision examples include face detection and tracking, landmark detection, and event detection. Developers can also analyze natural language, for example in emails, text, and web pages; the natural language processing calls cover language identification, tokenization, part-of-speech tagging, and named entity recognition.
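To give a sense of what these calls look like in practice, here is a minimal sketch using NSLinguisticTagger, the Foundation class behind the natural language features (the unit-based API shown requires iOS 11 or macOS 10.13); the sample sentence is only illustrative:

```swift
import Foundation

let text = "Apple announced Core ML at WWDC 2017 in San Jose."

// Tagger configured for language identification and part-of-speech tagging.
let tagger = NSLinguisticTagger(tagSchemes: [.language, .lexicalClass], options: 0)
tagger.string = text

// Language identification
print("Language: \(tagger.dominantLanguage ?? "unknown")")

// Tokenization and part-of-speech tagging
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    if let tag = tag {
        let token = (text as NSString).substring(with: tokenRange)
        print("\(token): \(tag.rawValue)")
    }
}
```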
Developers can also design and use their own machine learning models. Core ML supports deep neural networks with more than 30 layer types, as well as other machine learning methods such as SVMs and linear models. Models can run on both the CPU and the GPU, giving powerful algorithms plenty of room to run on Apple's devices.
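As a sketch of how a custom model is called once it is in an app, the snippet below assumes a hypothetical converted regression model named HousePricer; Xcode generates a Swift class of that name from the .mlmodel file, with typed prediction inputs and outputs:

```swift
import CoreML

// "HousePricer", its inputs (bedrooms, squareFeet) and its output (price)
// are hypothetical names; the real ones come from the model file itself.
let model = HousePricer()

if let output = try? model.prediction(bedrooms: 3, squareFeet: 120) {
    print("Predicted price: \(output.price)")
}
```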
Apple provides pre-trained models that developers can download and add to their app. One model shown on Apple's website detects the scene of an image among 205 categories (such as airport terminal or bedroom); three other models detect objects present in an image. Developers can also convert existing models to the Core ML format with a conversion tool supplied by Apple. Supported machine learning tools are Keras (with a TensorFlow backend), Caffe, scikit-learn, LIBSVM, and XGBoost. It is not possible to import an existing TensorFlow model directly into Core ML, something TensorFlow Lite does make possible on Android.
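For the image models, the usual pattern is to run them through the Vision framework. The sketch below assumes the downloadable Places205-GoogLeNet scene model has been added to an Xcode project, which makes Xcode generate a GoogLeNetPlaces class:

```swift
import UIKit
import CoreML
import Vision

// Classify the scene of an image with a bundled Core ML model.
func classifyScene(of image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("Scene: \(best.identifier) (confidence: \(best.confidence))")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```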
Developers who want to add artificial intelligence to their app should visit the official documentation page.