Core ML 3, Apple's machine learning framework that enables iOS developers to integrate machine learning technology into their apps, received a number of updates at WWDC 2019, including several new model types, many new neural network layer types, and support for on-device retraining of existing models using new data generated locally by the user.
The new models introduced in Core ML 3 make it possible to use Core ML to solve a larger variety of problems. They include k-Nearest Neighbor classifiers and ItemSimilarityRecommender, both of which can be used to build recommender systems; SoundAnalysisPreprocessing, which can be used for sound classification; and linked models, which are essentially an optimization mechanism: if two models both rely on a third model, for example, they can link to it instead of embedding it, so it is loaded only once. These new model types extend the existing Core ML model library, which already included generalized linear models, support vector machines, and tree ensembles, all of which can be used for supervised classification or regression problems; VisionFeaturePrint, a neural network for feature extraction from images; NLP models for text analysis and classification; and pipelines, which are meta-models built by combining other models.
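To get a feel for what an item-similarity recommender computes, here is a conceptual sketch in plain Python (this is not Core ML's actual API; the function and data names are illustrative): candidate items are scored by summing their similarity to items already in the user's interaction history.

```python
# Conceptual sketch of item-similarity recommendation (not the Core ML API):
# each item maps to (similar_item, similarity_score) pairs, and a user's
# interaction history "votes" for similar, not-yet-seen items.

def recommend(similarities, history, top_n=2):
    """Score unseen items by total similarity to items in `history`."""
    scores = {}
    for item in history:
        for similar_item, score in similarities.get(item, []):
            if similar_item not in history:
                scores[similar_item] = scores.get(similar_item, 0.0) + score
    # Highest-scoring unseen items first.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical similarity table, e.g. precomputed from co-occurrence data.
similarities = {
    "song_a": [("song_b", 0.9), ("song_c", 0.4)],
    "song_b": [("song_a", 0.9), ("song_d", 0.7)],
}
print(recommend(similarities, history={"song_a", "song_b"}))  # → ['song_d', 'song_c']
```

In Core ML, this similarity table would be baked into the model at conversion time, so recommendations can be produced entirely on the device.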
Most interestingly, a Core ML 3 model can be updated, i.e., retrained, based on new data collected on the device. This also applies to the ready-to-use models Core ML ships with, which means you can let them evolve with new data generated by your app's users. On-device retraining is supported only for neural network and k-Nearest Neighbor model types, and it ensures retraining does not involve any external service, so your data never needs to leave the device. By contrast, previous Core ML versions relied on server-side training. While on-device training opens up many novel possibilities, it also adds some complexity at the UI level, since retraining a model is no simple task. Additionally, models updated on the device will need to be persisted in some way to make sure they can be used across devices or after an app is deleted and reinstalled.
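k-Nearest Neighbor models lend themselves particularly well to on-device updating, because "training" amounts to storing labeled examples. The following pure-Python toy classifier sketches that idea; it is not Core ML's update API (which goes through Core ML's update task machinery in Swift), just an illustration of why the update step is cheap for this model type.

```python
from collections import Counter

class KNNClassifier:
    """Toy k-NN classifier: training simply stores labeled examples."""

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (feature_vector, label) pairs

    def update(self, vector, label):
        # On-device "retraining" for k-NN amounts to appending new data.
        self.examples.append((vector, label))

    def predict(self, vector):
        # Majority vote among the k nearest stored examples
        # (squared Euclidean distance).
        nearest = sorted(
            self.examples,
            key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], vector)),
        )[: self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

model = KNNClassifier(k=3)
for vec, lab in [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
                 ((5.0, 5.0), "dog"), ((5.1, 4.9), "dog")]:
    model.update(vec, lab)  # data "collected on the device"
print(model.predict((0.2, 0.1)))  # → cat
```

Neural networks, by contrast, are updated by running additional gradient-descent steps on the new examples, which is why Apple restricts on-device updating to these two model families.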
At a lower level, Core ML 3 includes support for over 100 neural network layer types. Each layer type specializes in one kind of operation, such as rounding values, clamping inputs, and so on. The availability of about 70 new layers means you can convert more complex neural networks to Core ML without resorting to custom layers. For a complete enumeration of all new layer types, check out the official Apple documentation.
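As a rough illustration of what such elementwise layers compute (Core ML layers are declared in the model specification rather than written as code; this is just a plain-Python sketch):

```python
def clamp(values, low, high):
    """Clip each element into [low, high], as a clamping layer would."""
    return [min(max(v, low), high) for v in values]

def round_elements(values):
    """Round each element to the nearest integer, as a rounding layer would."""
    return [round(v) for v in values]

print(clamp([-2.5, 0.3, 7.8], low=0.0, high=1.0))  # → [0.0, 0.3, 1.0]
print(round_elements([0.4, 1.6, 3.0]))
```

Having these operations available as built-in layers means a converter can map them directly from the source framework instead of requiring a hand-written custom layer implementation.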
Core ML 3 is included in iOS 13 and requires macOS 10.15 for development, both of which are available in beta for registered developers.