The new Entity Extraction API, now available in beta, analyzes text inside an app to detect textual entities such as dates, URLs, payment cards, and so on. The Selfie Segmentation API aims to make it easier to add effects to pictures.
The Entity Extraction API is able to detect 11 different entities in 15 languages. Examples of supported entities are addresses, dates and times, email addresses, tracking numbers, and so on. The API runs on-device, so it does not require an internet connection, and its real-time performance means it can also be used while the user is typing.
The algorithm used by the Entity Extraction API runs in two steps: first, the words appearing in the input text are used to build all possible subsequences up to a given maximum length; each subsequence is then scored by a neural network according to how likely it is to represent a valid entity. A second neural network then classifies the top-scoring subsequences as addresses, numbers, and so on.
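The candidate-generation step described above can be illustrated with a short sketch. This is not ML Kit's implementation, just a hypothetical `candidateSpans` helper showing what "all subsequences up to a maximum length" means; in the real API these spans would then be scored by a neural network.

```swift
// Enumerate all contiguous word subsequences (spans) of the input,
// up to maxLength words each. These are the candidates that the
// first neural network would score for entity-likeness.
func candidateSpans(of text: String, maxLength: Int) -> [[String]] {
    let words = text.split(separator: " ").map(String.init)
    var spans: [[String]] = []
    for start in 0..<words.count {
        for end in start..<min(start + maxLength, words.count) {
            spans.append(Array(words[start...end]))
        }
    }
    return spans
}
```

For example, "Meet me tomorrow at noon" with a maximum length of 2 yields spans such as `["Meet"]`, `["Meet", "me"]`, `["me", "tomorrow"]`, and so on; a span like `["tomorrow"]` would score highly as a date/time candidate.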
You can control which language-specific models are available on a device using ML Kit's model management API, or you can let ML Kit automatically download the required language model when necessary. Once the model is available, you can use the annotateText method as in the following example. Results are returned in an array when no error occurs.
let options = EntityExtractorOptions(modelIdentifier:
    EntityExtractionModelIdentifier.english)
let extractor = EntityExtractor.entityExtractor(options: options)
extractor.annotateText(text.string) { result, error in
    guard let annotations = result, error == nil else {
        // handle the error
        return
    }
    for annotation in annotations {
        for entity in annotation.entities {
            switch entity.entityType {
            case EntityType.dateTime:
                if let dateTimeEntity = entity.dateTimeEntity {
                    // do something
                }
            case EntityType.flightNumber:
                if let flightNumberEntity = entity.flightNumberEntity {
                    // do something
                }
            case EntityType.money:
                // ...
                break
            default:
                // handle default case
                break
            }
        }
    }
}
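When managing models explicitly rather than relying on the on-demand download, the extractor exposes a downloadModelIfNeeded method. A hedged sketch of pre-downloading the English model before annotating, so that the first annotateText call does not stall on a network fetch, might look like this:

```swift
import MLKitEntityExtraction

let extractor = EntityExtractor.entityExtractor(
    options: EntityExtractorOptions(modelIdentifier: .english))

// Trigger the model download up front; annotateText would otherwise
// download the model on demand the first time it is called.
extractor.downloadModelIfNeeded { error in
    guard error == nil else {
        // handle download failure, e.g. no network on first launch
        return
    }
    // The model is now on the device; annotateText runs fully offline.
}
```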
According to Google, this new API will enable more advanced user experiences when handling text input, beyond the usual cut/copy/paste paradigm. For example, an app could automatically display a shortcut that lets the user carry out a specific operation based on the kind of entity detected in the input text.
The Selfie Segmentation API is another new API in ML Kit available now in closed beta. It makes it possible to easily separate the background of a scene from its most prominent content. This allows developers to apply effects to the background, such as blur, or conversely to replace a picture background with a different one.
The Selfie Segmentation API works with both single and multiple people and can run in real time on most modern iOS and Android phones.
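On iOS, separating a person from the background comes down to requesting a segmentation mask and blending with it. The sketch below is only an outline of that flow, assuming a `selfie` UIImage supplied by the app; the blending itself is left as a comment.

```swift
import MLKitSegmentationSelfie
import MLKitVision

// Configure the segmenter for a single still image
// (a stream mode also exists for video frames).
let options = SelfieSegmenterOptions()
options.segmenterMode = .singleImage

let segmenter = Segmenter.segmenter(options: options)
let visionImage = VisionImage(image: selfie) // selfie: a UIImage from the app

segmenter.process(visionImage) { mask, error in
    guard let mask = mask, error == nil else { return }
    // mask.buffer is a CVPixelBuffer of per-pixel confidence values
    // in [0, 1]: values near 1 mean "person", near 0 mean "background".
    // To blur the background, blend the original image with a blurred
    // copy, weighting each pixel by its mask value.
}
```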