Apple Content on InfoQ
-
How Apple Uses Neural Networks for Object Detection in Point Clouds
Apple has invented a neural network configuration that can segment objects in point clouds obtained with a LIDAR sensor. Having recently joined the field of autonomous vehicles, Apple built this end-to-end network without relying on hand-crafted features or any machine learning algorithms other than neural networks.
-
Apple Open-Sourced the iOS Kernel for ARM CPUs
Apple has quietly made ARM- and ARM64-specific files available on its GitHub XNU-darwin repository. While this may not matter to every developer, it opens up intriguing possibilities for security researchers and others.
-
Swift 4 is Officially Available: What's New
Swift’s latest major release contains many changes and updates to the language and the standard library, most notably new String features, extended collections, archival and serialization, and more.
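As an illustration (not taken from the release announcement), the short Swift sketch below touches on two of the headline features: Codable-based archival/serialization and the new multi-line String literals. The Conference type and its fields are hypothetical.

```swift
import Foundation

// Archival/serialization: conforming to Codable gives encoding support
// with no hand-written boilerplate. The Conference type is hypothetical.
struct Conference: Codable {
    let name: String
    let year: Int
}

let wwdc = Conference(name: "WWDC", year: 2017)
let data = try! JSONEncoder().encode(wwdc)
print(String(data: data, encoding: .utf8)!)   // e.g. {"name":"WWDC","year":2017}

// New String features: multi-line literals, and String is a Collection again.
let banner = """
    Swift 4 is officially available.
    Strings are collections of Characters once more.
    """
print(banner.count)             // character count, no .characters needed
print(banner.contains("Swift")) // true
```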
-
Apple’s iPhone X Has Custom Neural Engine Processor Built In
Speaking in the Steve Jobs Theatre at Apple Park yesterday, Philip Schiller, senior vice president of worldwide marketing at Apple, described some of the technology behind the facial recognition system in the newly announced iPhone X, including a dedicated neural engine built into the A11 chip.
-
Apple Reveals the Inner Workings of Siri's New Intonation
Apple has explained how it uses deep learning to make Siri's intonation sound more natural. iPhone owners can interact with Siri by asking questions in natural language, and Siri responds by voice. At WWDC 2017, Apple announced that in iOS 11 Siri would use a new text-to-speech engine. In August 2017, Apple's machine learning journal detailed how the team made Siri sound more human.
-
Google Researcher Invented New Technology to Bring Neural Networks to Mobile Devices
Recently, many companies have released applications that use deep neural networks. For applications that must run without internet access, be fast and responsive, or keep data private, running the networks on servers is not an option. Google researcher Sujith Ravi has devised a novel way to train two neural networks so that the smaller, efficient one can be used in mobile applications.
-
Safari 11 Adds Missing Features, Improves Privacy by Default
Apple has taken the wraps off Safari 11, the newest version of its web browser. Available on iOS and macOS, the browser now includes WebRTC and WebAssembly support. Also included is a new tracking blocker that purports to reduce the ability of third parties to track users as they move around the web.
-
WebKit Now Has Full Support for WebAssembly
Apple Safari has full support for WebAssembly, including preparation for future integration with ECMAScript Modules and threads.
-
Apple Announces Core ML: Machine Learning Capabilities on Apple Devices
At WWDC 2017, Apple announced ways it uses machine learning, and ways for developers to add machine learning to their own applications. Its machine learning API, called Core ML, lets developers integrate machine learning models into apps on Apple devices running iOS, macOS, watchOS, and tvOS. Models run on the device itself, so data never leaves the device.
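For illustration only, here is a minimal Swift sketch of how a developer might run an on-device model through Core ML together with the Vision framework. The FlowerClassifier class is a hypothetical stand-in for the class Xcode generates from a bundled .mlmodel file.

```swift
import CoreGraphics
import CoreML
import Vision

// Minimal sketch: classify an image entirely on-device.
// "FlowerClassifier" stands in for the class Xcode generates from a .mlmodel file.
func classify(_ image: CGImage) throws {
    // Wrap the generated Core ML model for use with Vision.
    let model = try VNCoreMLModel(for: FlowerClassifier().model)

    // The completion handler receives ranked classification results.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    // All inference runs locally, so the image data never leaves the device.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```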
-
ARKit Sets the Foundations for Augmented Reality on Apple’s Platform
At WWDC 2017, Apple unveiled ARKit, a framework to build augmented reality (AR) apps for iOS. ARKit aims to allow for accurate and realistic immersion of virtual content on top of real-world scenes.
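As a rough illustration, the Swift sketch below shows the basic shape of an ARKit setup: an ARSCNView driven by a world-tracking configuration. The view-controller structure is an assumption for the example, not code from Apple's announcement.

```swift
import ARKit
import UIKit

// Minimal sketch of an ARKit-backed view controller
// (assumes iOS 11 and a device with an A9 chip or later).
class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking fuses camera frames with motion data so virtual
        // content stays anchored to real-world positions.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```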
-
Apple TestFlight Now Supports A/B Testing of iOS Apps
With its recent update to TestFlight, Apple has introduced a number of features, such as multiple builds and enhanced groups, that make it possible to do A/B testing for iOS apps.
-
Apple Plans to Develop a Fully Custom GPU Architecture
Apple will develop its own custom graphics architecture to power the GPUs in its future devices, according to UK-based firm Imagination Technologies, Apple’s current GPU provider. The new GPUs should be ready within 15 months to two years and will be the first Apple-made GPUs to bear no resemblance to Imagination Technologies’ designs.
-
Swift 3.1 Improves Language, Package Manager, and Linux Implementation
Staying true to its plan, the recently announced Swift 3.1 is source compatible with Swift 3.0. Still, it includes a number of changes to the language and the standard library, as well as an improved Linux implementation.
-
Swift Memory Ownership Manifesto
According to Chris Lattner, Swift creator and Swift team lead before his move to Tesla, defining a Rust/Cyclone-inspired memory ownership model is one of the main goals for Swift's development. Now that Swift 4 has entered phase 2, the Swift team has published a manifesto detailing how memory ownership in Swift could work.
-
Swift 4 Enters Final Stage, Defers ABI Stability
Apple has detailed the release process for Swift 4, which should become available in the fall of 2017. The main focus of this release is to provide significant enhancements to the core language and standard library while delivering source compatibility. ABI compatibility, which was originally on the roadmap, will be deferred, explains Apple's new Swift team lead Ted Kremenek.