For the third time in as many years, Google I/O kicked off yesterday at the Shoreline Amphitheatre in Mountain View, California. Nearly perfect weather greeted the 7,000 attendees who gathered for Google's annual flagship developer conference. Core takeaways included:
- AI remains a top priority and central to Google's near-term focus
- Google Assistant is maturing and seeing a strong push from the company
- Key themes for the upcoming release of Android P include Intelligence, Simplicity, and Digital Wellbeing
Sundar Pichai, Google's chief executive officer, kicked off the conference after a forty-minute pre-show that showcased NSynth, a music synthesizer that generates new sounds using neural networks, and World Draw, a live interactive canvas where people at Shoreline and around the world drew objects in a shared space.
Speaking at a town hall earlier this year, Pichai was quoted as saying, "AI is one of the most important things humanity is working on. It is more profound than electricity or fire." Artificial Intelligence (AI) and Machine Learning (ML) remained a common theme throughout the day at the conference with the announcement of ML Kit, TPU 3.0, and an even more powerful Google Assistant. However, Pichai also acknowledged the concerns surrounding AI as the company continues to stress its shift from mobile-first to AI-first:
We know the path ahead needs to be navigated carefully and deliberately, and we know we have a deep sense of responsibility to get this right.
ML Kit is an SDK, available through Firebase, that targets Android and iOS developers. The SDK leverages the device's camera and includes five machine learning models: text recognition, face detection, landmark detection, barcode scanning, and image labeling. The kit comes in cloud-based and on-device versions. The cloud-based version requires an Internet connection but offers higher accuracy, whereas the on-device version, though less accurate and dependent on the device's processing power, keeps the data offline and local to the device.
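As a rough illustration, here is a minimal sketch of what on-device text recognition with ML Kit might look like in an Android app, written in Kotlin. The class and method names follow the Firebase ML Kit APIs as announced; the exact surface may differ in the shipped SDK.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: run on-device text recognition against a Bitmap captured from
// the camera. Names are based on the Firebase ML Kit APIs as announced
// and may differ in later SDK versions.
fun recognizeText(bitmap: Bitmap) {
    // Wrap the camera frame in an ML Kit image.
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    // Use the on-device recognizer: it works offline, but is less
    // accurate than the cloud-based model described above.
    val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

    detector.processImage(image)
        .addOnSuccessListener { result ->
            // Each text block contains the recognized lines of text.
            for (block in result.textBlocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e ->
            // Handle model download or recognition failures here.
            e.printStackTrace()
        }
}
```

Swapping the on-device recognizer for the cloud-backed one would trade offline operation for the higher accuracy noted above, at the cost of requiring a network connection.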
Powering the transformation to an AI company, Google announced Tensor Processing Units (TPUs) at Google I/O 2016 as the underlying engine behind the neural network computations of Google services such as Google Translate, Google Photos, and Street View. At Google I/O 2018, TPU 3.0 was announced to the world. TPU 3.0 is a next-generation chip that is eight times more powerful than previous versions. Pichai said the chip can operate at over 100 petaflops (a petaflop is a quadrillion, or 10^15, floating-point operations per second).
Driving home the power of TPU 3.0, Pichai demonstrated Google Assistant carrying on a near-perfect exchange with a human being to schedule an appointment on behalf of its user. The demo showed Google Assistant calling a local hairdresser to "make me a haircut appointment on Tuesday morning anytime between 10 and 12". The exchange showed incredible precision and stunned the audience. Noting that "60% of small businesses don't have an online booking system set up," Pichai believes AI can help.
Google Assistant continued to be a big part of the day. Available on 500 million devices, enabled on over 5,000 connected home devices, and in over 40 car brands, Google Assistant continues to make strides. During the keynote, Google announced six new voices (including John Legend's later this year), a feature called Continued Conversation that lets you keep interacting with the device contextually without repeating "Okay, Google," and a new feature for families called Pretty Please. Families reported that Google Assistant's direct instructions made children too demanding. When enabled, Pretty Please recognizes polite requests and encourages them with positive reinforcement.
Android started with the idea of building a powerful and flexible open mobile platform. Leveraging machine learning, Android P continues that mission. Adaptive Battery is a new feature built in partnership with the DeepMind team. The feature prioritizes battery power for only the apps and services you use most. Similarly, Adaptive Brightness learns how you like to set brightness in different surroundings and automatically adjusts it based on your past behavior.
New Dashboards that help you understand where you're spending time on your phone, App Timers that give you a gentle nudge when you're spending too much time in a given app, and a Wind Down mode that turns your screen gray after a configurable time in the evening are all part of Google's efforts to improve Digital Wellbeing. These features are available now in the Android P Beta, released yesterday. Developers can get access to Android P on certain devices by visiting http://android.com/beta.
Throughout the remainder of the day, a large number of sessions, codelabs, office hours, and app reviews covered many of the topics announced. This year's attendee gift included a Google Home Mini and an Android Things Starter Kit (Android Things exited beta as well), enabling developers to take home and apply some of the lessons they learned.
Other announcements during the morning keynote include:
- AI-powered photos that allow features like colorization, brightness correction, suggested rotations, and recognition/sharing of people in the photos
- Google Assistant coming to Maps, enabling experiences like "let Kim know I'm running 15 minutes behind"
- A more powerful news experience powered by AI that allows users to keep up with the news they care about
- More powerful experiences leveraging the camera and Maps together for quicker navigation while walking
The conference resumes at 8:30 am PDT at the Shoreline Amphitheatre. You can view live videos online.