Neural Networks Content on InfoQ
-
How Apple Uses Neural Networks for Object Detection in Point Clouds
Apple has created an end-to-end neural network that segments objects in point clouds obtained with a LiDAR sensor. The work follows Apple's recent entry into the field of autonomous vehicles. The approach relies on no hand-crafted features or machine learning algorithms other than neural networks.
-
Start-up Vicarious Defeats CAPTCHA Security with AI Inspired by Brain’s Visual Cortex
Vicarious has improved on neural networks capable of solving CAPTCHA challenges using a novel network layout called the Recursive Cortical Network (RCN). In contrast to a conventional neural network, which starts without any knowledge before training, an RCN starts with knowledge of contours and surfaces. This prior knowledge facilitates model building and generalisability.
-
Jensen Huang Announces NVIDIA's New Projects at the GPU Technology Conference
Today the GPU Technology Conference in Munich kicked off with a keynote by NVIDIA CEO Jensen Huang. NVIDIA announced the NVIDIA Holodeck, the TensorRT 3 library, NVIDIA's Drive platform, and the Pegasus computer for autonomous taxis.
-
Teachable Machine: Teach a Machine Using Your Camera in Your Browser
Teachable Machine is a browser application that you can train with your webcam to recognize objects or expressions. In the demo you use your webcam as input to train three different classes of objects or expressions. Based on your camera input, the site shows different GIFs, plays prerecorded sounds, or speaks. The demo can be found here: teachablemachine.withgoogle.com
-
Apple Details Face ID Security
Apple has described how Face ID works and how it guarantees security in a new white paper.
-
Q&A with Hillery Hunter: IBM Reduces Neural Network Training Times by Efficiently Scaling Training
In August 2017 IBM announced it had broken the training record for image recognition. IBM Research reduced the training time for the neural network architecture ResNet-50 to only 50 minutes. On another architecture, ResNet-101, they obtained an accuracy record of 33.8 percent. Using 256 GPUs, they trained their neural network on a dataset containing 7.5 million images.
-
Apple’s iPhone X Has Custom Neural Engine Processor Built In
Speaking in the Steve Jobs Theatre at Apple Park yesterday, Philip Schiller, senior vice president of worldwide marketing at Apple, described some of the technology behind the facial recognition system in the newly announced iPhone X, including a dedicated neural engine built into the A11 chip.
-
Q&A with Movidius, a Division of Intel, Which Just Launched the Neural Compute Stick
Recently Movidius (a division of Intel's New Technology Group) released the Neural Compute Stick: a USB-based development kit that runs embedded neural networks. With this stick, users can run neural-network and computer-vision models on devices with low computational power. InfoQ reached out to Gary Brown, marketing director for Movidius, Intel New Technology Group, and asked him a few questions.
-
Facebook Transitioning to Neural Machine Translation
Facebook recently announced the global rollout of neural machine translation (NMT). The switch from phrase-based translation models to NMT now covers more than 2,000 translation directions and 4.5 billion translations per day. According to Facebook, this provides an 11% increase in BLEU score. We discuss how this was achieved, what it means for machine-generated translation, and how it fares against the competition.
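BLEU scores a candidate translation by its n-gram overlap with a reference translation, discounted by a brevity penalty. As a rough illustration only (Facebook's evaluation uses corpus-level BLEU with standard tokenization, not this toy sentence-level version), a minimal sketch might look like this:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of n-gram precisions
    (n = 1..max_n) times a brevity penalty. No smoothing."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(cand_ngrams.values()), 1)
        if overlap == 0:          # any zero precision collapses the score
            return 0.0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty discourages overly short translations.
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```

A perfect match scores 1.0; real translations score well below that, which is why even an 11% relative improvement is substantial.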
-
Google Invests in Cognitive: Cloud Speech API Reaches General Availability
In a recent blog post, Google announced that its Cloud Speech API has reached general availability. The Cloud Speech API lets developers convert audio to text using pre-trained machine learning models, joining Google's other cognitive services for video, image, and text analysis in addition to dynamic translation. The Cloud Speech API was launched, in open beta, last summer.
-
Google Reveals Details of Tensor Processing Unit Architecture
Google's hardware engineering team that designed and developed the Tensor Processing Unit (TPU) detailed its architecture and benchmarking experiments earlier this month. This is a follow-up to the initial announcement of the TPU from this time last year.
-
Facebook Builds an Efficient Neural Network Model over a Billion Words
Using neural networks for sequence prediction is a well-known computer science problem with a vast array of applications in speech recognition, machine translation, language modeling, and other fields. Facebook AI Research scientists designed adaptive softmax, an approximation algorithm tailored for GPUs that can be used to efficiently train neural networks over vocabularies of a billion words and beyond.
-
DeepMind AI Program Increases Google Data Center Cooling Power Usage Efficiency by 40%
Using sensor data captured from Google data centers, a DeepMind AI program achieved a 40% reduction in the energy used for cooling and an overall site-wide 15% improvement in power usage efficiency. The program is similar to an earlier game-like program of theirs that had learned how to play Atari games.
-
Deep Convolutional Networks for Super-Resolution Image Reconstruction at Flipboard
Flipboard recently reported on an in-house application of deep learning to scale up low-resolution images that illustrates the power and flexibility of this class of learning algorithms.
-
Nvidia Introduces cuDNN, a CUDA-based library for Deep Neural Networks
Nvidia earlier this month released cuDNN, a set of optimized low-level primitives to boost the processing speed of deep neural networks (DNNs) on CUDA-compatible GPUs. The company intends to help developers harness the power of graphics processing units for deep learning applications.