Neural Networks Content on InfoQ
-
OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences
OpenAI has developed the Sparse Transformer, a deep neural-network architecture for learning sequences of data, including text, sound, and images. The networks can achieve state-of-the-art performance on several deep-learning tasks with faster training times.
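The Sparse Transformer replaces full self-attention with factorized sparse patterns, one of which is strided attention: each position attends to a local window of recent positions plus a regular stride of earlier "summary" columns. The sketch below builds such a causal mask under assumed conventions (window size equal to the stride); it is an illustration of the pattern, not OpenAI's implementation.

```typescript
// Build a causal strided sparse-attention mask: position i may attend
// to position j (j <= i) if j is within the local window of size
// `stride`, or if j falls on every stride-th "summary" column.
function stridedSparseMask(seqLen: number, stride: number): boolean[][] {
  const mask: boolean[][] = [];
  for (let i = 0; i < seqLen; i++) {
    const row = new Array<boolean>(seqLen).fill(false);
    for (let j = 0; j <= i; j++) {
      const local = i - j < stride;              // recent positions
      const strided = j % stride === stride - 1; // strided columns
      row[j] = local || strided;
    }
    mask.push(row);
  }
  return mask;
}
```

Each row then has on the order of `stride + i / stride` attendable positions instead of `i`, which is what lowers the attention cost when the stride is chosen near the square root of the sequence length.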
-
Xipeng Shen on a New Technique to Reduce Deep-Learning Training Time
Researchers at North Carolina State University recently presented a paper at the 35th IEEE International Conference on Data Engineering (ICDE 2019) describing a new technique that can reduce training time for deep neural networks by up to 69%.
-
PyTorch 1.1 Release Improves Performance, Adds New APIs and Tools
Facebook AI Research announced the release of PyTorch 1.1. The latest version of the open-source deep learning framework includes improved performance via distributed training, new APIs, and new visualization tools including native support for TensorBoard.
-
Teaching the Computer to Play the Chrome Dinosaur Game with TensorFlow.js Machine Learning Library
A simple yet entertaining machine-learning application, useful for educational purposes, was recently published on Fritz's HeartBeat Medium publication. It leverages Google's TensorFlow.js machine-learning library in the browser to teach the computer to play the Chrome Dinosaur Game.
-
NSFW.js: Machine Learning Applied to Indecent Content Detection
With the beta release of NSFW.js, developers can now include a client-side filter for indecent content in their applications. NSFW.js classifies images into one of five categories: Drawing, Hentai, Neutral, Porn, or Sexy. In some benchmarks, NSFW.js classifies images with roughly 90% accuracy.
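In the browser, `nsfwjs.load()` returns a model whose `classify(img)` resolves to an array of `{ className, probability }` predictions over the five categories. A minimal filtering sketch over that result shape follows; the 0.7 threshold and the choice of which classes count as sensitive are illustrative assumptions, not library defaults.

```typescript
// Shape of a single nsfw.js prediction entry.
interface Prediction {
  className: string;
  probability: number;
}

// Flag an image as indecent when any sensitive class exceeds the
// threshold. Threshold and class list are illustrative choices.
function isIndecent(predictions: Prediction[], threshold = 0.7): boolean {
  const sensitive = new Set(["Porn", "Hentai", "Sexy"]);
  return predictions.some(
    (p) => sensitive.has(p.className) && p.probability >= threshold
  );
}
```

In an application this would run over the output of `await model.classify(imgElement)`, keeping the image data entirely on the client.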
-
Google Open-Sources GPipe Library for Faster Training of Large Deep-Learning Models
Google AI is open-sourcing GPipe, a TensorFlow library for accelerating the training of large deep-learning models.
-
Facebook Open-Sources DeepFocus, Bringing More Realistic Images to Virtual Reality
In a recent blog post, Facebook announced it has open-sourced DeepFocus, an AI-powered framework for rendering realistic focus effects. The technology keeps nearby objects in focus while distant objects appear out of focus, much like cinematic experiences. DeepFocus uses an end-to-end convolutional neural network that produces accurate retinal blur in near real-time.
-
Exploring the Relationship between Quantum Computers and Machine Learning
The Google AI Quantum team recently published two papers that contribute to the exploration of the relationship between quantum computers and machine learning. InfoQ has spoken with Google senior research scientist Jarrod McClean to better understand the importance of these results.
-
Sony Trains ResNet-50 on ImageNet in 224 Seconds
Researchers from Sony announced that they trained a ResNet-50 architecture on ImageNet in only 224 seconds. The resulting network has a top-1 accuracy of 75% on the ImageNet validation set. They achieved this record using 2,100 Tesla V100 Tensor Core GPUs from NVIDIA. They also achieved 90% GPU scaling efficiency using 1,088 Tesla V100 Tensor Core GPUs.
-
Apple Has Released Core ML 2
At WWDC, Apple released Core ML 2, a new version of its machine-learning SDK for iOS devices. Apps developed with Core ML 2 should see an inference-time speedup of up to 30%. An important new addition to the SDK is Create ML, which lets developers create and train custom machine-learning models on their Mac.
-
Microsoft Embeds Artificial Intelligence Platform in Windows 10 Update
The next Windows 10 update opens the way for the integration of artificial-intelligence functionality within Windows applications. Developers will be able to integrate pre-trained deep-learning models, converted to the ONNX format, in their Windows applications.
-
Microsoft Achieves Human Parity on Chinese-English Machine Translation
Microsoft created a translation algorithm that translates Chinese sentences to English as well as human translators do. Translating Chinese into English has historically been difficult, but thanks to neural machine translation, a technique that has produced impressive results over the last few years, Microsoft's machine-translated sentences are now on par with human translations.
-
Facebook Releases Open Source "Detectron" Deep-Learning Library for Object Detection
Recent releases from Facebook and Google implement the most current deep-learning algorithms to take a crack at the challenging problem of machine object detection.
-
Autonomous Vehicles Became Better at Predicting Lane-Changes
Researchers created an algorithm that allows self-driving cars to predict lane changes of surrounding cars. The system uses a deep-learning technique called Long Short-Term Memory (LSTM) networks. Although the most likely scenario on a highway is that every car stays in its own lane, the algorithm was able to slightly improve on this baseline prediction.
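As a rough illustration of the building block involved, here is a single LSTM cell step on plain arrays. The scalar per-unit weights are illustrative placeholders; a trained model such as the researchers' uses learned weight matrices over real sensor features.

```typescript
type Vec = number[];

const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

// One gate: wx * x + wh * h + b, element-wise with illustrative
// scalar weights per unit (real models use weight matrices).
function gate(
  x: Vec, h: Vec,
  wx: number, wh: number, b: number,
  act: (v: number) => number
): Vec {
  return x.map((xi, k) => act(wx * xi + wh * h[k] + b));
}

// One timestep of an LSTM cell: gates decide what to forget,
// what to write, and what to expose as the new hidden state.
function lstmStep(x: Vec, h: Vec, c: Vec): { h: Vec; c: Vec } {
  const f = gate(x, h, 0.5, 0.5, 0, sigmoid);   // forget gate
  const i = gate(x, h, 0.5, 0.5, 0, sigmoid);   // input gate
  const g = gate(x, h, 0.5, 0.5, 0, Math.tanh); // candidate state
  const o = gate(x, h, 0.5, 0.5, 0, sigmoid);   // output gate
  const cNew = c.map((ck, k) => f[k] * ck + i[k] * g[k]);
  const hNew = cNew.map((ck, k) => o[k] * Math.tanh(ck));
  return { h: hNew, c: cNew };
}
```

The gating is what lets such a network carry context, for example a car drifting toward a lane marking over several timesteps, forward through a driving sequence.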
-
Deep Image Priors on Neural Networks with No Training
Researchers at Oxford and Skoltech developed a generative neural network that successfully renders deep-image priors with no training.