DeepMind's Agent57 Outperforms Humans on All Atari 2600 Games
Researchers at Google's DeepMind have produced a reinforcement-learning (RL) system called Agent57 that has scored above the human benchmark on all 57 Atari 2600 games in the Arcade Learning Environment. Agent57 is the first system to outperform humans on even the hardest games in the suite.
-
Google Releases Quantization Aware Training for TensorFlow Model Optimization
Google announced the release of the Quantization Aware Training (QAT) API for its TensorFlow Model Optimization Toolkit. QAT simulates low-precision hardware during neural-network training, adding the quantization error into the overall network loss metric so that training minimizes the accuracy degradation that quantization would otherwise cause after training.
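In practice, QAT is applied by wrapping an existing Keras model. A minimal sketch, using a placeholder model and random data, might look like this:

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model and data; any toolkit-supported Keras model works here
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
x_train = np.random.rand(256, 10).astype('float32')
y_train = np.random.rand(256, 1).astype('float32')

# Wrap the model so training simulates low-precision (quantized) inference
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(optimizer='adam', loss='mse')

# Train as usual; quantization error now contributes to the loss
q_aware_model.fit(x_train, y_train, epochs=2, verbose=0)
```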
-
Google's SEED RL Achieves 80x Speedup of Reinforcement Learning
Researchers at Google Brain recently open-sourced SEED RL (Scalable, Efficient Deep-RL), a distributed architecture for reinforcement learning. SEED RL achieves state-of-the-art results on several RL benchmarks at lower cost, and up to 80x faster, than previous systems.
-
Google Introduces TensorFlow Developer Certification
Google has launched a certification program for its deep-learning framework TensorFlow. The certification exam is administered using a PyCharm IDE plugin, and candidates who pass can be listed in Google's worldwide Certification Directory.
-
Google Announces Beta Launch of Cloud AI Platform Pipelines
Google Cloud Platform (GCP) recently announced the beta launch of Cloud AI Platform Pipelines, a new product for automating and managing machine learning (ML) workflows, which leverages the open-source technologies TensorFlow Extended (TFX) and Kubeflow Pipelines (KFP).
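To illustrate the Kubeflow Pipelines side, a minimal pipeline definition might look like the following sketch, assuming the KFP v1 SDK; the pipeline name and container image are hypothetical:

```python
import kfp
from kfp import dsl

@dsl.pipeline(name='train-demo', description='Minimal single-step pipeline')
def train_pipeline():
    # Hypothetical training image; each step runs as a container on Kubernetes
    dsl.ContainerOp(
        name='train',
        image='gcr.io/my-project/trainer:latest',
        command=['python', 'train.py'],
    )

if __name__ == '__main__':
    # Compile to a workflow spec that can be uploaded to AI Platform Pipelines
    kfp.compiler.Compiler().compile(train_pipeline, 'train_pipeline.yaml')
```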
-
Researchers Publish Survey of Explainable AI
A team of researchers from IBM Watson and Arizona State University has published a survey of work in Explainable AI Planning (XAIP). The survey covers 67 papers and charts recent trends in the field.
-
TensorFlow Quantum Joins Quantum Computing and Machine Learning
TensorFlow Quantum (TFQ) brings together Google's quantum-computing framework Cirq and TensorFlow to enable the creation of quantum machine-learning (ML) models.
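A minimal sketch of the combination, assuming TFQ's PQC (parameterized quantum circuit) Keras layer, defines a Cirq circuit with a trainable rotation angle and wraps it in a Keras model:

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

# A one-qubit circuit whose rotation angle is a trainable parameter
qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# PQC wraps the parameterized circuit as a Keras layer that outputs
# the expectation value of the given measurement operator
inputs = tf.keras.Input(shape=(), dtype=tf.string)
outputs = tfq.layers.PQC(circuit, cirq.Z(qubit))(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Model inputs are circuits themselves, serialized to tensors
empty_inputs = tfq.convert_to_tensor([cirq.Circuit()])
print(model(empty_inputs))  # expectation of Z after the rotation
```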
-
Facebook Research Develops AI System for Music Source Separation
Facebook Research recently released Demucs, a new deep-learning-powered system for music source separation. In human evaluations of the overall quality of the separated audio, Demucs outperforms previously reported results.
-
Deep Learning Accelerates Scientific Simulations up to Two Billion Times
Researchers from several physics and geology laboratories have developed Deep Emulator Network SEarch (DENSE), a technique that uses deep learning to emulate scientific simulations in fields ranging from high-energy physics to climate science. Compared to conventional simulators, the DENSE emulators achieved speedups ranging from 10 million to 2 billion times.
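The core idea is the emulator: train a neural network on input/output pairs from an expensive simulator, then use the network as a fast surrogate. The following toy sketch illustrates that idea only; it is not the DENSE architecture search itself, and the "simulator" function is a stand-in:

```python
import numpy as np
import tensorflow as tf

def expensive_simulator(x):
    # Stand-in for a costly physics simulation (hypothetical)
    return np.sin(3 * x) + 0.5 * x ** 2

# Collect training pairs by running the slow simulator once, offline
x_train = np.random.uniform(-2, 2, size=(10_000, 1)).astype('float32')
y_train = expensive_simulator(x_train)

# Fit a small neural network as the emulator
emulator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1),
])
emulator.compile(optimizer='adam', loss='mse')
emulator.fit(x_train, y_train, epochs=5, batch_size=256, verbose=0)

# Prediction is a single forward pass, far cheaper than re-simulating
y_fast = emulator.predict(np.array([[0.5]], dtype='float32'))
```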
-
Spotify Open-Sources Terraform Module for Kubeflow ML Pipelines
Spotify has open-sourced their Terraform module for running the machine-learning pipeline platform Kubeflow on Google Kubernetes Engine (GKE). By switching their in-house ML platform to Kubeflow, Spotify engineers have achieved faster time to production and are producing 7x more experiments than on the previous platform.
-
PyTorch 1.4 Release Introduces Java Bindings, Distributed Training
Facebook has announced version 1.4 of PyTorch, its open-source deep-learning framework. This release, which will be the last to support Python 2, includes improvements to distributed training and mobile inference and introduces Java bindings.
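As context for the distributed-training side, the sketch below shows PyTorch's standard DistributedDataParallel (DDP) API on a single machine; it is a generic illustration of the API rather than code from the 1.4 release notes, and the address/port values are arbitrary:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # One process per worker; 'gloo' works on CPU, 'nccl' is preferred on GPUs
    dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                            rank=rank, world_size=world_size)
    model = torch.nn.Linear(10, 1)
    ddp_model = DDP(model)  # gradients are all-reduced across processes

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss = torch.nn.functional.mse_loss(
        ddp_model(torch.randn(32, 10)), torch.randn(32, 1))
    loss.backward()   # synchronizes gradients across all workers
    optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    # Spawn two worker processes on one machine
    torch.multiprocessing.spawn(train, args=(2,), nprocs=2)
```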
-
GitHub Releases ML-Based "Good First Issues" Recommendations
GitHub shipped an updated version of its good first issues feature, which combines a machine-learning (ML) model that identifies easy issues with a hand-curated list of issues that project maintainers have labeled "easy." Both new and seasoned open-source contributors can use the feature to find and tackle approachable issues in a project.
-
Microsoft Open-Sources Project Petridish for Deep-Learning Optimization
A team from Microsoft Research and Carnegie Mellon University has open-sourced Project Petridish, a neural-architecture-search algorithm that automatically builds deep-learning models optimized to satisfy a variety of constraints. Using Petridish, the team achieved state-of-the-art results on the CIFAR-10 benchmark with only 2.2M parameters and five GPU-days of search time.
-
Google Open-Sources Reformer Efficient Deep-Learning Model
Researchers from Google AI recently open-sourced the Reformer, a more efficient version of the Transformer deep-learning model. Using locality-sensitive hashing (LSH) to approximate the attention calculation, along with reversible residual layers, the Reformer can handle text sequences of up to 1 million words while consuming only 16GB of memory on a single GPU accelerator.
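The reversible residual idea is worth unpacking: the layer's inputs can be reconstructed exactly from its outputs, so activations need not be stored for backpropagation. The following is an illustrative sketch of that arithmetic, not Reformer code; the toy linear layers stand in for the attention and feed-forward sub-layers:

```python
import torch

def rev_block_forward(x1, x2, f, g):
    # Input is split into two halves; each half updates the other
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    # Recover the inputs exactly from the outputs: no stored activations
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# Toy sub-functions standing in for attention and feed-forward layers
f = torch.nn.Linear(8, 8)
g = torch.nn.Linear(8, 8)
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
y1, y2 = rev_block_forward(x1, x2, f, g)
r1, r2 = rev_block_inverse(y1, y2, f, g)
assert torch.allclose(x1, r1, atol=1e-5) and torch.allclose(x2, r2, atol=1e-5)
```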
-
Microsoft Open-Sources ONNX Acceleration for BERT AI Model
Microsoft's Azure Machine Learning team recently open-sourced their contribution to the ONNX Runtime library, which improves the performance of the natural-language-processing (NLP) model BERT. With the optimizations, the model's inference on the SQuAD benchmark sped up by 17x.
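Running an exported model through ONNX Runtime is straightforward; a minimal sketch follows, in which the model file name and input names are hypothetical (a real BERT export's input names can be read from the session, as shown):

```python
import numpy as np
import onnxruntime as ort

# Hypothetical exported model file
session = ort.InferenceSession("bert.onnx")

# Inspect the model's expected inputs before constructing the feed dict
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)

# Dummy tokenized batch; keys must match the names printed above
inputs = {
    "input_ids": np.zeros((1, 128), dtype=np.int64),
    "attention_mask": np.ones((1, 128), dtype=np.int64),
}
outputs = session.run(None, inputs)  # None returns all model outputs
```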