Deep Learning Content on InfoQ
-
Microsoft Open-Sources TensorWatch AI Debugging Tool
Microsoft Research open-sourced TensorWatch, their debugging tool for AI and deep learning. TensorWatch supports PyTorch as well as TensorFlow eager tensors, and allows developers to interactively debug training jobs in real time via Jupyter notebooks, or to build their own custom UIs in Python.
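The project's quick-start suggests usage along these lines (a minimal sketch; the log-file name, stream name, and stand-in loss values are illustrative):

```python
import time
import tensorwatch as tw

# Create a watcher that logs streams to a file; a Jupyter notebook
# can attach to this file and render the streams as live charts.
watcher = tw.Watcher(filename='train.log')

# Create a named stream for a metric we want to observe.
loss_stream = watcher.create_stream(name='loss')

for step in range(1000):
    loss = 1.0 / (step + 1)          # stand-in for a real training loss
    loss_stream.write((step, loss))  # log (x, y) pairs for plotting
    time.sleep(0.1)
```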
-
Researchers Develop Technique for Reducing Deep-Learning Model Sizes for Internet of Things
Researchers from Arm Limited and Princeton University have developed a technique that produces deep-learning computer-vision models that fit in as little as 2KB of RAM on internet-of-things (IoT) hardware. Using Bayesian optimization and network pruning, the team reduced the size of image-recognition models while still achieving state-of-the-art accuracy.
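The article doesn't reproduce the paper's pipeline, but magnitude-based weight pruning, one common form of network pruning, can be sketched in PyTorch as follows (a toy illustration; the model, sparsity level, and magnitude_prune helper are placeholders, and the Bayesian-optimization step that would tune such hyperparameters is omitted):

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9):
    """Zero out the smallest-magnitude weights in each conv/linear layer."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            weights = module.weight.data.abs().flatten()
            k = int(sparsity * weights.numel())  # number of weights to prune
            if k == 0:
                continue
            threshold = weights.kthvalue(k).values
            mask = module.weight.data.abs() > threshold
            module.weight.data.mul_(mask)  # zeroed weights need not be stored

# Example: prune a small image classifier to 90% sparsity.
model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.Flatten(),
                      nn.Linear(8 * 26 * 26, 10))
magnitude_prune(model, sparsity=0.9)
```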
-
Google Releases Post-Training Integer Quantization for TensorFlow Lite
Google announced new tooling for their TensorFlow Lite deep-learning framework that reduces model size and inference latency. The tool converts a trained model's weights from floating-point representation to 8-bit signed integers, shrinking the model's memory footprint and allowing it to run on hardware that lacks floating-point accelerators, all without sacrificing model quality.
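In TensorFlow, post-training quantization is driven through the TFLite converter roughly as follows (a sketch; the saved-model path, input shape, and calibration data are placeholders):

```python
import tensorflow as tf

# Load a trained model and enable the default optimizations,
# which include weight quantization.
converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# A small sample of representative inputs lets the converter
# calibrate value ranges so activations can also be quantized.
def representative_dataset():
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3))]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open('model_int8.tflite', 'wb') as f:
    f.write(tflite_model)
```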
-
Google Releases TensorFlow.Text Library for Natural Language Processing
Google released TensorFlow.Text, a new text-processing library for their TensorFlow deep-learning platform. The library allows several common text pre-processing activities, such as tokenization, to be handled by the TensorFlow graph computation system, improving the consistency and portability of deep-learning models for natural-language processing.
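Tokenization with the library looks roughly like this (a sketch in the spirit of the announcement's examples; the input sentences are illustrative):

```python
import tensorflow_text as text

# Tokenizers are TensorFlow ops, so they run inside the graph and
# ship with the model, keeping training and serving consistent.
tokenizer = text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(['everything not saved will be lost.',
                             'the quick brown fox.'])
print(tokens)  # a RaggedTensor of per-sentence token lists
```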
-
Google Releases Deep Learning Containers into Beta
In a recent blog post, Google announced Deep Learning Containers, which let customers get machine-learning projects up and running more quickly. Deep Learning Containers are performance-optimized Docker containers that come with a variety of tools necessary for deep-learning tasks already installed.
-
Facebook Open-Sources Deep-Learning Recommendation Model DLRM
Facebook AI Research announced the open-source release of a deep-learning recommendation model, DLRM, that achieves state-of-the-art accuracy in generating personalized recommendations. The code is available on GitHub, and includes versions for the PyTorch and Caffe2 frameworks.
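As described in the accompanying paper, DLRM feeds dense features through a bottom MLP, maps each categorical feature to an embedding, forms pairwise dot-product interactions among the resulting vectors, and passes everything through a top MLP. A toy PyTorch sketch of that shape, not Facebook's implementation (the TinyDLRM class and all sizes here are illustrative):

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Toy model in the spirit of DLRM: bottom MLP for dense features,
    embeddings for categorical features, pairwise dot products as
    interactions, then a top MLP producing a click probability."""
    def __init__(self, num_dense=4, cat_sizes=(100, 50), dim=8):
        super().__init__()
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        self.embs = nn.ModuleList(nn.Embedding(n, dim) for n in cat_sizes)
        n_vec = 1 + len(cat_sizes)           # dense vector + embeddings
        n_pairs = n_vec * (n_vec - 1) // 2   # pairwise interactions
        self.top = nn.Sequential(nn.Linear(dim + n_pairs, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, dense, cats):
        vecs = [self.bottom(dense)] + [e(c) for e, c in zip(self.embs, cats.T)]
        stack = torch.stack(vecs, dim=1)       # (batch, n_vec, dim)
        inter = stack @ stack.transpose(1, 2)  # all pairwise dot products
        i, j = torch.triu_indices(stack.size(1), stack.size(1), offset=1)
        feats = torch.cat([vecs[0], inter[:, i, j]], dim=1)
        return torch.sigmoid(self.top(feats)).squeeze(1)

model = TinyDLRM()
score = model(torch.randn(2, 4), torch.randint(0, 50, (2, 2)))
```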
-
AWS Enhances Deep Learning AMI, AI Services SageMaker Ground Truth, and Rekognition
Amazon Web Services (AWS) announced updates to their Deep Learning virtual machine image, as well as improvements to their AI services SageMaker Ground Truth and Rekognition.
-
MIT Researchers Open-Source AutoML Visualization Tool ATMSeer
A research team from MIT, Hong Kong University, and Zhejiang University has open-sourced ATMSeer, a tool for visualizing and controlling automated machine-learning processes.
-
Google Uses Mannequin Challenge Videos to Learn Depth Perception
Google AI Research published a paper describing their work on depth perception from two-dimensional images. Using a training dataset created from YouTube videos of the Mannequin Challenge, researchers trained a neural network that can reconstruct depth information from videos of moving people, taken by moving cameras.
-
Google Announces TensorFlow Graphics Library for Unsupervised Deep Learning of Computer Vision Model
At a presentation during Google I/O 2019, Google announced TensorFlow Graphics, a library for building deep neural networks for unsupervised learning tasks in computer vision. The library contains 3D rendering functions written in TensorFlow, as well as tools for learning with non-rectangular mesh-based input data.
-
Google's Cloud TPU V2 and V3 Pods Are Now Publicly Available in Beta
Recently, Google announced that its second- and third-generation Cloud Tensor Processing Unit (TPU) Pods, its scalable cloud-based supercomputers with up to 1,000 of its custom TPU chips, are now publicly available in beta. With these Pods, machine-learning (ML) researchers, engineers, and data scientists can shorten the time needed to train and deploy machine-learning models.
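From TensorFlow 2.x, targeting a Cloud TPU or a slice of a pod goes through a cluster resolver and a distribution strategy, roughly as follows (a sketch; the TPU name 'my-tpu' and the toy Keras model are placeholders):

```python
import tensorflow as tf

# Resolve and initialize the TPU (the name/address is project-specific).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Replicate the model across all TPU cores in the pod slice.
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer='adam', loss='mse')
```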
-
OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences
OpenAI has developed the Sparse Transformer, a deep neural-network architecture for learning sequences of data, including text, sound, and images. The networks can achieve state-of-the-art performance on several deep-learning tasks with faster training times.
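The core idea is factorized sparse attention: each position attends to a small local window plus a strided subset of earlier positions rather than to all of them, reducing the cost of full attention. A minimal mask-building sketch (illustrative only; OpenAI's strided and fixed factorizations differ in detail and split the patterns across heads):

```python
import numpy as np

def strided_sparse_mask(n: int, stride: int, window: int) -> np.ndarray:
    """Causal attention mask where position i sees the last `window`
    positions plus every `stride`-th earlier position."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, max(0, i - window + 1):i + 1] = True  # local window
        mask[i, (i % stride)::stride] = True          # strided positions
        mask[i, i + 1:] = False                       # keep it causal
    return mask

m = strided_sparse_mask(n=16, stride=4, window=4)
print(m.sum(), 'attended pairs instead of', 16 * 17 // 2)
```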
-
Xipeng Shen on a New Technique to Reduce Deep-Learning Training Time
Researchers at North Carolina State University recently presented a paper at the 35th IEEE International Conference on Data Engineering (ICDE 2019) on a new technique that can reduce training time for deep neural networks by up to 69%.
-
PyTorch 1.1 Release Improves Performance, Adds New APIs and Tools
Facebook AI Research announced the release of PyTorch 1.1. The latest version of the open-source deep-learning framework includes improved distributed-training performance, new APIs, and new visualization tools, including native support for TensorBoard.
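The TensorBoard support is exposed through the torch.utils.tensorboard module; a minimal sketch (the log directory and scalar values are illustrative):

```python
from torch.utils.tensorboard import SummaryWriter

# Writes event files that the `tensorboard` CLI can visualize.
writer = SummaryWriter(log_dir='runs/experiment1')

for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar('train/loss', loss, global_step=step)

writer.close()
```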
-
Google Scales Weak Supervision to Overcome Labeled Dataset Problem
Google recognizes that the need for labeled data in machine learning (ML) is a significant bottleneck and recently adapted the open-source Snorkel framework to overcome the problem at scale. Google enhanced Snorkel by integrating it with TensorFlow, using the file system instead of a database for sharing data, and creating separate executables for labeling functions.
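Snorkel's core abstraction is the labeling function: a heuristic that votes on an example's label or abstains, with many noisy votes later combined into training labels. A plain-Python sketch of the idea, not Google's adapted pipeline (the spam/ham heuristics and majority-vote combiner here are illustrative; Snorkel itself fits a generative model of each function's accuracy):

```python
ABSTAIN, SPAM, HAM = -1, 1, 0

# Each labeling function encodes one noisy heuristic.
def lf_contains_link(text):
    return SPAM if 'http://' in text or 'https://' in text else ABSTAIN

def lf_short_greeting(text):
    return HAM if len(text.split()) < 5 and 'hi' in text.lower() else ABSTAIN

def lf_mentions_prize(text):
    return SPAM if 'prize' in text.lower() else ABSTAIN

LFS = [lf_contains_link, lf_short_greeting, lf_mentions_prize]

def weak_label(text):
    """Combine labeling-function votes by simple majority."""
    votes = [lf(text) for lf in LFS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

print(weak_label('Click https://example.com to claim your prize'))  # 1 (SPAM)
```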