Deep Learning Content on InfoQ
-
Amazon Announces ECS Now Supports EC2 Inf1 Instances
In a recent blog post, Amazon announced that customers can now use Amazon EC2 Inf1 instances on Amazon Elastic Container Service (ECS). The company promises the instances deliver high performance at low, predictable cost.
-
TensorFlow 2.3 Features Pipeline Bottleneck Reduction and Improved Preprocessing
The TensorFlow project announced the release of version 2.3.0, featuring new mechanisms for reducing input pipeline bottlenecks, Keras layers for pre-processing, and memory profiling.
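As a rough illustration of the two headline features, the sketch below caches a pipeline's output on disk with the new tf.data snapshot transformation and moves rescaling into the model as a Keras preprocessing layer; the dataset and layers are toy stand-ins.

    import tensorflow as tf

    # Toy dataset standing in for an expensive input pipeline; snapshot
    # persists its output to disk so later epochs skip recomputation
    # (tf.data.experimental.snapshot, new in TF 2.3).
    ds = tf.data.Dataset.from_tensor_slices(tf.random.uniform([1000, 28, 28, 1]))
    ds = ds.map(lambda x: x * 2.0)  # stands in for costly preprocessing
    ds = ds.apply(tf.data.experimental.snapshot("/tmp/snapshot"))
    ds = ds.batch(32).prefetch(tf.data.experimental.AUTOTUNE)

    # Keras preprocessing layers run inside the model graph, so the
    # transformation ships with the exported SavedModel.
    model = tf.keras.Sequential([
        tf.keras.layers.experimental.preprocessing.Rescaling(1.0 / 255),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])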
-
PyTorch 1.6 Released; Microsoft Takes over Windows Version
PyTorch, Facebook's open-source deep-learning framework, announced the release of version 1.6 which includes new APIs and performance improvements. Along with the release, Microsoft announced it will take over development and maintenance of the Windows version of the framework.
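Among the new APIs in 1.6 is native automatic mixed precision (torch.cuda.amp). A minimal training-loop sketch with toy stand-ins for the model and data (requires a CUDA device):

    import torch

    model = torch.nn.Linear(16, 4).cuda()                  # toy model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler()                   # new in PyTorch 1.6

    for _ in range(10):
        x = torch.randn(8, 16, device="cuda")
        y = torch.randint(0, 4, (8,), device="cuda")
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():        # forward pass in mixed precision
            loss = torch.nn.functional.cross_entropy(model(x), y)
        scaler.scale(loss).backward()          # scale loss to avoid fp16 underflow
        scaler.step(optimizer)
        scaler.update()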
-
Google Open-Sources AI for Mapping Natural Language to Mobile UI Actions
Google has open-sourced their AI model for converting sequences of natural language instructions to actions in a mobile device UI. The model is based on the Transformer deep-learning architecture and achieves 70% accuracy on a new benchmark dataset created for the project.
-
Google Announces TensorFlow 2 Support in Object Detection API
Google announced support for TensorFlow 2 (TF2) in the TensorFlow Object Detection (OD) API. The release includes eager-mode compatible binaries, two new network architectures, and pre-trained weights for all supported models.
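A hedged sketch of inference with an exported TF2 detection model; the SavedModel directory and image path are assumptions, but the call pattern follows the OD API's TF2 usage (load the export, pass a uint8 batch):

    import tensorflow as tf

    # Load a model exported from the TF2 Detection Zoo (directory name assumed).
    detect_fn = tf.saved_model.load("efficientdet_d0_coco17/saved_model")

    # The exported signature expects a uint8 batch of shape [1, H, W, 3].
    image = tf.io.decode_jpeg(tf.io.read_file("street.jpg"))  # hypothetical image
    detections = detect_fn(tf.expand_dims(image, axis=0))
    print(detections["detection_boxes"][0][:5])  # highest-scoring boxes first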
-
Microsoft's ZeRO-2 Speeds up AI Training 10x
Microsoft open-sourced Zero Redundancy Optimizer version 2 (ZeRO-2), a distributed deep-learning optimization algorithm that scales super-linearly with cluster size. Using ZeRO-2, Microsoft trained a 100-billion-parameter natural-language processing (NLP) model 10x faster than with previous distributed learning techniques.
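ZeRO-2 ships as part of Microsoft's open-source DeepSpeed library, where it is switched on through the engine configuration. A minimal sketch against a recent DeepSpeed API, with a toy model and illustrative config values:

    import torch
    import deepspeed

    model = torch.nn.Linear(1024, 1024)  # toy stand-in for a large model

    # Stage 2 partitions optimizer state and gradients across
    # data-parallel workers; values here are illustrative, not tuned.
    ds_config = {
        "train_batch_size": 64,
        "fp16": {"enabled": True},
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
        "zero_optimization": {"stage": 2},
    }

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config)
    # Training then goes through the engine so DeepSpeed can manage the
    # partitioned state: engine.backward(loss); engine.step()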
-
Spark AI Summit 2020 Highlights: Innovations to Improve Spark 3.0 Performance
The recent Spark AI Summit 2020, held online for the first time, highlighted innovations that improve Apache Spark 3.0 performance, including optimizations for Spark SQL and GPU acceleration.
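One of the highlighted Spark SQL improvements, adaptive query execution, is opt-in via configuration in Spark 3.0; a PySpark sketch (the application name is arbitrary):

    from pyspark.sql import SparkSession

    # Adaptive query execution (AQE) re-optimizes query plans at runtime
    # using statistics collected from completed stages.
    spark = (SparkSession.builder
             .appName("aqe-demo")
             .config("spark.sql.adaptive.enabled", "true")
             .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
             .getOrCreate())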
-
MIT and Toyota Release Autonomous Driving Dataset DriveSeg
Toyota's Collaborative Safety Research Center (CSRC) and MIT's AgeLab have released DriveSeg, a dataset for autonomous driving research. DriveSeg contains over 25,000 frames of high-resolution video with each pixel labelled with one of 12 classes of road object. DriveSeg is available free of charge for non-commercial use.
-
Google ML Kit SDK Now Focuses on On-Device Machine Learning
Google has introduced a new ML Kit SDK designed to work standalone, without the tight Firebase integration the original ML Kit SDK required. Additionally, it provides limited support for replacing its default models with custom ones for image labeling and for object detection and tracking.
-
Facebook Announces TransCoder AI to Translate Code across Programming Languages
Facebook AI Research has announced TransCoder, a system that uses unsupervised deep learning to convert code from one programming language to another. TransCoder was trained on more than 2.8 million open-source projects and outperforms existing code-translation systems that use rule-based methods.
-
Uber Open-Sources AI Abstraction Layer Neuropod
Uber open-sourced Neuropod, an abstraction layer for machine-learning frameworks that lets researchers build models in the framework of their choice while reducing integration effort, so the same production system can swap in models implemented in different frameworks. Neuropod currently supports several frameworks, including TensorFlow, PyTorch, Keras, and TorchScript.
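A sketch of the inference side, based on Neuropod's documented Python loader; the package path and input name are hypothetical and would come from the model's own spec:

    from neuropod.loader import load_neuropod

    # A .neuropod package bundles a model with a spec naming its input
    # and output tensors, so callers stay framework-agnostic.
    with load_neuropod("my_model.neuropod") as model:   # hypothetical package
        result = model.infer({"x": [[1.0, 2.0, 3.0]]})  # input name per the spec
        print(result)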
-
Paddle Quantum: Bringing Baidu's Deep Learning Platform to Quantum Computing
Baidu has announced Paddle Quantum, a quantum machine-learning toolkit that makes it possible to build and train quantum neural-network models. Paddle Quantum aims to support advanced quantum-computing applications and to let developers new to quantum machine learning create their models step by step.
-
Google Open-Sources Computer Vision Model Big Transfer
Google Brain has released the pre-trained models and fine-tuning code for Big Transfer (BiT), a deep-learning computer-vision model. The models are pre-trained on publicly available generic image datasets and can meet or exceed state-of-the-art performance on several vision benchmarks after fine-tuning on just a few samples.
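The released models can be fine-tuned from TensorFlow Hub; a minimal sketch using one of the published BiT-M checkpoints, with an assumed 10-class head and untuned hyperparameters:

    import tensorflow as tf
    import tensorflow_hub as hub

    # Load a published BiT-M backbone and attach a fresh classification head.
    backbone = hub.KerasLayer("https://tfhub.dev/google/bit/m-r50x1/1",
                              trainable=True)
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes assumed
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=3e-3),
                  loss="sparse_categorical_crossentropy")
    # model.fit(train_ds, ...)  # fine-tune on the small target dataset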
-
OpenAI Announces GPT-3 AI Language Model with 175 Billion Parameters
A team of researchers from OpenAI recently published a paper describing GPT-3, a deep-learning model for natural-language processing with 175 billion parameters, over 100x more than the previous version, GPT-2. The model is pre-trained on nearly half a trillion words and achieves state-of-the-art performance on several NLP benchmarks without fine-tuning.
-
Google Open-Sources New Higher Performance TensorFlow Runtime
Google open-sourced the TensorFlow Runtime (TFRT), a new abstraction layer for their TensorFlow deep-learning framework that allows models to achieve better inference performance across different hardware platforms. Compared to the previous runtime, TFRT improves average inference latency by 28%.