Google DeepMind Content on InfoQ
-
AI Conference Recap: Google, Microsoft, Facebook, and Others at ICLR 2021
At the recent International Conference on Learning Representations (ICLR), research teams from several tech companies, including Google, Microsoft, IBM, Facebook, and Amazon, presented nearly 250 of the conference's 860 accepted papers, covering a wide variety of deep-learning topics.
-
Perceiver: One Neural-Network Model for Multiple Input Data Types
Google's DeepMind has recently released a state-of-the-art deep-learning model called Perceiver that receives and processes multiple types of input data, ranging from audio to images, similarly to how the human brain perceives multimodal data. Perceiver can receive and classify several input data types, namely point clouds, audio, and images.
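At the core of this design is a cross-attention step in which a small, fixed-size latent array attends over the flattened input, whatever its modality. The NumPy sketch below illustrates only that idea; the array sizes, random projections, and function names are illustrative assumptions, not DeepMind's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(latents, inputs, d_model=64, seed=0):
    # A small, fixed-size latent array (queries) attends over an arbitrarily
    # long flattened input array (keys/values), so the size of the attention
    # output depends on the number of latents rather than on the input length.
    rng = np.random.default_rng(seed)
    wq = rng.normal(size=(latents.shape[-1], d_model))
    wk = rng.normal(size=(inputs.shape[-1], d_model))
    wv = rng.normal(size=(inputs.shape[-1], d_model))
    q, k, v = latents @ wq, inputs @ wk, inputs @ wv
    attn = softmax(q @ k.T / np.sqrt(d_model))
    return attn @ v  # shape: (num_latents, d_model), independent of input length

# The same call works for different modalities once each input is flattened
# into a (num_elements, feature_dim) array with positional features attached.
latents = np.random.rand(256, 32)       # small learned latent array (illustrative size)
image_like = np.random.rand(4096, 32)   # e.g. 64x64 pixels with 32-dim features
audio_like = np.random.rand(16000, 32)  # e.g. 1 second of audio features
print(cross_attend(latents, image_like).shape)  # (256, 64)
print(cross_attend(latents, audio_like).shape)  # (256, 64)
```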
-
Google DeepMind’s NFNets Offers Deep Learning Efficiency
Google's DeepMind AI company recently released NFNets, a normalizer-free ResNet image-classification model that trains 8.7x faster than the current state-of-the-art EfficientNet. In addition, the normalizer-free approach helps neural networks generalize better.
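One technique reported alongside the normalizer-free models is adaptive gradient clipping (AGC), which rescales a gradient whenever its norm grows too large relative to the norm of the corresponding weights. The sketch below is a rough, unit-wise NumPy rendering of that idea; the clipping threshold and tensor shapes are illustrative, not the published training configuration.

```python
import numpy as np

def adaptive_gradient_clip(weight, grad, clip=0.01, eps=1e-3):
    """Rough sketch of unit-wise adaptive gradient clipping (AGC).

    Each gradient row is rescaled whenever its norm exceeds a fixed
    fraction of the norm of the corresponding weight row.
    """
    w_norm = np.maximum(np.linalg.norm(weight, axis=-1, keepdims=True), eps)
    g_norm = np.linalg.norm(grad, axis=-1, keepdims=True)
    max_norm = clip * w_norm
    scale = np.where(g_norm > max_norm, max_norm / np.maximum(g_norm, 1e-12), 1.0)
    return grad * scale

w = np.random.randn(128, 64)
g = np.random.randn(128, 64) * 10.0   # deliberately oversized gradients
clipped = adaptive_gradient_clip(w, g)
print(np.linalg.norm(g), np.linalg.norm(clipped))  # clipped norm is much smaller
```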
-
DeepMind's AlphaFold2 AI Solves 50-Year-Old Biology Challenge
The Protein Structure Prediction Center announced that AlphaFold2, an AI system developed by DeepMind, has solved its Protein Structure Prediction challenge. AlphaFold2 achieved a median score of 92.4 on the Global Distance Test (GDT) metric, above the threshold considered competitive with traditional experimental methods.
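For context on the metric, GDT_TS averages the fraction of residues whose predicted positions fall within 1, 2, 4, and 8 angstroms of the experimental structure after superposition, reported on a 0 to 100 scale. The sketch below is a simplified illustration that takes per-residue distances as given and skips the superposition search; the example distances are invented.

```python
import numpy as np

def gdt_ts(distances_angstrom):
    """Simplified sketch of the GDT_TS score.

    `distances_angstrom` holds per-residue distances between predicted and
    experimentally determined positions after structural superposition;
    the score averages the fraction of residues within 1, 2, 4 and 8 angstroms
    and is reported on a 0-100 scale.
    """
    d = np.asarray(distances_angstrom)
    fractions = [(d <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * np.mean(fractions)

# Hypothetical per-residue errors for a well-predicted structure.
example = np.abs(np.random.default_rng(0).normal(0.0, 1.5, size=200))
print(round(gdt_ts(example), 1))
```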
-
DeepMind's Agent57 Outperforms Humans on All Atari 2600 Games
Researchers at Google's DeepMind have produced a reinforcement-learning (RL) system called Agent57 that has scored above the human benchmark on all 57 Atari 2600 games in the Arcade Learning Environment. Agent57 is the first system to outperform humans on even the hardest games in the suite.
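The human-benchmark comparison is usually expressed as a human-normalized score, where 0 corresponds to a random policy and 1 to the human reference score for a game. A minimal sketch, with invented numbers rather than published figures:

```python
def human_normalized_score(agent_score, random_score, human_score):
    """Human-normalized score used to compare Atari agents:
    0.0 matches a random policy, 1.0 matches the human benchmark."""
    return (agent_score - random_score) / (human_score - random_score)

# Illustrative numbers only, not published figures.
print(human_normalized_score(agent_score=5000, random_score=150, human_score=4300))
# A result above 1.0 means the agent beat the human benchmark on this game.
```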
-
Microsoft and Google Release New Benchmarks for Cross-Language AI Tasks
Research teams at Microsoft Research and Google AI have announced new benchmarks for cross-language natural-language understanding (NLU) tasks for AI systems, including named-entity recognition and question answering. Google's XTREME covers 40 languages and includes nine tasks, while Microsoft's XGLUE covers 27 languages and eleven tasks.
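Benchmarks like these are typically used for zero-shot cross-lingual transfer: fine-tune a model on English training data only, then evaluate it on every other language. The sketch below shows that evaluation loop in schematic form; the model class, data loader, and examples are hypothetical stand-ins, not part of either benchmark's actual tooling.

```python
def zero_shot_scores(model, load_split, languages):
    # Fine-tune on English only elsewhere, then measure accuracy per target language.
    scores = {}
    for lang in languages:
        examples = load_split(language=lang, split="test")
        correct = sum(1 for text, label in examples if model.predict(text) == label)
        scores[lang] = correct / len(examples)
    return scores

# Tiny stand-in objects so the sketch runs end to end.
class EchoModel:
    def predict(self, text):
        return "PER" if "name" in text else "O"

def fake_split(language, split):
    return [("my name is Ada", "PER"), ("the sky is blue", "O")]

print(zero_shot_scores(EchoModel(), fake_split, ["de", "zh", "sw"]))
```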
-
Google's SEED RL Achieves 80x Speedup of Reinforcement Learning
Researchers at Google Brain recently open-sourced their Scalable, Efficient Deep-RL (SEED RL) algorithm for reinforcement learning. SEED RL is a distributed architecture that achieves state-of-the-art results on several RL benchmarks at lower cost and up to 80x faster than previous systems.
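SEED RL's central idea is to move neural-network inference off the actors and onto the learner: actors only step their environments and ship observations to a central process, which returns actions. The toy sketch below mimics that split with Python threads and queues; batching, gRPC transport, and the actual learning step are omitted, and all names are illustrative.

```python
import queue
import threading
import numpy as np

# Toy sketch of SEED RL's central-inference idea: actors only step the
# environment and send observations to a single learner, which runs the
# policy inference (and, in the real system, training) centrally.
obs_queue = queue.Queue()
act_queues = [queue.Queue() for _ in range(2)]

def actor(actor_id, steps=5):
    obs = np.zeros(4)                        # stand-in for an environment observation
    for _ in range(steps):
        obs_queue.put((actor_id, obs))       # ship the observation to the learner
        action = act_queues[actor_id].get()  # wait for the centrally computed action
        obs = obs + action                   # stand-in for env.step(action)

def learner(total_steps=10):
    for _ in range(total_steps):
        actor_id, obs = obs_queue.get()
        action = float(obs.sum() > 0)        # stand-in for policy-network inference
        act_queues[actor_id].put(action)     # return the action to that actor

threads = [threading.Thread(target=actor, args=(i,)) for i in range(2)]
threads.append(threading.Thread(target=learner))
for t in threads:
    t.start()
for t in threads:
    t.join()
print("finished toy actor/learner loop")
```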
-
Facebook Open-Sources CraftAssist Framework for AI Assistants in Minecraft
Facebook AI researchers open-sourced CraftAssist, a framework for building interactive assistants for the Minecraft video game. The bots use natural-language understanding (NLU) to parse and execute text commands from human players, such as requests to build houses in the game world. Researchers can extend the framework's modular structure to run their own machine-learning experiments.
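In the spirit of that parse-then-execute pipeline, the toy sketch below turns a chat command into a structured action dictionary. The grammar and action schema are invented for illustration; the real framework uses a learned neural semantic parser rather than a regular expression.

```python
import re

def parse_command(text):
    """Toy command parser: maps a chat message to a structured action dict."""
    match = re.match(r"build a (\w+) (?:at|near) \((-?\d+), (-?\d+), (-?\d+)\)", text.lower())
    if not match:
        return {"action": "unknown", "raw": text}
    shape, x, y, z = match.groups()
    return {"action": "build", "schematic": shape, "location": (int(x), int(y), int(z))}

print(parse_command("Build a house at (10, 64, -3)"))
# {'action': 'build', 'schematic': 'house', 'location': (10, 64, -3)}
```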
-
DeepMind's AI Defeats Top StarCraft Players
DeepMind's AlphaStar AI defeated two top professional StarCraft players, beating each by a score of 5-0.
-
DeepMind AI Program Increases Google Data Center Cooling Power Usage Efficiency by 40%
Using sensor data captured from Google data centers, a DeepMind AI program, similar to an earlier program of theirs that learned how to play Atari games, delivered a 40% increase in the power usage efficiency of data center cooling and an overall site-wide 15% gain in power usage efficiency.
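For intuition on the site-wide figure, the standard metric here is Power Usage Effectiveness (PUE): total facility energy divided by the energy delivered to IT equipment, with values closer to 1.0 being better. The sketch below uses invented numbers, not Google's actual figures, to show how cutting cooling energy lowers PUE.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by the
    energy delivered to IT equipment. Values closer to 1.0 are better."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers only: if cooling is the main non-IT load,
# cutting cooling energy by 40% noticeably lowers PUE.
it_load, cooling, other_overhead = 1000.0, 300.0, 100.0
before = pue(it_load + cooling + other_overhead, it_load)
after = pue(it_load + cooling * 0.6 + other_overhead, it_load)
print(before, after)  # 1.4 -> 1.28
```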
-
DeepMind Discloses Details to InfoQ about NHS Partnership amid Reports of Vast Patient Data Access
After months of awaiting details about the partnership between the NHS and Google DeepMind, InfoQ gains insight into recent claims of widespread patient data access.