Latest from Google AI – An Open Source Vibrotactile Haptics Platform for On-Body Applications

Posted by Artem Dementyev, Hardware Engineer, Google Research

Most wearable smart devices and mobile phones have the means to communicate with the user through tactile feedback, enabling applications from simple notifications to sensory substitution for accessibility. Typically, they accomplish this using vibrotactile actuators, which are small electric vibration motors. However, designing a haptic system that…

Latest from Google AI – MetNet-2: Deep Learning for 12-Hour Precipitation Forecasting

Posted by Nal Kalchbrenner and Lasse Espeholt, Google Research

Deep learning has successfully been applied to a wide range of important challenges, such as cancer prevention and increasing accessibility. The application of deep learning models to weather forecasts can be relevant to people on a day-to-day basis, from helping people plan their day to managing…

Latest from Google AI – RLiable: Towards Reliable Evaluation & Reporting in Reinforcement Learning

Posted by Rishabh Agarwal, Research Scientist and Pablo Samuel Castro, Staff Software Engineer, Google Research, Brain Team

Reinforcement learning (RL) is an area of machine learning that focuses on learning from experiences to solve decision making tasks. While the field of RL has made great progress, resulting in impressive empirical results on complex tasks, such…
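The excerpt is cut short, but the open-source library accompanying this work, rliable, is known for promoting robust aggregate metrics such as the interquartile mean (IQM) of scores pooled across runs and tasks. A minimal stdlib sketch of the idea, assuming a simple quartile-trimming scheme (the function name and trimming details here are illustrative, not the library's API, which also adds stratified-bootstrap confidence intervals):

```python
def interquartile_mean(scores):
    """Mean of the middle 50% of scores (IQM).

    Illustrative sketch only: sort the pooled scores, discard the
    bottom and top quartiles, and average the rest. The real rliable
    library computes IQM with fractional trimming and reports
    stratified-bootstrap confidence intervals alongside it.
    """
    xs = sorted(scores)
    n = len(xs)
    lo, hi = n // 4, n - n // 4  # drop bottom and top 25%
    middle = xs[lo:hi]
    return sum(middle) / len(middle)

# Scores pooled from several runs of a hypothetical RL agent;
# the outliers 0.0 and 100.0 are trimmed away before averaging.
print(interquartile_mean([0.0, 10.0, 12.0, 14.0, 100.0]))  # → 12.0
```

Unlike the plain mean, the IQM is far less sensitive to a single lucky or unlucky run, which is one reason such metrics are preferred for reporting RL results across a handful of seeds.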

Latest from Google AI – Predicting Text Readability from Scrolling Interactions

Posted by Sian Gooding, Intern, Google Research

Illiteracy affects at least 773 million people globally, both young and old. For these individuals, reading information from unfamiliar sources or on unfamiliar topics can be extremely difficult. Unfortunately, these inequalities have been further magnified by the global pandemic as a result of unequal access to education in…

Latest from Google AI – Permutation-Invariant Neural Networks for Reinforcement Learning

Posted by David Ha, Staff Research Scientist and Yujin Tang, Research Software Engineer, Google Research, Tokyo

“The brain is able to use information coming from the skin as if it were coming from the eyes. We don’t see with the eyes or hear with the ears, these are just the receptors, seeing and hearing…

Latest from Google AI – Decisiveness in Imitation Learning for Robots

Posted by Pete Florence, Research Scientist and Corey Lynch, Research Engineer, Robotics at Google

Despite considerable progress in robot learning over the past several years, some policies for robotic agents can still struggle to decisively choose actions when trying to imitate precise or complex behaviors. Consider a task in which a robot tries to slide…

Latest from Google AI – Predicting Text Selections with Federated Learning

Posted by Florian Hartmann, Software Engineer, Google Research

Smart Text Selection, launched in 2017 as part of Android O, is one of Android’s most frequently used features, helping users select, copy, and use text easily and quickly by predicting the desired word or set of words around a user’s tap, and automatically expanding the selection…
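The behavior the excerpt describes, expanding a tap position into a useful selection span, can be sketched with a naive non-ML baseline that simply snaps the tap to the surrounding word's boundaries. This is purely illustrative: Android's actual Smart Text Selection uses an on-device neural model (trained, per this post, with federated learning) rather than a boundary scan:

```python
def select_word_at(text, tap_index):
    """Expand a tap position to the surrounding word's [start, end) span.

    Illustrative baseline only: Android's Smart Text Selection uses an
    on-device model trained with federated learning, which can also
    select multi-word entities like addresses, not just single words.
    """
    if not (0 <= tap_index < len(text)) or not text[tap_index].isalnum():
        return tap_index, tap_index  # tap not on a word: empty span
    start = tap_index
    while start > 0 and text[start - 1].isalnum():
        start -= 1
    end = tap_index
    while end < len(text) and text[end].isalnum():
        end += 1
    return start, end

text = "Call 221B Baker Street tomorrow"
start, end = select_word_at(text, 7)  # tap lands inside "221B"
print(text[start:end])  # → 221B
```

A learned model improves on this baseline precisely in the cases the heuristic cannot handle, such as expanding the selection from "221B" to the full address "221B Baker Street".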

Latest from Google AI – MURAL: Multimodal, Multi-task Retrieval Across Languages

Posted by Aashi Jain, AI Resident and Yinfei Yang, Staff Research Scientist, Google Research

For many concepts, there is no direct one-to-one translation from one language to another, and even when there is, such translations often carry different associations and connotations that are easily lost for a non-native speaker. In such cases, however, the meaning…

Latest from Google AI – RLDS: An Ecosystem to Generate, Share, and Use Datasets in Reinforcement Learning

Posted by Sabela Ramos, Software Engineer and Léonard Hussenot, Student Researcher, Google Research, Brain Team

Most reinforcement learning (RL) and sequential decision making algorithms require an agent to generate training data through large amounts of interactions with its environment to achieve optimal performance. This is highly inefficient, especially when generating those interactions is difficult, such…
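RLDS standardizes RL data as datasets of episodes, where each episode contains an ordered sequence of steps (observation, action, reward, and end-of-episode flags). A minimal stdlib sketch of that nesting, assuming plain dicts for clarity (the field names follow common RLDS conventions, but this is not the library's actual tf.data-based API):

```python
# Minimal sketch of the episode/step nesting that RLDS standardizes.
# The real library stores episodes as nested tf.data.Dataset objects;
# plain dicts are used here only to illustrate the schema.
def make_step(observation, action, reward, is_last=False):
    return {"observation": observation, "action": action,
            "reward": reward, "is_last": is_last}

def episode_return(episode):
    """Total reward accumulated over one episode's steps."""
    return sum(step["reward"] for step in episode["steps"])

episode = {
    "episode_id": 0,
    "steps": [
        make_step(observation=[0.0], action=1, reward=0.5),
        make_step(observation=[0.1], action=0, reward=1.0),
        make_step(observation=[0.2], action=1, reward=0.0, is_last=True),
    ],
}
print(episode_return(episode))  # → 1.5
```

Agreeing on a schema like this is what lets recorded interactions be shared and reused across algorithms, which is the inefficiency the excerpt is pointing at.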

Latest from Google AI – Evaluating Syntactic Abilities of Language Models

Posted by Jason Wei, AI Resident and Dan Garrette, Research Scientist, Google Research

In recent years, pre-trained language models, such as BERT and GPT-3, have seen widespread use in natural language processing (NLP). By training on large volumes of text, language models acquire broad knowledge about the world, achieving strong performance on various NLP benchmarks…
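A common way to evaluate syntactic ability, used in targeted-evaluation benchmarks of minimal pairs, is to check whether a language model assigns higher probability to a grammatical sentence than to a minimally different ungrammatical one. A stdlib sketch of that scoring, with made-up per-token log-probabilities standing in for a real model's output (whether this post uses exactly this protocol is an assumption; the excerpt is truncated before the method is described):

```python
def sentence_logprob(token_logprobs):
    """Sum per-token log-probabilities into a sentence score."""
    return sum(token_logprobs)

def prefers_grammatical(grammatical_lps, ungrammatical_lps):
    """True if the model scores the grammatical variant higher.

    Minimal-pair evaluations typically report the fraction of pairs
    for which this holds. The numbers below are hypothetical, not
    output from any actual model.
    """
    return sentence_logprob(grammatical_lps) > sentence_logprob(ungrammatical_lps)

# "The keys to the cabinet are ..." vs. "... is ..." (hypothetical scores;
# only the final token, the verb, differs between the two variants)
good = [-1.2, -0.8, -0.5, -0.3, -0.9, -1.1]
bad = [-1.2, -0.8, -0.5, -0.3, -0.9, -2.7]
print(prefers_grammatical(good, bad))  # → True
```

Because the two sentences differ in a single token, any score gap can be attributed to the syntactic contrast rather than to differences in content.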