Google AR & VR – “The Mandalorian” in AR? This is the way.

In a galaxy far, far away, the Mandalorian and the Child continue their journey, facing enemies and rallying allies in the tumultuous era after the collapse of the Galactic Empire. But you don’t need a tracking fob to explore the world of the hit STAR WARS streaming series. Google and Lucasfilm have teamed up to…

Google AR & VR – Rediscover your city through a new Lens this summer

With warmer weather upon us and many places reopening in the U.K., it’s the perfect time to go out and reconnect with your surroundings. Whether it’s soaking up that panoramic view of a city skyline that you’ve really missed, or wondering about the species of that interesting tree you pass every day on your park…

Google AR & VR – A new audio guide for our Augmented Reality Galleries

Since we launched our first Pocket Gallery in 2018, people all over the world have used the augmented reality (AR) feature to explore virtual art galleries ranging from Vermeer to Indian miniatures. With many of us missing the opportunities to explore, we have now collaborated with cultural institutions including the Jean Pigozzi Collection and J….

Latest from Google AI – Decisiveness in Imitation Learning for Robots

Posted by Pete Florence, Research Scientist and Corey Lynch, Research Engineer, Robotics at Google Despite considerable progress in robot learning over the past several years, some policies for robotic agents can still struggle to decisively choose actions when trying to imitate precise or complex behaviors. Consider a task in which a robot tries to slide…
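The "decisiveness" problem the excerpt alludes to can be illustrated with a toy example (this is a hedged sketch of the underlying issue, not the post's actual method): when demonstrations are multimodal, a mean-squared-error regression policy averages the modes and outputs an action no demonstrator ever took, while a policy that scores discretized actions and takes the argmax commits to one mode.

```python
import numpy as np

# Demonstrated actions for the same state are bimodal:
# half the demos slide left (-1.0), half slide right (+1.0).
demo_actions = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

# A mean-regression policy averages the modes and outputs ~0.0,
# an action no demonstrator ever took.
regression_action = demo_actions.mean()

# A policy that scores discretized action bins and takes the argmax
# commits to one mode instead. The score here (distance to the nearest
# demonstrated action) is a stand-in for a learned score function.
bins = np.linspace(-1.0, 1.0, 21)
scores = -np.min(np.abs(bins[:, None] - demo_actions[None, :]), axis=1)
decisive_action = bins[np.argmax(scores)]
```

Here `regression_action` is 0.0, while `decisive_action` lands on one of the actual modes.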

Latest from Google AI – RLiable: Towards Reliable Evaluation & Reporting in Reinforcement Learning

Posted by Rishabh Agarwal, Research Scientist and Pablo Samuel Castro, Staff Software Engineer, Google Research, Brain Team Reinforcement learning (RL) is an area of machine learning that focuses on learning from experiences to solve decision making tasks. While the field of RL has made great progress, resulting in impressive empirical results on complex tasks, such…
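Among the practices the post and its accompanying paper advocate are robust aggregate statistics across runs, such as the interquartile mean (IQM), rather than the plain mean or median. A minimal numpy sketch of IQM (toy scores, not from any benchmark):

```python
import numpy as np

def interquartile_mean(scores):
    """Mean of the middle 50% of scores: more robust to outliers than
    the mean, more statistically efficient than the median."""
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

# scores[i, j]: normalized score of run i on task j (toy numbers).
rng = np.random.default_rng(0)
scores = rng.normal(loc=1.0, scale=0.3, size=(5, 10))
scores[0, 0] = 10.0  # a single outlier run/task

# The IQM is far less sensitive to the single outlier than the mean.
mean_score = scores.mean()
iqm_score = interquartile_mean(scores)
```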

Latest from Google AI – MURAL: Multimodal, Multi-task Retrieval Across Languages

Posted by Aashi Jain, AI Resident and Yinfei Yang, Staff Research Scientist, Google Research For many concepts, there is no direct one-to-one translation from one language to another, and even when there is, such translations often carry different associations and connotations that are easily lost for a non-native speaker. In such cases, however, the meaning…
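MURAL learns a shared embedding space for images and text in many languages, so retrieval reduces to nearest-neighbor search by cosine similarity. A hedged toy sketch of cross-lingual retrieval in such a space (the vectors are hand-made for illustration, not produced by the model):

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy embeddings in a shared multimodal space (in a dual-encoder setup
# these would come from trained image and text encoders).
image_emb = normalize(np.array([[0.9, 0.1, 0.0, 0.1]]))
text_embs = normalize(np.array([
    [0.8,  0.2,  0.1, 0.0],   # a caption in English
    [0.85, 0.15, 0.0, 0.1],   # the same caption in another language
    [0.0,  0.1,  0.9, 0.2],   # an unrelated caption
]))

# Retrieval = nearest neighbor by cosine similarity (dot product of
# unit vectors), regardless of which language the caption is in.
sims = (image_emb @ text_embs.T).ravel()
best = int(np.argmax(sims))
```

Both translations of the matching caption score far above the unrelated one, which is the point of a shared cross-lingual space.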

Latest from Google AI – Predicting Text Selections with Federated Learning

Posted by Florian Hartmann, Software Engineer, Google Research Smart Text Selection, launched in 2017 as part of Android O, is one of Android’s most frequently used features, helping users select, copy, and use text easily and quickly by predicting the desired word or set of words around a user’s tap, and automatically expanding the selection…
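Training Smart Text Selection with federated learning follows the general federated recipe: each device trains on its own data, and only model updates (never the raw text) are sent back and aggregated. A minimal sketch of one federated-averaging round on a toy linear model (not the actual Android model or infrastructure):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """A few steps of on-device gradient descent (linear model, MSE)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data):
    """One round: each client trains locally on its private data; only
    the updated weights are returned and averaged on the server."""
    updates = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

# Simulate 5 clients whose private data share one underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_averaging(w, clients)
```

After a few rounds the averaged model recovers the shared weights without any client ever revealing its raw examples.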

Latest from Google AI – RLDS: An Ecosystem to Generate, Share, and Use Datasets in Reinforcement Learning

Posted by Sabela Ramos, Software Engineer and Léonard Hussenot, Student Researcher, Google Research, Brain Team Most reinforcement learning (RL) and sequential decision making algorithms require an agent to generate training data through large numbers of interactions with its environment to achieve optimal performance. This is highly inefficient, especially when generating those interactions is difficult, such…
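RLDS organizes RL data as a dataset of episodes, each a sequence of steps with fields such as observation, action, reward, and first/last/terminal flags. A hedged sketch of that nested structure using plain Python dicts (the real ecosystem stores it as TFDS / `tf.data` datasets, and the `agent_id` metadata field here is hypothetical):

```python
# One step of an episode, following the observation/action/reward plus
# boundary-flag layout that RLDS uses for sequential decision data.
def make_step(obs, action, reward,
              is_first=False, is_last=False, is_terminal=False):
    return {"observation": obs, "action": action, "reward": reward,
            "is_first": is_first, "is_last": is_last,
            "is_terminal": is_terminal}

episode = {
    "steps": [
        make_step(obs=0, action=1, reward=0.0, is_first=True),
        make_step(obs=1, action=0, reward=0.0),
        # The final step carries the last observation; there is no
        # action taken from it.
        make_step(obs=2, action=None, reward=1.0,
                  is_last=True, is_terminal=True),
    ],
    "metadata": {"agent_id": "demo"},  # hypothetical episode metadata
}

episode_return = sum(s["reward"] for s in episode["steps"])
```

Keeping episodes and steps explicit like this is what lets downstream consumers recompute returns, slice trajectories, or re-batch data without knowing how it was generated.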

Latest from Google AI – An Open Source Vibrotactile Haptics Platform for On-Body Applications

Posted by Artem Dementyev, Hardware Engineer, Google Research Most wearable smart devices and mobile phones have the means to communicate with the user through tactile feedback, enabling applications from simple notifications to sensory substitution for accessibility. Typically, they accomplish this using vibrotactile actuators, which are small electric vibration motors. However, designing a haptic system that…
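Vibrotactile actuators of this kind are typically driven with audio-like waveforms near their resonant frequency. As a hedged illustration (the 170 Hz frequency, sample rate, and envelope are illustrative choices, not values from the post), a short "tap" sensation can be synthesized as a decaying sine burst:

```python
import numpy as np

sample_rate = 8000          # samples per second
duration = 0.05             # 50 ms burst
t = np.arange(int(sample_rate * duration)) / sample_rate

# Exponentially decaying envelope gives a crisp tap rather than a buzz.
envelope = np.exp(-t / 0.015)
waveform = envelope * np.sin(2 * np.pi * 170.0 * t)
# `waveform` would then be sent to the actuator driver like an audio signal.
```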

Latest from Google AI – Permutation-Invariant Neural Networks for Reinforcement Learning

Posted by David Ha, Staff Research Scientist and Yujin Tang, Research Software Engineer, Google Research, Tokyo “The brain is able to use information coming from the skin as if it were coming from the eyes. We don’t see with the eyes or hear with the ears, these are just the receptors, seeing and hearing…
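The key property of a permutation-invariant network is that every input element passes through the same shared module and the results are aggregated with an order-independent operation, so shuffling the inputs leaves the output unchanged. A minimal numpy sketch of that property using shared weights and mean pooling (a simplification; the post's architecture aggregates with attention):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # shared weights applied to every input element

def permutation_invariant_forward(inputs):
    """Encode each input element with the same shared weights, then
    aggregate with an order-independent pooling (mean)."""
    encoded = np.tanh(inputs @ W)      # shape (n_elements, 4)
    return encoded.mean(axis=0)        # pooling discards ordering info

x = rng.normal(size=(5, 3))            # 5 input elements (e.g. sensor patches)
shuffled = x[rng.permutation(5)]

out_a = permutation_invariant_forward(x)
out_b = permutation_invariant_forward(shuffled)
# Shuffling the inputs does not change the output.
```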