Latest from Google AI – TRILLsson: Small, Universal Speech Representations for Paralinguistic Tasks

Posted by Joel Shor, Staff Software Engineer, Google Research. In recent years, we have seen dramatic improvements on lexical tasks such as automatic speech recognition (ASR). However, machine systems still struggle to understand paralinguistic aspects of speech, such as tone, emotion, or whether a speaker is wearing a mask. Understanding these aspects represents one of the…

Latest from MIT: 3 Questions: Fotini Christia on racial equity and data science

Fotini Christia is the Ford International Professor in the Social Sciences in the Department of Political Science, associate director of the Institute for Data, Systems, and Society (IDSS), and director of the Sociotechnical Systems Research Center (SSRC). Her research interests include issues of conflict and cooperation in the Muslim world, and she has conducted fieldwork…

Latest from MIT: A new resource for teaching responsible technology development

Understanding the broader societal context of technology is becoming ever more critical as advances in computing show no signs of slowing. As students code, experiment, and build systems, being able to ask questions and make sense of hard problems involving social and ethical responsibility is as important as the technology they’re studying and developing. To…

Latest from Google AI – Using Deep Learning to Annotate the Protein Universe

Posted by Maxwell Bileschi, Staff Software Engineer, and Lucy Colwell, Research Scientist, Google Research, Brain Team. Proteins are essential molecules found in all living things. They play a central role in our bodies’ structure and function, and they are also featured in many products that we encounter every day, from medications to household items like…

Latest from MIT: The benefits of peripheral vision for machines

Perhaps computer vision and human vision have more in common than meets the eye? Research from MIT suggests that a certain type of robust computer-vision model perceives visual representations similarly to the way humans do using peripheral vision. These models, known as adversarially robust models, are designed to overcome subtle bits of noise that have…
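The "subtle bits of noise" mentioned above are adversarial perturbations. As a rough illustration (not the MIT study's method), the classic Fast Gradient Sign Method (FGSM) shows how a tiny, targeted perturbation can flip a model's prediction; the logistic model, the `fgsm` helper, and all numbers below are hypothetical:

```python
import numpy as np

def fgsm(x, w, b, y, eps=0.3):
    """Fast Gradient Sign Method on a logistic model:
    nudge x by eps in the sign of the loss gradient,
    the direction that most increases the loss."""
    margin = y * (w @ x + b)
    # gradient of the logistic loss w.r.t. x
    grad_x = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -1.0])   # toy linear classifier
b = 0.0
x = np.array([0.3, -0.1])   # correctly classified input
y = 1                       # true label in {-1, +1}

margin_before = y * (w @ x + b)        # 0.4 > 0: correct
x_adv = fgsm(x, w, b, y, eps=0.3)
margin_after = y * (w @ x_adv + b)     # -0.2 < 0: prediction flips
```

Adversarially robust models are trained to keep their predictions stable under perturbations like `x_adv`, which is what gives them the noise tolerance the article describes.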

Latest from Google AI – Co-training Transformer with Videos and Images Improves Action Recognition

Posted by Bowen Zhang, Student Researcher, and Jiahui Yu, Senior Research Scientist, Google Research, Brain Team. Action recognition has become a major focus area for the research community because many applications can benefit from improved modeling, such as video retrieval, video captioning, video question-answering, etc. Transformer-based approaches have recently demonstrated state-of-the-art performance on several benchmarks…
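A common trick that makes image/video co-training possible is treating a still image as a one-frame video, so both data sources flow through one shared spatiotemporal backbone. A minimal shape-level sketch (the `backbone` stand-in and all shapes are hypothetical, not the paper's architecture):

```python
import numpy as np

def as_video(images):
    """Lift a batch of still images to single-frame clips:
    (B, H, W, C) -> (B, T=1, H, W, C)."""
    return images[:, None, ...]

def backbone(clips):
    """Stand-in for a shared video transformer: pool over
    time and space, leaving one embedding per clip (B, C)."""
    return clips.mean(axis=(1, 2, 3))

videos = np.random.rand(2, 8, 32, 32, 3)   # two 8-frame clips
images = np.random.rand(4, 32, 32, 3)      # four stills

emb_v = backbone(videos)            # (2, 3)
emb_i = backbone(as_video(images))  # (4, 3)
```

In a real co-training setup, separate per-dataset classification heads would sit on top of these shared embeddings, with batches alternating between the video and image datasets.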

Latest from MIT: Study examines how machine learning boosts manufacturing

Which companies deploy machine intelligence (MI) and data analytics successfully for manufacturing and operations? Why are those leading adopters so far ahead — and what can others learn from them? MIT Machine Intelligence for Manufacturing and Operations (MIMO) and McKinsey & Company have the answer, revealed in a first-of-its-kind Harvard Business Review article. The piece chronicles…

Latest from Google AI – Federated Learning with Formal Differential Privacy Guarantees

Posted by Brendan McMahan and Abhradeep Thakurta, Research Scientists, Google Research. In 2017, Google introduced federated learning (FL), an approach that enables mobile devices to collaboratively train machine learning (ML) models while keeping the raw training data on each user’s device, decoupling the ability to do ML from the need to store the data in…
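The paragraph above describes the core FL contract: raw data stays on-device and only model updates reach the server. A minimal sketch of federated averaging (FedAvg, the canonical FL algorithm) under simplified assumptions — a linear model, synthetic per-client data, and no differential-privacy noise, which the actual post adds on top:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on
    squared error. The raw data (X, y) never leaves this
    function; only updated weights are returned."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(w, clients, rounds=20):
    """Server loop: broadcast weights, collect each client's
    local update, average weighted by client dataset size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_sgd(w, X, y))
            sizes.append(len(y))
        w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)  # converges toward true_w
```

The formal privacy guarantees in the post come from clipping each client update and adding calibrated noise before averaging (DP-FTRL/DP-SGD style), which this sketch omits.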

Latest from Google AI – Constrained Reweighting for Training Deep Neural Nets with Noisy Labels

Posted by Abhishek Kumar and Ehsan Amid, Research Scientists, Google Research, Brain Team. Over the past several years, deep neural networks (DNNs) have driven impressive performance gains in several real-world applications, from image recognition to genomics. However, modern DNNs often have far more trainable model parameters than the number of training…
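Because overparameterized DNNs can memorize even mislabeled examples, a common remedy is to reweight training instances by their loss. The Google work solves a constrained optimization for these weights; the sketch below shows only the simpler underlying idea — down-weighting high-loss (likely mislabeled) examples — with a hypothetical `softmax_weights` helper and made-up loss values:

```python
import numpy as np

def softmax_weights(losses, temperature=1.0):
    """Map per-example losses to normalized training weights.
    High-loss examples (often noisy labels) get small weights;
    temperature controls how aggressively they are suppressed."""
    w = np.exp(-np.asarray(losses) / temperature)
    return w / w.sum()

# Three ordinary examples and one outlier with a very high loss,
# as a mislabeled example typically has after a few epochs.
losses = np.array([0.2, 0.3, 5.0, 0.25])
weights = softmax_weights(losses)  # outlier gets the smallest weight
```

These weights would then multiply each example's loss term in the training objective, so the suspected-noisy example contributes little to the gradient.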