Latest from Google AI – Chain-of-table: Evolving tables in the reasoning chain for table understanding

Posted by Zilong Wang, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team. People use tables every day to organize and interpret complex information in a structured, easily accessible format. Due to the ubiquity of such tables, reasoning over tabular data has long been a central topic in natural language processing (NLP). Researchers in…

Latest from MIT Tech Review – LLMs become more covertly racist with human intervention

Since their inception, it’s been clear that large language models like ChatGPT absorb racist views from the millions of pages of the internet they are trained on. Developers have responded by trying to make them less toxic. But new research suggests that those efforts, especially as models get larger, are only curbing racist views that…

Latest from MIT Tech Review – An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of the data necessary to train robots to move and reason using artificial intelligence. Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that…

Latest from Google AI – Health-specific embedding tools for dermatology and pathology

Posted by Dave Steiner, Clinical Research Scientist, Google Health, and Rory Pilgrim, Product Manager, Google Research. There’s a worldwide shortage of access to medical imaging expert interpretation across specialties including radiology, dermatology and pathology. Machine learning (ML) technology can help ease this burden by powering tools that enable doctors to interpret these images more accurately…

Latest from MIT – Researchers enhance peripheral vision in AI models

Peripheral vision enables humans to see shapes that aren’t directly in our line of sight, albeit with less detail. This ability expands our field of vision and can be helpful in many situations, such as detecting a vehicle approaching our car from the side. Unlike humans, AI does not have peripheral vision. Equipping computer vision…

Latest from Google AI – Social learning: Collaborative learning with large language models

Posted by Amirkeivan Mohtashami, Research Intern, and Florian Hartmann, Software Engineer, Google Research. Large language models (LLMs) have significantly improved the state of the art for solving tasks specified using natural language, often reaching performance close to that of people. As these models increasingly enable assistive agents, it could be beneficial for them to learn…

Latest from Google AI – Croissant: a metadata format for ML-ready datasets

Posted by Omar Benjelloun, Software Engineer, Google Research, and Peter Mattson, Software Engineer, Google Core ML, and President, MLCommons Association. Machine learning (ML) practitioners looking to reuse existing datasets to train an ML model often spend a lot of time understanding the data, making sense of its organization, or figuring out what subset to use…

Latest from MIT Tech Review – I used generative AI to turn my story into a comic—and you can too

Thirteen years ago, as an assignment for a journalism class, I wrote a stupid short story about a man who eats luxury cat food. This morning, I sat and watched as a generative AI platform called Lore Machine brought my weird words to life. I fed my story into a text box and got this…