Latest from MIT – Solving brain dynamics gives rise to flexible machine-learning models

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting…
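The teaser doesn't show the underlying math, but the "liquid" cells are driven by an input-dependent differential equation, which is what lets their dynamics adapt after training. Below is a rough, self-contained sketch of a liquid time-constant style update under that reading; the weights, sizes, and toy input stream are placeholders, not MIT's released code.

```python
# A rough sketch of a liquid time-constant (LTC) style update, in the spirit of the
# ODE-driven cells the article describes. The state follows roughly
#   dx/dt = -(1/tau + f) * x + f * A,  with f = f(x, input) a small learned gate.
# Weights, sizes, and the toy input stream below are placeholders.
import numpy as np

def ltc_step(x, u, W_x, W_u, b, tau, A, dt=0.1):
    """One semi-implicit Euler step of an LTC-style cell."""
    f = np.tanh(W_x @ x + W_u @ u + b)                # input-dependent "liquid" gate
    # Fused update: the effective time constant changes with the input, which is
    # what lets the cell adapt its dynamics on the fly.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

rng = np.random.default_rng(0)
hidden, inputs = 8, 3
x = np.zeros(hidden)
params = dict(
    W_x=0.1 * rng.normal(size=(hidden, hidden)),
    W_u=0.1 * rng.normal(size=(hidden, inputs)),
    b=np.zeros(hidden),
    tau=np.ones(hidden),
    A=np.ones(hidden),
)
for t in range(20):                                   # roll the cell over a toy input stream
    x = ltc_step(x, np.sin(0.1 * t + np.arange(inputs)), **params)
print(x.round(3))
```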

Latest from MIT Tech Review – Why we need to do a better job of measuring AI’s carbon footprint

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Lately I’ve lost a lot of sleep over climate change. It’s just over five weeks until Christmas, and last weekend in London, it was warm enough to have a pint outside without…

Latest from MIT Tech Review – We’re getting a better idea of AI’s true carbon footprint

Large language models (LLMs) have a dirty secret: they require vast amounts of energy to train and run. What’s more, it’s still a bit of a mystery exactly how big these models’ carbon footprints really are. AI startup Hugging Face believes it’s come up with a new, better way to calculate that more precisely, by…
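For a sense of what such an estimate involves, here is a back-of-the-envelope sketch: energy drawn by the hardware, scaled for datacenter overhead, multiplied by the carbon intensity of the local grid, plus any embodied (manufacturing) emissions. The numbers and parameter names are illustrative only, not Hugging Face's methodology or figures.

```python
# Back-of-the-envelope carbon accounting for a training run. All values are
# illustrative placeholders, not Hugging Face's actual methodology or numbers.
def training_footprint_kg_co2e(gpu_count, gpu_power_kw, hours, pue,
                               grid_kg_co2e_per_kwh, embodied_kg_co2e=0.0):
    energy_kwh = gpu_count * gpu_power_kw * hours * pue   # PUE = datacenter overhead factor
    operational = energy_kwh * grid_kg_co2e_per_kwh       # emissions from electricity use
    return operational + embodied_kg_co2e                 # add hardware's share if known

# Hypothetical run: 64 GPUs at 0.4 kW each, 30 days, PUE 1.2, 0.3 kgCO2e/kWh grid.
print(f"{training_footprint_kg_co2e(64, 0.4, 30 * 24, 1.2, 0.3):,.0f} kg CO2e")
```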

Latest from MIT Tech Review – Best practices for bolstering machine learning security

Nearly 75% of the world’s largest companies have already integrated AI and machine learning (ML) into their business strategies. As more and more companies — and their customers — gain increasing value from ML applications, organizations should be considering new security best practices to keep pace with the evolving technology landscape.  Companies that utilize dynamic…

Latest from Google AI – ReAct: Synergizing Reasoning and Acting in Language Models

Posted by Shunyu Yao, Student Researcher, and Yuan Cao, Research Scientist, Google Research, Brain Team Recent advances have expanded the applicability of language models (LM) to downstream tasks. On one hand, existing language models that are properly prompted, via chain-of-thought, demonstrate emergent capabilities that carry out self-conditioned reasoning traces to derive answers from questions,…
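To make the idea concrete, here is a toy loop in the spirit of ReAct-style prompting, where the model interleaves free-form "Thought:" reasoning with "Action:" tool calls and the environment feeds back "Observation:" lines. The canned llm() and lookup() functions are placeholders for illustration, not a released Google API; in practice the work is done by prompting an actual language model.

```python
# Toy ReAct-style loop: alternate Thought/Action steps from a (here, canned) model
# with Observation lines from a (here, canned) lookup tool.
CANNED_STEPS = iter([
    "Thought: I should look up when the first Moon landing happened.\n"
    "Action: Search[Apollo 11]",
    "Thought: The observation says July 20, 1969, so I can answer.\n"
    "Action: Finish[July 20, 1969]",
])

def llm(prompt: str) -> str:
    return next(CANNED_STEPS)                 # stand-in for a real language-model call

def lookup(query: str) -> str:
    return "Apollo 11 landed on the Moon on July 20, 1969."

def react_answer(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)                    # model emits a reasoning trace plus an action
        prompt += step + "\n"
        if "Action: Finish[" in step:         # the model decided it has the answer
            return step.split("Finish[", 1)[1].rstrip("]")
        if "Action: Search[" in step:         # the model asked for outside information
            query = step.split("Search[", 1)[1].rstrip("]")
            prompt += f"Observation: {lookup(query)}\n"
    return "no answer within the step budget"

print(react_answer("When did the first Moon landing happen?"))
```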

Latest from Google AI – Infinite Nature: Generating 3D Flythroughs from Still Photos

Posted by Noah Snavely and Zhengqi Li, Research Scientists, Google Research We live in a world of great natural beauty — of majestic mountains, dramatic seascapes, and serene forests. Imagine seeing this beauty as a bird does, flying past richly detailed, three-dimensional landscapes. Can computers learn to synthesize this kind of visual experience? Such a…

Latest from Google AI – Beyond Tabula Rasa: Reincarnating Reinforcement Learning

Posted by Rishabh Agarwal, Senior Research Scientist, and Max Schwarzer, Student Researcher, Google Research, Brain Team Reinforcement learning (RL) is an area of machine learning that focuses on training intelligent agents using related experiences so they can learn to solve decision making tasks, such as playing video games, flying stratospheric balloons, and designing hardware chips….

Latest from Google AI – Robots That Write Their Own Code

Posted by Jacky Liang, Research Intern, and Andy Zeng, Research Scientist, Robotics at Google A common approach used to control robots is to program them with code to detect objects, sequencing commands to move actuators, and feedback loops to specify how the robot should perform a task. While these programs can be expressive, re-programming…
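As a point of reference for the hand-written pipelines the excerpt contrasts against, here is a self-contained toy of that classic shape: detect an object, sequence actuator commands toward it, and use feedback to decide when to stop. The 1-D "robot" and its helpers are purely illustrative, not an API from the post.

```python
# Schematic hand-written control loop: perceive, command actuators, check feedback.
# The toy 1-D world and helper functions are illustrative placeholders.
def detect_object(world):
    return world["block_position"]                 # perception: where is the block?

def move_toward(robot_pos, target_pos, step=0.1):
    direction = 1.0 if target_pos > robot_pos else -1.0
    return robot_pos + direction * min(step, abs(target_pos - robot_pos))

def pick_up_block(world, tolerance=1e-3, max_iters=200):
    pos = world["robot_position"]
    for _ in range(max_iters):
        target = detect_object(world)              # re-sense on every iteration
        if abs(target - pos) < tolerance:          # feedback: close enough to grasp
            return f"grasped block at {target:.2f}"
        pos = move_toward(pos, target)             # sequence the next actuator command
    raise RuntimeError("did not reach the block")

print(pick_up_block({"robot_position": 0.0, "block_position": 1.0}))
```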

Latest from Google AI – Characterizing Emergent Phenomena in Large Language Models

Posted by Jason Wei and Yi Tay, Research Scientists, Google Research, Brain Team The field of natural language processing (NLP) has been revolutionized by language models trained on large amounts of text data. Scaling up the size of language models often leads to improved performance and sample efficiency on a range of downstream NLP tasks….

Latest from Google AI – Multi-layered Mapping of Brain Tissue via Segmentation Guided Contrastive Learning

Posted by Peter H. Li, Research Scientist, and Sven Dorkenwald, Student Researcher, Connectomics at Google Mapping the wiring and firing activity of the human brain is fundamental to deciphering how we think — how we sense the world, learn, decide, remember, and create — as well as what issues can arise in brain disease or…