Latest from MIT Tech Review – This super-realistic virtual world is a driving school for AI

Building driverless cars is a slow and expensive business. After years of effort and billions of dollars of investment, the technology is still stuck in the pilot phase. Raquel Urtasun thinks she can do better. Last year, frustrated by the pace of the industry, Urtasun left Uber, where she led the ride-hailing firm’s self-driving research…

Latest from Google AI – Machine Learning for Mechanical Ventilation Control

Posted by Daniel Suo, Software Engineer and Elad Hazan, Research Scientist, Google Research, on behalf of the Google AI Princeton Team

Mechanical ventilators provide critical support for patients who have difficulty breathing or are unable to breathe on their own. They see frequent use in scenarios ranging from routine anesthesia, to neonatal intensive care and…

Latest from Google AI – The Balloon Learning Environment

Posted by Joshua Greaves, Software Engineer and Pablo Samuel Castro, Staff Software Engineer, Google Research, Brain Team

Benchmark challenges have been a driving force in the advancement of machine learning (ML). In particular, difficult benchmark environments for reinforcement learning (RL) have been crucial for the rapid progress of the field by challenging researchers to overcome…
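For readers less familiar with RL benchmarks, the practical value of an environment like this is that it standardizes the agent-environment interaction loop that agents are trained and evaluated in. Below is a minimal sketch of that loop in the common Gym-style API; the environment ID and the random action policy are placeholders for illustration, not the Balloon Learning Environment's actual interface.

```python
# Minimal sketch of the Gym-style loop used by most RL benchmark environments.
# "Balloon-v0" is a hypothetical placeholder ID, not the real registration name.
import gym

env = gym.make("Balloon-v0")   # placeholder environment ID
obs = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()          # random policy standing in for a learned agent
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym step signature
    total_reward += reward

print(f"episode return: {total_reward:.2f}")
```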

Latest from MIT Tech Review – DeepMind’s AI can control superheated plasma inside a fusion reactor 

DeepMind’s streak of applying its world-class AI to hard science problems continues. In collaboration with the Swiss Plasma Center at EPFL—a university in Lausanne, Switzerland—the UK-based AI firm has now trained a deep reinforcement learning algorithm to control the superheated soup of matter inside a nuclear fusion reactor. The breakthrough, published in the journal Nature,…

Latest from Google AI – Good News About the Carbon Footprint of Machine Learning Training

Posted by David Patterson, Distinguished Engineer, Google Research, Brain Team

Machine learning (ML) has become prominent in information technology, which has led some to raise concerns about the associated rise in the costs of computation, primarily the carbon footprint, i.e., total greenhouse gas emissions. While these assertions rightfully elevated the discussion around carbon emissions in…

O’Reilly Media – Intelligence and Comprehension

I haven’t written much about AI recently. But a recent discussion of Google’s new Large Language Models (LLMs), and its claim that one of these models (named Gopher) has demonstrated reading comprehension approaching human performance, has spurred some thoughts about comprehension, ambiguity, intelligence, and will. (It’s well worth reading Do Large Models Understand Us, a…

Latest from MIT – Research advances technology of AI assistance for anesthesiologists

A new study by researchers at MIT and Massachusetts General Hospital (MGH) suggests the day may be approaching when advanced artificial intelligence systems could assist anesthesiologists in the operating room. In a special edition of Artificial Intelligence in Medicine, the team of neuroscientists, engineers, and physicians demonstrated a machine learning algorithm for continuously automating dosing…

Latest from Google AI – An International Scientific Challenge for the Diagnosis and Gleason Grading of Prostate Cancer

Posted by Po-Hsuan Cameron Chen, Software Engineer, Google Health and Maggie Demkin, Program Manager, Kaggle

In recent years, machine learning (ML) competitions in health have attracted ML scientists to work together to solve challenging clinical problems. These competitions provide access to relevant data and well-defined problems where experienced data scientists come to compete for solutions…

Latest from Google AI – Guiding Frozen Language Models with Learned Soft Prompts

Posted by Brian Lester, AI Resident and Noah Constant, Senior Staff Software Engineer, Google Research

Large pre-trained language models, which are continuing to grow in size, achieve state-of-the-art results on many natural language processing (NLP) benchmarks. Since the development of GPT and BERT, standard practice has been to fine-tune models on downstream tasks, which involves…
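The technique teased here, prompt tuning, keeps the pre-trained model entirely frozen and learns only a short sequence of continuous "soft prompt" embeddings that is prepended to each input. The PyTorch sketch below illustrates the idea; the class, the dimensions, and the assumption that the frozen model accepts an `inputs_embeds` argument (as Hugging Face transformer models do) are illustrative choices, not code from the paper.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepends learned soft-prompt embeddings to the input of a frozen language model.

    Assumes `frozen_lm` accepts `inputs_embeds` of shape (batch, seq_len, d_model),
    as Hugging Face transformer models do.
    """
    def __init__(self, frozen_lm, embed_layer, prompt_len=20, d_model=768):
        super().__init__()
        self.lm = frozen_lm
        self.embed = embed_layer
        # Freeze every weight of the base model, including its token embeddings.
        for p in list(self.lm.parameters()) + list(self.embed.parameters()):
            p.requires_grad = False
        # The only trainable parameters: prompt_len soft-prompt vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, input_ids):
        tok = self.embed(input_ids)                                   # (batch, seq, d_model)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        inputs = torch.cat([prompt, tok], dim=1)                      # prepend the soft prompt
        return self.lm(inputs_embeds=inputs)                          # base model stays frozen
```

Because only `soft_prompt` receives gradients, a small per-task prompt can be trained and swapped in for each downstream task while one large frozen model is shared across all of them.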

Latest from Google AI – Nested Hierarchical Transformer: Towards Accurate, Data-Efficient, and Interpretable Visual Understanding

Posted by Zizhao Zhang, Software Engineer, Google Cloud

In visual understanding, the Vision Transformer (ViT) and its variants have received significant attention recently due to their superior performance on many core visual applications, such as image classification, object detection, and video understanding. The core idea of ViT is to utilize the power of self-attention layers…
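As a rough sketch of that core idea: the image is cut into fixed-size patches, each patch is linearly projected to a token embedding, and a stack of self-attention (Transformer encoder) layers processes the resulting token sequence, with a classification head reading off a learned [CLS] token. All names and sizes below are illustrative and do not correspond to any published ViT or Nested Hierarchical Transformer configuration.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal illustration of the ViT recipe: patchify, embed, self-attend, classify."""
    def __init__(self, image_size=224, patch_size=16, d_model=192, depth=4,
                 heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding as a strided convolution: one token per patch.
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, images):                    # images: (batch, 3, H, W)
        x = self.patch_embed(images)              # (batch, d_model, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)          # (batch, num_patches, d_model)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                       # global self-attention over patch tokens
        return self.head(x[:, 0])                 # classify from the [CLS] token
```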