Latest from MIT Tech Review – These simple changes can make AI research much more energy efficient

Deep learning is behind machine learning’s most high-profile successes, such as advanced image recognition, the board game champion AlphaGo, and language models like GPT-3. But this incredible performance comes at a cost: training deep-learning models requires huge amounts of energy. Now, new research shows how scientists who use cloud platforms to train deep-learning algorithms can…

Latest from MIT: Startup lets doctors classify skin conditions with the snap of a picture

At the age of 22, when Susan Conover wanted to get a strange-looking mole checked out, she was told it would take three months to see a dermatologist. When the mole was finally removed and biopsied, doctors determined it was cancerous. At the time, no one could be sure the cancer hadn’t spread to other…

Latest from Google AI – Identifying Disfluencies in Natural Speech

Posted by Dan Walker and Dan Liebling, Software Engineers, Google Research. People don’t write in the same way that they speak. Written language is controlled and deliberate, whereas transcripts of spontaneous speech (like interviews) are hard to read because speech is disorganized and less fluent. One aspect that makes speech transcripts particularly difficult to read…
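Once a model has tagged which tokens of a transcript are disfluent, producing a readable transcript reduces to filtering on those tags. A toy illustration (the sentence and tags below are my own made-up example, not Google’s model or data):

```python
# Each token is paired with a disfluency flag predicted by some tagger
# (1 = disfluent, 0 = fluent). This example sentence is invented.
transcript = [
    ("I", 0), ("uh", 1), ("I", 1), ("want", 0), ("a", 0), ("flight", 0),
    ("to", 1), ("Boston", 1), ("uh", 1), ("I", 1), ("mean", 1),
    ("to", 0), ("Denver", 0),
]

# Dropping the flagged tokens yields the cleaned-up transcript.
cleaned = " ".join(tok for tok, disfluent in transcript if not disfluent)
print(cleaned)  # I want a flight to Denver
```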

Latest from Google AI – Minerva: Solving Quantitative Reasoning Problems with Language Models

Posted by Ethan Dyer and Guy Gur-Ari, Research Scientists, Google Research, Blueshift Team. Language models have demonstrated remarkable performance on a variety of natural language tasks — indeed, a general lesson from many works, including BERT, GPT-3, Gopher, and PaLM, has been that neural networks trained on diverse data at large scale in an unsupervised way…

Latest from MIT: Building explainability into the components of machine-learning models

Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction. But if…

Latest from MIT: Exploring emerging topics in artificial intelligence policy

Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies. The virtual event, hosted by the AI Policy Forum (AIPF) — an undertaking by the MIT Schwarzman College of Computing to…

Latest from MIT Tech Review – Materials with nanoscale components will change what’s possible

In the 24 years I’ve worked as a materials scientist, I’ve always been inspired by hierarchical patterns found in nature that repeat all the way down to the molecular level. Such patterns induce remarkable properties: they strengthen our bones without making them heavy, give butterfly wings their color, and make spiderweb silk both durable and…

Latest from MIT Tech Review – AI’s progress isn’t the same as creating human intelligence in machines

The term “artificial intelligence” really has two meanings. AI refers both to the fundamental scientific quest to build human intelligence into computers and to the work of modeling massive amounts of data. These two endeavors are very different, both in their ambitions and in the amount of progress they have made in recent years. Scientific…

Latest from MIT: Taking the guesswork out of dental care with artificial intelligence

When you picture a hospital radiologist, you might think of a specialist who sits in a dark room and spends hours poring over X-rays to make diagnoses. Contrast that with your dentist, who in addition to interpreting X-rays must also perform surgery, manage staff, communicate with patients, and run their business. When dentists analyze X-rays,…

UC Berkeley – FIGS: Attaining XGBoost-level performance with the interpretability and speed of CART

FIGS (Fast Interpretable Greedy-tree Sums): a method for building interpretable models by simultaneously growing an ensemble of decision trees in competition with one another. Recent machine-learning advances have led to increasingly complex predictive models, often at the cost of interpretability. We often need interpretability, particularly in high-stakes applications such as clinical decision-making; interpretable models…
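The reference implementation of FIGS ships in the authors’ open-source `imodels` package; the sketch below is only a stripped-down, regression-only toy of the core idea, with all helper names my own invention rather than the paper’s API. At each step, every leaf of every tree (plus the root of a potential new tree) competes for the single best split, and each tree is fit to the residuals left over by the other trees, so the final prediction is a sum of small trees:

```python
# Toy FIGS-style greedy tree-sum regressor (illustration only; not the
# authors' implementation). Trees are nested dicts; the ensemble
# prediction is the sum of the individual tree predictions.

def leaf(idx, value):
    return {"idx": idx, "value": value, "split": None}

def predict_tree(node, x):
    while node["split"] is not None:
        f, t, left, right = node["split"]
        node = left if x[f] <= t else right
    return node["value"]

def predict(trees, x):
    return sum(predict_tree(tr, x) for tr in trees)

def leaves(node):
    if node["split"] is None:
        return [node]
    _, _, l, r = node["split"]
    return leaves(l) + leaves(r)

def sse(vals):  # sum of squared errors around the mean
    if not vals:
        return 0.0
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def best_split(X, r, idx):
    """Best (gain, feature, thresh, left_idx, right_idx) on residuals r."""
    base, best = sse([r[i] for i in idx]), None
    for f in range(len(X[0])):
        for t in sorted({X[i][f] for i in idx}):
            li = [i for i in idx if X[i][f] <= t]
            ri = [i for i in idx if X[i][f] > t]
            if not li or not ri:
                continue
            gain = base - sse([r[i] for i in li]) - sse([r[i] for i in ri])
            if best is None or gain > best[0]:
                best = (gain, f, t, li, ri)
    return best

def fit_figs(X, y, max_splits=5):
    trees = []
    for _ in range(max_splits):
        cands = []
        # All leaves of all trees, plus a fresh tree (None), compete.
        for k, tree in enumerate(trees + [None]):
            others = [t for j, t in enumerate(trees) if j != k]
            r = [y[i] - predict(others, X[i]) for i in range(len(y))]
            if tree is None:
                s = best_split(X, r, list(range(len(y))))
                if s:
                    cands.append((s[0], None, None, s, r))
            else:
                for lf in leaves(tree):
                    s = best_split(X, r, lf["idx"])
                    if s:
                        cands.append((s[0], tree, lf, s, r))
        if not cands:
            break
        gain, tree, lf, (_g, f, t, li, ri), r = max(cands, key=lambda c: c[0])
        mean = lambda idx: sum(r[i] for i in idx) / len(idx)
        if tree is None:  # start a new tree as a stump on the residuals
            root = leaf(list(range(len(y))), 0.0)
            root["split"] = (f, t, leaf(li, mean(li)), leaf(ri, mean(ri)))
            trees.append(root)
        else:             # split an existing leaf in place
            lf["split"] = (f, t, leaf(li, mean(li)), leaf(ri, mean(ri)))
        # Refit every tree's leaf values to the other trees' residuals.
        for j, tr in enumerate(trees):
            others = [t2 for m, t2 in enumerate(trees) if m != j]
            rr = [y[i] - predict(others, X[i]) for i in range(len(y))]
            for lf2 in leaves(tr):
                lf2["value"] = sum(rr[i] for i in lf2["idx"]) / len(lf2["idx"])
    return trees
```

Because every split must win a global competition against all other candidate splits, the ensemble stays small and each tree remains individually readable, which is the interpretability claim behind FIGS.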