Latest from MIT: Researchers reduce bias in AI models while preserving or improving accuracy

Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on. For instance, a model that predicts the best treatment option for someone with a chronic disease may be trained using a dataset that contains mostly male patients. That model might make incorrect predictions…
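As a toy illustration of the failure mode described above (an assumption-laden sketch, not the researchers' method), the snippet below trains a classifier on data dominated by one group and then measures accuracy per group; because the underrepresented group's feature-label relationship differs, its accuracy lags:

```python
# Sketch: per-group accuracy gap caused by underrepresentation in training data.
# Group sizes, feature counts, and the `shift` parameter are all illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # The label depends on feature 1 more strongly in the shifted group.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_maj, y_maj = make_group(2000, shift=0.0)  # well-represented group
X_min, y_min = make_group(100, shift=1.5)   # underrepresented group

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh samples from each group: the minority group scores lower
# because the model mostly learned the majority group's feature-label pattern.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```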

Latest from MIT: Study: Some language reward models exhibit political bias

Large language models (LLMs) that drive generative artificial intelligence apps, such as ChatGPT, have been proliferating at lightning speed and have improved to the point that it is often impossible to distinguish text written by generative AI from text composed by a human. However, these models can also sometimes generate false statements or display political bias…

O’Reilly Media – Generative Logic

Alibaba’s latest model, QwQ-32B-Preview, has gained some impressive reviews for its reasoning abilities. Like OpenAI’s o1, its training has emphasized reasoning rather than just reproducing language. That seemed like something worth testing out—or at least playing around with—so when I heard that it very quickly became available in Ollama and wasn’t too large to…
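For anyone who wants to follow along, here is a minimal sketch of querying a model served locally by Ollama through its HTTP API. The model tag "qwq" is an assumption; check `ollama list` for the exact name on your install:

```python
# Sketch: one non-streaming generation request to a local Ollama server.
# The model tag "qwq" is assumed; substitute whatever `ollama list` shows.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    json={
        "model": "qwq",
        "prompt": "How many 'r's are in the word strawberry? Think step by step.",
        "stream": False,                      # return a single JSON object
    },
    timeout=600,                              # a 32B model can be slow on modest hardware
)
resp.raise_for_status()
print(resp.json()["response"])
```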

Latest from MIT Tech Review – AI’s hype and antitrust problem is coming under scrutiny

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. The AI sector is plagued by a lack of competition and a lot of deceit—or at least that’s one way to interpret the latest flurry of actions taken in Washington.  Last…

Latest from MIT Tech Review – We saw a demo of the new AI system powering Anduril’s vision for war

One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI. I went there to witness a new system it’s expanding today, which allows external parties to tap into its…

Latest from MIT: Enabling AI to explain its predictions in plain language

Machine-learning models can make mistakes and be difficult to use, so scientists have developed explanation methods to help users understand when and how they should trust a model’s predictions. These explanations are often complex, however, perhaps containing information about hundreds of model features. And they are sometimes presented as multifaceted visualizations that can be difficult…
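As a rough sketch of the distillation problem (not the approach in the MIT work), the snippet below compresses a many-feature explanation into one plain-language sentence by verbalizing only the most important feature, using scikit-learn's permutation importance:

```python
# Sketch: turn a feature-attribution explanation into a single sentence.
# Dataset, model, and the one-feature summary are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Rank all features by how much shuffling each one degrades accuracy.
result = permutation_importance(
    model, data.data, data.target, n_repeats=5, random_state=0
)
top = result.importances_mean.argmax()

# Verbalize only the top feature instead of showing hundreds of scores.
print(
    f"The model's predictions depend most on '{data.feature_names[top]}' "
    f"(permutation importance {result.importances_mean[top]:.3f})."
)
```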

Latest from MIT: Daniela Rus wins John Scott Award

Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory and MIT professor of electrical engineering and computer science, was recently named a co-recipient of the 2024 John Scott Award by the board of directors of City Trusts. This prestigious honor, steeped in historical significance, celebrates scientific innovation at the very location where American…

Latest from MIT Tech Review – How to use Sora, OpenAI’s new video generating tool

MIT Technology Review’s How To series helps you get things done.  Today, OpenAI released its video generation model Sora to the public. The announcement comes on the fifth day of the company’s “shipmas” event, a 12-day marathon of tech releases and demos. Here’s what you should know—and how you can use the video model right now. What…

Latest from MIT: Citation tool offers a new approach to trustworthy AI-generated content

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know whether a particular statement is factual,…
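To make the verification problem concrete, here is a toy baseline (assumed data, and far simpler than a real citation tool): match a generated claim against candidate source passages by TF-IDF similarity and surface the closest passage as a candidate citation:

```python
# Sketch: naive claim-to-source matching as a stand-in for citation tooling.
# The sources and claim are made-up examples for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "Honey never spoils if stored properly.",
]
claim = "The Eiffel Tower opened in 1889."

# Fit one vocabulary over sources plus the claim, then compare vectors.
vec = TfidfVectorizer().fit(sources + [claim])
sims = cosine_similarity(vec.transform([claim]), vec.transform(sources))[0]
best = sims.argmax()
print(f"Closest supporting source (similarity {sims[best]:.2f}): {sources[best]}")
```

A real system would go further, checking whether the source actually entails the claim rather than merely resembling it, but the sketch shows the basic retrieval step.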