Latest from MIT : Enabling AI to explain its predictions in plain language

Machine-learning models can make mistakes and be difficult to use, so scientists have developed explanation methods to help users understand when and how they should trust a model’s predictions. These explanations are often complex, however, perhaps containing information about hundreds of model features. And they are sometimes presented as multifaceted visualizations that can be difficult…

Latest from MIT : Daniela Rus wins John Scott Award

Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory and MIT professor of electrical engineering and computer science, was recently named a co-recipient of the 2024 John Scott Award by the board of directors of City Trusts. This prestigious honor, steeped in historical significance, celebrates scientific innovation at the very location where American…

Latest from MIT Tech Review – How to use Sora, OpenAI’s new video generating tool

MIT Technology Review’s How To series helps you get things done. Today, OpenAI released its video generation model Sora to the public. The announcement comes on the fifth day of the company’s “shipmas” event, a 12-day marathon of tech releases and demos. Here’s what you should know—and how you can use the video model right now. What…

Latest from MIT : Citation tool offers a new approach to trustworthy AI-generated content

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know if a particular statement is factual,…

Latest from MIT : What do we know about the economics of AI?

For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. There is massive investment in AI but little clarity about what it will produce. Examining AI has become a significant part of Nobel-winning economist Daron Acemoglu’s work. An Institute Professor at MIT, Acemoglu has long studied the impact of technology…

Latest from MIT : Study: Browsing negative content online makes mental health struggles worse

People struggling with their mental health are more likely to browse negative content online, and in turn, that negative content makes their symptoms worse, according to a series of studies by researchers at MIT. The group behind the research has developed a web plug-in tool to help those looking to protect their mental health make…

Latest from MIT Tech Review – The US Department of Defense is investing in deepfake detection

The US Department of Defense has invested $2.4 million over two years in deepfake detection technology from a startup called Hive AI. It’s the first contract of its kind for the DOD’s Defense Innovation Unit, which accelerates the adoption of new technologies for the US defense sector. Hive AI’s models are capable of detecting AI-generated…

Latest from MIT : Want to design the car of the future? Here are 8,000 designs to get you started.

Car design is an iterative and proprietary process. Carmakers can spend several years on the design phase for a car, tweaking 3D forms in simulations before building out the most promising designs for physical testing. The details and specs of these tests, including the aerodynamics of a given car design, are typically not made public…

Latest from MIT : MIT delegation mainstreams biodiversity conservation at the UN Biodiversity Convention, COP16

For the first time, MIT sent an organized engagement to the global Conference of the Parties for the Convention on Biological Diversity, which this year was held Oct. 21 to Nov. 1 in Cali, Colombia. The 10 delegates to COP16 included faculty, researchers, and students from the MIT Environmental Solutions Initiative (ESI), the Department of…

Latest from MIT Tech Review – OpenAI’s new defense contract completes its military pivot

At the start of 2024, OpenAI’s rules for how armed forces might use its technology were unambiguous. The company prohibited anyone from using its models for “weapons development” or “military and warfare.” That changed on January 10, when The Intercept reported that OpenAI had softened those restrictions, forbidding anyone from using the technology to “harm…