EmTech Next, MIT Technology Review’s signature digital transformation conference, takes place June 13-15, 2023. This year’s event examines the game-changing power of generative AI: the technology itself and the legal implications of generated content. Leaders from OpenAI, Google, Meta, NVIDIA, and more are expected to discuss the future of AI.
Similar Posts
Latest from Google AI – HEAL: A framework for health equity assessment of machine learning performance
Posted by Mike Schaekermann, Research Scientist, Google Research, and Ivor Horn, Chief Health Equity Officer & Director, Google Core
Health equity is a major societal concern worldwide, with disparities having many causes. These sources include limitations in access to healthcare, differences in clinical treatment, and even fundamental differences in the diagnostic technology. In dermatology for…
O’Reilly Media – Whistle-Blowing Models
Anthropic released news that its models have attempted to contact the police or take other action when asked to do something that might be illegal. The company has also conducted experiments in which Claude threatened to blackmail a user who was planning to turn it off. As far as I can tell, this…
Latest from Google AI – Formation of Robust Bound States of Interacting Photons
Posted by Alexis Morvan and Trond Andersen, Research Scientists, Google Quantum AI
When quantum computers were first proposed, they were hoped to be a way to better understand the quantum world. With a so-called “quantum simulator,” one could engineer a quantum computer to investigate how various quantum phenomena arise, including those that are intractable to…
Latest from MIT: MIT researchers develop an efficient way to train more reliable AI agents
Fields ranging from robotics to medicine to political science are attempting to train AI systems to make meaningful decisions of all kinds. For example, using an AI system to intelligently control traffic in a congested city could help motorists reach their destinations faster while improving safety and sustainability. Unfortunately, teaching an AI system to make…
Latest from MIT: Study: Transparency is often lacking in datasets used to train large language models
To train more powerful large language models, researchers use vast dataset collections that blend diverse data from thousands of web sources. But as these datasets are combined and recombined into multiple collections, important information about their origins, and restrictions on how they can be used, is often lost or confounded in the shuffle…
Latest from MIT Tech Review – What’s next for AI in 2024
This time last year we did something reckless. In an industry where nothing stands still, we had a go at predicting the future. How did we do? Our four big bets for 2023 were that the next big thing in chatbots would be multimodal (check: the most powerful large language models out there, OpenAI’s GPT-4…
