Latest from MIT : MIT-derived algorithm helps forecast the frequency of extreme weather

To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston. To estimate…

O’Reilly Media – ChatGPT, Author of The Quixote

TL;DR: LLMs and other GenAI models can reproduce significant chunks of training data. Specific prompts seem to “unlock” training data. We have many current and future copyright challenges: training may not infringe copyright, but legal doesn’t mean legitimate—we consider the analogy of MegaFace where surveillance models have been trained on photos of minors, for example, without informed…
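
One way to make the “reproduce significant chunks of training data” claim concrete is to measure how much of a model’s output overlaps verbatim with a known source text. The snippet below is a minimal sketch of such an overlap check; the example strings are public-domain placeholders, and the function name is hypothetical, not anything from the O’Reilly piece.

```python
# Minimal sketch: measure verbatim overlap between a model's output and a
# reference passage, as a rough proxy for "reproducing training data".
# The strings below are hypothetical placeholders, not real model output.

def longest_verbatim_overlap(generated: str, reference: str) -> str:
    """Return the longest contiguous word sequence appearing in both texts."""
    gen_words = generated.split()
    ref_text = " " + " ".join(reference.split()) + " "
    best = ""
    for i in range(len(gen_words)):
        for j in range(i + 1, len(gen_words) + 1):
            candidate = " ".join(gen_words[i:j])
            if f" {candidate} " in ref_text and len(candidate) > len(best):
                best = candidate
    return best

if __name__ == "__main__":
    model_output = ("It is a truth universally acknowledged, that a single man "
                    "in possession of a good fortune, must be in want of a wife.")
    source_passage = ("It is a truth universally acknowledged, that a single man "
                      "in possession of a good fortune, must be in want of a wife.")
    overlap = longest_verbatim_overlap(model_output, source_passage)
    print(f"Longest verbatim overlap: {len(overlap.split())} words")
```

A long shared span like this is what the article means by a prompt “unlocking” memorized training text, as opposed to incidental short phrase matches.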

Latest from MIT Tech Review – Meet the MIT Technology Review AI team in London

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. The UK is home to AI powerhouse Google DeepMind, a slew of exciting AI startups, and some of the world’s best universities. It’s also where I live, along with quite a…

Latest from MIT Tech Review – How Adobe’s bet on non-exploitative AI is paying off

Since the beginning of the generative AI boom, there has been a fight over how large AI models are trained. In one camp sit tech companies such as OpenAI that have claimed it is “impossible” to train AI without hoovering up copyrighted data from the internet. And in the other camp are artists who argue that…

Latest from MIT Tech Review – The tech industry can’t agree on what open source AI means. That’s a problem.

Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models. Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions.  But there’s a fundamental problem—no one can…

Latest from MIT : Engineering household robots to have a little common sense

From wiping up spills to serving up food, robots are being taught to carry out increasingly complicated household tasks. Many such home-bot trainees are learning through imitation; they are programmed to copy the motions that a human physically guides them through. It turns out that robots are excellent mimics. But unless engineers also program them…
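
The imitation setup described here is usually framed as behavior cloning: record states and actions from a human-guided demonstration, then fit a supervised model that maps states to actions. The sketch below illustrates that idea with a tiny linear policy; the data, dimensions, and policy form are hypothetical placeholders, not details of the MIT work.

```python
import numpy as np

# Minimal behavior-cloning sketch: fit a policy that maps observed robot
# states to demonstrated actions. The demonstration data below is random
# placeholder data, not from the MIT research summarized above.

rng = np.random.default_rng(0)

# Hypothetical demonstrations: 500 timesteps, 6-D state (e.g. joint angles),
# 3-D action (e.g. an end-effector velocity command).
states = rng.normal(size=(500, 6))
true_policy = rng.normal(size=(6, 3))
actions = states @ true_policy + 0.01 * rng.normal(size=(500, 3))

# Behavior cloning as ordinary least squares: actions ~= states @ W.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state: np.ndarray) -> np.ndarray:
    """Predict an action for a new state by imitating the demonstrations."""
    return state @ W

print(policy(states[0]))   # imitated action
print(actions[0])          # demonstrated action
```

A pure mimic like this has no notion of what to do outside the demonstrated situations, which is the gap the “common sense” work above is aimed at.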

Latest from MIT : Large language models use a surprisingly simple mechanism to retrieve some stored knowledge

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work. In an effort to better understand what is going…

Latest from MIT Tech Review – Apple researchers explore dropping “Siri” phrase & listening with AI instead

Researchers from Apple are probing whether it’s possible to use artificial intelligence to detect when a user is speaking to a device like an iPhone, thereby eliminating the technical need for a trigger phrase like “Siri,” according to a paper published on Friday. In a study, which was uploaded to arXiv and has not been…
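
At its core, the problem can be framed as binary classification: decide whether an utterance is addressed to the device or not. The sketch below shows only that framing with a toy logistic-regression classifier over made-up audio embeddings; the features, labels, and model are hypothetical and are not Apple’s actual approach.

```python
import numpy as np

# Task-framing sketch only: "is this utterance addressed to the device?"
# as binary classification over per-utterance audio feature vectors.
# All data and the model here are hypothetical placeholders.

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 32))              # hypothetical audio embeddings
w_true = rng.normal(size=32)
y = (X @ w_true + 0.5 * rng.normal(size=1000) > 0).astype(float)

# Logistic regression fitted by plain gradient descent.
w = np.zeros(32)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

def is_device_directed(embedding: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the classifier thinks the speech targets the device."""
    prob = 1.0 / (1.0 + np.exp(-(embedding @ w)))
    return prob > threshold

print(is_device_directed(X[0]), bool(y[0]))
```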

Latest from MIT : AI generates high-quality images 30 times faster in a single step

In our current age of artificial intelligence, computers can generate their own “art” by way of diffusion models, iteratively adding structure to a noisy initial state until a clear image or video emerges. Diffusion models have suddenly grabbed a seat at everyone’s table: Enter a few words and experience instantaneous, dopamine-spiking dreamscapes at the intersection…
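
The “iteratively adding structure to a noisy initial state” description corresponds to the reverse diffusion, or denoising, loop. Below is a minimal DDPM-style sampling sketch; `predict_noise` is a hypothetical stand-in for a trained noise-prediction network, and the schedule values are illustrative only.

```python
import numpy as np

# Minimal sketch of the iterative denoising loop behind diffusion sampling.
# `predict_noise` is a placeholder for a trained noise-prediction network;
# a real model would replace it. The noise schedule is illustrative.

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x: np.ndarray, t: int) -> np.ndarray:
    """Placeholder for a trained epsilon-prediction network."""
    return np.zeros_like(x)

def sample(shape=(3, 64, 64), rng=np.random.default_rng(0)) -> np.ndarray:
    x = rng.normal(size=shape)            # start from pure noise
    for t in reversed(range(T)):          # iteratively add structure
        eps = predict_noise(x, t)
        # DDPM update: subtract the predicted noise, rescale the sample.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                         # inject fresh noise except at the final step
            x += np.sqrt(betas[t]) * rng.normal(size=shape)
    return x

image = sample()
print(image.shape)
```

The MIT result summarized above is about collapsing this many-step loop into a single generation step; the sketch shows only the conventional iterative baseline it accelerates.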

UC Berkeley – Generating 3D Molecular Conformers via Equivariant Coarse-Graining and Aggregated Attention

Figure 1: CoarsenConf architecture. (I) The encoder $q_\phi(z \mid X, \mathcal{R})$ takes the fine-grained (FG) ground truth conformer $X$, RDKit approximate conformer $\mathcal{R}$, and coarse-grained (CG) conformer $\mathcal{C}$ as inputs (derived from $X$ and a predefined CG strategy), and outputs a variable-length equivariant CG representation via equivariant message passing and point convolutions. (II)…
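
To make the data flow in the caption concrete, the sketch below shows only the encoder’s inputs and output shape: the FG conformer $X$, the RDKit conformer $\mathcal{R}$, and the CG conformer $\mathcal{C}$ derived from $X$ by a predefined CG strategy. The mean-pooling operations and shapes are hypothetical placeholders; the real CoarsenConf encoder uses equivariant message passing and point convolutions, which are not reproduced here.

```python
import numpy as np

# Data-flow sketch only: inputs named in the caption (FG conformer X,
# RDKit conformer R, CG conformer C) and a variable-length per-bead output.
# The pooling "encoder" below is a placeholder, not CoarsenConf's network.

def coarse_grain(X: np.ndarray, cg_map: list) -> np.ndarray:
    """Derive a CG conformer C from the FG conformer X and a predefined
    CG strategy (here: groups of atom indices averaged into beads)."""
    return np.stack([X[idx].mean(axis=0) for idx in cg_map])

def encoder(X: np.ndarray, R: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Stand-in for q_phi(z | X, R): returns one latent row per CG bead,
    so the representation is variable-length (it scales with len(C))."""
    n_beads = C.shape[0]
    # Placeholder latent: bead coordinates concatenated with a pooled
    # summary of the FG and RDKit conformers.
    summary = np.concatenate([X.mean(axis=0), R.mean(axis=0)])
    return np.hstack([C, np.tile(summary, (n_beads, 1))])

# Hypothetical 9-atom molecule split into 3 CG beads.
rng = np.random.default_rng(0)
X = rng.normal(size=(9, 3))                 # ground-truth FG conformer
R = X + 0.1 * rng.normal(size=(9, 3))       # RDKit approximate conformer
cg_map = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]  # predefined CG strategy
C = coarse_grain(X, cg_map)
z = encoder(X, R, C)
print(z.shape)  # (3, 9): one latent row per CG bead
```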