Programs like AlphaZero and GPT-3 are massive accomplishments: each represents years of sustained work on a hard problem. But those problems sit squarely within the domain of traditional AI. Playing chess and Go and building ever-better language models have been AI projects for decades. The following projects have a different flavor:

- In February, PLOS Genetics published an article by researchers who are using GANs (Generative Adversarial Networks) to create artificial human genomes (a minimal sketch of the adversarial setup follows this list).
- Another group of researchers published an article about using NLP (natural language processing) to analyze viral genomes and, specifically, to predict the behavior of mutations. They were able to distinguish between errors in “syntax” (which make the gene non-viable) and changes in “semantics” (which result in a viable virus that functions differently).
- Yet another group of researchers modeled a small portion of a fruit fly’s brain (the part used for smell) and trained it as a model for natural language processing. This new model appears to be orders of magnitude more efficient than state-of-the-art models like GPT-3.
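
If you haven’t seen a GAN before, the core trick is a two-player training loop: a generator fabricates samples from random noise while a discriminator learns to tell them from real data. Here is a minimal sketch, assuming PyTorch; the toy 1,000-site genome encoding, the network sizes, and the learning rates are my own illustrative choices, not details from the PLOS Genetics paper.

```python
# Minimal GAN training step (illustrative only). Each "genome" is a toy
# vector of 1,000 biallelic SNPs encoded as 0/1; real panels are far larger.
import torch
import torch.nn as nn

N_SNPS = 1000   # toy genome length (hypothetical; real panels have many more sites)
LATENT = 128    # dimension of the noise vector fed to the generator

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, N_SNPS), nn.Sigmoid(),   # per-site allele probabilities
)
discriminator = nn.Sequential(
    nn.Linear(N_SNPS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                      # real-vs-synthetic logit
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(real_batch):
    """One adversarial update: discriminator first, then generator."""
    batch = real_batch.size(0)
    fake = generator(torch.randn(batch, LATENT))

    # Discriminator: label real genomes 1, synthetic genomes 0.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(real_batch), torch.ones(batch, 1))
              + bce(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the just-updated discriminator call its output real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Usage: real_batch would be a float tensor of shape (batch, N_SNPS) drawn
# from an actual genotype panel, e.g. a public dataset like 1000 Genomes.
```

Trained on a real genotype panel, the loop ends (ideally) with a generator whose output the discriminator can no longer distinguish from real genomes; the published work builds considerable machinery on top of this basic setup.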

The common thread through these advances is applying work from one field to another, apparently unrelated area, not sustained research at cracking a core AI problem. Using NLP to analyze mutations? That’s brilliant, and it’s one of those ideas that sounds obvious once you’ve thought of it. It’s also an area where NLP may have a significant advantage precisely because it doesn’t actually understand language, any more than humans understand DNA.
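
To make the syntax/semantics distinction concrete, here is a hedged sketch of one way such scoring can work: the language model’s probability for a mutated sequence stands in for grammaticality (is the mutant viable?), and the shift in the model’s embedding stands in for semantic change (does the mutant behave differently?). The tiny untrained LSTM below is a stand-in of my own, not the model from the paper, and the sequences are toys.

```python
# Illustrative syntax-vs-semantics scoring for protein mutations.
import torch
import torch.nn as nn
import torch.nn.functional as F

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

class TinyLM(nn.Module):
    """Untrained stand-in for a language model trained on viral sequences."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, len(VOCAB))

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden), hidden  # next-token logits, per-position embeddings

def encode(seq):
    return torch.tensor([[VOCAB[aa] for aa in seq]])

def score_mutant(model, wild_type, mutant):
    with torch.no_grad():
        _, emb_wt = model(encode(wild_type))
        logits_mut, emb_mut = model(encode(mutant))
    tokens = encode(mutant)
    # "Syntax": mean log-probability the model assigns to the mutant's tokens.
    # A very unlikely sequence is ungrammatical -- probably not viable.
    log_probs = F.log_softmax(logits_mut[:, :-1], dim=-1)
    grammaticality = log_probs.gather(-1, tokens[:, 1:].unsqueeze(-1)).mean().item()
    # "Semantics": how far the mutant's embedding moved from the wild type's.
    # A big shift suggests a viable virus that behaves differently.
    semantic_change = (emb_wt.mean(dim=1) - emb_mut.mean(dim=1)).norm().item()
    return grammaticality, semantic_change

model = TinyLM()
print(score_mutant(model, "MKTAYIAKQR", "MKTAYIVKQR"))  # toy wild type vs. point mutant
```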

The ability to create artificial human genomes is important in the short term because the human genome data available to researchers is limited by privacy laws. Synthetic genomes aren’t subject to privacy laws because they don’t belong to any person. Data limitation isn’t a new problem either; AI researchers frequently struggle to find enough data to train a model, and they have developed many techniques for generating “synthetic” data: for example, cropping, rotating, or distorting pictures to get more data for image recognition. Once you realize that it’s possible to create synthetic data, the jump to creating synthetic genomes isn’t far-fetched; you just have to make the connection. The more important question is where this leads in the long term.
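
Here is what that augmentation trick looks like in practice, in a short sketch that assumes torchvision (any image library offers equivalents); the file name and parameter values are hypothetical.

```python
# Generating "synthetic" training images by randomly transforming one photo.
from PIL import Image
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(224),                      # random crop, resized to 224x224
    T.RandomHorizontalFlip(),                      # mirror half the time
    T.RandomRotation(degrees=15),                  # rotate up to +/-15 degrees
    T.ColorJitter(brightness=0.2, contrast=0.2),   # mild distortion
])

image = Image.open("cat.jpg")                      # hypothetical input file
synthetic_examples = [augment(image) for _ in range(10)]  # 10 variants of one photo
```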


It’s not hard to come up with more examples of surprising work that comes from bringing techniques from one field into another. DALL-E, which combines NLP with image generation to create new images from text descriptions, is one. So is ShadowSense, which uses image analysis to let robots determine when they are touched.

These results suggest that we’re at the start of something new. The world isn’t a better place because computers can play Go, but it may become a better place if we can understand how our genomes work. Using adversarial techniques outside of game play, or NLP techniques outside of language, will inevitably lead to solving the problems we actually need to solve.

Unfortunately, that’s really only half the story. While we may be on the edge of making great advances in applications, we aren’t making the same advances in fairness and justice. Here are some key indicators:

- Attempts to train models to predict the pain that Black patients will suffer as a result of medical procedures have largely failed. Researchers recently discovered that the models were more successful if they got their training data by actually listening to Black patients, rather than just using records from their doctors.
- A study by MIT discovered that training predictive crime models on crime reports rather than arrests doesn’t make them less racist.

Fortunately, the doctors modeling medical pain decided to listen to their Black patients; unfortunately, that kind of listening is still rare. Listening to Black patients shouldn’t be a breakthrough akin to using NLP to analyze DNA. Why weren’t we listening to the patients in the first place? And why are the patients’ assessments of their pain so different from their doctors’? This is clearly progress, but more than that, it’s a sign of how much progress has yet to be made in treating minorities fairly.


And I’m afraid that MIT has only discovered something we already knew: there are no historical data sources about crime that aren’t biased. If you look at so-called “white collar” crime, Midtown Manhattan is the most dangerous neighborhood in New York, but that’s not where the police are spending their time. The only-somewhat-tongue-in-cheek paper accompanying the map of White Collar Crime Risk Zones suggests that the next step will be using “facial features to quantify the ‘criminality’ of the individual.” That would clearly be a joke if such techniques weren’t already under development, and not just in China.

It looks like we’re at the cusp of some breakthroughs in AI: not new algorithms or approaches, but new ways to use the algorithms we already have. But the more things change, the more they stay the same. Our ability to think about our ethical responsibilities, and more specifically to put in place mechanisms that redress harms caused by unfair decisions, is slow to catch up.
