It’s hard to ignore the discussion around the Open Letter arguing for a pause in the development of advanced AI systems. Are they dangerous? Will they destroy humanity? Will they condemn all but a few of us to boring, impoverished lives? If these are indeed the dangers we face, pausing AI development for six months is certainly a weak and ineffective preventive.

It’s easier to ignore the voices arguing for the responsible use of AI. Using AI responsibly requires that AI be transparent, fair, and, where possible, explainable. It means auditing the outputs of AI systems to ensure that they’re fair; it means documenting the behaviors of AI models and training data sets so that users know how the data was collected and what biases are inherent in that data. It means monitoring systems after they’re deployed, updating and tuning them as needed, because any model will eventually grow “stale” and start performing badly. It means designing systems that augment and liberate human capabilities rather than replacing them. And it means understanding that humans are accountable for the results of AI systems; “that’s what the computer did” doesn’t cut it.
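To make the auditing and monitoring requirements a little more concrete, here is a minimal sketch, in Python, of two checks a responsible deployment process might run: a demographic-parity audit of a model’s decisions and a simple drift alert for a deployed model. The function names, group labels, and thresholds are illustrative assumptions, not any standard tooling.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Compare the model's positive-decision rate across groups.

    `records` is an iterable of (group, decision) pairs, where decision is
    1 (favorable) or 0 (unfavorable). Returns the gap between the highest
    and lowest per-group positive rates, along with the per-group rates.
    A large gap is a signal to investigate, not proof of unfairness.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

def drift_alert(baseline_rate, recent_decisions, tolerance=0.05):
    """Flag a deployed model whose positive rate has moved away from the
    rate measured when the model was approved for use."""
    recent_rate = sum(recent_decisions) / len(recent_decisions)
    return abs(recent_rate - baseline_rate) > tolerance

# Illustrative use with made-up audit data:
audit_log = [("group_a", 1), ("group_a", 0), ("group_b", 0), ("group_b", 0)]
gap, rates = demographic_parity_gap(audit_log)
print(f"positive rates by group: {rates}, gap: {gap:.2f}")
print("drift?", drift_alert(baseline_rate=0.40, recent_decisions=[1, 0, 0, 0, 0]))
```

In practice these checks would run continuously against logged decisions, with results reviewed by people who have the authority to pull the model out of service.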

The most common way to look at this gap is to frame it around the difference between current and long-term problems. That’s certainly correct; the “pause” letter comes from the “Future of Life Institute,” which is much more concerned about establishing colonies on Mars or turning the planet into a pile of paper clips than it is with redlining in real estate or setting bail in criminal cases.

But there’s a more important way to look at the problem, and that’s to realize that we already know how to solve most of those long-term issues. Those solutions all center around paying attention to the short-term issues of justice and fairness. AI systems that are designed to incorporate human values aren’t going to doom humans to unfulfilling lives in favor of a machine. They aren’t going to marginalize human thought or initiative. AI systems that incorporate human values are not going to decide to turn the world into paper clips; frankly, I can’t imagine any “intelligent” system determining that was a good idea. They might refuse to design weapons for biological warfare. And, should we ever be able to get humans to Mars, they will help us build colonies that are fair and just, not colonies dominated by a wealthy kleptocracy, like the ones described in so many of Ursula K. Le Guin’s novels.


Another part of the solution is to take accountability and redress seriously. When a model makes a mistake, there has to be some kind of human accountability. When someone is jailed on the basis of incorrect face recognition, there needs to be a rapid process for detecting the error, releasing the victim, correcting their criminal record, and applying appropriate penalties to those responsible for the model. These penalties should be large enough that they can’t be written off as the cost of doing business. How is that different from a human who makes an incorrect ID? A human isn’t sold to a police department by a for-profit company. “The computer said so” isn’t an adequate response–and if recognizing that means that some kinds of applications can’t be developed economically, then perhaps those applications shouldn’t be developed. I’m horrified by articles reporting that police use face recognition systems with false positive rates over 90%; and although those reports are five years old, I take little comfort in the possibility that the state of the art has improved.
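A short back-of-the-envelope calculation shows why such alarming rates are plausible when a system searches a large watchlist for a rare true match. The numbers below are assumptions chosen for illustration, not measurements from any particular deployed system, and they read “false positive rate” as the share of flagged matches that turn out to be wrong.

```python
# Illustrative base-rate arithmetic; every number here is an assumption.
watchlist_size = 10_000          # identities each probe image is compared against
true_match_present = 1           # at most one real match for a given probe
per_comparison_fpr = 0.001       # 0.1% chance a non-match is wrongly flagged
true_positive_rate = 0.95        # chance the real match, if present, is found

expected_false_alarms = (watchlist_size - true_match_present) * per_comparison_fpr
expected_true_hits = true_match_present * true_positive_rate

share_of_hits_that_are_wrong = expected_false_alarms / (
    expected_false_alarms + expected_true_hits
)
print(f"expected false alarms per search: {expected_false_alarms:.1f}")
print(f"share of flagged matches that are wrong: {share_of_hits_that_are_wrong:.0%}")
```

Even a matcher that is wrong only once in a thousand comparisons produces roughly ten false alarms for every genuine hit, which is why accountability for acting on those hits matters so much.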

Avoiding bias, prejudice, and hate speech is another critical goal that can be addressed now. But this goal won’t be achieved by somehow purging training data of bias; the result would be systems that make decisions on data that doesn’t reflect any reality. We need to recognize that both our reality and our history are flawed and biased. It will be far more valuable to use AI to detect and correct bias, to train it to make fair decisions in the face of biased data, and to audit its results. Such a system would need to be transparent, so that humans can audit and evaluate its results. Its training data and its design must both be well documented and available to the public. Datasheets for Datasets and Model Cards for Model Reporting, by Timnit Gebru, Margaret Mitchell, and others, are a starting point–but only a starting point. We will have to go much further to accurately document a model’s behavior.
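As a rough illustration of what machine-readable documentation in that spirit might look like, here is a minimal Python sketch. The field names and example values are hypothetical, not the published Model Cards or Datasheets templates, and a real card would carry far more detail.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """A minimal, illustrative model card in the spirit of
    'Model Cards for Model Reporting'; real cards carry much more."""
    name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    known_biases: list = field(default_factory=list)
    evaluation_groups: dict = field(default_factory=dict)  # group -> metrics

card = ModelCard(
    name="loan-approval-v3",
    intended_use="Ranking loan applications for human review",
    out_of_scope_uses=["fully automated denial of credit"],
    training_data="2015-2022 applications; collection process documented separately",
    known_biases=["historical redlining reflected in ZIP-code features"],
    evaluation_groups={"group_a": {"tpr": 0.81}, "group_b": {"tpr": 0.74}},
)

print(json.dumps(asdict(card), indent=2))
```

The point of publishing something like this alongside the model is that outsiders, not just the vendor, can see what the system was trained on, where it is known to fail, and whom it fails.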


Building unbiased systems in the face of prejudiced and biased data will only be possible if women and minorities of many kinds, who are so often excluded from software development projects, participate. But building unbiased systems is only a start. People also need to work on countermeasures against AI systems that are designed to attack human rights, and on imagining new kinds of technology and infrastructure to support human well-being. Both of these projects, countermeasures and new infrastructure, will almost certainly involve designing and building new kinds of AI systems.

I’m suspicious of a rush to regulation, regardless of which side argues for it. I don’t oppose regulation in principle. But you have to be very careful what you wish for. Looking at the legislative bodies in the US, I see very little possibility that regulation would result in anything positive. At best, we’d get meaningless grandstanding. The worst is all too likely: we’d get laws and regulations that institute performative cruelty against women, racial and ethnic minorities, and LGBTQ people. Do we want to see AI systems that aren’t allowed to discuss slavery because it offends White people? That kind of regulation is already impacting many school districts, and it is naive to think that it won’t impact AI.

I’m also suspicious of the motives behind the “Pause” letter. Is it to give certain bad actors time to build an “anti-woke” AI that’s a playground for misogyny and other forms of hatred? Is it an attempt to whip up hysteria that diverts attention from basic issues of justice and fairness? Is it, as danah boyd argues, that tech leaders are afraid that they will become the new underclass, subject to the AI overlords they created?


I can’t answer those questions, though I fear the consequences of an “AI Pause” would be worse than the disease it purports to cure. As danah writes, “obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.” Or, as Brian Behlendorf writes about AI leaders cautioning us to fear AI:

Being Cassandra is fun and can lead to clicks …. But if they actually feel regret? Among other things they can do, they can make a donation to, help promote, volunteer for, or write code for:

The Campaign to Stop Killer Robots
Witness.org, who have developed tools, infrastructure, and messaging for countering AI-generated fake news built to attack human rights
The Mozilla Foundation, who are driving hard on ethical AI research and related fields like data governance

A “Pause” won’t do anything except help bad actors to catch up or get ahead. There is only one way to build an AI that we can live with in some unspecified long-term future, and that is to build an AI that is fair and just today: an AI that deals with real problems and real harms incurred by real people, not imagined ones.
