Major catastrophes can transform industries and cultures. The Johnstown Flood, the sinking of the Titanic, the explosion of the Hindenburg, the flawed response to Hurricane Katrina–each had a lasting impact.

Even when catastrophes don’t kill large numbers of people, they often change how we think and behave. The financial collapse of 2008 led to tighter regulation of banks and financial institutions. The Three Mile Island accident led to safety improvements across the nuclear power industry.

Sometimes a series of negative headlines can shift opinion and amplify our awareness of lurking vulnerabilities. For years, malicious computer worms and viruses were the stuff of science fiction. Then we experienced Melissa, Mydoom, and WannaCry. Cybersecurity itself was considered an esoteric backroom technology problem until we learned of the Equifax breach, the Colonial Pipeline ransomware attack, the Log4j vulnerability, and the massive SolarWinds hack. We didn’t really care about cybersecurity until events forced us to pay attention.

AI’s “SolarWinds moment” would make it a boardroom issue at many companies. If an AI solution caused widespread harm, regulatory bodies with investigative resources and powers of subpoena would jump in. Board members, directors, and corporate officers could be held liable and might face prosecution. The idea of corporations paying huge fines and technology executives going to jail for misusing AI isn’t far-fetched. The European Commission’s proposed AI Act includes three levels of sanctions for non-compliance, with fines of up to €30 million or 6% of total worldwide annual turnover, depending on the severity of the violation.

A couple of years ago, U.S. Sen. Ron Wyden (D-Oregon) introduced a bill requiring “companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security.” The bill also included stiff criminal penalties “for senior executives who knowingly lie” to the Federal Trade Commission about their use of data. While it’s unlikely that the bill will become law, merely raising the possibility of criminal prosecution and jail time has upped the ante for “commercial entities that operate high-risk information systems or automated-decision systems, such as those that use artificial intelligence or machine learning.”

AI + Neuroscience + Quantum Computing: The Nightmare Scenario

Compared to cybersecurity risks, however, AI’s potential for destruction is far greater. When AI has its “SolarWinds moment,” the impact may be significantly more catastrophic than a series of cybersecurity breaches. Ask AI experts to share their worst fears about AI and they’re likely to mention scenarios in which AI is combined with neuroscience and quantum computing. You think AI is scary now? Just wait until it’s running on a quantum coprocessor and connected to your brain.

Here’s a more likely nightmare scenario that doesn’t even require any novel technologies: State or local governments using AI, facial recognition, and license plate readers to identify, shame, or prosecute families or individuals who engage in behaviors that are deemed immoral or anti-social. Those behaviors could range from promoting a banned book to seeking an abortion in a state where abortion has been severely restricted.

AI is in its infancy, but the clock is ticking. The good news is that plenty of people in the AI community have been thinking, talking, and writing about AI ethics. Organizations providing insight and resources on the ethical use of AI and machine learning include the Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business, LA Tech4Good, The AI Hub at McSilver, AI4ALL, and the Algorithmic Justice League.

There’s no shortage of suggested remedies in the hopper. Government agencies, non-governmental organizations, corporations, non-profits, think tanks, and universities have generated a prolific flow of proposals for rules, regulations, guidelines, frameworks, principles, and policies that would limit abuse of AI and ensure that it’s used in ways that are beneficial rather than harmful. The White House’s Office of Science and Technology Policy recently published the Blueprint for an AI Bill of Rights. The blueprint is an unenforceable document. But it includes five refreshingly blunt principles that, if implemented, would greatly reduce the dangers posed by unregulated AI solutions. Here are the blueprint’s five basic principles:

1. You should be protected from unsafe or ineffective systems.
2. You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
3. You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
4. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

It’s important to note that each of the five principles addresses outcomes, rather than processes. Cathy O’Neil, the author of Weapons of Math Destruction, has suggested a similar outcomes-based approach for reducing specific harms caused by algorithmic bias. An outcomes-based strategy would look at the impact of an AI or ML solution on specific categories and subgroups of stakeholders. That kind of granular approach would make it easier to develop statistical tests that could determine if the solution is harming any of the groups. Once the impact has been determined, it should be easier to modify the AI solution and mitigate its harmful effects.
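
Concretely, a first pass at this kind of granular, outcomes-based check might look like the minimal sketch below: log each automated decision together with the affected person’s subgroup, compare adverse-outcome rates across subgroups, and apply a simple test of independence. This is an illustrative assumption of how such a test could be wired up, not a prescribed method; the column names, the 0.05 threshold, and the synthetic decision log are all hypothetical.

```python
# Minimal sketch of an outcomes-based subgroup check (illustrative only).
# Assumes each automated decision is logged with the affected subgroup and
# whether the outcome was adverse; column names and threshold are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def adverse_outcome_report(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "denied") -> None:
    """Compare adverse-outcome rates across subgroups and flag large gaps."""
    rates = df.groupby(group_col)[outcome_col].mean()
    print("Adverse-outcome rate by subgroup:")
    print(rates.to_string())

    # Chi-square test of independence: do adverse outcomes depend on
    # subgroup membership more than chance alone would explain?
    table = pd.crosstab(df[group_col], df[outcome_col])
    chi2, p_value, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
    if p_value < 0.05:
        print("Warning: outcome rates differ significantly across subgroups.")

# Synthetic decision log: 500 decisions per subgroup, with different denial rates.
decisions = pd.DataFrame({
    "group":  ["A"] * 500 + ["B"] * 500,
    "denied": [1] * 60 + [0] * 440 + [1] * 110 + [0] * 390,
})
adverse_outcome_report(decisions)
```

A check like this would be run per harm category and per stakeholder subgroup; a flagged gap is a prompt to investigate and modify the solution, which is the mitigation step described above.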

Gamifying or crowdsourcing bias detection is another effective tactic. Before it was disbanded, Twitter’s AI ethics team successfully ran a “bias bounty” contest that allowed researchers from outside the company to examine an automatic photo-cropping algorithm that favored white people over Black people.

Shifting the Responsibility Back to People

Focusing on outcomes instead of processes is critical since it fundamentally shifts the burden of responsibility from the AI solution to the people operating it.

Ana Chubinidze, founder of AdalanAI, a Berlin-based software platform for AI governance, says that using terms like “ethical AI” and “responsible AI” blurs the issue by suggesting that an AI solution, rather than the people who are using it, should be held responsible when it does something bad. She raises an excellent point: AI is just another tool we’ve invented. The onus is on us to behave ethically when we’re using it. If we don’t, then we are unethical, not the AI.

Why does it matter who–or what–is responsible? It matters because we already have methods, techniques, and strategies for encouraging and enforcing responsibility in human beings. Teaching responsibility and passing it from one generation to the next is a standard feature of civilization. We don’t know how to do that for machines. At least not yet.

An era of fully autonomous AI is on the horizon. Would granting AIs full autonomy make them responsible for their decisions? If so, whose ethics will guide their decision-making processes? Who will watch the watchmen?

Blaise Aguera y Arcas, a vice president and fellow at Google Research, has written a long, eloquent, and well-documented article about the possibilities for teaching AIs to genuinely understand human values. His article, titled “Can machines learn how to behave?”, is worth reading. It makes a strong case for the eventuality of machines acquiring a sense of fairness and moral responsibility. But it’s fair to ask whether we, as a society and as a species, are prepared to deal with the consequences of handing basic human responsibilities to autonomous AIs.

Preparing for What Happens Next

Today, most people aren’t interested in the sticky details of AI and its long-term impact on society. Within the software community, it often feels as though we’re inundated with articles, papers, and conferences on AI ethics. “But we’re in a bubble and there is very little awareness outside of the bubble,” says Chubinidze. “Awareness is always the first step. Then we can agree that we have a problem and that we need to solve it. Progress is slow because most people aren’t aware of the problem.”

But rest assured: AI will have its “SolarWinds moment.” And when that moment of crisis arrives, AI will become truly controversial, similar to the way that social media has become a flashpoint for contentious arguments over personal freedom, corporate responsibility, free markets, and government regulation.

Despite hand-wringing, article-writing, and congressional panels, social media remains largely unregulated. Based on our track record with social media, is it reasonable to expect that we can summon the gumption to effectively regulate AI?

The answer is yes. Public perception of AI is very different from public perception of social media. In its early days, social media was regarded as “harmless” entertainment; it took several years for it to evolve into a widely loathed platform for spreading hatred and disseminating misinformation. Fear and mistrust of AI, on the other hand, have been a staple of popular culture for decades.

Gut-level fear of AI may indeed make it easier to enact and enforce strong regulations when the tipping point occurs and people begin clamoring for their elected officials to “do something” about AI.

In the meantime, we can learn from the European Commission’s experience. The draft version of the AI Act, which includes the views of various stakeholders, has generated demands from civil rights organizations for “wider prohibition and regulation of AI systems.” Stakeholders have called for “a ban on indiscriminate or arbitrarily-targeted use of biometrics in public or publicly-accessible spaces and for restrictions on the uses of AI systems, including for border control and predictive policing.” Commenters on the draft have encouraged “a wider ban on the use of AI to categorize people based on physiological, behavioral or biometric data, for emotion recognition, as well as dangerous uses in the context of policing, migration, asylum, and border management.”

All of these ideas, suggestions, and proposals are slowly forming a foundational level of consensus that’s likely to come in handy when people begin taking the risks of unregulated AI more seriously than they are today.

Minerva Tantoco, CEO of City Strategies LLC and New York City’s first chief technology officer, describes herself as “an optimist and also a pragmatist” when considering the future of AI. “Good outcomes do not happen on their own. For tools like artificial intelligence, ethical, positive outcomes will require an active approach to developing guidelines, toolkits, testing and transparency. I am optimistic but we need to actively engage and question the use of AI and its impact,” she says.

Tantoco notes that, “We as a society are still at the beginning of understanding the impact of AI on our daily lives, whether it is our health, finances, employment, or the messages we see.” Yet she sees “cause for hope in the growing awareness that AI must be used intentionally to be accurate, and equitable … There is also an awareness among policymakers that AI can be used for positive impact, and that regulations and guidelines will be necessary to help assure positive outcomes.”
