Latest from MIT Tech Review – Uber’s facial recognition is locking Indian drivers out of their accounts
One early evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad in his midsize sedan. He pointed his phone at his face to take a selfie and verify his identity. The process usually worked seamlessly. But this time he was unable to log in.
It didn’t take long for Srikanth to come up with a theory as to why. He had just returned from visiting the Hindu Tirupati temple, 350 miles away, where he had shaved his head and prayed for a prosperous life.
The Uber app prompted Srikanth to try again, so he waited a few minutes and took another picture. Rejected again.
“I was worried about bookings. We have daily targets where if we complete a certain number of bookings, we get incentives,” Srikanth says. “I was anxious to log in and start driving, and not waste any time.” So he tried once more. This time he used a second phone to pull up an image of himself from before he visited the temple. When he took a picture of it, Uber informed him that his account had been blocked.
Srikanth is not alone. In a survey conducted by MIT Technology Review of 150 Uber drivers in the country, almost half had been either temporarily or permanently locked out of their accounts as a result of problems with their selfies. Many suspected that a change in their appearance, such as facial hair, a shaved head, or a haircut, was to blame. Another quarter believed it was due to low lighting.
Srikanth thinks the split-second decision to take a picture of another phone cost him his livelihood: he went from earning over $500 a month to nothing. He spent months afterward trying to get his account reinstated, to no avail. Eventually he had to move back to his hometown, where he works a few different jobs and makes barely 10% of what he used to.
Srikanth is far from the only worker in India who must interact with facial recognition technology. In addition to the country’s 600,000 Uber drivers, many others work for the homegrown ride-sharing platform Ola and for startups such as Swiggy, Zomato, and Urban Company. All ask their platform workers to upload selfies for logins or verifications.
In other markets, gig workers have fought back against facial recognition. In the UK, for example, at least 35 Uber drivers claimed last year that their accounts were wrongly terminated. The Independent Workers’ Union of Great Britain has blamed a “racist algorithm.” Uber has faced at least two lawsuits in the UK because of the software.
Some countries and regions have moved to provide better protections for gig workers. The EU proposed a directive last year to improve working conditions and provide algorithmic transparency. And in September 2021, a California court struck down Proposition 22, a ballot initiative that excluded gig workers from employee benefits under state law. These regulations recognize that algorithmic systems can “negatively impact the rights of workers,” says Divij Joshi, a lawyer and a PhD candidate at University College London. But India currently has few legal protections in place for gig workers, Joshi says: “These same transparency efforts are not being seen in India from a policy or regulatory lens.”
If problems persist—and protections remain limited—they could have an outsize effect, and not just on work. “Labor platforms in India are starting to become a key interface between the worker, the market, and the government—they enable loans for cars or even credit for larger household expenses,” says Aditi Surie, a senior researcher at the Indian Institute for Human Settlements, who has done research on gig work in India. In a country where such work can catapult someone from precarity to a middle-class existence (especially when estimates suggest that the majority of people worldwide who fell into poverty during the pandemic live in India), getting blocked from or kicked off a platform can have devastating consequences.
Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.
But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says.
More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad.
Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues.
Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check.
“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when there is low lighting or the phone’s camera doesn’t have a good contrast. “This makes it difficult for the computer to detect edges,” he explains.
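Kavka’s point can be illustrated with a toy sketch. The landmark names, coordinates, and threshold below are hypothetical, invented purely to show the failure mode he describes; they are not Uber’s or Microsoft’s actual pipeline, which uses far richer models:

```python
import math

# Toy illustration of landmark-based face matching (hypothetical values).
# A face is reduced to a set of (x, y) landmark points; two faces "match"
# when the average distance between corresponding points is small enough.

ENROLLED = {
    "chin": (50, 90),
    "nose": (50, 60),
    "left_eye": (35, 40),
    "right_eye": (65, 40),
}

def match_score(enrolled, candidate):
    """Mean Euclidean distance between corresponding landmarks (lower = closer)."""
    dists = [math.dist(enrolled[k], candidate[k]) for k in enrolled]
    return sum(dists) / len(dists)

def is_same_person(enrolled, candidate, threshold=6.0):
    # The threshold is arbitrary here; commercial systems tune it on large datasets.
    return match_score(enrolled, candidate) <= threshold

# Same face, slight camera shift: all landmarks move together, so it still matches.
shifted = {k: (x + 2, y + 1) for k, (x, y) in ENROLLED.items()}

# Heavy facial hair obscures the chin: the detector places that one landmark
# badly, dragging the average distance past the threshold -- a false rejection.
bearded = dict(ENROLLED, chin=(50, 120))

print(is_same_person(ENROLLED, shifted))   # True: small uniform shift passes
print(is_same_person(ENROLLED, bearded))   # False: mislocated chin fails
```

Low light or poor camera contrast causes the same kind of failure at an earlier stage: the detector cannot find clean edges, so the landmarks it emits are noisy or missing entirely.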
But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper that audited four commercial facial processing tools—Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX—for their performance on Indian faces. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed to even detect the presence of a face in more than 1,000 images, yielding an error rate of more than 3%—the worst among the four.
It could be that the Uber app is failing drivers because its software was not trained on a diverse range of Indian faces, Parsheera says. But she says there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she explained in writing. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.”
Microsoft declined to comment in response to questions sent by MIT Technology Review.
The problems don’t end with the algorithm’s decision. Drivers say the grievance redress mechanism that Uber follows is tedious, time-consuming, frustrating, and mostly unhelpful. They say they sometimes spend weeks trying to get their issues resolved. “We have to keep calling their help line incessantly before they unlock our accounts, constantly telling us that the server is down,” said Taqi, with a tone of frustration—but mostly a sense of defeat—in his voice. “It’s like their server is always down.”
Uber did not respond to a request for comment.
Srikanth visited the Uber center at least three times a week for three months before he gave up and went back home. He stood in queues with some 80 to 100 other drivers. “The Uber people kept telling me my ID is permanently blocked and they can’t really do much,” he recalled. “They said I could go to the Bangalore [office] or just deploy another driver to drive my car.”
Elizabeth Anne Watkins, an organizational sociologist from Princeton University who has extensively studied the impact of facial recognition on Uber drivers in the US, would likely find this pattern familiar. “Prone to malfunction in variable conditions, the system places a heavy burden on workers who are left with little organizational support when facial recognition fails,” Watkins, who is now a research scientist at Intel Labs, wrote in a 2020 paper. “Further, accountability for identity verification is shifted to the workers, who bear the consequences for systemic failures.”
Samantha Dalal, who studies how workers understand algorithmic systems, says there could be more transparency about how the AI made a decision. “Including some explanation that goes beyond ‘You are deactivated’” would help, says Dalal, a doctoral candidate at the University of Colorado Boulder. “Such capabilities exist.”
Absent any insight into what the mercurial, non-human boss wants, gig workers attempt a lot of trial and error while interacting with the apps, Dalal says. In the case of Srikanth, she explains that since he “couldn’t go back in time to before he had shaved his head, he did the next best thing and showed a picture of himself.”
It’s been over a year since Srikanth was locked out of Uber. Despite everything, he’s not hostile toward the company. He simply wants his old life back—one where he was able to make a life for himself in Hyderabad and build up some wealth. He can’t imagine returning to the city unless he can get behind the wheel again.
Varsha Bansal is a freelance journalist based in Bangalore. Reporting for this story was supported by the Pulitzer Center’s AI Accountability Network.