
Food Delivery Services Face GDPR Fines Over AI Algorithms

Workers Faced Discriminatory AI at Both Deliveroo and Foodinho, Regulator Says
A Deliveroo delivery rider (Photo: Deliveroo)

Italy's privacy regulator has slammed two of the country's biggest online food delivery firms - Deliveroo and Foodinho - with multimillion-euro fines for using algorithms that discriminated against some "gig economy" workers.


The regulator said that workers could be penalized based on how artificial intelligence - aka machine learning - algorithms were being used to assess their work. But those algorithms remained secret, and workers had no way to appeal any such assessment. In addition, the regulator said, the firms could not prove that their algorithms were not being discriminatory.

As a result, Italy's data protection authority, known as the Garante, on Monday announced a 2.5 million euro ($2.9 million) fine against Deliveroo for violating the EU's General Data Protection Regulation.

The penalty comes after the Garante announced on July 5 that, following an investigation into the Italian operations of Foodinho, it would fine the online food delivery platform 2.6 million euros ($3.1 million) for violating GDPR. The regulator also issued an injunction requiring specific improvements.

"Both cases have important lessons for technology businesses in particular and show some of the conflicts between AI and GDPR," says attorney Jonathan Armstrong, a partner at London-based law firm Cordery.

Deliveroo did not immediately respond to a request for comment on the Garante's findings, nor did Foodinho, which reportedly has plans to appeal the fine.

Foodinho Probe

Foodinho, owned by Barcelona, Spain-based Glovoapp23, is an on-demand food delivery service that was raided over a two-day period in June 2019 by the Garante as part of a joint investigation with Spain's data protection authority, known as the AEPD. The AEPD's investigation into Foodinho's Spanish operations is ongoing. The business also operates in 22 other countries across Africa, Europe, Asia and Central and South America.

The Garante says it found that algorithms being used by the company to manage its Italian workers - for booking them shifts and assigning them deliveries - were violating those workers' rights.

The Garante reports that all delivery personnel or "riders" - typically, bicyclists or moped drivers - initially got scored with a default value, which subsequently was adjusted based on the following characteristics and weightings (a sketch of such a scoring model follows the list):

  • Customer feedback: a thumbs-up or thumbs-down - 15%;
  • Merchant feedback - 5%;
  • Working in hours of high demand - 35%;
  • Orders delivered - 10%;
  • Productivity - 35%.
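
To make those weightings concrete, here is a minimal sketch of how a linear scoring model of the kind the Garante describes might combine the factors. All names, the 0-to-1 scale and the sample values are illustrative assumptions; Foodinho's actual implementation was never made public.

```python
# Hypothetical linear rider-scoring model using the weightings reported by
# the Garante. Everything here is an illustrative assumption, not
# Foodinho's actual code.

WEIGHTS = {
    "customer_feedback": 0.15,   # thumbs-up/thumbs-down from customers
    "merchant_feedback": 0.05,   # feedback from partner merchants
    "high_demand_hours": 0.35,   # working slots of high demand
    "orders_delivered": 0.10,    # volume of completed deliveries
    "productivity": 0.35,        # assumed rate-based measure
}

def rider_score(metrics: dict) -> float:
    """Combine per-factor values, each normalized to 0-1, into one score."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

# A rider strong on productivity but rarely available at peak times:
print(rider_score({
    "customer_feedback": 0.9,
    "merchant_feedback": 0.8,
    "high_demand_hours": 0.2,
    "orders_delivered": 0.7,
    "productivity": 0.95,
}))  # ~0.6475 -- the 35% peak-hours weight drags the score down
```

Note how the two 35% weightings mean peak-time availability and raw productivity dominate the score - which is why access to priority shift booking effectively hinged on them.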

Workers with a higher score gained the ability to book new working slots before others. But the Garante found that the company's practices failed to honor its workers' rights.

"The company, for example, had not adequately informed the workers on the functioning of the system and did not guarantee the accuracy and correctness of the results of the algorithmic systems used for the evaluation of the riders," the regulator says. "Nor did it guarantee procedures to protect the right to obtain human intervention, express one's opinion and contest the decisions adopted through the use of the algorithms in question, including the exclusion of a part of the riders from job opportunities."

As a result, the Garante ordered Foodinho to modify its systems to verify that its booking and assignment algorithms were not discriminating against riders, as well as to revise some overly long personal data retention practices.

The Garante has ordered the company to pay a 2.6 million euro fine, based not only on the AI problems, but also on the company's failure to appoint a data protection officer and to keep sufficient records. The amount of the fine, it said, also took into account "the limited collaboration offered by the company during the investigation and the high number of riders involved in Italy - about 19,000 at the time of the inspection."

Protections Against Automated Decision-Making

Cordery's Armstrong says the algorithms being used by Foodinho were found to be in violation of Article 22 of GDPR, which concerns "automated individual decision-making, including profiling."

"Under GDPR Article 22, individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them, unless certain exceptions apply and specific protections for those individuals are in place," the attorneys say.

"The investigation found that the platform's use of algorithms to automatically penalize riders by excluding them from job opportunities if their ratings fell below a certain level was discriminatory, and the fact that there was no opportunity for human review nor the ability to challenge the decision contravened GDPR."

Investigation Into Deliveroo

On Monday, meanwhile, the Garante announced its injunction against Deliveroo, dated July 22.

Founded in 2013, London-based Roofoods does business as Deliveroo, operating not just in Italy and the U.K., but also in the Netherlands, France, Belgium, Ireland and Spain as well as in Australia, Singapore, Hong Kong, the United Arab Emirates and Kuwait.

In June 2019, as part of its probe of food delivery businesses, the Garante raided Deliveroo's offices, gathering information and conducting interviews over a two-day period.

Based on the Garante's penalty notice, here's how the system worked: Deliveroo's Italian operation relied on a centralized system, hosted in a data center in Ireland, which it used to manage about 8,000 riders working as self-employed contractors. Each rider signed an agreement with Deliveroo and then received access to an app that they had to install on their mobile device and use whenever on a shift.

For the Italian operation of Deliveroo, managers fed information into the Ireland-based central system, which would rate the riders' performance, but without revealing the logic that was being used. The Italian operation told regulators that "it has access only to the data it can influence, feeding the shared database, without deciding the logic of the processing."

The Garante found that information being used to rate riders included:

  • The rider's availability to work "critical time slots" on Friday, Saturday and Sunday evenings;
  • Whether the rider works shifts they've reserved or cancels after starting a shift;
  • How quickly the rider delivers orders.

More highly rated riders had access to busier and more lucrative shifts. But as in the Foodinho case, the Garante said its investigation found, among other problems, multiple transparency and fairness issues surrounding how Deliveroo used algorithms to assign work.

For example, GDPR Article 5, "Principles relating to processing of personal data," states that personal data must be "processed lawfully, fairly and in a transparent manner in relation to the data subject."

"As part of that, a controller should be able to show that its algorithm is not discriminatory," Armstrong say. "Deliveroo said that it had changed platforms since the inspection but the Garante emphasized that it was incumbent on Deliveroo 'to verify, on a periodic basis, the correctness of the results of the algorithms to minimize the risk of distorted or discriminatory effects.'"

Best Practices for Algorithmic Management of Staff

Armstrong says that the two Italian cases offer lessons for any organization that uses "algorithmic management of staff." For starters, he recommends that all such organizations first carry out a data protection impact assessment, thoroughly test their algorithms for any signs of bias - one such check is sketched below - and inform employees about how algorithms are being used to make important decisions about them, while also explaining, in an easy-to-understand manner, how the algorithms function.
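
Here is a minimal sketch of one such periodic bias check: a disparate-impact test comparing how often different groups of workers clear the score threshold that unlocks priority booking. The four-fifths ratio is a heuristic borrowed from employment-testing practice, not a GDPR requirement, and all names, thresholds and numbers are illustrative assumptions.

```python
# Hypothetical disparate-impact check of the kind the Garante's "verify, on
# a periodic basis" language suggests. Group labels, threshold and the
# four-fifths ratio are illustrative assumptions.

from collections import defaultdict

def selection_rates(records, threshold=0.5):
    """records: iterable of (group_label, score); returns pass rate per group."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, score in records:
        totals[group] += 1
        passed[group] += score >= threshold
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, ratio=0.8):
    """Flag any group whose pass rate falls below `ratio` of the best group's."""
    best = max(rates.values())
    return {g: r / best < ratio for g, r in rates.items()}

rates = selection_rates([
    ("weekend_available", 0.7), ("weekend_available", 0.9),
    ("weekend_unavailable", 0.4), ("weekend_unavailable", 0.6),
])
print(rates)                          # {'weekend_available': 1.0, 'weekend_unavailable': 0.5}
print(disparate_impact_flags(rates))  # flags the weekend_unavailable group
```

Grouping by weekend availability here anticipates Armstrong's closing point: a facially neutral factor can act as a proxy for a protected characteristic such as religion.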

Understanding current norms for AI use can also help. "Ask around: Ethics committees and/or employee focus groups can be useful temperature checks for gauging whether measures are likely to be perceived as overly privacy-intrusive," Armstrong says.

Expect more privacy probes of algorithms in the workplace, Armstrong adds, noting that too many uses of "artificial intelligence" fail to live up to that billing.

"Most of the stuff we see is simple programming - a formula or algorithm which calculates something along the lines of, 'Someone with a score of 128 on this test is more likely to be a good employee who will stay,'" he says. "The issue here is that’s often prejudice or gut feel dressed up as science and then codified in the algorithm. Often this stuff isn’t justifiable and is discriminatory. For example, if I exclude everyone who refuses to work on a Saturday or Sunday in my algorithm, could this exclude some people on the basis of their religion?"


About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and for European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



