
Uber Eats courier's fight against AI bias shows justice under UK law is hard won | TechCrunch

🌈 Abstract

The article discusses the case of Pa Edrissa Manjang, a Black Uber Eats courier who received a payout from Uber after "racially discriminatory" facial recognition checks prevented him from accessing the app. It raises questions about the adequacy of UK law in dealing with the rising use of AI systems, particularly the lack of transparency around automated systems and the difficulty of obtaining redress for those affected by AI-driven bias.

🙋 Q&A

[01] Uber Eats Courier Case

1. What happened to the Uber Eats courier Pa Edrissa Manjang?

  • Pa Edrissa Manjang, a Black Uber Eats courier, received a payout from Uber after "racially discriminatory" facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up delivery jobs.

2. What were the key issues with Uber's facial recognition system in this case?

  • Uber's facial recognition system, built on Microsoft's technology, repeatedly failed to correctly identify Manjang, leading to the suspension and then termination of his account for "continued mismatches" in the photos he submitted.
  • Uber's system of human review as a safety net for automated decisions also failed in this case.

3. How was Manjang able to resolve the issue?

  • Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).
  • After years of litigation, Uber offered a financial settlement, which Manjang accepted; the terms were not disclosed.

4. What were the key takeaways from this case?

  • The case highlights the inadequacy of UK law in governing the use of AI systems, particularly the lack of transparency and the challenges in obtaining redress for those affected by AI-driven bias.
  • The case also raises questions about the effectiveness of the UK's data protection laws and the role of the Information Commissioner's Office (ICO) in investigating and enforcing these laws.

[02] Regulatory Landscape in the UK

1. How does the UK's regulatory framework address the use of AI systems?

  • The UK government has not introduced dedicated AI safety legislation, instead relying on existing laws and regulatory bodies to extend their oversight to cover AI risks.
  • The government has provided a small amount of additional funding (£10 million) for regulators to research AI risks and develop tools to examine AI systems, a response critics regard as inadequate.
  • The UK's data protection law (UK GDPR) should provide protections against opaque AI processes, but enforcement by the ICO has been lacking.

2. What are the concerns regarding the UK government's approach to AI regulation?

  • The government's plan to rely on existing laws and regulators to address AI risks is criticized as an "incredibly low level of additional resource" for already overstretched regulators.
  • There are concerns that the government's proposed approach still leaves gaps in the regulatory framework, as there is no dedicated oversight for AI harms that fall through the cracks of the existing patchwork of regulations.
  • The government's plan to dilute data protection law via a post-Brexit data reform bill is also seen as a concerning move that could further weaken legal protections against the harms of AI systems.