UK Regulators Seek Input on Auditing Algorithms


In a new call for comment, the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), and Ofcom are seeking input on auditing algorithms, the current landscape, and the role of regulators.

The announcement comes as the Digital Regulation Cooperation Forum (DRCF), which comprises the four organisations, outlined its priorities for the next 18 months. These include protecting children online, scrutinising the use of data in digital advertising, improving algorithmic transparency, and enabling innovation.

Commenting on the announcement, DRCF Chief Executive Gill Whitehead described the projects for 2022 and 2023 as “significant”.

A paper published by the UK government, which aims to provide robust oversight while scrutinising both the public and private sectors, warns that algorithmic systems, particularly modern machine learning (ML) approaches, pose significant risks if deployed and managed without due care.

The paper elaborates on how such algorithms can amplify harmful biases, leading to discriminatory decisions or unfair outcomes that reinforce inequalities. They can also be used to mislead consumers and distort competition, it notes. Further, the opaque and complex ways in which they collect and process large volumes of personal data can put people’s privacy rights in jeopardy. To address these risks, regulators have a range of options available, such as producing instructive guidance, undertaking enforcement activity and, where necessary, issuing financial penalties for unlawful conduct and mandating new practices.
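To make the idea of an algorithmic bias audit more concrete, the sketch below shows one simple check an auditor might run: a demographic parity comparison of favourable-outcome rates between two groups. This example is not drawn from the paper; the data, group labels, and the 0.05 tolerance are purely illustrative assumptions.

```python
# Minimal sketch of one check a bias audit might include:
# the demographic parity gap, i.e. the difference in favourable-outcome
# rates between two groups. All values here are illustrative assumptions.

def positive_rate(decisions: list[bool]) -> float:
    """Share of decisions that were favourable (e.g. application approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical audit sample: an algorithm's decisions for two groups.
group_a = [True, True, False, True, True, False, True, True]
group_b = [True, False, False, False, True, False, False, True]

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")

if gap > 0.05:  # tolerance chosen purely for illustration
    print("Flag for review: outcome rates differ materially between groups.")
```

A real audit would of course go further, examining training data, proxy variables, and outcomes over time, but even a basic rate comparison like this illustrates the kind of transparency the regulators are consulting on.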

Key takeaways from the paper

  • Algorithms offer many benefits to individuals and society, and these benefits can increase with continued responsible innovation
  • Harms can occur both intentionally and inadvertently
  • Those procuring and/or using algorithms often know little about their origins and limitations
  • There is a lack of visibility and transparency in algorithmic processing, which can undermine accountability
  • A “human in the loop” is not a foolproof safeguard against harms
  • There are limitations to DRCF members’ current understanding of the risks associated with algorithmic processing