Immigration and discrimination: JCWI granted judicial review


In October 2019, the Joint Council for the Welfare of Immigrants (JCWI) began its legal case against the Home Office’s visa algorithm.

The charity, supported by Foxglove, an advocacy group that campaigns against digital injustice, argued that the selective algorithm used to process visa applications is discriminatory and unlawful.

Subsequently, they demanded the release of all technical details behind the streaming tool and called for the system to be suspended until a “substantive review” had been conducted.

Working in collaboration, the JCWI and Foxglove have now been granted a judicial review, with papers filed. The case has been dubbed “ground-breaking” and is the first of its kind in British legal history to challenge the use of AI. It also raises important questions about digital ethics and the future of AI in government.

How does the streaming tool work?

The public first became aware of the algorithm when it was presented to a group of lawyers at a visa processing centre in Sheffield. In attendance was the Law Society’s President, Christina Blacklaws. Concerns were immediately raised about the AI’s potential to discriminate against visa applicants.

Speaking to the Financial Times, Blacklaws said that the software “may well disadvantage certain groups of people based on generic markers such as age, country of origin or whether they have travelled before”. Meanwhile, the Home Office said that the streaming process merely increases efficiency and cuts visa processing times.

But how does it work? While the government has certainly scrimped on the details (to put it lightly), the streaming tool uses an algorithm to assign each application a “risk level”: every case is categorised as green, yellow or red. A green rating tells the human caseworker that the case is worthy of approval. A red rating tells the caseworker that the application should be refused, or at least handled with suspicion and subjected to more rigorous scrutiny.
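To make the mechanics concrete, here is a minimal sketch of how a traffic-light streaming rule of this kind could look in code. It is purely illustrative: the Home Office has not published its risk factors, weights or thresholds, so the VisaApplication fields, the HIGH_RISK_NATIONALITIES placeholder and the scoring rule below are all invented.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the Home Office has not disclosed how its
# streaming tool scores applications, so every field, factor and
# threshold here is invented for illustration.

@dataclass
class VisaApplication:
    applicant_id: str
    nationality: str
    has_travel_history: bool

# Placeholder standing in for the redacted list of "high risk" countries.
HIGH_RISK_NATIONALITIES = {"Examplestan"}

def stream_application(app: VisaApplication) -> str:
    """Assign a traffic-light (green/yellow/red) rating to an application."""
    score = 0
    if app.nationality in HIGH_RISK_NATIONALITIES:
        score += 2  # nationality alone can dominate the rating
    if not app.has_travel_history:
        score += 1
    if score >= 2:
        return "red"     # suggest refusal, or at least extra scrutiny
    if score == 1:
        return "yellow"  # routine handling
    return "green"       # suggest approval

# Example: a well-travelled applicant is still streamed red on
# nationality alone, which is exactly the pattern critics object to.
print(stream_application(VisaApplication("A123", "Examplestan", True)))
```

Even in this toy, the criticism is visible: because the country-of-origin factor carries the largest weight, the rating can be settled before any of the applicant’s individual circumstances are weighed.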

Digital hostile environment

Warning flags were first raised back in 2016 over the potentially discriminatory nature of the algorithm, when the Independent Chief Inspector of Borders and Immigration released a report assessing the implications of an over-reliance on such streaming tools. It stated that although segmenting applications to “manage them more efficiently is sensible”, there is a risk that the algorithms will become “de facto decision-making tools”.

Years later, the JCWI argues that the algorithm has had a “major effect” on “who has the right to come here to work, study or see loved ones”. Further to this, Chai Patel, Legal Policy Director at the JCWI, outlined that the way the streaming tool has been run “discriminates” and ultimately “[singles] out some people as ‘suspect’ and others as somehow more trustworthy” based on the country they come from. This, he says, creates a “digital hostile environment”.

Ultimately, the JCWI claims that the algorithm makes allocation decisions based on race. Moreover, the charity argues that applicants are funnelled into three distinct channels, including a fast lane that permits “speedy boarding for white people”.

Demand for the release of technical details

In the most recent investigation report on the Network Consolidation Programme, the Home Office was explicitly advised to “demystify” the “cryptic” computer programme. In its recommendations, the report stated that the Home Office should: “Including as much detail as possible, publish an explanation of how the Streaming Tool works, avoiding jargon and opaque language, and establish an auditable review and assurance system that covers all three RAG ratings, using the outputs to build stakeholder confidence in the Streaming Tool and the way it is used”.

Responding to this recommendation, the Home Office stated that providing full transparency around how the algorithm works would leave it open to “unscrupulous parties” that seek to “manipulate the immigration system”.

The JCWI and Foxglove received the same kind of dismissal when they requested that the Home Office release the list of countries it deemed to be “undesirable nations”. Instead, the groups received a fully redacted list.

Judicial review granted

The JCWI has now been granted a judicial review, and in its case will argue that the use of the streaming tool results in racial discrimination and contravenes Section 4 of the Equality Act 2010. It will also argue that the “shadowy, computer-driven process” is too secretive and lacking in transparency. Further to this, the JCWI and Foxglove will ask the court to declare the algorithm unlawful and to order that any further use of the system ceases until a “substantial review” has been carried out.

Speaking about the need to scrap the algorithm, Martha Dark, a Director of Foxglove, said: “Algorithms aren’t neutral – they reflect the preferences of the people who build and use them. This visa algorithm didn’t suddenly create bias in the Home Office, but because of its feedback loop, it does accelerate and reinforce them. Now, when systemic racism is high on the public agenda, is the perfect time for [the] government to reassess this algorithm and all similar systems. The Home Office should scrap the streaming tool and set up a scheme that’s fair for everyone, regardless of colour or creed”.

The Home Office insists that no final decisions are made using the algorithm, and that it is only used to filter and direct applications. It has also claimed that the programme is fully compliant with equality legislation.

The digital ethics of AI

This case raises interesting and important questions about the impact of the increasing use of AI. By 2025, AI is expected to be at the helm of technological innovation across all sectors. As a result, it’s essential to contemplate its digital ethics sooner rather than later.

In February 2020, the European Commission published its white paper on AI, where it suggested creating “a new legislation specifically on AI … to make the EU legal framework fit for the current and anticipated technological and commercial developments”.

For “high risk” AI, such as the visa algorithm used by the Home Office, the white paper suggests that rigorous ethical guidelines should be applied. It states that this would help governments assess whether certain AI may have discriminatory effects. After all, algorithms are created by human programming teams that may have conscious or unconscious biases, and these human biases can subsequently lead to the development of an AI that is inherently discriminatory.

Speaking about the dangers of this, JCWI’s Chai Patel said: “Algorithms, and streaming tools are only ever as good as the data that goes into them: if discriminatory data and decisions go in, then that is what you will get out”.
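Patel’s point, together with the “feedback loop” Martha Dark describes above, can be illustrated with a toy simulation. Everything in it is invented: the real tool’s inputs, thresholds and any retraining process are unpublished. The sketch simply shows how a rule that flags nationalities with high historical refusal rates, when fed its own outcomes, can keep an initially disadvantaged group flagged indefinitely.

```python
import random

# Toy model of the criticised feedback loop. The nations, rates and
# threshold are invented; the real tool's workings are unpublished.
random.seed(42)

# Starting "history": nation_b's higher refusal rate reflects past bias.
refusal_history = {"nation_a": 0.10, "nation_b": 0.30}
SUSPECT_THRESHOLD = 0.15  # nationalities above this get flagged "suspect"

for generation in range(4):
    new_history = {}
    for nation, past_rate in refusal_history.items():
        suspect = past_rate > SUSPECT_THRESHOLD
        # Flagged applicants face harsher scrutiny, so more are refused.
        refuse_prob = 0.30 if suspect else 0.08
        refusals = sum(random.random() < refuse_prob for _ in range(1000))
        new_history[nation] = refusals / 1000
    # The tool's own decisions become the next generation's "history".
    refusal_history = new_history
    print(generation, {n: round(r, 3) for n, r in refusal_history.items()})
```

In this toy, nation_b never falls back below the threshold: the extra scrutiny the flag triggers produces exactly the refusals needed to keep the flag in place, so the original bias becomes self-fulfilling rather than washing out.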

Responding to the JCWI’s judicial review launch, a spokesperson for the Home Office said: “It would be inappropriate to comment whilst legal proceedings are ongoing”.

Article Created By Madaline Dunn
