I spoke to data scientist Tariq Rashid about the pressing need for fairness, safety and ethics in emerging applications of AI and machine learning
Tariq Rashid is well known in Cornwall as the driving force behind the Data Science Cornwall and Algorithmic Art meetup groups.
But before moving here, he spent 20 years working in data-intensive organisations, most recently as a senior architect at the Home Office. He’s observed the emergence of machine learning and AI and sees a pressing need for frameworks for fairness, safety and ethics.
As a recent very public example, he points to the live facial recognition trials conducted by the Metropolitan Police in high streets across London.
In the trials, cameras mounted on police vans would capture images of the faces of largely unaware passers-by. A facial recognition algorithm would then compare them with images of people on the Met’s watchlist. If a face didn’t match anyone on the list, the system would discard it. But if the algorithm detected a match, officers could choose to stop and question the person.
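Broadly speaking, systems like this reduce each captured face to a numeric "embedding" and compare it against embeddings of the people on the watchlist, declaring a match only when the similarity clears a threshold. The sketch below is a minimal, purely illustrative version of that idea – the names, vectors and threshold are all made up, and it is not a description of the software the Met actually used.

```python
# Illustrative sketch only: compares hypothetical face "embeddings"
# (fixed-length numeric vectors) against a small watchlist using
# cosine similarity and a tunable threshold.
from typing import Optional
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical watchlist: name -> 128-dimensional embedding
watchlist = {name: rng.normal(size=128) for name in ["person_a", "person_b"]}

MATCH_THRESHOLD = 0.6  # assumption: similarity above this counts as a "match"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding: np.ndarray) -> Optional[str]:
    """Return the watchlist name with the highest similarity, if it clears
    the threshold; otherwise None (the face would simply be discarded)."""
    best_name, best_score = None, -1.0
    for name, listed_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, listed_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else None

# A passer-by's face embedding (random here, so usually no match)
candidate = rng.normal(size=128)
print(check_against_watchlist(candidate))
```

The threshold is the crucial design choice: set it too low and the system flags many innocent passers-by; set it too high and it misses people who really are on the list.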
False positives and poor transparency are cause for concern
One eye-opener for Rashid was the volume of false positives – instances when the system detected a match, but the person turned out not to be on the watchlist. “In the first iteration of that experiment, the failure rate was as high as 80%,” he says. In other words, roughly four out of five matches flagged by the system pointed to someone who wasn’t actually on the list.
But what struck him more than the algorithm’s poor performance was the lack of transparency around the trials.
“They weren’t sufficiently open about what they were doing, why it was deemed necessary and proportionate, what happens to the collected biometric data, or the performance of the software,” he says. “If they’d been open about the quality of the facial recognition, and any biases they found, the public would have been much happier.”
Instead, the Met faced criticism for using a type of AI – live facial recognition – that has been shown to have difficulty identifying people of some ethnicities. It left many people fearing they might be treated unfairly if the algorithm wrongly singled them out.
Biased algorithms further entrench existing inequalities
For Rashid, the trials highlight the issues surrounding the use of AI by governments and businesses. Biased decisions often arise as a result of “training algorithms on data that is limited or biased, and therefore not representative of the scenarios in which the systems are going to be used,” he says.
That can be fixed to a certain extent by training algorithms on more diverse, representative datasets, continuing to test their accuracy in real-world use, and monitoring for unintended wider effects.
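As a rough illustration of the kind of monitoring Rashid describes, the sketch below assumes you already have a table of a model’s predictions, the true outcomes and a demographic attribute for each case (the column names and numbers are hypothetical), and compares accuracy and false positive rates across groups – large gaps between groups are a warning sign worth investigating.

```python
# Minimal sketch of a subgroup performance check on hypothetical data.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 0, 1, 1, 1, 0, 1],
    "label":      [1, 0, 0, 1, 0, 0, 0],
})

# Accuracy per group: how often the model is right for each group.
accuracy_by_group = (
    results.assign(correct=results["prediction"] == results["label"])
           .groupby("group")["correct"]
           .mean()
)
print(accuracy_by_group)

# False positive rate per group: how often true negatives are wrongly flagged.
negatives = results[results["label"] == 0]
fpr_by_group = negatives.groupby("group")["prediction"].mean()
print(fpr_by_group)
```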

But a more insidious problem is when the organisation is unaware of its own biases, and ends up unconsciously embedding historical inequalities in its AI algorithms. One example is Amazon’s trial of automated recruitment software, which ended up favouring male candidates because it was trained using historical data on previous hires – most of whom were men.
“Every organisation infuses their experience, flaws, beliefs and biases into their automated decision making,” Rashid says. His new consultancy, Digital Dynamics, is helping organisations to assess the fairness, safety and ethics of their AI use, and to develop processes and governance that build public trust.
“If we give it some thought, we can design AI for the society we want to be, rather than the society we were”
Tariq Rashid, Founder, Digital Dynamics
“Nobody should pretend their algorithms are neutral,” he says. “But if we give it some thought, we can design AI for the society we want to be, rather than the society we were yesterday.”