Artificial intelligence – friend or foe?

We interviewed Samuel Rowe, a Research and Policy Executive at Yoti. He spends his time developing innovative policy in response to complex domestic and international regulatory frameworks. He recently developed and now coordinates Yoti's internal ethics working group, which works closely with Yoti's Guardians Council to ensure that Yoti maintains its ethical steer. In addition to his work at Yoti, Sam is one of four reviewers undertaking the independent review into the legal framework for the processing and governance of biometric data.

Do you think robots will eventually replace all human jobs? If so, will there be any jobs left for humans in the future?

I'm optimistic that there'll still be jobs for humans in the future. As a species, we tend to value human input. That holds as true for the arts as it does for the law. However, there are jobs at risk from automation, and that's something that we, as a society, need to address now.

Do you think AI or robots will develop the ability to feel emotions as humans do?

That's a tricky question! I think before we ask 'will they?' we should ask 'should they?'. On the one hand, maybe it would be reassuring to know that an automated system had gained the capacity to experience a range of human emotions. That could be a safeguard against it making a decision that we, as humans, might identify as unfair. However, if we build an automated system that is capable of experiencing human emotions, how would we distinguish such a system from a human in any useful way? And if we can't distinguish the system from a human, why shouldn't the system receive the same rights as a human? That might force us to rearrange fundamentally how we organise society.

Do you think AI will eventually reduce human interaction altogether? Will we lose the ability to socialise or complete basic daily tasks?

I think there is a risk that AI will cause certain behaviours to become less frequent, which could reduce human interaction. There is a school of thought that deferring certain tasks to AI will make humans more productive: we won't need to focus on the basics and will be able to devote ourselves to more complex tasks. Humans are inherently social creatures, and I doubt we will lose the ability to socialise. I'd be more concerned about polarisation as a threat to socialisation.

Is there potential for AI to go bad? What are the worst possible outcomes?

There is potential for AI to go very, very bad. The worst outcomes, in my view, are those which negatively affect human dignity, autonomy or health. A good example of AI going bad is the use of machine learning to predict whether an inmate is likely to commit more offences once they're released from jail, and therefore to determine whether they should be released early. There are several studies showing that these machine learning tools are unjustly biased against certain demographics. This use of AI is unacceptable. Similarly, one of the worst possible outcomes would be the deployment of Lethal Autonomous Weapons Systems (known as LAWS). Allowing AI to decide autonomously who to kill in any scenario is a terrible outcome. There are some organisations, like the Algorithmic Justice League, that are fighting to prevent these worst outcomes from being realised.
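To make "unjustly biased" concrete: researchers auditing these tools often compare error rates across demographic groups. The sketch below uses invented toy data (not figures from any real recidivism tool) to compute the false positive rate per group, i.e. how often people who would not have reoffended are wrongly flagged as high-risk.

```python
# Illustrative toy audit; the data and group names are invented.
from collections import defaultdict

# Each record: (demographic group, actually reoffended?, predicted high-risk?)
records = [
    ("group_a", False, True),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

false_positives = defaultdict(int)  # non-reoffenders wrongly flagged, per group
negatives = defaultdict(int)        # non-reoffenders in total, per group

for group, reoffended, flagged in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {fpr:.0%}")
# A large gap between groups means the tool wrongly flags harmless people
# from one group more often than the other -- the disparity such studies report.
```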

How will we know when we can fully trust AI? Surely it can make mistakes just as humans do. For example, if AI replaces human doctors and is used to diagnose people, how will we know we can trust it?

Ultimately, there is no litmus test for absolute trust in AI. AI makes mistakes. Many AI systems work as a 'black box', which makes it particularly hard to figure out how the AI has come to a certain conclusion. Similarly, even when an AI is built so that its decision-making process is clear, there is a big difference between an AI making a decision in experimental conditions and in the real world. I think designers of AI can earn our trust by telling us what negative consequences, intended and unintended, they foresee the system causing, and what measures they've put in place to mitigate those harmful outcomes. They should do so publicly. This is what Yoti has done with its age estimation algorithm, Yoti Age Scan.
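For readers curious how anyone inspects a 'black box' at all: one common technique is permutation importance, which shuffles one input at a time and measures how much the model's accuracy drops on held-out data. The sketch below applies it with scikit-learn to a synthetic dataset; it is a generic illustration of the technique, not a description of any system mentioned above.

```python
# Generic illustration of probing a black-box model with permutation
# importance; synthetic data, no connection to any real diagnostic system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is a mild 'black box': hundreds of trees vote, so no
# single human-readable rule explains an individual prediction.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much held-out accuracy
# drops; a big drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: accuracy drop when shuffled = {importance:.3f}")
```

Techniques like this only reveal which inputs a model relies on, not why; that is one reason public documentation of foreseen harms and mitigations matters alongside technical inspection.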