
All about AI – with Robert Elliott Smith

We had the opportunity to virtually sit down with Robert Elliott Smith – an expert in AI and machine learning with over 30 years’ experience. Robert spoke to us about what the term ‘AI’ truly means, as well as touching on his own career path and interests. As well as being a tech expert and entrepreneur, Robert is also an author, and his new book, ‘Rage Inside the Machine’, challenges the long-held assumption that technology is an amoral force.

What is ‘AI’?
“The term ‘artificial intelligence’ does not have an agreed-upon or stable meaning. The term first appeared after an important scientific workshop in the 1950s, replacing the term ‘cybernetics’, which remained associated with systems engineering – the way most people thought about automated machines at that time. It quickly transformed into a term describing all sorts of computing, and precisely what is meant by ‘AI’ has moved around ever since. From the 1970s, the term almost exclusively described techniques using rules and logic, which we would most likely describe as ‘complicated programming’ today. That strand of AI failed to live up to expectations and proved commercially unviable by the ’80s, leading to what has been called ‘The AI Winter’, a time when just putting ‘AI’ on a research proposal could get it rejected.

All of that changed with the advent of the Internet and the ubiquitous computation it brought into everyone’s lives. This revived AI as a description for a completely different technology, through two main effects: the first was the dramatic drop in the price of computer processing, and the second was the ubiquitous availability of data. Together, these made machine learning (ML) a commercial goldmine for many companies. Now when people say AI, they almost always mean something involving ML.”

How have you forged your career path?

“My interests have always spanned multiple domains, but I ended up getting three degrees in engineering, including a master’s and PhD in Engineering Science – though my dissertation research was in artificial intelligence and machine learning. When I became a professor at The University of Alabama, I built my research agenda around engineering applications of these technologies. While teaching and researching at Alabama, I built a consulting business, mainly in aerospace and defence, with some work in finance and other domains. I was awarded tenure in 1996, then took a sabbatical in the UK, which led to me becoming Director of a university research centre in AI, and to my permanent residency in the UK. I continued consulting and, drawing on that experience, helped create a product and company that became BOXARR, where I was CTO as it grew from three guys with an idea into a business serving blue-chip clients globally. As BOXARR grew, I kept a part-time academic practice as a Senior Research Fellow in Computer Science at University College London, where I’ve advised many PhD students, primarily on financial applications of AI and social media analysis. In 2019, I wrote a book about AI and its effects on society, which was shortlisted for UK Business Book of the Year, and I started lecturing on ‘AI for good’. In 2020, I decided to dedicate myself more fully to that mission. I’m now working with a portfolio of companies and organisations, including Mirza (which is using tech to help close the gender pay gap) and We and AI, a charity fighting racial bias in AI applications.”

How can you reduce AI-driven bias? 

“First, you have to understand that AI is always biased! In solving complex problems statistically, one has to choose a bias, or else all hypotheses have equal value. Usually, the selected bias is something like ‘points that are close together in the numerical data we have are similar’. The point is that the selection of what data to gather, how we gather it, how much we gather, how it’s represented numerically, what we mean by ‘close together’, and other factors create a set of biases that are fundamental to the decisions any AI makes.
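
To make that concrete, here is a minimal sketch (our illustration, not code from Robert) of the ‘close together means similar’ bias in action: a k-nearest-neighbours classifier, whose only notion of similarity is distance in the chosen numerical representation.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Label a query point by majority vote of its k nearest neighbours."""
    # 'Closeness' here is Euclidean distance -- one choice of bias among many.
    distances = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(distances)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: two clusters. The algorithm 'knows' nothing but distance.
train_X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array(["A", "A", "B", "B"])

print(knn_predict(train_X, train_y, np.array([0.1, 0.0])))  # -> A
```

Every choice in that sketch – the features, the distance metric, even the value of k – is a bias baked into the decisions the classifier will make.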

Machine learning algorithms are all about using such biases to generalise from existing data to unforeseen circumstances. In any complex problem, particularly the complex problems of human beings, generalisation is, at its core, a form of prejudice: that is, to pre-judge, based only on a generalisation of the data you’ve seen in the past.

Biases introduced in data gathering and representation often align with current social biases, because of (often unconscious) assumptions on the part of AI/ML designers, and the inevitable reality that biases are necessary for purely mechanical, data-driven generalisation. Thus, the prejudices of algorithms often align with typical intolerances, for instance, those based on gender, race, religion, disabilities, etc.
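
As a toy illustration of this point (our example, with made-up numbers, not Robert’s), the sketch below fits a trivial one-dimensional classifier on data gathered mostly from one group, and shows that the resulting decisions are noticeably worse for the under-represented group.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(group_shift, n):
    """Draw n examples per class; this group's feature values are offset by group_shift."""
    x0 = rng.normal(0.0 + group_shift, 1.0, n)  # class 0
    x1 = rng.normal(2.0 + group_shift, 1.0, n)  # class 1
    return np.concatenate([x0, x1]), np.concatenate([np.zeros(n), np.ones(n)])

# Data gathering is skewed: 500 examples per class from group A, only 10 from group B.
xa, ya = sample(group_shift=0.0, n=500)
xb, yb = sample(group_shift=1.0, n=10)
X, y = np.concatenate([xa, xb]), np.concatenate([ya, yb])

# A trivial 'model': classify by the midpoint between the two class means.
threshold = (X[y == 0].mean() + X[y == 1].mean()) / 2

def accuracy(x, labels):
    return ((x > threshold) == labels.astype(bool)).mean()

# The learned threshold suits group A; group B pays for being under-sampled.
ta, la = sample(group_shift=0.0, n=1000)
tb, lb = sample(group_shift=1.0, n=1000)
print(f"group A accuracy: {accuracy(ta, la):.2f}")
print(f"group B accuracy: {accuracy(tb, lb):.2f}")
```

Nothing here is malicious: the skew enters at the data-gathering step, and the purely mechanical generalisation carries it through to the decisions.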

So, how do we reduce AI-driven biases? First, we have to realise that while AI can be complex in scale and speed, it is actually quite simple in its purely numerical and statistical view of human reality. This is fundamentally different from human intelligence. 

Socially, this is not new. Consider the fact that even though systems of law have been developing for millennia, the law is often described as a blunt instrument. For important decisions that affect people’s lives, our legal systems fall back on the decisions of vetted human beings, like judges and juries, to overcome that bluntness. When machines make important decisions that affect people’s lives, we must keep human beings in the loop, or at least have fallbacks to human decision making at every stage: from the selection of what data to use to adjudicating the final decisions generated by machines. We must also ensure the diversity of the people involved, both to avoid unconscious biases and to provide the diversity of cognitive perspectives that is key to effective innovation in any field.
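
One common way to realise this in software (a minimal sketch of the general human-in-the-loop pattern, not a specific system Robert describes) is to automate only high-confidence decisions and route everything ambiguous to a human reviewer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "model" or "human"

def decide(score: float, threshold: float,
           human_review: Callable[[float], str]) -> Decision:
    """Automate only confident decisions; escalate the rest to a person."""
    if score >= threshold:
        return Decision("approve", "model")
    if score <= 1 - threshold:
        return Decision("reject", "model")
    # The ambiguous middle ground is exactly where human judgement belongs.
    return Decision(human_review(score), "human")

# Hypothetical usage: a loan decision with a cautious 0.9 threshold.
print(decide(0.95, 0.9, human_review=lambda s: "approve"))  # decided_by='model'
print(decide(0.55, 0.9, human_review=lambda s: "reject"))   # decided_by='human'
```

The threshold is a policy choice, not a technical one: the more consequential the decision, the wider the band that should fall back to human judgement.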

There’s a growing community of AI scientists, engineers, and ethicists developing frameworks for the design of human-machine systems that counter the blunt biases and prejudices of AI, and I’m trying to be a part of those efforts.”

Why did you decide to write Rage Inside the Machine?

“For years I had been gathering material on the history of AI with the intention of writing a book, but it was never quite right. In parallel, I had been writing short fiction and some autobiographical stories as part of a performance prose group that I helped run for a number of years. As AI became a hot topic, more obviously affecting everyone’s lives, and instances of algorithmic prejudice became rife, I knew I had to pull something together. The final trigger came from my wife: she knew of a personal story I had written about how, as an elementary school student in Alabama, I had witnessed federally mandated anti-segregation bussing bring African-American kids into my all-white neighbourhood, which changed my life.

She said that story should go in my book about AI, which I found bizarre - but when she explained that the book was really about prejudice, and that my whole, human perspective on that subject was essential to making the subject resonate, the book came together, and I was able to write it. Like the key to using AI more responsibly, the key to writing my book was firmly reintroducing a human element.”

If you would like to buy a copy of Robert’s latest book, you can do so here.