Ethics in AI is shaping up to be one of the most complex tech challenges of the modern era. From in-built bias to job displacement and data privacy, are there too many pitfalls to contend with? And is truly ethical AI even possible?
Job Displacement Fears
Job displacement fears aren’t nearly as unfounded as they once were. A CBS News report claimed that AI contributed to 4,000 job losses across the US in May alone.
Whilst more jobs are likely to find themselves at risk, there is no denying the meteoric rise in AI investment and the opportunities it promises. Green tech is a prime example: job losses in brown sectors (coal, oil, steel) could be counteracted by growing opportunities in the renewable energy market, spurred on by developments in generative AI.
The ‘companion or replacement’ debate around AI is highly contextual, and decision-makers must weigh the morality of each choice. Should the focus instead be on workplace augmentation, retention plans, and human/AI collaboration? According to CNBC, 64% of C-suite leaders (from 1,400 survey respondents) plan to hire as a result of adopting generative AI.
For those hoping to break into an exciting, impactful, era-defining line of tech work, the fast-moving AI sector could be the place to start looking.
Data Privacy and Security
The cybercrime underworld is evolving: lower barriers to entry, better tech, accessible targets, and a growing goldmine of data. Naturally, security concerns are high on the agenda for those in the AI space, or rather, they should be.
Despite the evolving global frameworks, AI is notoriously difficult to regulate, partly because it lives at the cutting edge of technology and it’s tough to know what the future risk/reward payoff looks like.
The onus lies with the decision-makers responsible for storing, handling, and securing that sensitive data. Accountability for data violations has historically been lacking, and many corporations have proven willing to pay GDPR fines without breaking a sweat.
That said, a paradigm shift could be on the way. In 2022, the FTC (Federal Trade Commission) took action against Drizly CEO James Cory Rellas, issuing sanctions alongside a warning for other tech CEOs who ‘take shortcuts on security.’ Is it good news for the future of ethical AI?
Breaking the Bias
Bias in AI ranges from the irksome to the downright deadly. It’s one of the most complex and visible challenges in ethical AI today, and addressing it demands constant, intentional mitigation.
From supervised learning to the building of more diverse tech teams, there are reliable methods for reducing bias in AI, but it’s not a quick fix.
At Trust in SODA, we’re committed to reshaping recruitment through the lens of diversity. We can help you find a fulfilling role in the AI space, one that enables you to make a difference.
If you want to learn more about our diversity, equity, inclusion, and belonging approach to hiring or our community-led services, or you’d just like to chat about the current shape of the tech talent market, let us know; we’re here to help. Contact the team here today.