Episode 286: Ethics and Bias in Artificial Intelligence (AI) Technology

This episode of the Workology Podcast is part of our Future of Work series powered by PEAT, the Partnership on Employment & Accessible Technology. PEAT works to start conversations around how emerging workplace technology trends are impacting people with disabilities.

Episode 286: Ethics and Bias in Artificial Intelligence (AI) Technology with Merve Hickok (@HickokMerve)

I spoke to Merve Hickok, founder of AIEthicist.org and a Business Process Analyst at High Sierra Industries. Merve is an independent consultant, lecturer, and speaker on AI ethics and bias and their implications for individuals, organizations, and society. She is also a Senior Researcher at the Center for AI & Digital Policy and has over 15 years of senior-level global experience, with a particular focus on HR technologies, recruitment, and diversity & inclusion. She is a SHRM Certified Senior HR Professional and a certified HIPAA security expert.

Merve has spent much of her career working in technology implementation, global recruitment technology, and HRIS implementation, and she became interested in AI when it was just becoming part of these programs and services. AI is now in so much of what we do in HR, and there seems to be a rush to adopt it, so it’s important to consider how AI technology impacts all employees in a variety of ways.

Merve said that “we need to be careful about what these technologies are doing, we need to understand the logic behind these technologies. Not all AI technologies are bad; there are some that are doing really great things, but there are some that are based in pseudo-science. What does it mean that AI can predict someone’s future success by analyzing their facial expressions, tone, pitch of voice? I think a lot of employers are adopting AI without understanding what’s behind it.”

“What does it mean that #AI can predict someone’s future success by analyzing their facial expressions, tone, pitch of voice?” @HickokMerve #WorkologyPodcast #EthicsAndBias #AIT

“It’s impacting all of us, but there is an additional burden on people with disabilities, and 15% of the world’s population have some kind of disability. A lot of these technologies are made by non-diverse teams,” said Merve. “One of my focus areas is helping HR professionals understand this, as well as looking at it from a fairness and equity perspective. What should concern us is that facial recognition and analysis models don’t work accurately for certain groups. We have seen a number of studies by government agencies, researchers, and scholars showing that facial recognition software performs significantly worse for those with darker skin, Asians, women, and those with disabilities. It gets even worse when you’re at the intersection of these groups.”

“Emotion analysis software claims to analyze your face to predict job success...the assumptions behind this are very problematic.” @HickokMerve #WorkologyPodcast #EthicsAndBias #AIT

“A lot of these tools suggest that they can customize these models based on your current population. We know that in a lot of companies, people with disabilities are not equally represented to start with. Unfortunately, a lot of employers measure success on things like longevity in the job, emotions, etc. What happens is that when you don’t fit into these norms to start with, you’re considered an outlier in these systems and constantly being subjected to technologies that still have serious shortcomings. These are the things that I am really concerned about when it comes to AI in the workplace in general, but specifically for people with disabilities.”

How AI Technology Can Hurt Your Company Culture

Merve explained what can happen at organizations that use AI to gather data on employees. “You can tear down the trust you have with employees when you start surveilling and collecting data [through AI]. Again, if we don’t question the science or ethics of these tools and we instead question employees about interactions and behavior, you reduce employees to data points. What happens then is that your culture turns from a cooperative one to a competitive one. Instead of working towards goals for your organization, employees start to game the system in order to protect themselves.”

I asked Merve how we can have conversations about AI and ethics with our company leaders. She said that “there is a direct correlation between HR practices and the bottom line, whether that is in the shape of the cost of bad hires or eliminated candidates (who might also be consumers). First, as HR practitioners we need to articulate that connection well and understand our risks and benefits. We always talk about HR being a business partner; whether interest in AI technology is initiated by HR or the business, HR needs to see the bigger picture and how it’s impacting the organization and its culture. Are you bringing in less diverse people? Are you acquiring biases?”

Merve emphasized that “not all AI technology is bad; what I’m warning against is treating AI like an oracle that cannot be wrong. At the end of the day, it’s technology created by humans. Company leaders need to understand that these tools could be creating biases and possibly discriminating against candidates. Do these outcomes align with your company values? It impacts the core of the company.”

Technology can be a bridge or it can be a fence. AI has come a long way in the past decade, and we see it on our career sites with chatbots, in automated emails from our ATS, and when we create candidate assessment tools…it’s the “grocery store self-checkout,” but for HR. As much as we want to implement new tech that saves time in recruiting and hiring, we have to pause to consider what impact that technology will have on all employees, including those with disabilities. I really appreciate Merve’s insights and expertise on this special podcast episode for PEAT.

Listen to the entire podcast episode to learn more about AI technology in HR, what to ask tech vendors about AI, and how AI impacts your employees’ privacy.

Connect with Merve Hickok

RECOMMENDED RESOURCES

– Merve Hickok on LinkedIn

– Merve Hickok on Twitter

– AIethicist.org

– AI Ethicist | Merve Hickok

– PEATWorks

– Episode 265: Why We Need More Ethics as Business Leaders

– Artificial Intelligence and How it Can Revolutionize Human Resources

– How to Leverage Business Ethics in Human Resources

– Episode Transcript

How to Subscribe to the Workology Podcast

Stitcher | PocketCast | iTunes | Podcast RSS | Google Play | YouTube | TuneIn

Find out how to be a guest on the Workology Podcast.

Jessica Miller-Merrell

Jessica Miller-Merrell (@jmillermerrell) is a workplace change agent, author, and consultant focused on human resources and talent acquisition, living in Austin, TX. Recognized by Forbes as a top 50 social media influencer, she is also a global speaker. She’s the founder of Workology, a workplace HR resource, and host of the Workology Podcast.
