Episode 286: Ethics and Bias in Artificial Intelligence (AI) Technology


Welcome to the Workology Podcast, a podcast for the disruptive workplace leader. Join host Jessica Miller-Merrell, founder of Workology.com, as she sits down and gets to the bottom of trends, tools and case studies for the business leader, H.R. and recruiting professional who is tired of the status quo. Now, here’s Jessica with this episode of Workology.

Episode 286: Ethics and Bias in Artificial Intelligence (AI) Technology with Merve Hickok (@HickokMerve)

Jessica Miller-Merrell: [00:00:26.19] This Workology Podcast is sponsored by Ace the HR Exam and Upskill HR. This episode of the Workology Podcast is part of our Future of Work series powered by PEAT, the Partnership on Employment & Accessible Technology. PEAT works to start conversations around how emerging workplace technology trends are impacting people with disabilities. Today, I’m joined by Merve Hickok. She’s the founder of AIEthicist.org and a business process analyst at High Sierra Industries. Merve is an independent consultant, lecturer and speaker on AI ethics and bias and its implications on individuals, organizations and society. She’s also a senior researcher at the Center for AI and Digital Policy and has over 15 years of global senior experience with a particular focus on H.R. technologies, recruitment and diversity and inclusion. She is a SHRM senior certified professional and a certified HIPAA security expert. Merve, welcome to the Workology Podcast.

Merve Hickok: [00:01:27.51] Thanks so much, Jessica.

Jessica Miller-Merrell: [00:01:28.92] I feel like you’re one of the few people that I’ve ever met that has experience in H.R., but is also really knowledgeable and experienced in artificial intelligence. I’m really excited for our conversation today.

Merve Hickok: [00:01:42.90] Likewise, I’m really excited to be here today. And you’re right, this is still a very small field. So I feel like I know just a handful of people who are interested in or experienced on both sides as well.

Jessica Miller-Merrell: [00:01:57.24] Well, AI is touching so much of what we do in human resources and really the human capital space. I wanted to ask you a little bit more about your background. How did you get involved in the AI work and digital policy?

Merve Hickok: [00:02:11.61] A bit of background. I was working for Merrill Lynch in Turkey. I was country H.R. manager after the acquisition by Bank of America, so I worked a lot on some of the technology implementations and platform changes. That was a very generalist role, but I had insight into a lot of the technologies. The company then asked me to move to London to take over a brand new role, Diversity Recruitment Manager for our graduate hires. So I had to build and execute the strategy on diversity recruitment across the colleges in Europe, the Middle East and Africa. That role also required me to be the admin for our recruitment technology, our ATS at the time. Going around these colleges, talking to students, trying to build these partnerships for a more diverse future workforce, I was hearing a lot about the obstacles they were running into. You know, in addition to being female or being a minority, people like students with disabilities, et cetera, were interested in the industry, but they were running into issues with some of these technologies and the practices that we had in place back then, not only at the bank but at our competitors as well.

Merve Hickok: [00:03:38.94] I started realizing that there had to be a more thoughtful approach to this. And I’m a technology optimist. I love technology. So when working with these technologies, with the students, I started going more into A.I. Back then there were maybe a handful, if even that, of H.R. recruitment technologies using AI, so I started looking into those, and looking more widely into AI and bias issues and their impact on society and social justice. And one thing led to another. Like you mentioned, I got SHRM certified, then I came to the U.S., and now I’m wearing multiple hats. I’m a business process analyst at High Sierra Industries. That organization is a 40-plus-year nonprofit in Nevada developing and delivering learning systems for people with disabilities. And I’m also involved with the Center for AI and Digital Policy as a senior researcher. So everything has been coming together. I was able to interact with a lot of people with diverse perspectives and diverse backgrounds. That was probably the journey as well.

Jessica Miller-Merrell: [00:04:57.90] I love hearing your background experience. And you’re absolutely correct, there are so many use cases for artificial intelligence, and I feel like it’s more and more part of our conversation as HR leaders. In so many HR technology demos or briefings that I’m sitting in on, they’re talking about A.I., but it’s really sort of a gray, fuzzy area for a lot of HR pros. I want to dive right into this topic and ask you: as H.R. pros are having these kinds of conversations with HR technology vendors, what should we be concerned about when it comes to people with disabilities and using AI technology in the workplace?

Merve Hickok: [00:05:38.78] You’re absolutely right. I see new products coming out every single day, and look, there’s always the next shiny thing or brand new thing. But we really need to be careful about what these technologies are actually doing, and above everything we need to understand the logic and science behind them. Not all A.I. technologies in the recruitment space are bad. There are some really good examples out there, so I don’t want to generalize. But there are also some that are based on pseudoscience, on really flawed science, or on science and technology that needs to develop further to actually work better. So, you know, what does it mean that they can predict a person’s success at a role by looking at their facial features or analyzing their sentiments? How would you feel if someone made a decision about your character and future success just by analyzing the way you look or the tone or pitch of your voice? A lot of these companies come to employers promising a better, faster, cheaper practice, and I think a lot of employers are jumping on the bandwagon without understanding what is actually behind it. But these technologies come with a definition of what is normal, what is acceptable, what is worthy. This goes for all candidates and employees, not only for people with disabilities. Right? So it’s impacting all of us.

Merve Hickok: [00:07:09.50] But there is an additional burden on people with disabilities. We forget that there are now more than one billion people with some kind of disability in the world, 15 percent of the world’s population. But a lot of these technologies are made by teams that are not diverse. They still try to fit people into their own assumptions, their own norms of what is normal and what is acceptable, and continue to refuse to see how each individual can contribute to the workplace. So that’s one of my focus areas: trying to help H.R. professionals understand this and the possible issues, and also looking at it from a fairness and equity perspective. Now, you mentioned what should concern us. First of all, we know that facial recognition and analysis models don’t work accurately for certain groups. We have seen a number of studies by government agencies and by different researchers and scholars showing that facial recognition performs significantly worse for those with darker skin, Asians, women and those with disabilities. If you’re not represented in the data set powering those models, you’re not even accurately recognized as a person, let alone given a proper assessment. And it gets even worse when you’re at the intersection of these groups.

Merve Hickok: [00:08:37.22] So if you’re a woman with darker skin or an Asian person with a disability, for example, you’re hitting a number of these groups. And facial analysis, as distinct from the facial recognition I mentioned, is as a whole totally pseudoscience. I could talk for days about some of these technologies. What should also concern us are tools that use natural language processing: they analyze what you write and what you say, the language and the text behind it. But again, they’re not able to capture things correctly if you’re speaking with an accent or if you have a speech impediment, et cetera. They’re also not good yet, and I don’t know if they will ever be, at understanding the context of what is said or the nuances or analogies that you might use in an interview. And then there is the emotion analysis software that claims to analyze your face to predict job success, whereas we know that there is no universal way to express our emotions, and that we might be feeling one way while our face says something else. So with a lot of this technology, when you break it down into smaller pieces, you see the assumptions behind it are really problematic. They also try to extrapolate from the samples they have to the wider community of people with disabilities, so even if you had some people with disabilities in your dataset, that’s not reflective of the range of possibilities.

Merve Hickok: [00:10:10.83] We know for a fact that some disabilities manifest themselves very differently across individuals. If you have time, I would like to come back to this. It’s about customizing the models according to your company. A lot of these tools suggest to employers that they can customize their models according to your own current employee population. So they ask: who do you define as successful employees, and what features would you like to highlight and optimize? But we know that in a lot of companies, people with disabilities are not equally represented to start with. And the way that, unfortunately, a lot of employers measure success, say, longevity in a job, no break between jobs, promotions, et cetera, might not really translate to the realities of people who might have to take time off, say, for medical reasons. So what happens is, when you don’t fit into those norms to start with, you’re considered an outlier, an error in the system, and you’re also constantly being subjected to these technologies, which still have serious shortcomings. So those are the things that really concern me when it comes to AI technology in the workplace in general, but also for people with disabilities.

Jessica Miller-Merrell: [00:11:31.08] It’s quite the exhaustive list. You mentioned a lot of different types of tech, and I feel like a lot of it was recruiting-based. We have job matching, we have video interviewing. But what about other types of technologies using artificial intelligence in other parts of H.R. or the workplace that we maybe haven’t talked about? What would you call out to make us aware that AI is being leveraged in that tech?

Merve Hickok: [00:11:58.02] Yeah, absolutely. For me, the most troublesome uses other than recruitment are social media background checks and employee surveillance tools. Social media background checks are illegal in some states and some countries, but legal in others. What happens is an employer can run a social media background check on current employees, as well as candidates if they want, and receive scores from these tools. These are also based on text and sentiment analysis. But what happens is you’re not only crossing a boundary with your employees, peeking into their private life outside of the workplace, but also trusting these tools to be accurate assessors of a person’s tendencies, whether political or social. You might also find things out about your employee which you wouldn’t otherwise know, which might impact your judgment of that employee. So for me, that’s you actively going after your employees to get more information. The other piece is employee surveillance. We’ve started to see these practices either in the workplace or for those working from home. These might be cameras that monitor your every move at work; they might be tools that monitor your emails or chats and do sentiment analysis of your interactions; they might be keystroke loggers or screen capture technology that captures what you’re doing when you’re using a company device. It might even be in the form of a training software that logs your starting and finishing the training and how many times you interacted with it, et cetera. Or it might be video conferencing.

Merve Hickok: [00:13:56.26] The latest was a video conference tool that was analyzing your face during meetings and trying to gauge your engagement. What’s so problematic about this for me is, one, like I said, you’re not respecting the employee’s right to privacy, which became even more of an issue during the pandemic. We now have managers as part of our homes, interacting with our home environment and family. We are part of our children’s classes, the teachers are being surveilled, and we are shaping our behaviors because of these technologies. Someone is watching you and you don’t have any control. There’s a power imbalance between you and your employer, so you can’t really fight against it, and the next best thing is shaping your behavior around it. You also tear down your trust relationship with your employees when you start surveilling them and collecting data. Again, we don’t question the science or relevancy or ethics of these tools; we just question the employee who’s working to move your company forward. You start quantifying this behavior and the interactions between employees, and you reduce employees to just data points. What happens then is your culture turns from a cooperative one into a competitive one. You also risk the system being gamed: your employees, instead of working towards the goals of the organization and having teamwork, start trying to game the system to protect themselves. So we really need to question why we’re using these technologies, what the end result is, and how we are shaping behaviors. Those two are, for me, some of the most problematic ones.

Break: [00:15:46.96] Let’s take a reset. This is Jessica Miller-Merrell, and you are listening to the Workology Podcast, sponsored by Ace the HR Exam and Upskill HR. Today we’re talking with Merve Hickok about ethics and bias in A.I. technology. This podcast is part of our Future of Work series with PEAT, the Partnership on Employment & Accessible Technology.

Break: [00:16:07.24] The Workology Podcast, Future of Work series is supported by PEAT. The Partnership on Employment and Accessible Technology. PEAT’s initiative is to foster collaboration and action around accessible technology in the workplace. PEAT is funded by the U.S. Department of Labor’s Office of Disability Employment Policy, ODEP. Learn more about PEAT at Peatworks.org. That’s Peatworks.org.

How AI Technology Can Hurt Your Company Culture

Jessica Miller-Merrell: [00:16:36.09] This has been so helpful for me, and I’m thinking of all the people listening now who are thinking about the different types of workplace technologies they’re leveraging beyond recruitment. You mentioned the social media background checks, and I think most of us are doing employee surveys. And, you know, the world has shifted so quickly for us over the last 14 months or so. I wanted to ask you about how we should be talking to our H.R. technology vendors, maybe those we’re already working with or are considering working with, when implementing new artificial intelligence technology for hiring and human resources. What questions should we ask? And how do we make sense of whether their actions, activities or technology are ethical?

Merve Hickok: [00:17:24.51] Thank you for that question, because I do this a lot with the companies that I consult with. This is where proper due diligence comes into play. Right? So first of all, we need to remember that as employers, we’re still carrying the risk even though we might have outsourced the process to a technology. When you’re onboarding these tools, it’s not the vendor that is going to be in trouble. If the tool you selected is disparately impacting certain groups or has been built to make decisions on protected classes, for example, you as the employer still carry the liability. That’s why I cannot stress enough that employers, as clients, have to be really diligent and ask the right questions. In terms of what questions to ask, I would say first things first: ask the vendor to explain the AI model or the decision-making process to you. Is it a black box where even they don’t understand how the technology works? Or is it an explainable model where they can say, OK, these are the features we use to make a decision, the model is based on this criteria, this is how the data flows, and this is the kind of data we use? Ask them to walk you through that process, and don’t take “oh, it’s IP, it’s protected information” for an answer. It’s not a trade secret. You need to feel comfortable that the vendor themselves can actually explain their model.

Merve Hickok: [00:19:00.39] Second, going back to what I said earlier: is it pseudoscience or not? Why would you use something that has no science behind it, or flawed science behind it? Another question is, are the outcomes similar across different groups, and how does the vendor ensure that they are? Are, say, white males getting better results or better outcomes when they’re subject to this model than, say, a woman with a disability? Look at what the outcomes are and how they’re spread across different groups. Ask them what kind of quality assurance mechanisms and safeguards they have, and how they ensure their model is robust. I’m pretty involved in building an audit framework for these recruitment technologies, so I would ask them: are they being reviewed? Do they have third-party reviewers coming in and doing these checks, et cetera? I mean, I could walk through these questions for a whole day. But the bottom line is that employers, like I said, need to remember that they carry the risk, not the vendor. So if you don’t have the internal capacity to ask the right questions, then get external support for your procurement project. Have a trusted partner to help you through that process. It’s not worth taking the huge risk and possibly alienating your candidates and contributing to injustices in society as well.
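To make the “outcomes across different groups” question concrete, here is a minimal sketch of the kind of adverse-impact check an employer or auditor might run on a selection tool’s output. It is not from the episode or any vendor’s toolkit; the column names and data are hypothetical, and the 0.8 threshold reflects the EEOC’s informal “four-fifths rule” for flagging potential disparate impact.

```python
# A minimal sketch of an adverse-impact ("four-fifths rule") check on the
# outcomes of a hiring tool. Everything here is hypothetical: the column
# names, the data, and the assumption that per-candidate results exist.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Share of candidates the tool marked as selected, per group."""
    return df.groupby(group_col)[selected_col].mean()

def adverse_impact_ratios(rates: pd.Series) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    return rates / rates.max()

# Hypothetical screening results exported from a vendor's tool.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(results, "group", "selected")
ratios = adverse_impact_ratios(rates)

# Ratios below 0.8 are commonly treated as potential evidence of
# disparate impact under the informal four-fifths rule.
flagged = ratios[ratios < 0.8]
print(rates)    # A: 0.75, B: 0.25
print(flagged)  # B's ratio of 0.33 falls below the 0.8 threshold
```

A finding like group B’s here is exactly the kind of output to bring back to the vendor alongside the quality-assurance and third-party-review questions above.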

Jessica Miller-Merrell: [00:20:43.49] Thank you for that list. I think an audit is incredibly important. One of the other areas I wanted to ask you about: not only should we be talking to the HR technology companies, potential vendors and current vendors, but what about educating and broaching these subjects on the dangers of artificial intelligence technology with our company leaders outside of H.R.? How do we have those conversations? What do you recommend?

Merve Hickok: [00:21:14.31] Oh, the million-dollar question. There is a direct correlation between HR practices and the bottom line, right? Whether that’s in the shape of the cost of bad hires; alienated candidates, who, by the way, for certain industries might also be your consumers, so you’re alienating a candidate as well as a consumer; or violations of the law, penalties, et cetera. First, as H.R. practitioners, we need to articulate that connection well and understand our risks as well as the benefits. We always talk about H.R. being a business partner, or a partner to the business. Whether the interest in this AI technology is initiated by H.R. or the business, H.R. needs to see the bigger picture and how it’s impacting the organization and its culture. Are you bringing in less diverse people? Are you amplifying biases? Are you getting the best candidates with this technology? Not all AI technology is bad. What I’m warning against is treating AI as an all-knowing oracle that cannot be wrong. At the end of the day, it uses data that is created by humans, about humans, and collected by humans. These tools usually promise that they are going to make your hiring better, faster, cheaper. They can certainly deliver on the faster and cheaper side. But when we’re broaching the subject with our company leaders, they need to understand that these tools might not always be getting the better candidates, that they are possibly making a trade-off, amplifying biases or possibly discriminating against candidates, and decide accordingly. Is that a risk that you want to take? How is this going to impact your workplace culture? Do the outcomes align with your company’s values? So it’s not only about recruitment; it’s not a single technology sitting by itself, independent of the company. It impacts the core of the company. And I think touching on those points of risks, benefits, alignment and long-term impact is the crucial thing.

Jessica Miller-Merrell: [00:23:28.95] Awesome. Such good, important points. The other thing I was thinking of as you were talking was training. Where can H.R. professionals go to educate themselves and their teams about ethical artificial intelligence and its potential for bias? Do you have any recommendations on where they should go to educate themselves: training, learning, reading, growing, any of those?

Merve Hickok: [00:23:55.45] Absolutely. The first thing I would say is don’t be intimidated by it. When we say AI, a lot of people think, that’s technology, and I’m not a computer scientist or an engineer. You don’t need to code or build an AI model yourself to ask the right questions and understand the implications of this technology’s impact on you on a daily basis. It’s not only about H.R. You really need to understand the impact of A.I. in general and what that means for you and for your family as well. So training and understanding this is really, really crucial. There are now a few online trainings about this topic that are geared towards non-technical audiences. I would say definitely follow some of the names in the field who are discussing these issues to get an initial understanding. If you have time, join some advocacy groups working on these issues. There are general AI groups that are working on or discussing AI and bias, and there are smaller groups working on HR and bias, like the ones that I’m involved in. But also, I’ll do a shameless plug here: I have a whole website, AIethicist.org, that is built for those interested in these topics but who don’t know where to start yet. That was one of my frustrations when I started getting into AI and bias: I didn’t know where to start, and I was going down rabbit holes. There weren’t any sites that would help. So I curated a number of papers that are rather non-technical and will help you start understanding some of these issues and debates, and I constantly update it. I also have a self-paced online training on AI and bias and ethical decision making. But at the end of the day, don’t just look at this for HR. It impacts you 24/7, you and your family, and it’s crucial that we understand these technologies.

Jessica Miller-Merrell: [00:26:15.95] We will make sure to include in the show notes your website, which is AIethicist.org, correct?

Merve Hickok: [00:26:23.93] Correct. Yes.

Jessica Miller-Merrell: [00:26:25.19] AIethicist.org. We will include that, and also the self-paced learning. I feel like now is the time to educate yourself. I encourage HR leaders to become subject matter experts. You don’t have to know it all, but be strong, confident and knowledgeable in this area, because you’re not only serving HR; you can be a point of contact for the entire organization as others have questions about artificial intelligence. And Merve has some really great resources that I encourage you to check out. One other question I wanted to make sure we asked: what does a healthy balance of ethical policies and artificial intelligence look like for HR? Do you think we should have a written policy in our employee handbooks and on our website that talks about the ethics around our use of artificial intelligence? Are we going in that direction? I would love to hear your thoughts.

Merve Hickok: [00:27:30.28] Jessica, for me, it’s always about walking the talk. You might have elegantly written policies, but the bottom line is, are you actually practicing them? You might have a policy about non-discrimination, but have you implemented systems that produce discriminatory results because you haven’t done the due diligence first? How is your hiring contributing to the company’s culture and composition? Like we mentioned, it’s really about vetting those policies against actual practices. And it’s not only about recruitment. What are the other policies and practices that might result in either a growth environment or a toxic work culture, like your compensation, promotion and development opportunities? I always say you can use AI in a very positive way to understand your company first. Start with that. Use your data to understand: are there any wage gaps? What kind of people are being promoted? What kind of people are being given development opportunities? What is the composition of your company and your applicants versus those who are exiting the company, and what are the reasons for that? Try to understand that first before you try to fix, or put something on top of, what might already be a problematic issue. You know, we’ve seen a number of big tech companies establishing their ethical AI policies. It’s on their websites. And yet they are constantly collecting consumer data and platform data, using it to manipulate people, selling their data, whatever. Just having a policy is not enough. You really have to show that you are actually practicing it.
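As one concrete way to act on “use your data to understand your company first,” here is a minimal sketch of the kind of internal pay-gap and promotion-rate summary an H.R. team might compute from an HRIS export. The column names and figures are hypothetical, invented purely for illustration.

```python
# A minimal sketch of an internal pay-gap and promotion-rate summary, in
# the spirit of "use your data to understand your company first." The
# column names and figures are hypothetical, not from any real HRIS.
import pandas as pd

employees = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "A", "B"],
    "role":     ["analyst", "analyst", "analyst", "analyst", "manager", "manager"],
    "salary":   [72000, 75000, 68000, 66000, 98000, 90000],
    "promoted": [1, 0, 0, 0, 1, 0],
})

# Median salary per group within each role, so gaps aren't explained
# away by role mix alone.
medians = employees.groupby(["role", "group"])["salary"].median().unstack("group")
medians["gap_pct"] = (medians["A"] - medians["B"]) / medians["A"] * 100

# Promotion rate per group across the whole company.
promotion_rates = employees.groupby("group")["promoted"].mean()

print(medians)          # analyst gap ~8.8%, manager gap ~8.2%
print(promotion_rates)  # A: 0.67, B: 0.0
```

None of this requires AI; it’s ordinary descriptive analysis, which is rather the point: understand the baseline before layering a model on top of it.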

Jessica Miller-Merrell: [00:29:38.14] I love that. It really gets back to the training portion, educating yourself, educating your team, educating the organization and walking the talk and then following through with the policy and maybe the guidelines and the processes that you’ve put in place.

Merve Hickok: [00:29:55.78] Absolutely. And it’s not only you as an H.R. practitioner. You touch on a great point: be that person that people in your company can come to to ask these questions. Take that lead. But also, you know, this is important for you and your family. How is your kid being impacted at school? How are their grades assessed when they apply to a school or college? How are you being assessed when you apply for credit? The news that you see. So it’s you as a consumer, you as a citizen, you as a parent, you as you, not only as an HR practitioner. AI impacts you now, like I said, 24/7. Even when you’re sleeping, your Fitbit might be collecting information about you that it shares with insurance companies that make decisions about you. Understand those consequences and advocate for a better world. Imagine a better world and advocate for it.

Jessica Miller-Merrell: [00:31:01.72] Well, Merve, thank you so much for taking the time to talk with us today. I’ve learned a lot. We have some really great resources that we’re going to share in the show notes. Where can people go to connect with you and learn more about the work that you’re doing?

Merve Hickok: [00:31:16.63] I’m very active on LinkedIn, so if you want to connect with me on LinkedIn, Merve Hickok, I’m more than happy to. A lot of the work that I do, I publish there as well. If you want to listen to or read any of my previous work, all of it is also included on AIethicist.org. I’m happy to connect, especially with professionals in this field.

Jessica Miller-Merrell: [00:31:43.15] Thank you so much. I love that you’re one of the few, and hopefully soon to be many more, H.R. professionals who are experts in this area. You have such an understanding, I think, that you can really speak to the work that we do every single day as leaders of our organizations working in H.R. So thank you again.

Merve Hickok: [00:32:06.79] Thank you so much, Jessica. Thank you for the opportunity.

Closing: [00:32:10.12] Personal and professional development is essential for successful H.R. leaders. Join Upskill HR to access live training, community, and over one hundred on-demand courses for the dynamic leader. H.R. recert credits available. Visit UpskillHR.com for more.

Closing: [00:32:25.99] Technology can be a bridge or it can be a fence. Artificial intelligence has come a long way in the past decade, and we see it everywhere: on our career sites with chatbots, in automated emails from our ATS, in candidate matching and candidate assessment tools. This AI is the grocery store self-checkout, but for HR. As much as we want to implement this new tech, and I love me some tech, it saves time in hiring and recruiting, I want to caution you: we have to pause to consider what impact that technology will have on our workplaces, including employees as well as people with disabilities. I really appreciate Merve’s insights and expertise on this special podcast episode for PEAT as part of our Future of Work series. Thank you to Merve. Thank you to PEAT. I hope you enjoyed it.

Connect with Merve Hickok

RECOMMENDED RESOURCES

– Merve Hickok on LinkedIn

– Merve Hickok on Twitter

– AIethicist.org

– AI Ethicist | Merve Hickok

– PEATWorks

– HRCI Ethics Credit Course

– Episode 265: Why We Need More Ethics as Business Leaders

– Artificial Intelligence and How it Can Revolutionize Human Resources

– How to Leverage Business Ethics in Human Resources

How to Subscribe to the Workology Podcast

Stitcher | PocketCast | iTunes | Podcast RSS | Google Play | YouTube | TuneIn

Find out how to be a guest on the Workology Podcast.
