Episode 397: Equitable, Diverse, and Inclusive Extended Reality With Noble Ackerson

Summary: Workology Podcast interview with Noble Ackerson discussing the meaning of equitable, diverse, and inclusive extended reality.

So, for D&I or diversity and inclusion contexts, an organization should have clear guidelines on how they share diversity data as well, how they provide context and the choices that they give. Because, again, it’s not your data, alright? It is your customer’s data. And so that’s sort of the lens at which I look at it. And it’s sort of broadly reaching. It doesn’t, it’s just the human-centered way to talk about data privacy in the context of the people we serve, specifically protected classes, and being inclusive and equitable in how we implement some of these solutions.

Episode 397: Equitable, Diverse, and Inclusive Extended Reality With Noble Ackerson

 

Welcome to the Workology Podcast, a podcast for the disruptive workplace leader. Join host Jessica Miller-Merrell, founder of Workology.com as she sits down and gets to the bottom of trends, tools, and case studies for the business leader, HR, and recruiting professional who is tired of the status quo. Now here’s Jessica with this episode of Workology.

Jessica Miller-Merrell: [00:01:12.80] This episode of the Workology Podcast is part of our Future of Work series powered by PEAT, The Partnership on Employment and Accessible Technology. PEAT works to start conversations about how emerging workplace technology trends are impacting people with disabilities at work. This podcast is powered by Ace The HR Exam and Upskill HR. These are two courses that we offer here at Workology for certification prep and recertification for HR leaders. Before I introduce our guest, I want to hear from you. Please text the word “PODCAST” to 512-548-3005 to ask questions, leave comments and make suggestions for future guests. This is my community text number and I want to hear from you. Today I’m joined by Noble Ackerson, Director of Product for AI/ML with Ventera Corporation. He’s the Chief Technology Officer at the American Board of Design and Research and President of CyberXR Coalition. Noble is an award-winning product executive, an expert in AI, and an advocate for equitable, diverse, and inclusive XR. Noble, welcome to the Workology Podcast.

Noble Ackerson: [00:02:22.80] I’m so honored to be here. Thank you for having me.

Jessica Miller-Merrell: [00:02:25.65] Let’s talk a little bit about your background and how it led to the work you do now.

Noble Ackerson: [00:02:29.97] Yeah, thank you. I currently, as you mentioned, lead product for Ventera. We are a technology consulting firm based out of Reston, Virginia, and we serve federal customers and commercial customers to business units that I, my team, service. I like to say that, within the last few years, we’re now in an AI gold rush, an artificial intelligence gold rush, and quite a few startups, enterprises, consulting firms, what have you, are all selling shovels, right, to help capitalize on this AI trend. But, at Ventera, you know, I founded The Hive. We call it human-centered AI at Ventera with a lot of bee puns because, you know, I like puns, and I lead my teams to build safety equipment with this. If I were to keep this analogy going, safety equipment for my customers, because when things go bad with AI, it goes bad exponentially, potentially exponentially and, at scale, and could adversely impact, you know, our customers’ brand, trust, and of course, their bottom line. Before Ventera, I worked for the National Democratic Institute, which was one of the larger NGOs, non-governmental organizations, international development firms serving about 55 countries out of the U.S. with emerging technology solutions that my, my teams and I built. And this is where I cut my teeth with data privacy and becoming GDPR compliant, if you remember those days, natural language processing and machine learning and engineering solutions, and so on and so forth. So, I had sort of that practical technical experience and sort of delivering some of these solutions out in the world responsibly. And, as if that were not enough, as you mentioned, I also volunteer my time for CyberXR, and we focus a lot on extended reality. That is sort of the cumulation of augmented reality, mixed reality, and virtual reality experiences. But, with CyberXR Coalition, we bring organizations together, companies, content developers, and even legislators together to help build a safe and inclusive extended reality or XR or Metaverse, if I were to dare use the “M” word. Essentially, my background can be found at the intersection of product strategy, responsible emergent tech and data stewardship.

Jessica Miller-Merrell: [00:05:13.65] Thank you for that. I just wanted to kind of level-set so everybody can kind of understand your expertise as a technologist, really leading the forefront in things like XR and artificial intelligence. So, for our HR leadership audience, can you explain what equitable, diverse, and inclusive extended reality, also known as XR, consists of?

Noble Ackerson: [00:05:43.02] It’s a good question. So, a diverse and inclusive XR, I guess it would mean we are considering different abilities, backgrounds, cultures while creating these experiences, and when I say creating these experiences, I also want to include the device manufacturers and how they build their devices to fit, say, a wider range of face types, all the way to the people who create those experiences for the face computers that we wear, right? The VR devices or the AR glasses or the phones that we use, you know, and we want to build these things in a way that is accessible, that welcomes a wider range of individuals regardless of physical abilities or socioeconomic status or even geographic location, right? So, internationalization of the experiences and localization of the experiences being examples. And it also makes business sense. You know, a few years ago I got inspired to help rethink how I will pass on my family history to, to my then, you know, five-year-old. She’s a little older now. And I built a VR experience to tell the story of her ancestors going all the way back to Ghana, West Africa. I had to pull that app off the App Store because a disproportionate number of people got sick. There’s a lot of sort of motion sickness that comes with a lot of movement in VR, and I had to pull that out as an, as an example because I couldn’t really sort of have my daughter sort of travel from one part of the globe to another, which was the thing that was really making people sick because they were just being teleported and they were seeing the world underneath them.

Noble Ackerson: [00:07:39.78] I had to pull the app, right? So, it makes business sense, if you’re potentially harming someone, whether majorly or in small ways, it’s good to be responsible enough to sort of pivot and address some of the needs. So, for business to reach a wider audience, their users have to feel welcomed and valued. Their, their needs need to be considered and addressed in a practical way. So, when we talk about equitable, diverse or inclusion in extended reality, we also want to make sure that content developers and the device manufacturers alike, we’ll call them experience designers, employ and reward internally diverse cultures and diverse teams to to sort of address some of these, what they might consider an edge case, especially if they want to reach as many people as possible. And it’s, just, it just makes good business sense. No point in releasing a product that has disproportionate product failure for one group of people because you never thought about it, right?

Jessica Miller-Merrell: [00:08:48.48] Thank you. And I believe that XR is becoming more utilized in workforces every day. There are so many organizations that are using extended reality in training and development or orientation or even virtual meetings. This is an area that will continue to grow and evolve. I want to move over to another hot technology topic, and this is probably one that HR leaders are thinking more on a daily basis about. Can you talk a little bit about responsible artificial intelligence or AI and maybe how that’s different from a term that I’ve heard a lot called Ethical AI?

Noble Ackerson: [00:09:26.94] I love this question. I love questions where there isn’t one clear answer, right? Because it gets, you know, thought leaders out racing to try to create standards based on their research. Right. And, for me, AI ethics and responsible AI are tightly coupled. One depends on the other. So, start with AI ethics, right? AI ethics are how we adapt our negotiated societal norms into the AI tools we depend on, societal norms that are negotiated through things that we deem acceptable or that our legal frameworks have deemed as societally acceptable. Right? It’s also the guardrails by which these legal frameworks, like the New York AI audit law that got passed in 2021, which I think prohibits employers in New York or at least New York City, it’s a local law, from using artificial intelligence to, or these AEDTs, I believe the automated employment decision tools, to screen candidates or, you know, provide promotions for existing candidates. You know, if they want to do that, they would have to sort of conduct fairness audits or bias audits. And, and have systems in place to protect them. And again, that is based on societal norms that, that or ethical norms that, that we are attributing to the tools, the AI tools that we use. Since society agrees that that data used to decide who should be placed in a job should be free of bias, right? Because we don’t want to be in trouble with the law or we just want to treat everybody fairly, then AI ethics is basically a set of principles that will help us, you know, treat everyone fairly, not, rather than disproportionately benefiting one group versus another.

Noble Ackerson: [00:11:32.62] That’s AI ethics to me. It’s just sort of the principles by which we operate based on societal norms. Responsible AI, on the other hand, is more tactical for me, right? And it inherits from AI ethics or ethical AI. It’s more on how we build solutions based on societal accepted norms, societally accepted norms. So, at Ventera, where I work, I created the AI practice there. And my pillars for responsible AI are, sort of span data governance, making sure that the data that we collect and how we store the data, how we model, you know, how we understand the trained or learned models, predictions, are all understandable in terms, and clear of any bias or fairness issues so that, you know, we’re asking things like, did we test the model with respect to a specific group? And, if we did and if we didn’t need to, to pull in any protected classes, are there any secondary effects, meaning some proxy, there’s a proxy data or metrics that could get us in trouble down the road, right? Those are the things that we sort of think about. So, responsible AI, again, is more practical in how we build things. And, on my team and the teams that I work with and places that I consult and the different avenues that I do, it’s woven into how we build smarts or AI into software, right? Responsible AI is.
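
(An aside for readers who want to see what “did we test the model with respect to a specific group” can look like in practice: below is a minimal, hypothetical sketch that computes selection-rate impact ratios across groups, the kind of figure a bias audit of an automated screening tool typically looks at. The outcomes and group labels are invented for illustration.)

```python
# Hypothetical sketch: selection-rate "impact ratio" per group, i.e. the ratio
# of each group's selection rate to the most-selected group's rate.
# Values well below 1.0 are a red flag worth investigating.
import numpy as np

def impact_ratios(selected: np.ndarray, groups: np.ndarray) -> dict:
    rates = {g: float(np.mean(selected[groups == g])) for g in np.unique(groups)}
    best = max(rates.values())
    return {g: (rate / best if best > 0 else 0.0) for g, rate in rates.items()}

# Toy screening outcomes: 1 = advanced to interview, 0 = screened out.
selected = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])
groups   = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(impact_ratios(selected, groups))  # roughly {'A': 1.0, 'B': 0.33} -- B is selected at a third of A's rate
```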

Noble Ackerson: [00:13:17.27] And so, just let me just put it simply what responsible AI is in sort of five pillars, right? For me, it is machine learning, usability, right? So, you know, you’ve integrated the machine learning model into a piece of software and now it might fail or it might give an incorrect answer. As a designer, how do you sort of allow the AI to fail gracefully for the user and afford the user an intuitive mechanism to provide feedback through the interface to further improve the solution? That’s sort of on the front end of, of responsible AI. And then, while, you know, while you’re sort of preparing your data for training, while you’re training and after you get your prediction, do you, number two, employ fairness testing, applying, do you apply debiasing algorithms, again, during pre-processing of your model, in-processing while you’re training or after the model has spat out its, its result? Right? And if the model spits out its result, you know, say, for example, hire this individual or don’t, this person is a no hire because of X, Y, and Z factors, do we have the mechanism to understand why the model has classified a group or an individual in the case of hiring, why it’s predicting a thing or deciding a thing? Do we have, what we call in the industry, explainability procedures to understand a model’s prediction? So that’s number three.

Noble Ackerson: [00:15:01.83] Well, let’s go with number four. It’s back to the data, right? I call it the data supply chain. Do we have an understanding of the provenance of the data? Are we employing privacy-preserving tactics on top of the data to make sure that we’re not sweeping in unnecessary PII, which is essentially just noise for an AI system, and noise equals bad outcomes for your product, right? Because you need more signal, right? And we also want to protect things from a security standpoint. Do we have mechanisms to protect our machine learning model or our endpoints or our model endpoints from adversarial attack? And then the fifth one is more machine learning engineering and DevOps nerdy stuff, where it’s, do I have a system that ties all of what I’ve just talked about together, right? And we call it ML Ops, sometimes we call it Model Ops, and all that is, is this continuous integration of my explainability library, or the privacy audit for when I get new data for my thing, or the fairness testing, and stitching all that together into a pipeline that, you know, helps semi-automate the entire process for you, because at scale, you know, it’s hard to have a human in the loop all the time, right? But before I let this question go, because I love this question so much, there’s actually a third term.
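
(As a rough picture of the “stitching it all together” idea in pillar five, here is a hypothetical sketch of a pre-deployment gate that runs fairness, privacy, and explainability checks as one semi-automated step. The check functions are placeholders; a real ML Ops setup would wire them into a CI/CD or retraining pipeline rather than a plain script.)

```python
# Hypothetical sketch of a semi-automated pre-deployment gate: each check is a
# callable that returns (passed, detail). Deployment is blocked if any fail.
from typing import Callable, List, Tuple

Check = Callable[[], Tuple[bool, str]]

def fairness_check() -> Tuple[bool, str]:
    # Placeholder: e.g., compare false-negative rates across groups.
    return True, "FNR gap across groups within tolerance"

def privacy_audit() -> Tuple[bool, str]:
    # Placeholder: e.g., scan newly ingested training data for unexpected PII fields.
    return True, "no unexpected PII fields detected"

def explainability_report() -> Tuple[bool, str]:
    # Placeholder: e.g., confirm feature attributions were generated and stored.
    return True, "attribution report written to model registry"

def run_gate(checks: List[Check]) -> bool:
    all_passed = True
    for check in checks:
        passed, detail = check()
        print(f"[{'PASS' if passed else 'FAIL'}] {check.__name__}: {detail}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    if not run_gate([fairness_check, privacy_audit, explainability_report]):
        raise SystemExit("Deployment blocked: responsible-AI gate failed.")
```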

Noble Ackerson: [00:16:40.27] So you mentioned AI ethics and responsible AI, and hopefully I’ve beaten that horse all the way down. But there’s a third term that I hear a lot in, in my sort of responsible AI circles called trustworthy AI, right? And I define that as the sum of good AI ethics and the value I’m delivering, if I’m being responsible in, in delivering AI, responsible in the use of, of my AI tools for my users and the inevitable acceptance of my, of the consequences that can come out, right. So, trustworthy AI is essentially saying it’s the sum of applying ethical AI principles plus responsible AI, and if something goes wrong, you do that enough times and you’re transparent with what you’re doing, your audience, your users, your customers will accept the consequences because they know when things blow up, you will do right by them. An example of that would be a lot of large companies that have been very transparent, and I’m still using some of their tools because I know that, you know, once it’s on the Internet, something could go wrong, but I trust them, right? So that’s more data trust and how I equate that third piece called trustworthy AI.

Jessica Miller-Merrell: [00:18:01.33] Thank you for all the explanations and insights. You mentioned the NYC AI Audit law. We’re going to link to that in the show notes of this podcast as well as a really great resource, which is from the EEOC. It’s the ADA and the use of software algorithms and AI to assess job applicants and employees. The EEOC is really dialed into artificial intelligence this year, so there will be a lot more information in the last half of this year and 2024 and beyond. So check out the resources that we have listed on the show notes, too.

Break: [00:18:38.81] Let’s take a reset. This is Jessica Miller-Merrell and you are listening to the Workology Podcast powered by Ace The HR Exam and Upskill HR. Today we are talking with Noble Ackerson, advocate for equitable, diverse, and inclusive XR and artificial intelligence. This podcast is powered by PEAT. It’s part of our Future of Work series with PEAT, the Partnership on Employment and Accessible Technology. Before we get back to the podcast, I want to hear from you. Text the word “PODCAST” to 512-548-3005. Ask me questions, leave comments, and make suggestions for future guests. This is my community text number and I want to hear from you.

Break: [00:19:18.14] The Workology Podcast Future of Work series is supported by PEAT, the Partnership on Employment and Accessible Technology. PEAT’s initiative is to foster collaboration and action around accessible technology in the workplace. PEAT is funded by the U.S. Department of Labor’s Office of Disability Employment Policy, ODEP. Learn more about PEAT at PEATWorks.org. That’s PEATWorks.org.

AI-Enabled Recruiting and Hiring Tools

 

Jessica Miller-Merrell: [00:19:46.88] I want to talk more about AI-enabled recruiting and hiring tools. So, let’s talk a little bit more about maybe some of the biggest challenges you see as we try to mitigate bias in AI when it comes to AI-enabled recruiting and hiring tools.

Noble Ackerson: [00:20:05.45] So, there are trade-offs when choosing between optimizing for bias, right, trade-offs between optimizing for bias and optimizing for performance and accuracy. So traditionally, typically machine, the machine learning objective is to solve an optimization problem, okay. And the goal is to minimize the error. The biggest challenge that I’ve seen so far when mitigating bias, is, in order to get, you can’t sort of separate bias and fairness, right? And so in order to get to fairness, the objective then becomes solving a constrained optimization problem. So, rather than say, you know, find a model in my class that minimizes the error, you’ll say find a model in my class that minimizes the error subject to the constraints that none of these seven racial categories, or whatever protected attribute you want to solve for, should have a false negative more than, I don’t know, 1% different than the other ones. Another way to sort of say what I’ve just said is, from what we’ve learned from our metrics, right, is our data model doing good things or bad things to people? Or, what is the likelihood of harm? You can get customers that come back.

Noble Ackerson: [00:21:31.52] It’s like, oh yeah, well we do this enterprise telemetry thing and we don’t collect, you know, protected class data. We don’t have any names, we don’t have it. So then I ask, are there any secondary effects? You know, because sometimes removing protected classes from your data set may not be enough. So, those are the tensions that I see when trying to mitigate bias. It’s like a squeeze toy, right? When you over-optimize for performance and accuracy, you often sacrifice bias and when you over-optimize for bias, you sacrifice performance. And so, I walk into the room and you’ve got, you know, CTOs that just want this thing, this, this image detection solution to consistently identify, you know, a melanoma in a thing. But then I let them know what is the likelihood of harm if your performance is just A-plus, right, whatever the metric is. But, for people with darker skin, you aren’t able to properly detect it like, you know, the pulse, the pulse oximeter problem with black people like me. Right. And so, those are the types of things that I’m having to, the tensions that I’m having to sort of educate folks about.

Jessica Miller-Merrell: [00:23:02.56] That’s really heavy, I feel like. And such a responsibility for the future of a technology that I feel like so many people are already using, not just once a day, but like multiple times a day. It’s, it’s everywhere in our lives. But then I think about how much we use it in HR, for assessments or job matching or interview, like just assessing like the use of words, or if bias was detected. There are so many different ways it’s already baked into our everyday lives as HR professionals and as leaders. Why should we be looking at new technologies using an intersectional perspective, for example, the intersection of diversity and disability?

Noble Ackerson: [00:24:05.87] Thanks for that question. So, I do a lot of speaking engagements, and one of the first icebreakers that I use, is I tend to ask the audience, you know, from the moment they were dead asleep to waking up and walking around their home or their place where they slept, when do they think they interacted with AI or their own data? And, you know, folks go, well, my Alexa woke me up. Or, you know, I sleep with a fitness band. And the whole thought experiment is to sort of show how ubiquitous our protected health information, our personally identifiable information, and the applications of both, and some of these newer technologies are. So, I always say, if AI is to be ubiquitous, if the data that we shed into these systems are to be ubiquitous to serve us, it better be fair. So for, you know, from a perspective of intersectionality, specifically like diversity and disabilities, I always point people to the work being done by Partnership, Partnership on Employment and Accessible Technology, PEAT. And they’ve released a lot of guidance here. One reason we should be looking at these new technologies, one reason we should be looking at, you know, being protective of user data, especially in the intersectional context, is that new technology is already ubiquitous, right? So, it has impacts on so many different groups, on people, groups of people depending on, you know, their identities, their cultures, different contexts. I’ve been on a tear for about seven years coaching organizations to ensure that these new technologies, the data that they use, comply with,

Noble Ackerson: [00:26:15.75] in the past it was, you know, GDPR. Then it became CCPA. And now every other day there’s another privacy law in the United States. And then there are more emerging tech laws, like AI-based laws around the world. So, you’re doing it not to check a box, a compliance box, but you’re also doing it just to be good stewards of the data that you use to grow your business. And it’s not your data, it is your customer’s data, especially if it’s first-party data. You don’t just use an AI tool that hasn’t been audited to screen out disadvantaged people with disabilities, whether it’s intentional or not. I can’t remember exactly what article this was from, but I think it was one of PEAT’s articles and one of the guidance that they provided was to also take an extra step to train staff on how to use new tools equitably, ethically, specifically, I would imagine most of the folks listening to this conversation, right? So, those that are making these often life-changing hiring decisions to understand the potential risks of protecting data or being good stewards of data and the benefits of using some of these emergent tools as well. So, two years ago, Federal Trade Commission released perhaps some of the strongest language that I’ve ever seen from the Federal government in the U.S. And they said something along the lines of, if your algorithm results in discrimination against a protected class, you will find yourself facing a complaint alleging the FTC, the ECOA Act.

Noble Ackerson: [00:27:57.01] I think the, the either the FTC Act and the ECOA Act, the ECOA Act or both. So see these, if these new technologies and the data that drive them are to be ubiquitous in our lives, right? The principles, the planning processes that we lean on to deliver these tools should be fair. They should be privacy-protecting. We should just, we should remove ourselves from the notion, that zero-sum notion, that I give you service and you give me data. It’s not zero-sum, it’s positive-sum and it’s not a checkbox because we have a rise in the use of big data. And, with that, we have a rise in data breaches which leads to harms. And thus, if you want, you know, legislators coming in and breathing down your neck and auditors breathing down your neck, then you will act accordingly and you’ll sort of apply some of these principled approaches to delivering responsible software. And so, yeah, that’s, that’s how and, you know, we should sort of look at these techniques as ways to deliver solutions that address diversity and, whether it be through disability, whether it be through protecting other protected classes, not just because it’s a, a thing that we legally have to sort of comply with, but just, because it’s just good business and it’s just being a good human to, to make these things fair for everyone.

Jessica Miller-Merrell: [00:29:37.88] We’re linking to the resources in the show notes. But, can you talk about data privacy in the context of diversity and inclusion, Noble?

Noble Ackerson: [00:29:46.95] Yeah. So, as one does when I, I’m sort of deep into the research of this for the last ten years or so, talking about data privacy issues, one creates their own framework because, you know, that’s what consultants do. And so, let’s first define what data privacy means in the Noble way, in my way, right? For me, data privacy is the sum of context, choice, and control. What do I mean by that? Context, meaning being able as an organization, being transparent in what data you’re collecting. Does it cross borders? What are you using it for, and how long do you collect it for? Choice means respecting your users enough to provide them the choice to provide you their personally identifiable information or not. And then, control means, if I provided you with my PII or PHI, do you provide an intuitive interface for me to then later revoke my consent in providing this data? Put those three C’s together. You have respect. That means you’re being a good steward of data, right? And you can sort of just loosely use that as a definition for data privacy. So, the three C’s equal respect. And the reason why I bring that up is that respecting and protecting personally identifiable information or sensitive information even, regardless of a user’s background or disability status, and being very transparent in how, if you have a justified reason to collect that data, in many cases, being transparent in how long you retain that data for, and what the rules are,

Noble Ackerson: [00:32:00.24] means that we’re respecting the types of users that, you know, the users that we depend on in order to have a business, for ads or for whatever the intended benefit of the solution is. Respecting how we use our users’ data within the massive data sets that we have, that we depend on as, as you know, AI developers, for example. We need to understand and have processes in place to, to make sure that say, for example, we understand the lineage of where someone’s artwork came from. So, for example, if we’re going to use that into, in some AI tool, for example, that we’re able to sort of just track that back to easily compensate when data is being used by a person, regardless of their background, you know, especially, I would say, especially if they’re from, you know, struggling artists from, from sort of a lower income area. So, for D&I or diversity and inclusion context, an organization should have clear guidelines on how they share diversity data as well, how they provide context, and the choices that they give. Because again, it’s not your data alright? It is your customer’s data. And so, that’s sort of the lens at which I look at it. And it’s sort of broadly reaching. It doesn’t, it’s just the human-centered way to, to, to talk about data privacy in the context of the people we serve, specifically protected classes, and being inclusive and equitable in how we, we sort of implement some of these solutions.

Jessica Miller-Merrell: [00:33:50.64] Perfect. Well, I really think, like, to close our conversation, it’s important to end the, the conversation on the topic of inclusive design. Can you talk about what inclusive design is and why it’s important to both the future of XR and AI?

Noble Ackerson: [00:34:08.07] Yes. So, design, so we’re tool builders, right? We’ve been designing tools for millennia, since, since we humans were able to, to speak, right? Or communicate with each other. Everything that we do, whether it’s through digital, through sort of a digital lens or not, I consider design, whether it’s building a new tool or not. It is important for the future of any emergent tech, XR or AI included, to clearly communicate what our AI can do. So, one of the clearest principles that guide design is information architecture and being able to sort of let your audience know contextually maybe what your system can do and what it can’t do. I am kind of disappointed that, in this new AI gold rush and the XR gold rush that came before it, there are no legislative guardrails, in the U.S. anyway, that prevent these companies from overstating what their AI solution can do. And so, what that means is, you know, you have users that come in thinking the system can do one thing, and they overtrust the solution, which leads to harm. I’ll give you a great example of that. So say, just a fictional example, say I built an AI, an XR, or an augmented reality solution that is powered by AI, to detect what plants are poisonous and what plants are not. So, I go out with my daughter camping or hiking, and I pull out my phone to use this tool. It’s been sold to me as a revolutionary AR solution powered by the best AI that does no wrong, so I’m calibrated to overtrust it. And here I am.

Noble Ackerson: [00:36:35.38] Maybe the system misclassifies a plant and I put myself or my loved one in danger. I’ve overtrusted the system and the system’s design without any feedback from the system that it had low confidence that this thing, this plant, was dangerous. On the inverse of that, design is important because of undertrusting. So naturally, if my solution isn’t inclusive or isn’t, doesn’t address, you know, diversity needs, ethical needs, accessibility needs, you’re not going to get the adoption. So you’re calibrating your system to be undertrusted by your customers. No one will use your thing. They might read the fancy headlines, download your app, uninstall it or never come back. So, there’s a happy medium that is often achieved through teams that are principled in how they deliver these types of solutions, that design it, you know, in a human-centered way. And that’s not just a buzzword, in a way that helps us understand that we’re not just riding a new wave with all the bells and whistles that could potentially put someone in harm’s way if they overtrust it. Nor are we building a system that is flawed in the sense that it’s not addressing all the disability needs through its experience or all the diversity needs of your users. Users are not going to use that tool. And so, that happy medium is achieved by people, by diverse teams that are building this thing, that have a voice in the room that can, you know, calibrate the trust between undertrusting and overtrusting. Hopefully, that makes sense as the way I understand the question about inclusive co-design.
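
(A concrete design pattern behind “calibrating trust” is surfacing the model’s own confidence instead of a flat answer. Below is a hypothetical sketch: when a classifier’s top probability falls under a threshold, the interface admits uncertainty and invites the user to double-check and give feedback, which is the graceful failure Noble described earlier. The plant labels and threshold are invented.)

```python
# Hypothetical sketch: surface low-confidence predictions instead of a flat
# answer, so users neither overtrust nor abandon the tool.
from typing import Dict

def present_prediction(class_probs: Dict[str, float],
                       confidence_threshold: float = 0.80) -> str:
    label, prob = max(class_probs.items(), key=lambda item: item[1])
    if prob >= confidence_threshold:
        return f"Likely {label} (confidence {prob:.0%})."
    # Fail gracefully: admit uncertainty and invite human judgment and feedback.
    return (f"Not sure: best guess is {label} at {prob:.0%} confidence. "
            "Please verify with another source and tell us if this was wrong.")

# Example outputs from an imagined plant classifier.
print(present_prediction({"poison_ivy": 0.93, "virginia_creeper": 0.07}))
print(present_prediction({"poison_ivy": 0.55, "virginia_creeper": 0.45}))
```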

Jessica Miller-Merrell: [00:38:50.87] Amazing. Well, Noble, thank you so much for all your time and insights. I really appreciate it. We’re going to link to your Twitter and your LinkedIn as well as to some additional resources that you had mentioned. And then a really great article that I feel like it was from LinkedIn that you published that, or Medium, that, that has some more insights. I think it’s really important for HR leaders to talk directly to the technologists who are developing the product as they are developing, or people like Noble who are in the thick of it versus chatting directly to the salespeople or trying to sell us the tools because we need more people like you, Noble, to help partner for us to understand how to use the technology and then to have a dialogue about how it is being used and how we can make it equitable and trustworthy and responsible for everyone. So thank you again.

Noble Ackerson: [00:39:48.63] Thank you so much for having me.

Closing: [00:39:51.05] This was a great conversation and I appreciate Noble for taking the time to chat with us. Technology in the workplace has changed dramatically over the last few years, but we don’t have to fear it or let it overwhelm us. Certainly, all this talk about XR and AI is a lot for us in Human Resources. It’s important to highlight the positive elements around what we’ve learned and how we support employees and our efforts to recruit them. And, I know it’s a broad topic, but it really is about how willing we are to have difficult workplace conversations centered around equity and inclusion as it pertains to technology. I really appreciate Noble’s insights and expertise on this important episode of the Workology Podcast, powered by PEAT, and sponsored by Upskill HR and Ace The HR Exam. One last thing: there are so many good resources in this podcast’s show notes, so please check them out. I will also link to an amazing article that Noble wrote on LinkedIn titled “Bias Mitigation Strategies for AIML, aka Adding Good Bias,” which has a lot of really good information and resources, including a reference to an IBM disparate impact remover. These are all things I think we need to know more about as the people leaders in our organizations, and being comfortable talking about technology, whether it’s XR or AI, I think is incredibly important. Before I leave you, send me a text if you have a question or want to chat: text the word “PODCAST” to 512-548-3005. This is my community text number. Leave comments, make suggestions. I want to hear from you. Thank you for joining the Workology Podcast. We’ll talk again soon.

Connect with Noble Ackerson.

RECOMMENDED RESOURCES

 

– Noble Ackerson on LinkedIn

– Noble Ackerson on Twitter

– PEATWorks.org

– Civil Rights Standards for 21st Century Employment Selection Procedures | Center for Democracy and Technology (cdt.org)

– EEOC Guidance Doc (05/12/2022): “The ADA and the Use of Software, Algorithms, and AI to Assess Job Applicants and Employees”

– PEAT AI & Disability Inclusion Toolkit:

Resource: “Nondiscrimination, Technology and the Americans with Disabilities Act (ADA)”

Risks of Hiring Tools: “Risks of Bias and Discrimination”, “How Good Candidates Get Screened Out”, and “The Problems with Personality Tests” have good elements that play to the topic of intersectional bias risk and mitigation in employment.

– Generative AI: 5 Guidelines for Responsible Development | Salesforce News

– NYC Postpones Enforcement of AI Bias Law Until April 2023 and Revises Proposed Rules | Morgan Lewis

– Mitigating AI Bias, with…Bias | Noble Ackerson

– Episode 391: What Is Equity-Centered UX With Zariah Cameron From Ally

– Episode 378: Trust and Understanding in the Disability Disclosure Conversation With Albert Kim

– Episode 374: Digital Equity at Work and in Life With Bill Curtis-Davidson and Chris Wood

How to Subscribe to the Workology Podcast

Stitcher | PocketCast | iTunes | Podcast RSS | Google Play | YouTube | TuneIn

Find out how to be a guest on the Workology Podcast.
