Will Human Resources Become Robot Resources?
By Megan Purdy | HR
We write about robots in the workplace so much at B4J that I’m running out of stock photos of robots. I may have to commission some original work in order to bridge this robot photo gap. That’s how much we talk about robots taking our jobs, enhancing our jobs, working as ADA devices and just fundamentally changing the world of work.
But what we don’t talk about as much is how the relationship between human and robot workers, and between robots themselves, will be managed in the future, as robots become more complex and AIs more intelligent. That is, what will human resources be like when robots are a ubiquitous part of the workplace, or when the robots are people too?
At What Point Are Robots an HR Issue as Much As an IT Issue?
Mike Haberman recently wrote about the possibility of robots being even more integrated into the workplace as ADA accommodations, and I think this is an issue that all HR practitioners should be thinking about. People with pacemakers and prostheses don’t think of themselves as cyborgs, but that’s the word for it. A cyborg is “a person whose physiological functioning is aided by or dependent upon a mechanical or electronic device.” The word feels wrong because a cyborg is something out of science fiction and we are real, ordinary people — but these days, mechanical and electronic implants are normal, and increasingly, people with disabilities rely on integrated, computer-controlled aids.
Meanwhile, non-industrial, domestic and personal robots and machine learning are less bleeding-edge technology these days, and more everyday technology. Does a Roomba still feel like a high tech gadget, or does it feel more like a vacuum that takes care of the cleaning while you get more important work done? Does a smartwatch feel “smart” at this point, or just another ordinary way to keep up with things? The latest Apple conference gave its smart home integration features a big push. Its new earbuds, designed to be worn and forgotten, have been called the first step in voluntary, everyday implants. Meanwhile, machine learning is helping our dating apps match us, our food apps find us the right restaurants, and our media apps and sites give us the content we’re looking for.
But this integration of the personal and the robotic isn’t happening only in our daily life, or even in and on our bodies; the workplace is increasingly mediated by robotic and machine learning aids. All jobs, not just hands-on jobs, and not just repetitive jobs, are increasingly automated. Robots and AIs are an HR issue because they interact with HR through the employee, the employer and the practitioner. Machine learning and predictive apps are increasingly important to all core business functions, from sales, to marketing, to operations, and to the hiring, firing and management of personnel. AI has been, for some time now, undeniably an HR issue, and that relationship only stands to get stronger.
But how robots and AIs automate our work isn’t just a matter of simple efficiency; it’s also a matter of comfort, ability, training and relationships. HR and managers, from the executive to the front line, play a huge part in how successful (or not) automation and efficiency efforts will be. Robots and AIs are an HR issue because they are, at heart, a human issue.
The All Too Human Element
In science fiction, AIs are often depicted as threatening, alien intelligences. Think HAL in 2001 or Skynet from the Terminator series. Sometimes, though, AI is depicted as strange but benign, or even helpful. Transformers, for example, are weird and alien (literally alien, in their case) but also our friends. Then there are stories like Her that examine the line between man and machine, the difference between human and artificial intelligence. What most fictional depictions of AI have in common, though, is their emphasis on how different an artificial intelligence might be from our own, and how that might change our world and our understanding of ourselves.
These days, most AI researchers say that while their research is moving quickly, we won’t be seeing anything like Her or HAL anytime soon. We are still taking baby steps toward creating truly sentient and individual AIs. But what we’ve learned from these early stages of AI — from the use of algorithms in social media, candidate selection and decision making — is that when we expect them to make neutral, scientific selections and decisions, the results have been anything but. Instead, we’ve programmed our own human biases into the machine. AI isn’t and won’t be an alien intelligence that we can’t hope to understand; rather, it is and always will be deeply human in origin. The human element in design doesn’t disappear simply because we’re working towards machines and programs that think and learn for themselves — it can’t.
Racial, gender and class bias have been shown to be embedded in algorithms used in recruiting, risk assessment and even dating programs. These biases aren’t natural, but they are naturalized — they were put there by programmers who had internalized such biases, so much so that they didn’t even think to question them or know how to look for them in their work. Back in May, Jordan Pearson, writing about Microsoft’s TayBot sharing racist slogans and about bias in AI, said,
“In other words, computers aren’t evil, or good, or anything other than electricity pulsing through a wire. Like Microsoft’s Tay bot, they’re just doing what they’re told, albeit on a grander and more unpredictable scale.
One can imagine that relying on software trained on similarly biased data could be problematic when, say, deciding whether or not to give someone health insurance coverage. Even seemingly “objective” information—housing or incarceration rates, and income trends, for example—may harbor systemic prejudice that will be incorporated into AI.”
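To make the mechanism concrete, here is a minimal sketch of how this happens. The data, feature names (a zip code standing in as a proxy for a protected characteristic) and the “model” are all invented for illustration; real screening systems are far more complex, but the failure mode is the same — a model trained on biased past decisions faithfully reproduces those decisions:

```python
# Hypothetical illustration: a toy screening "model" trained on biased
# historical hiring decisions. All data and feature names are invented.
from collections import defaultdict

# Historical records: (zip_code, years_of_experience, was_hired).
# Past decisions favored zip "A" regardless of experience --
# here, zip code acts as a proxy for a protected characteristic.
history = [
    ("A", 2, True), ("A", 1, True), ("A", 3, True), ("A", 0, True),
    ("B", 5, False), ("B", 6, False), ("B", 4, False), ("B", 7, True),
]

def train(records):
    """'Learn' by memorizing the historical hire rate per zip code."""
    hires, totals = defaultdict(int), defaultdict(int)
    for zip_code, _experience, hired in records:
        totals[zip_code] += 1
        hires[zip_code] += int(hired)
    return {z: hires[z] / totals[z] for z in totals}

def screen(model, zip_code):
    """Recommend a candidate if the learned hire rate exceeds 50%."""
    return model[zip_code] > 0.5

model = train(history)
# An inexperienced candidate from zip A is recommended; an experienced
# candidate from zip B is rejected. Experience is never even consulted.
print(screen(model, "A"))  # True
print(screen(model, "B"))  # False
```

Nothing in the code mentions race, gender or class, and no programmer wrote an explicitly biased rule — the bias arrives entirely through the training data, which is exactly why it is so easy to miss.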
The AI and robots we currently have are all too human; they aren’t very advanced, and they are nothing more than a reflection of us, both good and terrible. They are as much a part of our culture on a societal level as a prosthesis is on an individual level. Any form of technology used at work is therefore an HR concern. Effective use of technology can boost productivity, improve work-life balance, and close gaps in the candidate-employee-alumni cycle; ineffective use can have the reverse effect.
When Robots Talk Back
While plenty of scientists say that we’ll never get to the point of truly independent AI, the kind we’re used to from science fiction, what would happen to the workplace if and when we actually got there? Who’s in charge of the robots when they’re a fully integrated part of the workplace, enhancing our lives and talking back? If HR is, at its core, about workforce metrics and management, and about building relationships between employers, employees and candidates, then it has an important role to play even as the workplace becomes more automated and more technologically mediated. Even in our science fiction scenario of independent robots and AIs joining humans in the workplace, HR will, I imagine, still be analyzing workforce metrics and employer brand and working to build positive relationships within and without the company. Not a transition from human resources to robot resources, exactly, but a transition to workforce resources.