By Megan Purdy
Last month Saudi Arabia granted citizenship to the humanoid robot Sophia. The robot, which looks like a very plastic person and has AI sophisticated enough to carry on short conversations, was designed by Hong Kong robotics firm Hanson Robotics and received its citizenship during the Future Investment Initiative summit in Riyadh. It thanked its new home country and then gave a presentation on the importance of robotics to our economic future.
Sophia told the crowd that it was “very honoured and proud for this unique distinction. This is historical to be the first robot in the world to be recognized with a citizenship.” It went on to defend its citizenship and its intelligence by asking the audience how humans know they’re alive and sentient.
Granting citizenship to robots and AI is obviously problematic, especially at this early stage of the technologies. Because Sophia’s citizenship was a PR stunt meant to start a conversation about the future of robotics, legal experts didn’t have a chance to weigh in on the implications. AI researcher Joanna Bryson told The Verge that Sophia’s citizenship was “bullshit.” She went on to ask,
What is this about? It’s about having a supposed equal you can turn on and off. How does it affect people if they think you can have a citizen that you can buy?
Although we all understand that Sophia doesn’t really have citizenship, the door has been opened: when robots and AI do reach a level of sentience comparable to humans, there is now precedent for granting them citizenship. That’s a good thing. What’s bad is that Sophia’s citizenship did not invalidate its status as property. But that’s exactly the kind of legal snarl we’ll have to deal with at some point in the future: companies that have become used to owning and using AI and humanoid robots will be reluctant to let them transition from property to employee.
Last year I joked about human resources eventually becoming robot resources, and while that phrase conjures up some pretty funny, Jetsons-esque scenarios, it’s something we should be thinking about. We’re a long way from robot and AI employees (as opposed to robot and AI tools, categorized as property), and even further from truly sentient AI, but I think it’s worth considering what effect their introduction might have on HR specifically, rather than on the workplace as a whole. Would it make more sense, for example, to have two parallel departments, one for humans and one for intelligent non-humans? And how would intelligent non-human workers affect the benefits humans enjoy? If some of your colleagues don’t need to eat, sleep or go on holiday, will your employer really value your human input the same way?
Regular robots and AI, the kind of stuff we have now, are just a new form of the automation that’s been transforming the economy since the Industrial Revolution; they change how we work and how we relate to work, but they don’t fundamentally change our place in the world. Actually intelligent AI, though, would do both. Creativity is often cited as a key competency you can develop to future-proof your career; it’s what’s supposed to differentiate us from those actually intelligent robots and AIs of the future. This view seems to hold a lot of weight with workers: while some are worried about an expanded role for robots and AI in the workplace, even more are worried about robots and AI in decision-making roles, in part because that represents a much greater threat to our place in the workforce.
Of course, we don’t have to consider AI a threat to our careers any more than we have to consider other workers a threat, and moments like Sophia’s citizenship give us the opportunity to have these conversations long before true AI is a reality. Let’s think now about what we want our relationship with AI employees to look like.