
Will a robot really take your job?

Humans have long been fascinated by robotics, seeking to prove their own intelligence by creating something as smart as, or smarter than, themselves. In recent years, huge strides have been made: driverless cars, conversational Audrey Hepburn lookalikes, and robot receptionists, to name just a few. The questions and issues that progress in robotics has raised, however, are becoming increasingly pressing, and scaremongering does battle with naïve optimism on the front pages in the clickbait age of mainstream media.

Will robots take the lion’s share of the jobs? Will they create new ones? Will their inclusion in increasing numbers of workplaces redefine what it means to be human and have a purpose?

Cathy Berrera, writing on ZipRecruiter, takes a balanced yet optimistic view. Although she believes there will never be a job that a robot cannot perform better than a human being, she does not foresee any real clash between the two. Berrera instead predicts a happy compromise, whereby humans and robots undertake the same jobs for different clientele.

One particularly interesting job sector in which to consider robots is that of recruitment.

The idea of a non-human interviewing or screening a human applicant for a vacancy seems, initially, ridiculous. How will a robot get a sense of how good a candidate is? Surely candidates will be put off by the impersonal nature of the process, and may decline the job even if it is offered to them, damaging the reputation of recruitment firms and their clients.

However, Eyal Gravesky believes there is a place in recruitment for robots. Gravesky's company, Mya Systems, uses a chatbot (the eponymous Mya) to screen large numbers of hourly-wage job applicants. Mya conducts an initial screening consisting of standardised questions; if the candidate is deemed a good match for the role, an interview is scheduled, at which point human recruiters take over.

Large recruitment companies operating across many locations have welcomed robot recruiters, citing the time-consuming nature of initially searching for and screening candidates, particularly for hourly-wage roles. Such jobs, especially seasonal vacancies, may require hundreds of new employees within a tight time-frame. Letting an algorithm or chatbot sort through the applicants takes the pressure off recruiters, who can then spend more time and manpower on filling senior positions, which benefit directly from the personalised and personable human touch of recruitment.

Meanwhile, many companies already use chatbots online, tasked with everything from fielding frequently asked questions to streamlining online takeaway orders. Recruitment, then, is just the latest sector to embrace the possibilities of bots.

Another element of recruitment, and one that robots are unlikely to dominate, is assessing a candidate's workplace fit. While algorithms and chatbots can efficiently collect information from CVs and applicants, and categorise their basic suitability for an hourly-wage vacancy, they cannot sense nuance and respond accordingly. Even Sophia, the advanced humanoid robot who gives media interviews and learns from her conversations with humans, can engage in dialogue only on a superficial and limited level, and she / it would be no use in deciding whether somebody fits something as nebulous and qualitative as a specific workplace culture.

In fact, when it comes to things like nuance, robots are notoriously poor substitutes for human employees; AI bots have been caught out on several occasions adopting human prejudices and inadvertently making racist and sexist statements.

Of course, the real cause of these offensive outbursts isn't any ingrained prejudice of the robots' own; in teaching the robots to be more like humans, humans have simply shown themselves up with their least pleasant tendencies. This raises further questions: what is the best way to teach an AI to develop its reach and depth, should we limit the type of people it encounters, and how safe is it as a replacement for humans in customer-facing jobs?

When the Twitter chatbot Tay.AI became a PR nightmare after she / it began denying the Holocaust and spouting sexually graphic content, at least the tweets could be deleted and the bot deactivated without any direct harm to humans. Robots in legal, medical and administrative roles, however, could do immeasurable brand damage and cause significant harm to human customers and clients, and that's before you take into account any physical damage caused by malfunctions. Already, robot-assisted surgical operations have been linked to 144 deaths in the US over the past decade, and driverless cars have not had promising starts to their initial trials either.

However, as Leslie Hook, writing in the Financial Times, points out: when comparing faulty robots with human error, it becomes incredibly difficult to pick a side. Are robot errors harder or easier to forgive than human errors? How much do we blame the humans behind the robots for any robotic mishaps anyway? And besides, if robots take the jobs, what will be left for the humans to do?

As yet, we are a long way from finding satisfactory answers to these questions and many more, but one thing is for certain: the debates surrounding robotics in the workplace are not going away anytime soon.

Fortunately, the team here at KnowHow Experts are strictly of the non-robotic variety, and we can offer a bespoke, personalised approach to help you find great new staff and enhance your business. Get in touch today!
