By Linda Nazareth, October 13, 2019
Have you had a bad manager who made bad decisions? For those of us who have (and congratulations if you have not; you are in a tiny, lucky minority), trading in the boss for a nice, clear-thinking robot can seem pretty appealing at times.
Companies are starting to think that way, too. Although artificial intelligence (AI) and algorithms are now routinely used to sort data and to replicate basic tasks, we are rapidly heading to a place where automation will be given a promotion into higher-value decision-making. Both living with that future and managing it will present challenges of their own.
As Barry Libert and Megan Beck of the machine-learning company AIMatters put it in a recent piece in the MIT Sloan Management Review, we are now entering the age of “self-driving companies.” Their analogy rests on the idea that self-driving cars now span a spectrum of progress, and that companies can be classified the same way. At one end are cars that run with some driver assistance; at the other are those that are fully automated, with various gradations in between. In the case of companies, some are using machines with a lot of human oversight, while others are close to taking humans out of the equation when it comes to decision-making.
It can be difficult to discern the line between human and machine decision-making. Using automation to schedule appointments is straightforward and may not need a lot of value judgments, and the same can arguably be said for robo-advisers in finance that review investment portfolios according to metrics set out by humans.
But what about job-screening algorithms that are set up to scan for suitable candidates and end up inadvertently discriminating against some? Last fall, Amazon reportedly scrapped a computer model that was meant to find the best tech talent after realizing that, because most top performers in tech were male (given that the industry is overwhelmingly male, the bottom performers likely were as well), the model was ditching from its pool of prospects all applicants it could discern to be female. More routinely, older applicants are often eliminated from the interview pool because their training, judging by the dates on their degrees, is so far in the past.
Of course, older workers or women could be screened out by humans, too, but human managers do have some skills that chatbots and algorithms do not. Emotional intelligence and empathy can guide decision-making in actual flesh-and-blood managers in ways that they cannot in machines. Indeed, “Polanyi’s paradox” is often cited as one reason why human beings will never quite be replaced as workers. As the philosopher Michael Polanyi observed in the 1960s, human beings “know more than we can tell,” which is to say there are things we do when working or performing tasks that we understand intuitively but cannot articulate.
The idea that companies will become effectively self-driving means accepting the notion of autonomous decision-making for machines. Some of the fallout from that could be terrifying. A coalition of non-governmental organizations has been working since 2012 to ban fully autonomous weapons. The Campaign to Stop Killer Robots says such robots could ultimately do things that they were not originally programmed for, including inadvertently killing people who look like the enemy but are not.
Of course, most of what machines will decide will be decidedly less science-fiction-like and will more likely involve things along the lines of sales, marketing or investments. Maybe the machines can be trusted to do those things, but managing those processes will have to be done carefully. Which decisions can be made without human oversight will be for senior managers to determine, which means they themselves must understand the processes well.
From an economic point of view, using machines where it is effective to do so gives us the chance to reap huge productivity benefits. While it is true that those who previously were trusted to make those calls may be anxious about their job prospects, the end game of all this should be to create circumstances where humans can be used more effectively, not eliminated. If that happens, the economic gains could be substantial. For those who are hoping to have their own boss replaced, it may never happen. Sadly, bad human managers are not likely to ever go away, but then again, neither are good ones.
Linda Nazareth is a senior fellow at the Macdonald-Laurier Institute. Her book Work Is Not a Place: Our Lives and Our Organizations in the Post-Jobs Economy is now available.