Algorithms Need Management Training, Too

Automated systems are increasingly making decisions in the workplace. Here's how to curb the potential harms and abuses. 

The European Union is expected to finalize the Platform Work Directive, its new legislation to regulate digital labor platforms, this month. This is the first law proposed at the European Union level to explicitly regulate “algorithmic management”: the use of automated monitoring, evaluation, and decision-making systems to make or inform decisions about recruitment, hiring, task assignment, and termination.

However, the scope of the Platform Work Directive is limited to digital labor platforms—that is, to “platform work.” And while algorithmic management first became widespread in the labor platforms of the gig economy, the past few years—amid the pandemic—have also seen a rapid uptake of algorithmic management technologies and practices within traditional employment relationships.

Some of the most minutely controlling, harmful, and well-publicized uses have been in warehouse work and call centers. Warehouse workers, for example, have reported quotas so stringent that they don’t have time to use the bathroom and say they’ve been fired—by algorithm—for not meeting them. Algorithmic management has also been documented in retail and manufacturing; in software engineering, marketing, and consulting; and in public-sector work, including health care and policing.

Human resource professionals often refer to these algorithmic management practices as “people analytics.” But some observers and researchers have developed a more pointed name for the monitoring software—installed on employees’ computers and phones—that it often relies on: “bossware.” It has added a new level of surveillance to work life: location tracking; keystroke logging; screenshots of workers’ screens; and even, in some cases, video and photos taken through the webcams on workers’ computers.

As a result, there is an emerging position among researchers and policy makers that the Platform Work Directive is not enough, and that the European Union should also develop a directive specifically regulating algorithmic management in the context of traditional employment.

It’s not hard to see why traditional organizations are using algorithmic management. The most obvious benefits have to do with improving the speed and scale of information processing. In recruiting and hiring, for example, companies can receive thousands of applications for a single open position. Résumé screening software and other automated tools can help sort through this huge quantity of information. In some cases, algorithmic management might help improve organizational performance, for example by more smartly pairing workers with work. There are other potential, if so far mostly unrealized, benefits as well. Designed carefully, algorithmic management could reduce bias in hiring, evaluation, and promotion or improve employee well-being by detecting needs for training or support.

But there are clear harms and risks as well—to workers and to organizations. The systems aren’t always very good and sometimes make decisions that are obviously erroneous or discriminatory. They require lots of data, which means they often occasion newly pervasive and intimate surveillance of workers, and they are often designed and deployed with relatively little worker input. The result is that sometimes they make biased or otherwise bad management decisions; they cause privacy harms; they expose organizations to regulatory and public relations risks; and they can erode trust between workers and leadership. 

The current regulatory situation regarding algorithmic management in the EU is complex. Many bodies of law already apply. Data protection law, for example, provides some rights to workers and job candidates, as do national systems of labor and employment law, discrimination law, and occupational health and safety law. But there are still some missing pieces. For example, while data protection law creates an obligation for employers to ensure that data they store about employees and applicants is “accurate,” it’s not clear that there is an obligation for decision-making systems to make reasonable inferences or decisions based on that data. If a service worker is fired because of a bad customer review but that review was motivated by factors beyond the worker’s control, the data may be “accurate” in the sense of reflecting the customer’s unsatisfactory experience. The decision based on it may therefore be lawful—but still unreasonable and inappropriate.

This leads to a curious paradox. On the one hand, more protection is needed. On the other hand, the welter of already existing law creates unnecessary complexity for organizations trying to use algorithmic management responsibly. Confusing matters further, the algorithmic management provisions of the new Platform Work Directive mean that platform workers, long underprotected by law, are likely to have more protections against intrusive monitoring and error-prone algorithmic management than traditional employees. 

A broader Directive on algorithmic management—one that protects traditional employees too—needs to fulfill three tasks in particular. First, prevent the privacy violations that arise from unnecessarily extensive and intimate worker monitoring. Second, limit the extent to which algorithmic management widens existing information asymmetries between employers and workers. Employers already know more about workers, collectively, than workers know about themselves—such as whether one worker is being paid more than another for the same job. That information gap translates into negotiating leverage, adding to the power employers already hold. Algorithmic management gives employers even more information about workers—information that companies often don’t really need. As a 2022 German government report on workplace data protection put it, “It is necessary to prevent employers from knowing everything about their employees.” And third, ensure that human agency—especially but not only the agency of managers—is not lost at crucial points in workplace decision-making.

Our research at Oxford’s Bonavero Institute of Human Rights is based on the growing body of empirical research by investigative journalists, social scientists, and computer scientists documenting workers’ and organizations’ experiences with algorithmic management. We’ve found that these three goals can be achieved through a combination of four main strategies: prohibitions, requirements, rights, and protections.

Prohibitions. Data collection and processing in certain contexts, such as outside of work, in private spaces at work (such as in bathrooms and break areas), or in private communications such as with worker representatives, should be prohibited without exception. Collecting or processing any data for the purpose of emotional or psychological manipulation, or for the prediction of—or persuasion against—the exercise of legal rights, such as organizing, should also be prohibited. Finally, automated termination of the employment contract should be prohibited.

These prohibitions would protect against the privacy violations and risks to fundamental rights—like workplace organizing—created by the data collection required by the most data-hungry algorithmic management systems. They would also help slow the widening information asymmetry between workers and employers by declaring certain contexts off-limits for collecting worker data. And a prohibition on automated termination would ensure the exercise of human judgment during the most crucial—and potentially irrevocable—moment in the employment relationship.

Requirements. Algorithmic management systems should only be acceptable if they are necessary for hiring or for carrying out the employment contract; for complying with external legal obligations; or for protecting the vital interests (e.g., safety) of the worker or some other natural person. To protect both workers and organizations against “snake oil AI,” the law should require that the systems used be demonstrably capable of serving their intended purpose. Employers—or the vendors operating the systems—should also conduct and publish detailed impact assessments of the systems before, and regularly after, deployment.

Rights. The law should establish extensive transparency rights—that is, rights of access for workers both to general information about the systems being used and to data about individual decisions affecting them. It should also establish collective data access rights for worker representative bodies (e.g., works councils and trade unions), as appropriate under national labor laws. And the details of how employers deploy and operate algorithmic management systems should be clearly included in worker representatives’ rights to “information and consultation.”

Giving workers and their representatives the right to ask questions, get answers, and express their opinions about algorithmic management would improve transparency and accountability about both individual decisions and overall systems of automated decision-making. And in countries with stronger labor rights, such as co-determination, worker representatives should have explicit rights to participate in decisions about how to use algorithmic management in the first place. 

Protections. The law should protect the humans involved in overseeing algorithmic management systems. This includes not only protecting workers subjected to algorithmic decisions, and their representatives, from retaliation, but also protecting managers—who may wish to question an algorithmically produced decision but worry about the risk they run in doing so. The law should establish protections that counteract the idea that it’s safe to do what the computer says and risky to exercise one’s own human judgment.

These protections make the other regulatory elements work. If worker representatives, for example, have the right to be consulted about algorithmic management but aren’t protected from retaliation—such as dismissal—for asking management tough questions, the rights will be ineffective.

Not long ago, futurists, automotive executives, and investors were enthusiastically predicting the imminent arrival of self-driving cars. Professional human drivers would be automated out of their jobs; doctors, lawyers, and writers would follow shortly after. These visions have yet to become reality. Meanwhile, it’s not the drivers that have been automated, but the dispatch office. Robots haven’t replaced workers, but, unexpectedly, their bosses. Our new robot bosses aren’t very good—but then, neither were the human ones. Regulation that balances feasibility with real protections can help guide the maturation of these technologies and the whole algorithmic management industry—and maybe even (dare we hope?) improve management, and work, in the process. The EU in particular has an opportunity to build on its track record in both social and digital regulation and pass a directive on algorithmic management that does exactly that. It should do so.

This article is based on research that has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement no. 947806).