Working for an Algorithm Might Be an Improvement: Elaine Ou
The algorithmic assessment of productivity doesn’t have to be dehumanizing.
January 14, 2017
Bridgewater, the world’s largest hedge fund, has been portrayed as a bizarre, Moneyball-type machine in which employees’ every move is monitored and assessed, increasingly by computer algorithms. Awful as that may sound, what if it’s actually a step toward a happier and more prosperous world?
Granted, descriptions of the place — including a recent Wall Street Journal article to which founder Ray Dalio has taken vociferous offense — make it seem pretty dystopian. The firm, for example, amasses employee data to produce individual “Baseball Cards,” with scores and ratings on dozens of attributes. That’s great if the system turns you into a Honus Wagner card, but possibly demoralizing for anyone else.
Bridgewater isn’t alone. Offices around the country are deploying tools to continuously monitor and assess employee activity. Complaints about the dehumanizing nature of working for algorithmic bosses such as Uber and Amazon have inspired comparisons to “Taylorism,” a scientific management theory remembered primarily for its use of stopwatches and specialized slide rules (like the one pictured above).
Yet the theory’s namesake, Frederick Taylor, didn’t set out to maximize efficiency at the expense of employees’ sanity. Rather, he wanted to improve worker welfare. The Progressive movement was in its early days, and social and political activists wanted to stop industrialists from exploiting the working class. Taylor believed that his system for greater productivity would align the interests of employees and management.
Taylor emphasized the importance of linking pay to performance, but worried that workers would chase compensation to the point of fatigue and injury. The purpose of stopwatches and schedule micromanagement was not to push employees to their limits, but to pace them for long-term health. Taylor expected that the efficiency gained from scientific management would lead to higher wages, fewer working hours, and greater job satisfaction. The idea even helped inspire the creation of Harvard Business School’s first MBA program in 1908.
Unfortunately, managers quickly discovered that algorithmic performance measurement could be used to punish low productivity instead of encouraging initiative. If the only reward for good behavior is not getting fired, that’s no fun. Constant monitoring with no employee upside led workers to rebel against the use of timing devices.
Yet in the examples commonly used to illustrate the inhumanity of Taylorism, the fundamental flaw isn’t automated management — it’s the employer-employee relationship. Instead of using performance measurements to optimize employees for a lasting career, organizations try to extract as much productivity as possible before the employee quits. In many cases, there isn’t much time to do so: According to compensation researcher PayScale, Amazon employees have a median tenure of twelve months. A study commissioned by Uber found that as of 2013, nearly half of its drivers were dropping out within the first year. It’s unclear whether a high churn rate leads to disregard for long-term employee welfare or the other way around, but Uber makes no secret about its ambitions to replace drivers with autonomous cars as soon as possible.
Scientific management doesn’t have to be a dehumanizing experience. Dalio says that Bridgewater’s management systems are designed to improve decision-making, conflict resolution, and personal development. This doesn’t necessarily mean removing managers from the equation; it simply recognizes that humans are unique and fallible, and ideally enables them to develop unbiased solutions. Dalio says that turnover among Bridgewater employees who make it past their first two years is less than 6 percent. Assuming his employees haven’t been turned into cyborgs, he’s probably doing something right.
In other words, technology might help Taylorism work as intended — if human supervisors can ensure that the algorithms are designed to provide long-term benefits for employees and employers alike. If Dalio subjects himself to the same rigor he demands of his employees, perhaps he can make it happen.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.