I mentioned an increase in "rote" jobs in an earlier post, but I wasn’t too clear about exactly what I was referring to. I’ve been thinking about this a lot in terms of the questions around wages and productivity, inspired partly by Scott Alexander’s recent article, which is detailed and appropriately confusing. Luckily, the question at hand doesn’t require definitively asserting the link, but instead thinking about how that productivity increase happens in general.
For purely manual work, a worker’s effort and competence entirely determine their productive output. There aren’t many jobs that fall fully into this category, but as an example, I saw a crew today resurfacing a section of road: a truck was slowly unloading material which a team were raking into place. On an individual level, the output of each of those workers is entirely determined by how much time they put in.
The next stage is what I mean by rote jobs. This is where a worker’s output is entirely mediated by technology. For example, imagine a supermarket worker manning a checkout who is moved to oversee a bank of self-checkout machines (note: I haven’t run one of these, so this is speculative!). This is an immediate increase in their productive output (in economic activity), but it is totally mediated by the effectiveness of those machines. If the machines work well, the worker will mostly be directing people to open machines and resolving minor issues like the receipt printer running out of paper; if the machines work less well, they will be resolving entry errors and other malfunctions.
The best the human can do is operate the technology to its designed capacity: if someone sat down and analysed the throughput of the machine with a worker who performed that maintenance in the minimum of time, they would calculate a real ceiling for the output any real-world worker could generate. I think of the worker’s input as a sigmoid function: the best they can do is 100% of the machine’s capacity.
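That ceiling can be sketched as a toy model. Every number here is illustrative, not measured; the sigmoid shape is just my mental model of how worker skill maps onto machine throughput:

```python
import math

def worker_output(skill, machine_capacity=120.0, steepness=1.5, midpoint=3.0):
    """Toy model: output as a sigmoid of worker skill/effort.

    However skilled the worker, output saturates at machine_capacity,
    the ceiling set by the machine's design. All parameter values are
    illustrative assumptions, not measured data.
    """
    fraction = 1.0 / (1.0 + math.exp(-steepness * (skill - midpoint)))
    return machine_capacity * fraction

# A novice is well below capacity; an expert gets close to it,
# but no amount of skill pushes output past the machine's ceiling.
novice = worker_output(1.0)
expert = worker_output(10.0)
```

The point of the shape is the flat top: past a modest level of training, extra skill buys almost nothing, because the machine, not the human, sets the limit.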
What makes this a rote job is that the training investment required to reach that 100% of capacity is minimal, and, when faithfully executed, that training represents a very high percentage of mastery. That is, if I can take a relatively motivated but otherwise average individual and give them a small amount of training, they can effectively operate near design capacity.
This sets up an incentive to optimise the machine’s capabilities, not the human’s, and encourages jobs to become more predictable, more rote, with less opportunity for development or growth: though supermarket workers can and do develop, I assume they do so by transitioning into different roles, not by increasing their capabilities with the self-checkout.
Compare this to a situation where the technology doesn’t mediate, but instead mixes with human capability. For this situation, imagine a construction worker who has been trained to operate a digger or backhoe. As with the self-checkout, training is required to develop competence and operate the machine to its designed capacity. However, there is also significant scope for mastery: an experienced operator can expand the range of tasks their digger can accomplish beyond what the designer may have expected, and produce more output for the same (hourly) input.
The digger is an open-ended tool operated in uncertain environments. As systematised as construction now is, there is still significant variability in conditions, which requires and rewards the ability to adapt, and requires tools that can adapt too. A supermarket is a far more constrained and designed system, in which the staff can be treated somewhat more mechanistically.
Businesses, of course, are trying to contain and constrain variability and ambiguity. The more you can nail down a process, the easier it is to optimise, but the less scope there is for the humans in that process to out-perform.
That said, there are some examples where adaptability and mastery have been integrated into more rote procedures. The Toyota Production System’s emphasis on individuals adapting their jobs based on their insights on an assembly line is a great example: the innovation was subordinating the line to the individual worker, giving them the ability to halt it or suggest changes to procedures. Similar inversions of control have been successful in areas from software development to flying airliners.
Part of the challenge of implementing this in many modern organisations may be the impact of software-based design of these systems. I wonder whether Amazon warehouse workers have similar opportunities: can a worker stop the package line if something isn’t going right? If they can, how easy is it to adapt the (world’s best) logistics system to their local insight?