The question, put bluntly, is this: as AI allows individual workers to produce five to ten times more output, what will actually determine who is valuable? I would argue that the binding constraint will not be access to tools, but the conscientiousness and intrinsic motivation of workers, and the extent to which firms can measure and reward quality rather than raw volume. We consider this claim along four dimensions:
how personality traits and intrinsic motivation already shape job performance;
how workers can respond behaviourally to large AI-driven productivity gains;
why some industries can tolerate low-quality mass output while others cannot; and
what this implies for hiring and incentive design.
A sensible starting point is to interrogate what we mean by “value” in work. In practice, firms care about a bundle of outputs: not just how much is produced, but whether it is accurate, reliable, compliant, on-brand, and delivered without excessive supervision. Empirically, conscientiousness—the tendency to be organised, responsible and effortful—is one of the most robust personality predictors of job performance across occupations. A classic meta-analysis of the Big Five finds conscientiousness consistently associated with better job proficiency, training success and personnel records across multiple job families.
Intrinsic motivation matters as well. Self-determination theory suggests that when people perceive autonomy, competence and relatedness in their work, they tend to invest more effort and produce higher quality, more creative outputs, relative to those acting under purely extrinsic pressure. In an AI context, this implies that two employees with identical tools may generate very different levels of value: one simply “getting through” tasks, the other using AI to explore, refine and challenge their own thinking.
We should also be explicit about a key assumption: AI does not eliminate the need for human judgement; it moves the locus of value. AI systems are increasingly capable on routine or pattern-based tasks, but reviews of AI in the workplace suggest that demand is rising for skills in problem framing, oversight, and socio-emotional interaction—capabilities closely tied to conscientiousness and self-regulation rather than raw cognitive horsepower. The worker who reliably checks, contextualises and improves AI output is likely to become more valuable than the worker who merely prompts it.
We then need to ask: if AI can plausibly enable a 5–10x increase in task-level output, how do people actually use that slack? Existing experiments are more modest in scale than the popular narrative, but they point in the same direction. Randomised studies of generative AI for professional writing show substantial time savings and systematic improvements in measured quality, particularly for lower-skill workers. Field evidence from a large customer-support operation finds that a generative AI assistant increases issues resolved per hour by around 14–15% on average, with especially large productivity gains for novice agents and improvements in customer satisfaction.
These studies demonstrate that AI can act as a force multiplier. They do not, however, determine how human workers choose to spend the extra capacity. It seems likely that behavioural responses will fall into three broad camps:
Acceleration for leisure: keeping roughly the same volume of work, but finishing it far faster and reclaiming time.
Maximum throughput: using AI to produce as much as possible, accepting “first draft” outputs with minimal checking.
Quality-focused leverage: using AI to multiply drafts, options and perspectives, then investing human effort in review, integration and improvement.
We already have an analogue in the history of office ICT. Studies of digital work intensification show that when communication and documentation become cheaper and faster, many organisations choose to increase the pace and density of work—more emails, more reporting—rather than reduce hours. At the same time, qualitative research on “digital workplace technology intensity” highlights phenomena such as hyperconnectivity and techno-overwhelm, suggesting that technology can encourage frantic volume-chasing behaviour unless deliberately constrained.
By analogy, we should expect AI to widen the distribution of behaviour. Some individuals and firms will use it to create more space for deep work or non-work; others will push towards maximum output, potentially flooding markets with low-quality material. The choice among these strategies is not technologically determined; it is shaped by incentives, culture and, critically, the conscientiousness of individual workers.
Not all output is created equal. In some sectors, a marginal decline in quality is tolerable if cost and volume improve; in others, even a small erosion of quality can be catastrophic. Work on automation and AI as “routine-biased technological change on steroids” notes that many tasks in clerical work, basic content production and routine customer service can be automated with relatively little downside, provided the outputs meet minimum acceptable standards. Conversely, where tasks involve high stakes, ambiguity or tacit knowledge—medicine, law, complex engineering—quality, accountability and trust are much stronger differentiators.
Systematic reviews of AI in the workplace underscore this heterogeneity. They find, for example, that:
In media and basic content industries, AI can dramatically lower the marginal cost of articles, images and marketing copy, encouraging high-volume, low-marginal-quality strategies.
In professional services and safety-critical industries, AI is typically framed as decision support, with explicit expectations of human oversight, precisely because errors are costly and reputationally sensitive.
In many knowledge-intensive roles, value is tied to integrative judgement, client relationships and institutional context rather than single outputs.
This suggests a crucial trade-off. In low-stakes, highly competitive markets, it may be economically rational for firms to adopt a “quantity over quality” equilibrium, replacing many higher-quality workers with fewer AI-augmented operators who accept first-pass outputs. In quality-sensitive markets, the same strategy would destroy trust capital and invite regulatory or legal backlash. From this perspective, the risk identified at the outset—low-quality output at massive scale—is not evenly distributed; it is concentrated where short-term cost metrics dominate evaluation.
These dynamics naturally raise the question: how should firms select and reward workers in an AI-rich environment? Here the empirical literature is surprisingly clear. Meta-analytic work consistently finds conscientiousness to be one of the best non-cognitive predictors of job performance across a wide range of occupations and criteria, second only to general cognitive ability. If AI tools compress differences in baseline technical skill—by giving weaker performers access to high-quality templates and suggestions—it seems likely that conscientiousness and intrinsic motivation will explain a larger share of variance in realised performance.
However, conscientious workers only add value if the performance system recognises the behaviours they invest in. Research on performance measures stresses that firms tend to measure what is easy—volume, speed, click-throughs—and then (often unintentionally) pay people to optimise those metrics, even when they are poor proxies for true quality. Holmström and Milgrom’s classic multitask principal–agent model formalises this: when workers perform multiple tasks, strong incentives on easily measured dimensions (such as volume) predictably distort effort away from hard-to-measure tasks (such as careful checking or maintaining long-term relationships).
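A stylised version of the multitask logic makes the distortion concrete. This is a simplified quadratic-cost sketch in the spirit of the model, not the full Holmström–Milgrom setup; the notation ($e_v$, $e_q$, $\beta_v$, $\beta_q$, $\delta$) is ours:

```latex
% The agent chooses efforts e_v (volume, contractible) and e_q (quality,
% hard to contract on), receives pay w, and bears a convex effort cost:
w = \beta_v e_v + \beta_q e_q, \qquad
C(e_v, e_q) = \tfrac{1}{2}\left(e_v^2 + e_q^2\right) + \delta\, e_v e_q,
\qquad \delta \ge 0,
% where \delta captures how far the two efforts compete for the same time.
% Maximising w - C(e_v, e_q) gives the first-order conditions
\beta_v = e_v + \delta e_q, \qquad \beta_q = e_q + \delta e_v.
% If quality cannot be measured, the firm must set \beta_q = 0; the interior
% solution would be e_q = -\delta e_v < 0, so the agent sits at the corner
% e_q = 0 and supplies e_v = \beta_v.
```

In words: once the quality dimension carries no reward, raising the volume incentive $\beta_v$ buys more volume but no checking at all; the two efforts compete for the worker's time, and the unmeasured one loses.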
In an AI context, this distortion risk is amplified. If a call centre, law firm or content studio primarily rewards throughput metrics—tickets closed, contracts reviewed, articles shipped—workers face a strong financial incentive to accept AI outputs with minimal scrutiny. By contrast, if organisations explicitly measure and reward dimensions such as error rates, client satisfaction, regulatory breaches avoided, or peer-assessed rigour, they give conscientious workers a reason to invest effort in supervising and improving AI output rather than merely amplifying it.
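The same point can be illustrated numerically. The sketch below is a toy instance of our own construction, not a result from the literature: the function `best_response`, the quadratic cost and the parameter `delta` are all illustrative assumptions. It solves the worker's effort choice on a grid under a throughput-only contract and under a contract that also pays for quality:

```python
def best_response(beta_v, beta_q, delta=0.5):
    """Effort pair (e_v, e_q) maximising pay minus effort cost, where
    pay  = beta_v * e_v + beta_q * e_q
    cost = 0.5 * (e_v**2 + e_q**2) + delta * e_v * e_q.
    A grid search keeps the non-negativity constraint on effort simple."""
    grid = [i / 100 for i in range(201)]  # efforts in [0, 2], step 0.01
    best, best_u = (0.0, 0.0), float("-inf")
    for e_v in grid:
        for e_q in grid:
            u = (beta_v * e_v + beta_q * e_q
                 - 0.5 * (e_v ** 2 + e_q ** 2) - delta * e_v * e_q)
            if u > best_u:
                best_u, best = u, (e_v, e_q)
    return best

# Throughput-only contract: quality effort collapses to zero.
print(best_response(beta_v=1.0, beta_q=0.0))  # → (1.0, 0.0)

# Contract that also rewards quality: both efforts are positive.
print(best_response(beta_v=1.0, beta_q=1.0))
```

The point of the toy model is not the numbers but the corner solution: any positive quality effort is privately costly and unrewarded under the first contract, so a rational worker supplies none of it, exactly the "accept AI output with minimal scrutiny" behaviour described above.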
To summarise, we have argued that in an age of powerful AI assistance, the value of human labour will depend increasingly on conscientiousness and intrinsic motivation, and on organisational systems that reward quality rather than sheer quantity. Personality research suggests that conscientiousness is already a robust predictor of performance. Experimental and field evidence on generative AI shows that it can materially raise productivity and, when used well, quality—but also opens the door to large volumes of unchecked, mediocre output. Sectoral analyses of AI adoption make clear that the tolerance for such mediocrity is highly uneven across industries.
The limitation of this frame is that it focuses primarily on the firm and the individual worker. We have said relatively little about regulation, collective bargaining, or macroeconomic transitions, all of which will shape how these micro-level dynamics play out. The harder challenge, therefore, is not discovering whether AI can make people 5–10x more productive in some tasks—that seems increasingly plausible—but designing institutions, incentives and selection systems that ensure this additional capacity is spent on better work, not simply more of it.