The productivity paradox refers to the slowdown in productivity growth in the United States in the 1970s and 1980s despite rapid development in the field of information technology (IT) over the same period. The term was coined by Erik Brynjolfsson in a 1993 paper ("The Productivity Paradox of IT") inspired by a 1987 quip by Nobel laureate Robert Solow: "You can see the computer age everywhere but in the productivity statistics."[1] For this reason, it is also sometimes referred to as the Solow paradox.
The productivity paradox inspired many research efforts to explain the slowdown, only for the paradox to disappear with renewed productivity growth in the developed countries in the 1990s. However, issues raised by those research efforts remain important in the study of productivity growth in general, and became important again when productivity growth slowed around the world once more from the 2000s to the present day. Thus the term "productivity paradox" can also refer to the more general disconnect between powerful computer technologies and weak productivity growth.[2]
The 1970s to 1980s productivity paradox has been defined as a perceived "discrepancy between measures of investment in information technology and measures of output at the national level."[3] Brynjolfsson documented that productivity growth slowed at the level of the whole U.S. economy, and often within individual sectors that had invested heavily in IT, despite dramatic advances in computing power and increasing investment in IT.[4] Similar trends were seen in many other nations.[5] While the computing capacity of the U.S. increased a hundredfold in the 1970s and 1980s,[6] labor productivity growth slowed from over 3% per year in the 1960s to roughly 1% in the 1980s. The perceived paradox was popularized in the media by analysts such as Stephen Roach and later Paul Strassmann.
Many observers disagree that any meaningful "productivity paradox" exists; others, while acknowledging the disconnect between IT capacity and spending and measured productivity, view it less as a paradox than as a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, the disconnect is emblematic of the need to better understand and deploy the technology that becomes available, rather than an arcane paradox that is by its nature difficult to unravel.
Some point to historical parallels with the steam engine and with electricity, where the dividends of a productivity-enhancing disruptive technology were reaped only slowly, after an initial lag, over the course of decades, owing to the time required for the technology to diffuse into common use and for businesses to reorganize around it and master its efficient use.[7] As with those earlier technologies, a very large number of early cutting-edge IT investments were over-optimistic and counterproductive.[8] Modest IT-based gains may also have been difficult to detect amid the apparent overall slowing of productivity growth, which is generally attributed to one or more non-IT factors, such as oil shocks, increased regulation or other cultural changes, a hypothetical decrease in labor quality, a hypothetical exhaustion or slowdown of non-IT innovation, and/or a coincidence of sector-specific problems.[9]
This phenomenon inspired a number of hypothesized explanations of the paradox.
The mismeasurement hypotheses of the productivity paradox center on the idea that real output estimates of the period overstated inflation and understated productivity because they did not take into account quality improvements in IT goods and in goods generally. The US government measures productivity by comparing real output measurements from period to period, which requires dividing each period's nominal output into an inflation component and a real output component. The US government's calculations of real GDP do not observe inflation directly; during the 1970s and 1980s, they estimated inflation from changes in total spending and in total units consumed for goods and services over time. This approach represents inflation accurately if the goods and services in the output measurements remain essentially the same from period to period, but if goods and services improve from period to period, the extra spending that consumers devote to quality improvements is counted as inflation, which overstates inflation and understates productivity growth. Later calculations of GDP partly compensate for this problem using hedonic regression methods, which estimate that the true price of mainframe computers alone may have declined by more than 20% per year from 1950 to the 1980s. These estimated implicit price declines indicate the scale of productivity growth missing from the output measurements. Such measurement issues, as well as measurement issues with new products, continue to affect output and productivity measurement today.[10][4]
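A stylized numeric sketch of this mechanism, using invented figures rather than actual national-accounts data and the usual approximation that real growth is roughly nominal growth minus inflation:

```python
# Stylized illustration of the mismeasurement hypothesis; all figures are
# invented for exposition, not actual national-accounts data.
# Approximation used throughout: real growth ~= nominal growth - inflation.

nominal_growth = 0.07  # observed growth in total spending (7%)

# Unit-price approach: any price rise for the "same" good counts as inflation.
naive_inflation = 0.05
naive_real_growth = nominal_growth - naive_inflation  # 2%

# Hedonic approach: part of the price rise reflects quality improvement,
# so it is credited to real output rather than inflation.
quality_share_of_price_rise = 0.03
hedonic_inflation = naive_inflation - quality_share_of_price_rise  # 2%
hedonic_real_growth = nominal_growth - hedonic_inflation           # 5%

print(f"naive real growth:   {naive_real_growth:.1%}")    # 2.0%
print(f"hedonic real growth: {hedonic_real_growth:.1%}")  # 5.0%
```

Under these assumed numbers, failing to credit quality improvements cuts measured real growth from 5% to 2%, which is the sense in which unadjusted deflators understate productivity.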
The "lags due to learning and adjustment" (lags) hypotheses explains the productivity paradox as the idea that output and productivity gains from investment in IT materializes well after the investment takes place, so any output and productivity observations of the 1970s and 1980s will not observe those gains. Surveys of executives as well as econometric studies indicated that it might take between two and five years for IT investments to have any impact on organizations that made IT investments. The lags in IT benefits might also slow down IT investments, as observations of short-term marginal costs and benefits of IT investments might seem irrational.[4] IT investments might also require complementary capital investments to be made to be fully productive.[11] Subsequent observations of productivity increases in 2000s may be due to lag effects of IT investments in the 1970s-1990s period.[12]
The redistribution and dissipation of profits hypotheses rest on the idea that firms may make IT investments that are productive for the firm by capturing more of the wealth available in their industry without creating new wealth in that industry. Examples of such IT investments include market research, marketing, and advertising investments, which help firms win market share away from competitors that invest less in IT but do not improve the total output of the industry as a whole.[4]
The mismanagement of IT hypotheses suggest that IT investments are not actually productive at the firm level, but that decision-makers make the investments nevertheless. Because IT productivity gains are difficult to quantify, firm-level decision-makers may make IT investments regardless of the investments' costs and productivity benefits.[4]
Other economists have made a more controversial charge against the utility of computers: that they pale into insignificance as a source of productivity advantage when compared to the Industrial Revolution, electrification, infrastructure (canals and waterways, railroads, the highway system), Fordist mass production, and the replacement of human and animal power with machines.[13] High productivity growth occurred from the last decades of the 19th century until 1973, with a peak from 1929 to 1973, and then declined to levels seen in the early 19th century.[14][15]
However, the hypothesis that IT was fundamentally unproductive weakened in the early 1990s, as total factor productivity growth in the United States accelerated. From 2000 through the most recent data in 2022, the information technology industry was among those to see the fastest productivity growth.[16]
Gordon J. Bjork points out that manufacturing productivity gains continued, although at a lower rate than in decades past, and that the cost reductions in manufacturing shrank the sector's relative size. Meanwhile, the services and government sectors, where productivity growth is very low, gained in share, dragging down the overall productivity figure. Because government services are priced at cost with no value added, government productivity growth is near zero as an artifact of the way in which it is measured. Bjork also points out that manufacturing uses more capital per unit of output than government or services.[17]
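A minimal sketch of this composition effect, with hypothetical sector shares and growth rates: aggregate productivity growth is roughly the output-share-weighted average of sector growth rates, so it falls as low-growth sectors gain share even while manufacturing keeps improving.

```python
# Hypothetical two-sector economy (invented figures): aggregate productivity
# growth approximated as the output-share-weighted average of sector rates.
def aggregate_growth(shares, rates):
    return sum(s * r for s, r in zip(shares, rates))

# Manufacturing grows at 3% per year, services/government at 0.5%.
early = aggregate_growth([0.40, 0.60], [0.03, 0.005])  # 1.50%
late  = aggregate_growth([0.20, 0.80], [0.03, 0.005])  # 1.00%
print(f"aggregate growth: {early:.2%} -> {late:.2%}")
```

With these assumed numbers, the aggregate rate drops by a third purely because the high-growth sector's share falls, with no change in either sector's own performance.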
By the late 1990s there were some signs that productivity in the workplace had been improved by the introduction of IT, especially in the United States. Indeed, Erik Brynjolfsson and his colleagues found a significant positive relationship between IT investments and productivity, at least when these investments were made to complement organizational changes.[18][19][20][21] A large share of the productivity gains outside the IT-equipment industry itself came in retail, wholesale, and finance.[22] The 1990s IT-related productivity jump arguably resolved the original paradox in favor of the lagged-benefits explanations.
There was an additional slowdown in productivity growth in the United States and other developed countries from the 2000s to the 2020s; this newer slowdown is sometimes referred to as the productivity slowdown, the productivity puzzle, or productivity paradox 2.0. The 2000s to 2020s productivity slowdown has been defined in terms of lower developed-world productivity growth, especially in the US, compared with the period from the 1940s to the 1970s and the period from 1994 to 2004.[23] This productivity slowdown is sometimes analyzed in the context of AI and other modern IT advancements, much like the 1970s and 1980s productivity paradox. Moreover, many of the hypothesized explanations of the 1970s and 1980s productivity paradox remain relevant to the discussion of the modern one.
New mismeasurement hypotheses are conceptually similar to the 1970s and 1980s mismeasurement hypotheses in that they still center on the idea that real output estimates overstate inflation and understate productivity; however, the new hypotheses examine additional sources of estimation error, such as the output effects of introducing new, never-before-seen products. As in the 1970s and 1980s, modern post-2000s US productivity measures are produced by comparing real output measurements from period to period, dividing each period's nominal output into an inflation component and a real output component. As before, the US government's calculations of real GDP do not observe inflation directly but estimate it from changes in total spending and in total units consumed for goods and services over time. The newer inflation calculations compensate for the previously raised mismeasurement problems using hedonic regression methods, but they still do not account for the output and inflation effects of introducing new products. When existing goods and services improve from period to period, hedonic regression can estimate what consumers would pay for the quality improvements and lower the inflation estimates by those amounts. However, when new goods and services appear in a sector in one time period, the extra money that consumers pay for those new goods and services is not captured in the inflation estimate; the observed extra spending in that sector is measured as inflation rather than attributed to the new goods and services. Thus, modern real output calculations count consumer spending on new products and services, as well as any spending on quality improvements not captured by the hedonic regression models, as inflation, which overstates inflation and understates productivity growth.[10][4]
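A stylized sketch of the new-product bias, again with invented numbers: all of the extra spending on a product that did not exist in the base period is folded into measured inflation rather than credited to real output.

```python
# Invented figures: period-1 spending is 100 on existing goods; in period 2,
# consumers spend 103 on the same goods (3% true inflation) plus 5 on a
# brand-new product that has no period-1 price to compare against.
base_spending = 100.0
existing_goods_spending = 103.0
new_product_spending = 5.0

total_spending = existing_goods_spending + new_product_spending  # 108

# A deflator that cannot see the new product attributes all extra spending
# to price change, so measured real growth is zero.
measured_inflation = total_spending / base_spending - 1           # 8%

# Crediting the new product's value to output instead leaves 3% inflation
# and roughly 5% real growth.
adjusted_inflation = existing_goods_spending / base_spending - 1  # 3%
print(f"measured inflation: {measured_inflation:.0%}, "
      f"adjusted inflation: {adjusted_inflation:.0%}")
```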
New lag hypotheses are substantially the same as the older lag hypotheses but focus on the lag effects of newer technologies and the different ways those technologies can improve productivity. Productivity benefits from IT investments in the mid-1990s tended to come from their ability to improve supply-chain, back-office, and end-to-end operations; productivity benefits from post-2000s IT investments are expected to come from front-office operations and new product introductions.[24]
Acemoglu, Autor, Dorn, Hanson & Price (2014) studied IT productivity benefits in manufacturing and found that "there is...little evidence of faster productivity growth in IT-intensive industries after the late 1990s. Second and more importantly, to the extent that there is more rapid growth of labor productivity...this is associated with declining output...and even more rapidly declining employment."[25] Indeed, up to half of the growth of U.S. healthcare spending is attributable to technology costs.[26]
Computers and mobile phones are frequently cited as among the greatest reducers of workplace productivity because of the distraction they create.[27]
Despite high expectations for online retail sales, the costs of handling and transporting individual items and small quantities may offset the savings of not having to maintain brick-and-mortar stores.[28] Online retail has proven successful for specialty items, collectibles, and higher-priced goods, and online commerce has been extremely successful in banking and in airline, hotel, and rental car reservations; some airline and hotel retailers and aggregators have likewise witnessed great success.