Microeconomics is the study of the behaviour of individuals and small organisations, such as households and firms, in making decisions on the allocation of limited resources. The modern field of microeconomics arose as an effort of the neoclassical school of economics to put economic ideas into mathematical form.
Microeconomics descends philosophically from Utilitarianism and mathematically from the work of Daniel Bernoulli.
Utilitarianism as a distinct ethical position emerged only in the 18th century and is usually credited to Jeremy Bentham, although earlier writers such as Epicurus presented similar ideas. Bentham's An Introduction to the Principles of Morals and Legislation (1780)[1] begins by defining the principle of utility:
He also defined how pleasure can be measured:
Other notable utilitarians include James Mill, John Stuart Mill and William Paley.
In 1738, Daniel Bernoulli wrote the following about risk:[2][3]
He states that as an individual's wealth increases, the additional utility gained is inversely proportional to the quantity of goods already possessed; microeconomics textbooks call this diminishing marginal utility. He also describes the following problem.
This is referred to in the literature as the St. Petersburg paradox.
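In one common statement of the problem (a textbook restatement, not Bernoulli's original wording), a fair coin is tossed until the first head appears and the prize is \(2^n\) ducats if this happens on toss \(n\); the expected monetary value of the gamble diverges, while Bernoulli's logarithmic utility of the prize gives a finite expected utility:
\[
E[X] = \sum_{n=1}^{\infty} \frac{1}{2^{n}}\,2^{n} = \infty,
\qquad
E[\log X] = \sum_{n=1}^{\infty} \frac{1}{2^{n}}\,\log 2^{n} = 2\log 2 .
\]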
See main article: Marginalism.
An early attempt to mathematize economics was made by Antoine Augustin Cournot in Researches into the Mathematical Principles of the Theory of Wealth[4] (1838): he described mathematically the law of demand, monopoly, and the spring-water duopoly that now bears his name. Later, William Stanley Jevons's Theory of Political Economy[5] (1871), Carl Menger's Principles of Economics[6] (1871), and Léon Walras's Elements of Pure Economics: Or the Theory of Social Wealth (1874–77)[7] gave rise to what was called the Marginal Revolution. A common idea behind those works was the model of a rational economic agent maximising utility under a budget constraint. This arose partly from the need to argue against the labour theory of value associated with classical economists such as Adam Smith, David Ricardo and Karl Marx, although the theory itself can be traced back to earlier writers. Walras went as far as developing the concept of general equilibrium for an entire economy.
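In modern notation, the common core of these marginalist models is the consumer problem of maximising utility subject to a budget constraint, whose interior first-order conditions equate the marginal utility per unit of money across goods (a schematic restatement; the notation is not taken from the works cited):
\[
\max_{x_1,\dots,x_n} u(x_1,\dots,x_n)
\quad\text{subject to}\quad
\sum_{i=1}^{n} p_i x_i \le m,
\qquad
\frac{\partial u/\partial x_i}{p_i} = \frac{\partial u/\partial x_j}{p_j}\ \ \text{for all } i,j .
\]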
Smith published The Wealth of Nations in 1776; his emphasis there is on the labour-saving function of money:[8]
Regarding value, Smith wrote:[9]
A labour theory of value can be understood as a theory that holds that economic value is determined by the amount of socially necessary labour time. It can be found in the work of Ricardo,[10] who said, "If the quantity of labour realized in commodities, regulate their exchangeable value, every increase of the quantity of labour must augment the value of that commodity on which it is exercised, as every diminution must lower it.", and of Marx,[11] who said, "A use-value, or useful article, therefore, has value only because human labour in the abstract has been embodied or materialised in it. How, then, is the magnitude of this value to be measured? Plainly, by the quantity of the value-creating substance, the labour, contained in the article. The quantity of labour, however, is measured by its duration, and labour-time in its turn finds its standard in weeks, days, and hours." A subjective theory of value, on the other hand, derives value from the subjective preferences of individuals.
Carl Menger, born in Galicia and considered by Hayek to be the founder of the Austrian school of economics, distinguished between consumption goods (goods of first order) and means of production (goods of higher order). He said:[12]
Jevons could be considered a follower of Bentham, as can be read in his commentary upon the paradox of value:[13]
Alfred Marshall's textbook, Principles of Economics, was first published in 1890 and became the dominant textbook in England for a generation. His main point was that Jevons had gone too far in emphasising utility over costs of production as an explanation of prices. In the book he writes:[14]
In the same appendix he further states:
Marshall's resolution of the controversy was that the demand curve could be derived by aggregating individual consumer demand curves, which were themselves based on the consumer problem of maximising utility, while the supply curve could be derived by superimposing the representative firm's supply curves for the factors of production; market equilibrium would then be given by the intersection of the demand and supply curves. He also introduced the notion of different market periods, chiefly the short run and the long run. This set of ideas gave rise to what economists call perfect competition, now found in the standard microeconomics texts, even though Marshall himself had stated:[15]
Marshall also discussed how income affects consumption:[16]
Microeconomics textbooks refer to this exception as the consumption of a Giffen good.
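In later textbook treatments the exception is usually expressed through the Slutsky equation, in which the income effect of a price change can outweigh the substitution effect for a strongly inferior good (a standard modern formulation, not Marshall's own notation):
\[
\frac{\partial x_i}{\partial p_i}
= \frac{\partial h_i}{\partial p_i} - x_i\,\frac{\partial x_i}{\partial m},
\]
where \(h_i\) is the compensated (Hicksian) demand. The substitution term is non-positive, so a Giffen good, with \(\partial x_i/\partial p_i > 0\), requires the good to be inferior (\(\partial x_i/\partial m < 0\)) with an income effect large enough to dominate.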
An early formulation of the concept of production functions is due to Johann Heinrich von Thünen,[17] who presented an exponential version of it. The standard Cobb–Douglas production function found in microeconomics textbooks derives from a collaborative paper by Charles Cobb and Paul Douglas published in 1928, in which they analysed U.S. manufacturing data for 1899–1922, using this function as the basis of a regression analysis estimating the relationship between the inputs (labour and capital) and output (product). Discussing the problem in terms of marginal productivity, the authors concluded:[18]
The mathematical form of the Cobb–Douglas function can be found in the prior work of Wicksell, Thünen, and Turgot.
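The function itself, in its usual textbook statement (Cobb and Douglas originally imposed constant returns to scale, \(\alpha + \beta = 1\)), is
\[
Q = A\,L^{\alpha}K^{\beta},
\qquad
\frac{\partial Q}{\partial L} = \alpha\,\frac{Q}{L},
\qquad
\frac{\partial Q}{\partial K} = \beta\,\frac{Q}{K},
\]
so the exponents are the output elasticities of labour and capital, which under marginal-productivity pricing correspond to the factor shares of income.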
Jacob Viner presented an early procedure for constructing cost curves in his "Cost Curves and Supply Curves" (1931).[19] The paper was an attempt to reconcile two streams of thought on the issue at the time: the view that the supplies of factors of production were given and independent of their rate of remuneration (the Austrian School) and the view that they depended on that rate (the English School, that is, the followers of Marshall). Viner argued that "The differences between the two schools would not affect qualitatively the character of the findings," more specifically, "...that this concern is not of sufficient importance to bring about any change in the prices of the factors as a result of a change in its output."
In Viner's terminology, now considered standard, the short run is a period long enough to permit any desired output change that is technologically possible without altering the scale of the plant, but not long enough to allow the scale of the plant itself to be adjusted. He arbitrarily assumes that all factors can, for the short run, be classified into two groups: those necessarily fixed in amount, and those freely variable. The scale of plant is the size of the group of factors that are fixed in amount in the short run, and each scale is quantitatively indicated by the amount of output that can be produced at the lowest possible average cost at that scale. Costs associated with the fixed factors are fixed costs; those associated with the variable factors are direct costs. Note that fixed costs are fixed only in their aggregate amounts and vary with output in their amount per unit, while direct costs vary in their aggregate amount as output varies, as well as in their amount per unit. The spreading of overhead is therefore a short-run phenomenon, not to be confused with long-run adjustment.
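In symbols, Viner's distinction amounts to the familiar short-run cost identities (a schematic restatement, not Viner's notation): with fixed cost \(F\) and direct (variable) cost \(V(q)\),
\[
TC(q) = F + V(q),
\qquad
ATC(q) = \frac{F}{q} + \frac{V(q)}{q},
\]
so total fixed cost is constant while average fixed cost \(F/q\) falls as output rises, which is the spreading of overhead.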
He explains that if the law of diminishing returns holds, so that output per unit of the variable factor falls as total output rises, and if the prices of the factors remain constant, then average direct costs increase with output. Also, if atomistic competition prevails, that is, if the individual firm's output does not affect product prices, then the firm's short-run supply curve coincides with its short-run marginal cost curve. In the long run, the industry supply curve can be constructed by summing the abscissas of the individual marginal cost curves. He also explains that:
It should be made clear that these long-run results hold only if producers are rational actors, that is, able to optimise their production so as to operate at an optimal scale of plant.
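A minimal sketch of the price-taking case just described: the firm chooses output to maximise profit at the going price, so its short-run supply is the rising portion of the marginal cost curve (above minimum average direct cost), and industry supply is obtained by summing quantities across firms at each price,
\[
\max_{q}\; p\,q - C(q)
\;\Rightarrow\;
p = C'(q),
\qquad
Q^{S}(p) = \sum_{j} q_j(p) .
\]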
In 1929 Harold Hotelling published "Stability in Competition", addressing the problem of instability in the classic Cournot model: Bertrand had criticised it for lacking an equilibrium when prices are treated as the independent variables, and Edgeworth had constructed a dual-monopoly model with correlated demand which also lacked stability. Hotelling proposed that demand typically varies continuously with relative prices, not discontinuously as the latter authors suggested.[20]
Following Sraffa, he argued for "the existence with reference to each seller of groups who will deal with him instead of his competitors in spite of difference in price". He also noted that traditional models that presumed a unique market price only made sense if the commodity was standardised and the market was a single point: by analogy with a temperature model in physics, a discontinuity in heat transfer (price changes) inside a body (the market) would lead to instability. To make the point he built a model of a market located along a line with a seller at each end; in this case, profit maximisation by both sellers leads to a stable equilibrium. It also follows from this model that if a seller chooses the location of his store so as to maximise his profit, he will place it as close as possible to his competitor's: "the sharper competition with his rival is offset by the greater number of buyers [with whom] he has an advantage". He also argues that the clustering of stores is wasteful from the point of view of transportation costs and that the public interest would dictate more spatial dispersion.
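A minimal sketch of the linear-market logic, assuming buyers spread uniformly along a segment of length \(l\), sellers at the two ends charging \(p_1\) and \(p_2\), and a transport cost \(c\) per unit distance: the buyer indifferent between the sellers is located where the delivered prices are equal,
\[
p_1 + c\,x = p_2 + c\,(l - x)
\;\Rightarrow\;
x = \frac{l}{2} + \frac{p_2 - p_1}{2c},
\]
so each seller's demand varies continuously with the price difference, which is the source of the stability Hotelling emphasised.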
A new impetus was given to the field when Edward H. Chamberlin and Joan Robinson published, respectively, The Theory of Monopolistic Competition[21] (1933) and The Economics of Imperfect Competition[22] (1933), introducing models of imperfect competition. Although the monopoly case had already been treated in Marshall's Principles of Economics and Cournot had constructed models of duopoly and monopoly in 1838, a whole new set of models grew out of this literature. In particular, the monopolistic competition model results in an inefficient equilibrium. Chamberlin defined monopolistic competition as a "...challenge to traditional viewpoint of economics that competition and monopoly are alternatives and that individual prices are to be explained in terms of one or the other." He continues, "By contrast it is held that most economic situations are composite of both competition and monopoly, and that, wherever this is the case, a false view is given by neglecting either one of the two forces and regarding the situation as made up entirely of the other."[23]
Later, some market models, particularly of oligopolies, were built using game theory, which John von Neumann had been developing since at least 1928.[24] Game theory was originally applied to games such as le her and chess, both sequential games; economics as developed by Alfred Marshall, on the other hand, while adopting the Cartesian coordinate system, considered only static situations. This may seem counterintuitive, since Marshall recognised that economic behaviour might change in the long run and held that "an equilibrium is stable; that is, the price, if displaced a little from it, will tend to return, as a pendulum oscillates about its lowest point".[25] Nicholas Kaldor was aware of this problem: economic equilibrium is not always stable, nor always unique, since it depends on the shapes of both the demand curve and the supply curve. He explained the situation in 1934 as follows:[26]
Regarding price adjustments, he said:
A good example of how microeconomics began to incorporate game theory is the Stackelberg competition model, published in that same year of 1934,[27] which can be characterised as a dynamic game with a leader and a follower and then solved to find a Nash equilibrium, named after John Nash, who later gave a very general definition of the concept. Von Neumann's work culminated in the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern. Regarding the use of mathematics in economics, the authors had this to say:[28]
A major problem discussed in the book is that of rational behaviour in strategic situations involving other participants:
Game theory considers very general types of payoffs, so von Neumann and Morgenstern defined the axioms necessary for rational behaviour in terms of utility:
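A minimal sketch of the Stackelberg game mentioned above, assuming for illustration a linear inverse demand \(P = a - b(q_1 + q_2)\) and zero costs: the follower best-responds to the leader's output, and backward induction gives the equilibrium quantities,
\[
q_2(q_1) = \frac{a - b\,q_1}{2b},
\qquad
q_1^{*} = \frac{a}{2b},
\qquad
q_2^{*} = \frac{a}{4b} .
\]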
William Baumol provided in his 1977 paper the current formal definition of a natural monopoly as "an industry in which multifirm production is more costly than production by a monopoly" (p. 810):[29] mathematically, this is equivalent to subadditivity of the cost function. He then sets out to prove 12 propositions related to strict economies of scale, ray average costs, ray concavity and transray convexity: in particular, strictly declining ray average costs imply strict ray subadditivity, and global economies of scale are sufficient but not necessary for strict ray subadditivity.
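In symbols, subadditivity of the cost function \(C\) means that producing the industry output in a single firm is cheaper than splitting it among two or more firms (a standard statement of Baumol's condition):
\[
C\!\left(\sum_{i=1}^{k} q_i\right) < \sum_{i=1}^{k} C(q_i)
\quad\text{for every way of splitting the output with } k \ge 2 .
\]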
In a 1982 paper,[30] Baumol defined a contestable market as one in which "entry is absolutely free and exit absolutely costless", with freedom of entry in the Stigler sense: entrants face no cost disadvantage relative to the incumbent. He states that a contestable market will never have an economic profit greater than zero in equilibrium, and the equilibrium will also be efficient. According to Baumol, this equilibrium emerges endogenously from the nature of contestable markets: the only industry structure that survives in the long run is the one that minimises total costs. This contrasts with the older theory of industry structure, since not only is industry structure not exogenously given, but equilibrium is reached without ad hoc hypotheses about the behaviour of firms, such as the reaction functions used in a duopoly. He concludes the paper by commenting that regulators who seek to impede the entry or exit of firms would do better not to interfere if the market in question resembles a contestable market.
In 1937, "The Nature of the Firm" was published by Coase introducing the notion of transaction costs (the term itself was coined in the fifties), which explained why firms have an advantage over a group of independent contractors working with each other.[31] The idea was that there were transaction costs in the use of the market: search and information costs, bargaining costs, etc., which give an advantage to a firm that can internalise the production process required to deliver a certain good to the market. A related result was published by Coase in his "The Problem of Social Cost" (1960), which analyses solutions of the problem of externalities through bargaining,[32] in which he first describes a cattle herd invading a farmer's crop and then discusses four legal cases: Sturges v Bridgman (externality: machinery vibration), Cooke v Forbes (externality: fumes from ammonium sulfate), Bryant v Lejever (externality: chimney smoke), and Bass v Gregory (externality: brewery ventilation shaft). He then states:
This then becomes relevant in the context of regulation. He argues against the Pigovian tradition:[33]
This period also marks the beginning of the mathematical modelling of public goods with Samuelson's "The Pure Theory of Public Expenditure" (1954), in which he gives a set of conditions for the efficient provision of public goods (which he called collective consumption goods), now known as the Samuelson condition.[34] He then gives a description of what is now called the free rider problem:
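In its usual textbook form (a modern restatement, not Samuelson's original notation), the Samuelson condition equates the sum of households' marginal rates of substitution between the public good and a private numéraire to the marginal rate of transformation:
\[
\sum_{h=1}^{H} MRS_{h} = MRT .
\]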
Charles Tiebout considered the same problem as Samuelson and, while agreeing with him at the federal level, proposed a different solution:[35]
Around the 1970s the study of market failures again came into focus with the study of information asymmetry. Three authors in particular emerged from this period: Akerlof, Spence, and Stiglitz. In his classic "The Market for Lemons" (1970), Akerlof considered the problem of bad-quality cars driving good-quality cars out of the market because of asymmetric information between buyers and sellers.[36] Spence explained that signalling is fundamental in the labour market: since employers cannot know beforehand which candidate is the most productive, a college degree becomes a signalling device that a firm uses to select new personnel.[37] A synthesising paper of this era is "Externalities in Economies with Imperfect Information and Incomplete Markets" by Stiglitz and Greenwald.[38] The basic model consists of households that maximise a utility function, firms that maximise profit, and a government that produces nothing, collects taxes, and distributes the proceeds. An initial equilibrium with no taxes is assumed to exist; a vector x of household consumption and a vector z of other variables that affect household utilities (externalities) are defined, along with a vector π of profits and a vector E of household expenditures. Since the envelope theorem holds, if the initial untaxed equilibrium is Pareto optimal it follows that the dot products Π (between π and the time derivative of z) and B (between E and the time derivative of z) must equal each other. They state:
One application of this result is to the already mentioned Market for Lemons, which deals with adverse selection: households buy from a pool of goods of heterogeneous quality while considering only average quality, so in general the equilibrium is not efficient, and any tax that raises average quality is beneficial (in the sense of optimal taxation). Other applications were considered by the authors, such as tax distortions, signalling, screening, moral hazard in insurance, incomplete markets, queue rationing, unemployment and rationing equilibrium. The authors conclude:
Kahneman and Tversky published a paper in 1979 criticising the expected utility hypothesis and the very idea of the rational economic agent.[39] The main point is that there is an asymmetry in the psychology of the economic agent which gives a much higher value to losses than to gains. This article is usually regarded as the beginning of behavioural economics and has consequences particularly for the world of finance. The authors summed up the idea in the abstract as follows:
The paper also deals with the reflection effect, which concerns risk aversion over gains and risk seeking over losses.
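A commonly used parametrisation from Tversky and Kahneman's later work, shown here only to illustrate both features, is a value function that is concave for gains, convex for losses, and steeper for losses:
\[
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0,\\[2pt]
-\lambda\,(-x)^{\beta} & x < 0,
\end{cases}
\qquad \lambda > 1,\ \ 0 < \alpha,\beta \le 1 .
\]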
More recently, the Great Recession and the ongoing controversy over executive compensation have brought the principal–agent problem back to the centre of debate, in particular regarding corporate governance and problems with incentive structures.[40]