Temperature |
Unit: | kelvin (K)
Other units: | °C, °F, °R, °Rø, °Ré, °N, °D, °L, °W
Dimension: | Θ
Intensive: | Yes
Temperature is a physical quantity that quantitatively expresses the attribute of hotness or coldness. Temperature is measured with a thermometer. It reflects the average kinetic energy of the vibrating and colliding atoms making up a substance.
Thermometers are calibrated in various temperature scales that historically have relied on various reference points and thermometric substances for definition. The most common scales are the Celsius scale with the unit symbol °C (formerly called centigrade), the Fahrenheit scale (°F), and the Kelvin scale (K), the latter being used predominantly for scientific purposes. The kelvin is one of the seven base units in the International System of Units (SI).
Absolute zero, i.e., zero kelvin or −273.15 °C, is the lowest point in the thermodynamic temperature scale. Experimentally, it can be approached very closely but not actually reached, as recognized in the third law of thermodynamics. It would be impossible to extract energy as heat from a body at that temperature.
Temperature is important in all fields of natural science, including physics, chemistry, Earth science, astronomy, medicine, biology, ecology, material science, metallurgy, mechanical engineering and geography as well as most aspects of daily life.
Many physical processes are related to temperature; some of them are given below:
See main article: Scale of temperature. Temperature scales need two values for definition: the point chosen as zero degrees and the magnitude of the incremental unit of temperature.
The Celsius scale (°C) is used for common temperature measurements in most of the world. It is an empirical scale that developed historically, which led to its zero point, 0 °C, being defined as the freezing point of water, and 100 °C as the boiling point of water, both at atmospheric pressure at sea level. It was called a centigrade scale because of the 100-degree interval.[3] Since the standardization of the kelvin in the International System of Units, it has subsequently been redefined in terms of the equivalent fixing points on the Kelvin scale, so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though numerically the scales differ by an exact offset of 273.15.
The Fahrenheit scale is in common use in the United States. On this scale, water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure.
At the absolute zero of temperature, no energy can be removed from matter as heat, a fact expressed in the third law of thermodynamics. At this temperature, matter contains no macroscopic thermal energy, but still has quantum-mechanical zero-point energy as predicted by the uncertainty principle, although this does not enter into the definition of absolute temperature. Experimentally, absolute zero can be approached only very closely; it can never be reached (the lowest temperature attained by experiment is 38 pK).[4] Theoretically, in a body at a temperature of absolute zero, all classical motion of its particles has ceased and they are at complete rest in this classical sense. Absolute zero, defined as 0 K, is exactly equal to −273.15 °C, or −459.67 °F.
Referring to the Boltzmann constant, to the Maxwell–Boltzmann distribution, and to the Boltzmann statistical mechanical definition of entropy, as distinct from the Gibbs definition,[5] for independently moving microscopic particles, disregarding interparticle potential energy, by international agreement, a temperature scale is defined and said to be absolute because it is independent of the characteristics of particular thermometric substances and thermometer mechanisms. Apart from absolute zero, it does not have a reference temperature. It is known as the Kelvin scale, widely used in science and technology. The kelvin (the unit name is spelled with a lower-case 'k') is the unit of temperature in the International System of Units (SI). The temperature of a body in a state of thermodynamic equilibrium is always positive relative to absolute zero.
Besides the internationally agreed Kelvin scale, there is also a thermodynamic temperature scale, invented by Lord Kelvin, also with its numerical zero at the absolute zero of temperature, but directly relating to purely macroscopic thermodynamic concepts, including the macroscopic entropy, though microscopically referable to the Gibbs statistical mechanical definition of entropy for the canonical ensemble, that takes interparticle potential energy into account, as well as independent particle motion so that it can account for measurements of temperatures near absolute zero.[5] This scale has a reference temperature at the triple point of water, the numerical value of which is defined by measurements using the aforementioned internationally agreed Kelvin scale.
Many scientific measurements use the Kelvin temperature scale (unit symbol: K), named in honor of the physicist who first defined it. It is an absolute scale. Its numerical zero point, 0 K, is at the absolute zero of temperature. Since May 2019, the kelvin has been defined through particle kinetic theory and statistical mechanics. In the International System of Units (SI), the magnitude of the kelvin is defined in terms of the Boltzmann constant, the value of which is defined as fixed by international convention.[6]
Since May 2019, the magnitude of the kelvin has been defined in relation to microscopic phenomena, characterized in terms of statistical mechanics. Previously, from 1954 onward, the International System of Units defined a scale and unit for the kelvin as a thermodynamic temperature, by using the reliably reproducible temperature of the triple point of water as a second reference point, the first reference point being at absolute zero.
Historically, the temperature of the triple point of water was defined as exactly 273.16 K. Today it is an empirically measured quantity. The freezing point of water at sea-level atmospheric pressure occurs very close to 273.15 K (0 °C).
There are various kinds of temperature scale. It may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century.[7] [8]
Empirically based temperature scales rely directly on measurements of simple macroscopic physical properties of materials. For example, the length of a column of mercury, confined in a glass-walled capillary tube, is dependent largely on temperature and is the basis of the very useful mercury-in-glass thermometer. Such scales are valid only within convenient ranges of temperature. For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. Most materials expand with temperature increase, but some materials, such as water, contract with temperature increase over some specific range, and over such a range they are hardly useful as thermometric materials. A material is of no use as a thermometer near one of its phase-change temperatures, for example, its boiling point.
In spite of these limitations, most generally used practical thermometers are of the empirically based kind. In particular, empirical thermometry was used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Empirically based thermometers, beyond their base as simple direct measurements of ordinary physical properties of thermometric materials, can be re-calibrated, by use of theoretical physical reasoning, and this can extend their range of adequacy.
Theoretically based temperature scales are based directly on theoretical arguments, especially those of kinetic theory and thermodynamics. They are more or less ideally realized in practically feasible physical devices and materials. Theoretically based temperature scales are used to provide calibrating standards for practical empirically based thermometers.
In physics, the internationally agreed conventional temperature scale is called the Kelvin scale. It is calibrated through the internationally agreed and prescribed value of the Boltzmann constant,[6] referring to motions of microscopic particles, such as atoms, molecules, and electrons, constituent in the body whose temperature is to be measured. In contrast with the thermodynamic temperature scale invented by Kelvin, the presently conventional Kelvin temperature is not defined through comparison with the temperature of a reference state of a standard body, nor in terms of macroscopic thermodynamics.
Apart from the absolute zero of temperature, the Kelvin temperature of a body in a state of internal thermodynamic equilibrium is defined by measurements of suitably chosen of its physical properties, such as have precisely known theoretical explanations in terms of the Boltzmann constant. That constant refers to chosen kinds of motion of microscopic particles in the constitution of the body. In those kinds of motion, the particles move individually, without mutual interaction. Such motions are typically interrupted by inter-particle collisions, but for temperature measurement, the motions are chosen so that, between collisions, the non-interactive segments of their trajectories are known to be accessible to accurate measurement. For this purpose, interparticle potential energy is disregarded.
In an ideal gas, and in other theoretically understood bodies, the Kelvin temperature is defined to be proportional to the average kinetic energy of non-interactively moving microscopic particles, which can be measured by suitable techniques. The proportionality constant is a simple multiple of the Boltzmann constant. If molecules, atoms, or electrons[9] [10] are emitted from material and their velocities are measured, the spectrum of their velocities often nearly obeys a theoretical law called the Maxwell–Boltzmann distribution, which gives a well-founded measurement of temperatures for which the law holds.[11] There have not yet been successful experiments of this same kind that directly use the Fermi–Dirac distribution for thermometry, but perhaps that will be achieved in the future.[12]
The speed of sound in a gas can be calculated theoretically from the gas's molecular character, temperature, pressure, and the Boltzmann constant. For a gas of known molecular character and pressure, this provides a relation between temperature and the Boltzmann constant. Those quantities can be known or measured more precisely than can the thermodynamic variables that define the state of a sample of water at its triple point. Consequently, taking the value of the Boltzmann constant as a primarily defined reference of exactly defined value, a measurement of the speed of sound can provide a more precise measurement of the temperature of the gas.[13]
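As an illustration of the acoustic approach just described, the sketch below inverts the ideal-gas relation for the speed of sound, c = sqrt(γ·kB·T/m), to recover a temperature. The choice of argon, its heat-capacity ratio, and the measured speed are illustrative assumptions, and the real-gas corrections used in acoustic gas thermometry are omitted.

```python
# A minimal sketch (not a metrology-grade implementation) of acoustic gas
# thermometry for an ideal monatomic gas: c = sqrt(gamma * k_B * T / m),
# inverted to obtain T from a measured speed of sound c.
# The gas choice (argon) and the measured speed are illustrative assumptions.

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact since 2019)
AMU = 1.66053906660e-27     # atomic mass unit, kg

def temperature_from_sound_speed(c, molar_mass_amu, gamma):
    """Ideal-gas estimate of temperature (K) from the speed of sound c (m/s)."""
    m = molar_mass_amu * AMU          # mass of one molecule, kg
    return m * c**2 / (gamma * K_B)   # T = m c^2 / (gamma k_B)

# Example: argon (monatomic, gamma = 5/3) with a measured c of ~308 m/s
print(temperature_from_sound_speed(308.0, 39.948, 5.0 / 3.0))  # ~273 K
```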
Measurement of the spectrum of electromagnetic radiation from an ideal three-dimensional black body can provide an accurate temperature measurement because the frequency of maximum spectral radiance of black-body radiation is directly proportional to the temperature of the black body; this is known as Wien's displacement law and has a theoretical explanation in Planck's law and the Bose–Einstein law.
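A minimal numerical sketch of Wien's displacement law follows; the constants are the standard displacement constants, and the example inputs (a Sun-like surface temperature and a CMB-like peak frequency) are merely illustrative.

```python
# An illustrative sketch of Wien's displacement law: the black-body peak
# shifts in direct proportion to temperature.

WIEN_B = 2.897771955e-3      # wavelength displacement constant, m*K
WIEN_NU = 5.878925757e10     # frequency displacement constant, Hz/K

def peak_wavelength(T):
    """Wavelength (m) of maximum spectral radiance per unit wavelength."""
    return WIEN_B / T

def temperature_from_peak_frequency(nu_max):
    """Temperature (K) inferred from the peak frequency (Hz) of the spectrum."""
    return nu_max / WIEN_NU

print(peak_wavelength(5772.0))                   # Sun-like surface: ~502 nm (green-blue)
print(temperature_from_peak_frequency(1.6e11))   # ~2.7 K, close to the cosmic microwave background
```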
Measurement of the spectrum of noise-power produced by an electrical resistor can also provide accurate temperature measurement. The resistor has two terminals and is in effect a one-dimensional body. The Bose-Einstein law for this case indicates that the noise-power is directly proportional to the temperature of the resistor and to the value of its resistance and to the noise bandwidth. In a given frequency band, the noise-power has equal contributions from every frequency and is called Johnson noise. If the value of the resistance is known then the temperature can be found.[14] [15]
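The sketch below illustrates Johnson-noise thermometry using Nyquist's relation for the mean-square noise voltage, ⟨V²⟩ = 4·kB·T·R·Δf; the resistor value, bandwidth, and measured noise level are invented for illustration.

```python
# A hedged sketch of Johnson-noise thermometry: a measured rms noise voltage,
# together with a known resistance R and bandwidth Δf, yields the temperature.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_from_johnson_noise(v_rms, resistance, bandwidth):
    """Infer temperature (K) from the rms noise voltage over a known bandwidth."""
    return v_rms**2 / (4.0 * K_B * resistance * bandwidth)

# 10 kΩ resistor, 10 kHz bandwidth, ~1.3 µV rms noise -> roughly room temperature
print(temperature_from_johnson_noise(1.29e-6, 10e3, 10e3))
```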
Historically, until May 2019, the definition of the Kelvin scale was that invented by Kelvin, based on a ratio of quantities of energy in processes in an ideal Carnot engine, entirely in terms of macroscopic thermodynamics. That Carnot engine was to work between two temperatures: that of the body whose temperature was to be measured, and a reference, that of a body at the temperature of the triple point of water. Then the reference temperature, that of the triple point, was defined to be exactly 273.16 K. Since May 2019, that value has not been fixed by definition but is to be measured through microscopic phenomena, involving the Boltzmann constant, as described above. The microscopic statistical mechanical definition does not have a reference temperature.
A material on which a macroscopically defined temperature scale may be based is the ideal gas. The pressure exerted by a fixed volume and mass of an ideal gas is directly proportional to its temperature. Some natural gases show so nearly ideal properties over a suitable temperature range that they can be used for thermometry; this was important during the development of thermodynamics and is still of practical importance today.[16] [17] The ideal gas thermometer is, however, not theoretically perfect for thermodynamics. This is because the entropy of an ideal gas at its absolute zero of temperature is not a positive semi-definite quantity, which puts the gas in violation of the third law of thermodynamics. In contrast to real materials, the ideal gas does not liquefy or solidify, no matter how cold it is. From another point of view, the ideal gas law refers to the limit of infinitely high temperature and zero pressure; these conditions guarantee non-interactive motions of the constituent molecules.[18] [19] [20]
The magnitude of the kelvin is now defined in terms of kinetic theory, derived from the value of the Boltzmann constant.
Kinetic theory provides a microscopic account of temperature for some bodies of material, especially gases, based on macroscopic systems' being composed of many microscopic particles, such as molecules and ions of various species, the particles of a species being all alike. It explains macroscopic phenomena through the classical mechanics of the microscopic particles. The equipartition theorem of kinetic theory asserts that each classical degree of freedom of a freely moving particle has an average kinetic energy of kBT/2, where kB denotes the Boltzmann constant. The translational motion of the particle has three degrees of freedom, so that, except at very low temperatures where quantum effects predominate, the average translational kinetic energy of a freely moving particle in a system with temperature T will be 3/2 kBT.
Molecules, such as oxygen (O2), have more degrees of freedom than single spherical atoms: they undergo rotational and vibrational motions as well as translations. Heating results in an increase of temperature due to an increase in the average translational kinetic energy of the molecules. Heating will also cause, through equipartitioning, the energy associated with vibrational and rotational modes to increase. Thus a diatomic gas will require more energy input to increase its temperature by a certain amount, i.e. it will have a greater heat capacity than a monatomic gas.
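The equipartition estimate described in the last two paragraphs can be put into numbers as in the sketch below; treating rotation as active and vibration as frozen for a diatomic gas near room temperature is the usual textbook simplification assumed here, not an exact result.

```python
# An illustrative sketch of the equipartition estimate: each active classical
# degree of freedom contributes (1/2) k_B T per particle, so the constant-volume
# molar heat capacity is roughly (f/2) R.

K_B = 1.380649e-23   # Boltzmann constant, J/K
R = 8.314462618      # molar gas constant, J/(mol*K)

def mean_translational_kinetic_energy(T):
    """Average translational kinetic energy per particle, (3/2) k_B T, in joules."""
    return 1.5 * K_B * T

def molar_heat_capacity_cv(degrees_of_freedom):
    """Equipartition estimate of the molar heat capacity at constant volume."""
    return 0.5 * degrees_of_freedom * R

print(mean_translational_kinetic_energy(300.0))  # ~6.2e-21 J per particle at 300 K
print(molar_heat_capacity_cv(3))  # monatomic gas: ~12.5 J/(mol*K)
print(molar_heat_capacity_cv(5))  # diatomic gas such as O2: ~20.8 J/(mol*K)
```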
As noted above, the speed of sound in a gas can be calculated from the gas's molecular character, temperature, pressure, and the Boltzmann constant. Taking the value of the Boltzmann constant as a primarily defined reference of exactly defined value, a measurement of the speed of sound can provide a more precise measurement of the temperature of the gas.[13]
It is possible to measure the average kinetic energy of constituent microscopic particles if they are allowed to escape from the bulk of the system, through a small hole in the containing wall. The spectrum of velocities has to be measured, and the average calculated from that. It is not necessarily the case that the particles that escape and are measured have the same velocity distribution as the particles that remain in the bulk of the system, but sometimes a good sample is possible.
Temperature is one of the principal quantities in the study of thermodynamics. Formerly, the magnitude of the kelvin was defined in thermodynamic terms, but nowadays, as mentioned above, it is defined in terms of kinetic theory.
The thermodynamic temperature is said to be absolute for two reasons. One is that its formal character is independent of the properties of particular materials. The other reason is that its zero is, in a sense, absolute, in that it indicates absence of microscopic classical motion of the constituent particles of matter, so that they have a limiting specific heat of zero for zero temperature, according to the third law of thermodynamics. Nevertheless, a thermodynamic temperature does in fact have a definite numerical value that has been arbitrarily chosen by tradition and is dependent on the property of particular materials; it is simply less arbitrary than relative "degrees" scales such as Celsius and Fahrenheit. Being an absolute scale with one fixed point (zero), there is only one degree of freedom left to arbitrary choice, rather than two as in relative scales. For the Kelvin scale since May 2019, by international convention, the choice has been made to use knowledge of modes of operation of various thermometric devices, relying on microscopic kinetic theories about molecular motion. The numerical scale is settled by a conventional definition of the value of the Boltzmann constant, which relates macroscopic temperature to average microscopic kinetic energy of particles such as molecules. Its numerical value is arbitrary, and an alternate, less widely used absolute temperature scale exists called the Rankine scale, made to be aligned with the Fahrenheit scale as Kelvin is with Celsius.
The thermodynamic definition of temperature is due to Kelvin. It is framed in terms of an idealized device called a Carnot engine, imagined to run in a fictive continuous cycle of successive processes that traverse a cycle of states of its working body. The engine takes in a quantity of heat from a hot reservoir and passes out a lesser quantity of waste heat to a cold reservoir. The net heat energy absorbed by the working body is passed, as thermodynamic work, to a work reservoir, and is considered to be the output of the engine. The cycle is imagined to run so slowly that at each point of the cycle the working body is in a state of thermodynamic equilibrium. The successive processes of the cycle are thus imagined to run reversibly with no entropy production. Then the quantity of entropy taken in from the hot reservoir when the working body is heated is equal to that passed to the cold reservoir when the working body is cooled. Then the absolute or thermodynamic temperatures, T1 and T2, of the reservoirs are defined such that[21]
T1/T2 = q1/q2,     (1)
where q1 is the quantity of heat taken in from the hot reservoir and q2 is the quantity passed to the cold reservoir.
The zeroth law of thermodynamics allows this definition to be used to measure the absolute or thermodynamic temperature of an arbitrary body of interest, by making the other heat reservoir have the same temperature as the body of interest.
Kelvin's original work postulating absolute temperature was published in 1848. It was based on the work of Carnot, before the formulation of the first law of thermodynamics. Carnot had no sound understanding of heat and no specific concept of entropy. He wrote of 'caloric' and said that all the caloric that passed from the hot reservoir was passed into the cold reservoir. Kelvin wrote in his 1848 paper that his scale was absolute in the sense that it was defined "independently of the properties of any particular kind of matter". His definitive publication, which sets out the definition just stated, was printed in 1853, a paper read in 1851.[22] [23] [24] [25]
Numerical details were formerly settled by making one of the heat reservoirs a cell at the triple point of water, which was defined to have an absolute temperature of 273.16 K.[26] Nowadays, the numerical value is instead obtained from measurement through the microscopic statistical mechanical international definition, as above.
In thermodynamic terms, temperature is an intensive variable because it is equal to a differential coefficient of one extensive variable with respect to another, for a given body. It thus has the dimensions of a ratio of two extensive variables. In thermodynamics, two bodies are often considered as connected by contact with a common wall, which has some specific permeability properties. Such specific permeability can be referred to a specific intensive variable. An example is a diathermic wall that is permeable only to heat; the intensive variable for this case is temperature. When the two bodies have been connected through the specifically permeable wall for a very long time, and have settled to a permanent steady state, the relevant intensive variables are equal in the two bodies; for a diathermal wall, this statement is sometimes called the zeroth law of thermodynamics.[27] [28] [29]
In particular, when the body is described by stating its internal energy U, an extensive variable, as a function of its entropy S, also an extensive variable, and other state variables V, N, with U = U(S, V, N), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy:[28] [29] [30]
T = (∂U/∂S)V,N.     (2)
Likewise, when the body is described by stating its entropy S as a function of its internal energy U, and other state variables, with S = S(U, V, N), then the reciprocal of the temperature is equal to the partial derivative of the entropy with respect to the internal energy:[28] [30] [31]
1/T = (∂S/∂U)V,N.     (3)
The above definition, equation (1), of the absolute temperature, is due to Kelvin. It refers to systems closed to the transfer of matter and has a special emphasis on directly experimental procedures. A presentation of thermodynamics by Gibbs starts at a more abstract level and deals with systems open to the transfer of matter; in this development of thermodynamics, the equations (2) and (3) above are actually alternative definitions of temperature.[32]
Real-world bodies are often not in thermodynamic equilibrium and not homogeneous. For the study by methods of classical irreversible thermodynamics, a body is usually spatially and temporally divided conceptually into 'cells' of small size. If classical thermodynamic equilibrium conditions for matter are fulfilled to good approximation in such a 'cell', then it is homogeneous and a temperature exists for it. If this is so for every 'cell' of the body, then local thermodynamic equilibrium is said to prevail throughout the body.[33] [34] [35] [36] [37]
It makes good sense, for example, to say of the extensive variable U, or of the extensive variable S, that it has a density per unit volume or a quantity per unit mass of the system, but it makes no sense to speak of the density of temperature per unit volume or quantity of temperature per unit mass of the system. On the other hand, it makes no sense to speak of the internal energy at a point, while when local thermodynamic equilibrium prevails, it makes good sense to speak of the temperature at a point. Consequently, the temperature can vary from point to point in a medium that is not in global thermodynamic equilibrium, but in which there is local thermodynamic equilibrium.
Thus, when local thermodynamic equilibrium prevails in a body, the temperature can be regarded as a spatially varying local property in that body, and this is because the temperature is an intensive variable.
Temperature is a measure of a quality of a state of a material.[38] The quality may be regarded as a more abstract entity than any particular temperature scale that measures it, and is called hotness by some writers.[39] [40] [41] The quality of hotness refers to the state of material only in a particular locality, and in general, apart from bodies held in a steady state of thermodynamic equilibrium, hotness varies from place to place. It is not necessarily the case that a material in a particular place is in a state that is steady and nearly homogeneous enough to allow it to have a well-defined hotness or temperature. Hotness may be represented abstractly as a one-dimensional manifold. Every valid temperature scale has its own one-to-one map into the hotness manifold.[42] [43]
When two systems in thermal contact are at the same temperature no heat transfers between them. When a temperature difference does exist heat flows spontaneously from the warmer system to the colder system until they are in thermal equilibrium. Such heat transfer occurs by conduction or by thermal radiation.[44] [45] [46] [47] [48] [49] [50] [51]
Experimental physicists, for example Galileo and Newton,[52] found that there are indefinitely many empirical temperature scales. Nevertheless, the zeroth law of thermodynamics says that they all measure the same quality. This means that for a body in its own state of internal thermodynamic equilibrium, every correctly calibrated thermometer, of whatever kind, that measures the temperature of the body, records one and the same temperature. For a body that is not in its own state of internal thermodynamic equilibrium, different thermometers can record different temperatures, depending respectively on the mechanisms of operation of the thermometers.
For experimental physics, hotness means that, when comparing any two given bodies in their respective separate thermodynamic equilibria, any two suitably given empirical thermometers with numerical scale readings will agree as to which is the hotter of the two given bodies, or that they have the same temperature.[53] This does not require the two thermometers to have a linear relation between their numerical scale readings, but it does require that the relation between their numerical readings shall be strictly monotonic.[54] [55] A definite sense of greater hotness can be had, independently of calorimetry, of thermodynamics, and of properties of particular materials, from Wien's displacement law of thermal radiation: the temperature of a bath of thermal radiation is proportional, by a universal constant, to the frequency of the maximum of its frequency spectrum; this frequency is always positive, but can have values that tend to zero. Thermal radiation is initially defined for a cavity in thermodynamic equilibrium. These physical facts justify a mathematical statement that hotness exists on an ordered one-dimensional manifold. This is a fundamental character of temperature and thermometers for bodies in their own thermodynamic equilibrium.[7] [42] [43] [56] [57]
Except for a system undergoing a first-order phase change such as the melting of ice, as a closed system receives heat, without a change in its volume and without a change in external force fields acting on it, its temperature rises. For a system undergoing such a phase change so slowly that departure from thermodynamic equilibrium can be neglected, its temperature remains constant as the system is supplied with latent heat. Conversely, a loss of heat from a closed system, without phase change, without change of volume, and without a change in external force fields acting on it, decreases its temperature.[58]
While for bodies in their own thermodynamic equilibrium states, the notion of temperature requires that all empirical thermometers must agree as to which of two bodies is the hotter or that they are at the same temperature, this requirement is not safe for bodies that are in steady states though not in thermodynamic equilibrium. It can then well be that different empirical thermometers disagree about which is hotter, and if this is so, then at least one of the bodies does not have a well-defined absolute thermodynamic temperature. Nevertheless, any one given body and any one suitable empirical thermometer can still support notions of empirical, non-absolute, hotness, and temperature, for a suitable range of processes. This is a matter for study in non-equilibrium thermodynamics.
When a body is not in a steady-state, then the notion of temperature becomes even less safe than for a body in a steady state not in thermodynamic equilibrium. This is also a matter for study in non-equilibrium thermodynamics.
For the axiomatic treatment of thermodynamic equilibrium, since the 1930s, it has become customary to refer to a zeroth law of thermodynamics. The customarily stated minimalist version of such a law postulates only that all bodies, which when thermally connected would be in thermal equilibrium, should be said to have the same temperature by definition, but by itself does not establish temperature as a quantity expressed as a real number on a scale. A more physically informative version of such a law views empirical temperature as a chart on a hotness manifold.[42] [57] [59] While the zeroth law permits the definitions of many different empirical scales of temperature, the second law of thermodynamics selects the definition of a single preferred, absolute temperature, unique up to an arbitrary scale factor, whence called the thermodynamic temperature.[7] [42] [60] [61] [62] [63] If internal energy is considered as a function of the volume and entropy of a homogeneous system in thermodynamic equilibrium, thermodynamic absolute temperature appears as the partial derivative of internal energy with respect to the entropy at constant volume. Its natural, intrinsic origin or null point is absolute zero at which the entropy of any system is at a minimum. Although this is the lowest absolute temperature described by the model, the third law of thermodynamics postulates that absolute zero cannot be attained by any physical system.
See also: Heat capacity and Calorimetry. When an energy transfer to or from a body is only as heat, the state of the body changes. Depending on the surroundings and the walls separating them from the body, various changes are possible in the body. They include chemical reactions, increase of pressure, increase of temperature and phase change. For each kind of change under specified conditions, the heat capacity is the ratio of the quantity of heat transferred to the magnitude of the change.[64]
For example, if the change is an increase in temperature at constant volume, with no phase change and no chemical change, then the temperature of the body rises and its pressure increases. The quantity of heat transferred, ΔQ, divided by the observed temperature change, ΔT, is the body's heat capacity at constant volume:
CV = ΔQ/ΔT.
If heat capacity is measured for a well-defined amount of substance, the specific heat is the measure of the heat required to increase the temperature of such a unit quantity by one unit of temperature. For example, raising the temperature of water by one kelvin (equal to one degree Celsius) requires about 4186 joules per kilogram (J/(kg·K)).
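A brief worked example of this relation, Q = m·c·ΔT, follows, using the approximate specific heat of liquid water quoted above; the mass and temperature change are arbitrary.

```python
# A minimal worked example of the specific-heat relation Q = m * c * ΔT.

C_WATER = 4186.0  # J/(kg*K), approximate specific heat of liquid water

def heat_required(mass_kg, delta_T_kelvin, specific_heat=C_WATER):
    """Heat (J) needed to raise mass_kg of a substance by delta_T_kelvin."""
    return mass_kg * specific_heat * delta_T_kelvin

# Heating 2 kg of water by 10 K (10 °C) takes roughly 84 kJ
print(heat_required(2.0, 10.0))
```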
See also: Timeline of temperature and pressure measurement technology and International Temperature Scale of 1990. Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Daniel Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use in the United States for non-scientific applications.
Temperature is measured with thermometers that may be calibrated to a variety of temperature scales. In most of the world (except for Belize, Myanmar, Liberia and the United States), the Celsius scale is used for most temperature measuring purposes. Most scientists measure temperature using the Celsius scale and thermodynamic temperature using the Kelvin scale, which is the Celsius scale offset so that its null point is 0 K = −273.15 °C, or absolute zero. Many engineering fields in the US, notably high-tech and US federal specifications (civil and military), also use the Kelvin and Celsius scales. Other engineering fields in the US also rely upon the Rankine scale (a shifted Fahrenheit scale) when working in thermodynamic-related disciplines such as combustion.
The basic unit of temperature in the International System of Units (SI) is the kelvin. It has the symbol K.
For everyday applications, it is often convenient to use the Celsius scale, in which 0 °C corresponds very closely to the freezing point of water and 100 °C is its boiling point at sea level. Because liquid droplets commonly exist in clouds at sub-zero temperatures, 0 °C is better defined as the melting point of ice. In this scale, a temperature difference of 1 degree Celsius is the same as a 1 kelvin increment, but the scale is offset by the temperature at which ice melts (273.15 K).
By international agreement,[65] until May 2019, the Kelvin and Celsius scales were defined by two fixing points: absolute zero and the triple point of Vienna Standard Mean Ocean Water, which is water specially prepared with a specified blend of hydrogen and oxygen isotopes. Absolute zero was defined as precisely 0 K and −273.15 °C. It is the temperature at which all classical translational motion of the particles comprising matter ceases and they are at complete rest in the classical model. Quantum-mechanically, however, zero-point motion remains and has an associated energy, the zero-point energy. Matter is in its ground state,[66] and contains no thermal energy. The temperatures 273.16 K and 0.01 °C were defined as those of the triple point of water. This definition served the following purposes: it fixed the magnitude of the kelvin as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water; it established that one kelvin has precisely the same magnitude as one degree on the Celsius scale; and it established the difference between the null points of these scales as being 273.15 K (0 K = −273.15 °C and 273.16 K = 0.01 °C). Since 2019, there has been a new definition based on the Boltzmann constant,[67] but the scales are scarcely changed.
In the United States, the Fahrenheit scale is the most widely used. On this scale the freezing point of water corresponds to 32 °F and the boiling point to 212 °F. The Rankine scale, still used in fields of chemical engineering in the US, is an absolute scale based on the Fahrenheit increment.
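The fixed offsets and increments mentioned above for the Celsius, Fahrenheit, Kelvin, and Rankine scales can be summarized in a few conversion helpers; the function names below are purely illustrative.

```python
# A small sketch of conversions between the scales discussed in this section.
# K and °C differ by the fixed offset 273.15, °F and °R by 459.67, and one
# Fahrenheit/Rankine step is 5/9 of a kelvin.

def celsius_to_kelvin(c):     return c + 273.15
def kelvin_to_celsius(k):     return k - 273.15
def celsius_to_fahrenheit(c): return c * 9.0 / 5.0 + 32.0
def fahrenheit_to_celsius(f): return (f - 32.0) * 5.0 / 9.0
def kelvin_to_rankine(k):     return k * 9.0 / 5.0

print(celsius_to_kelvin(0.0))        # 273.15 K, near the melting point of ice
print(celsius_to_fahrenheit(100.0))  # 212 °F, water's boiling point at sea level
print(kelvin_to_rankine(273.15))     # 491.67 °R
```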
See also: Conversion of scales of temperature. The following temperature scales are in use or have historically been used for measuring temperature:
The field of plasma physics deals with phenomena of electromagnetic nature that involve very high temperatures. It is customary to express temperature as energy in a unit related to the electronvolt or kiloelectronvolt (eV/kB or keV/kB). The corresponding energy, which is dimensionally distinct from temperature, is then calculated as the product of the Boltzmann constant and temperature,
E = kBT.
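A short sketch of this convention follows: a temperature quoted as an energy is converted with E = kBT, so one electronvolt corresponds to roughly 1.16 × 10^4 K. The example values are arbitrary.

```python
# An illustrative sketch of the plasma-physics convention of quoting
# temperature as an energy via E = k_B * T.

K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # joules per electronvolt

def ev_to_kelvin(energy_ev):
    """Temperature (K) corresponding to an energy given in electronvolts."""
    return energy_ev * EV / K_B

def kelvin_to_ev(T):
    """Energy (eV) corresponding to a temperature in kelvins."""
    return K_B * T / EV

print(ev_to_kelvin(1.0))    # ~11605 K
print(kelvin_to_ev(1.0e7))  # a 10-million-kelvin plasma is ~0.86 keV
```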
When one measures the variation of temperature across a region of space or time, do the temperature measurements turn out to be continuous or discrete? There is a widely held misconception that such temperature measurements must always be continuous.[68] This misconception partly originates from the historical view associated with the continuity of classical physical quantities, which states that physical quantities must assume every intermediate value between a starting value and a final value.[69] However, the classical picture is only true in the cases where temperature is measured in a system that is in equilibrium, that is, temperature may not be continuous outside these conditions.

For systems outside equilibrium, such as at interfaces between materials (e.g., a metal/non-metal interface or a liquid-vapour interface), temperature measurements may show steep discontinuities in time and space. For instance, Fang and Ward were some of the first authors to successfully report temperature discontinuities of as much as 7.8 K at the surface of evaporating water droplets.[70] This was reported at inter-molecular scales, or at the scale of the mean free path of molecules, which is typically of the order of a few micrometers in gases[71] at room temperature. Generally speaking, temperature discontinuities are considered to be norms rather than exceptions in cases of interfacial heat transfer.[72] This is due to the abrupt change in the vibrational or thermal properties of the materials across such interfaces, which prevents instantaneous transfer of heat and the establishment of thermal equilibrium (a prerequisite for having a uniform equilibrium temperature across the interface).[73] [74]

Further, temperature measurements at the macro-scale (typical observational scale) may be too coarse-grained, as they average out the microscopic thermal information based on the scale of the representative sample volume of the control system, and thus it is likely that temperature discontinuities at the micro-scale may be overlooked in such averages. Such an averaging may even produce incorrect or misleading results in many cases of temperature measurements, even at macro-scales, and thus it is prudent that one examines the micro-physical information carefully before averaging out or smoothing out any potential temperature discontinuities in a system, as such discontinuities cannot always be averaged or smoothed out.[75] Temperature discontinuities, rather than merely being anomalies, have actually substantially improved our understanding and predictive abilities pertaining to heat transfer at small scales.
See also: Thermodynamic temperature. Historically, there are several scientific approaches to the explanation of temperature: the classical thermodynamic description based on macroscopic empirical variables that can be measured in a laboratory; the kinetic theory of gases which relates the macroscopic description to the probability distribution of the energy of motion of gas particles; and a microscopic explanation based on statistical physics and quantum mechanics. In addition, rigorous and purely mathematical treatments have provided an axiomatic approach to classical thermodynamics and temperature.[76] Statistical physics provides a deeper understanding by describing the atomic behavior of matter and derives macroscopic properties from statistical averages of microscopic states, including both classical and quantum states. In the fundamental physical description, the temperature may be measured directly in units of energy. However, in the practical systems of measurement for science, technology, and commerce, such as the modern metric system of units, the macroscopic and the microscopic descriptions are interrelated by the Boltzmann constant, a proportionality factor that scales temperature to the microscopic mean kinetic energy.
The microscopic description in statistical mechanics is based on a model that analyzes a system into its fundamental particles of matter or into a set of classical or quantum-mechanical oscillators and considers the system as a statistical ensemble of microstates. As a collection of classical material particles, the temperature is a measure of the mean energy of motion, called translational kinetic energy, of the particles, whether in solids, liquids, gases, or plasmas. The kinetic energy, a concept of classical mechanics, is half the mass of a particle times its speed squared. In this mechanical interpretation of thermal motion, the kinetic energies of material particles may reside in the velocity of the particles of their translational or vibrational motion or in the inertia of their rotational modes. In monatomic perfect gases and, approximately, in most gases and in simple metals, the temperature is a measure of the mean particle translational kinetic energy, 3/2 kBT. It also determines the probability distribution function of energy. In condensed matter, and particularly in solids, this purely mechanical description is often less useful and the oscillator model provides a better description to account for quantum mechanical phenomena. Temperature determines the statistical occupation of the microstates of the ensemble. The microscopic definition of temperature is only meaningful in the thermodynamic limit, meaning for large ensembles of states or particles, to fulfill the requirements of the statistical model.
Kinetic energy is also considered as a component of thermal energy. The thermal energy may be partitioned into independent components attributed to the degrees of freedom of the particles or to the modes of oscillators in a thermodynamic system. In general, the number of these degrees of freedom that are available for the equipartitioning of energy depends on the temperature, i.e. the energy region of the interactions under consideration. For solids, the thermal energy is associated primarily with the vibrations of its atoms or molecules about their equilibrium position. In an ideal monatomic gas, the kinetic energy is found exclusively in the purely translational motions of the particles. In other systems, vibrational and rotational motions also contribute degrees of freedom.
Maxwell and Boltzmann developed a kinetic theory that yields a fundamental understanding of temperature in gases.[77] This theory also explains the ideal gas law and the observed heat capacity of monatomic (or 'noble') gases.[78] [79] [80]
The ideal gas law is based on observed empirical relationships between pressure (p), volume (V), and temperature (T), and was recognized long before the kinetic theory of gases was developed (see Boyle's and Charles's laws). The ideal gas law states:[81]
pV = nRT,
where p is the pressure of the gas, V its volume, n the amount of substance in moles, and R the molar gas constant.
This relationship gives us our first hint that there is an absolute zero on the temperature scale, because it only holds if the temperature is measured on an absolute scale such as Kelvin's. The ideal gas law allows one to measure temperature on this absolute scale using the gas thermometer. The temperature in kelvins can be defined as the pressure in pascals of one mole of gas in a container of one cubic meter, divided by the gas constant.
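A minimal sketch of gas thermometry with the ideal gas law follows, matching the statement above that for one mole in one cubic metre the temperature in kelvins is the pressure in pascals divided by the gas constant; corrections for real-gas non-ideality are ignored.

```python
# A minimal sketch of ideal-gas thermometry: T = pV / (nR).

R = 8.314462618  # molar gas constant, J/(mol*K)

def temperature_from_ideal_gas(pressure_pa, volume_m3, moles):
    """Ideal-gas estimate of temperature (K) from pressure, volume, and amount."""
    return pressure_pa * volume_m3 / (moles * R)

# One mole at about 2271 Pa in 1 m^3 is at roughly 273 K
print(temperature_from_ideal_gas(2271.0, 1.0, 1.0))
```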
Although it is not a particularly convenient device, the gas thermometer provides an essential theoretical basis by which all thermometers can be calibrated. As a practical matter, it is not possible to use a gas thermometer to measure absolute zero temperature since the gases condense into a liquid long before the temperature reaches zero. It is possible, however, to extrapolate to absolute zero by using the ideal gas law, as shown in the figure.
The kinetic theory assumes that pressure is caused by the force associated with individual atoms striking the walls, and that all energy is translational kinetic energy. Using a sophisticated symmetry argument,[82] Boltzmann deduced what is now called the Maxwell–Boltzmann probability distribution function for the velocity of particles in an ideal gas. From that probability distribution function, the average kinetic energy (per particle) of a monatomic ideal gas is[79] [83]
Ek = (1/2) m vrms^2 = (3/2) kBT,
where the Boltzmann constant kB is the ideal gas constant divided by the Avogadro number, and vrms is the root-mean-square speed.[84] This direct proportionality between temperature and mean molecular kinetic energy is a special case of the equipartition theorem, and holds only in the classical limit of a perfect gas. It does not hold exactly for most substances.
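Rearranging the relation just given, (1/2) m vrms^2 = (3/2) kBT, yields the root-mean-square speed for a given temperature; the choice of helium in the sketch below is an arbitrary example.

```python
# An illustrative sketch of v_rms = sqrt(3 k_B T / m) for a monatomic ideal gas.

import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
AMU = 1.66053906660e-27     # atomic mass unit, kg

def rms_speed(T, molar_mass_amu):
    """Root-mean-square speed (m/s) of particles of the given mass at temperature T."""
    m = molar_mass_amu * AMU
    return math.sqrt(3.0 * K_B * T / m)

print(rms_speed(300.0, 4.0026))   # helium at 300 K: ~1370 m/s
```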
See main article: Zeroth law of thermodynamics.
When two otherwise isolated bodies are connected together by a rigid physical path impermeable to matter, there is the spontaneous transfer of energy as heat from the hotter to the colder of them. Eventually, they reach a state of mutual thermal equilibrium, in which heat transfer has ceased, and the bodies' respective state variables have settled to become unchanging.[85] [86] [87]
One statement of the zeroth law of thermodynamics is that if two systems are each in thermal equilibrium with a third system, then they are also in thermal equilibrium with each other.[88] [89] [90]
This statement helps to define temperature but it does not, by itself, complete the definition. An empirical temperature is a numerical scale for the hotness of a thermodynamic system. Such hotness may be defined as existing on a one-dimensional manifold, stretching between hot and cold. Sometimes the zeroth law is stated to include the existence of a unique universal hotness manifold, and of numerical scales on it, so as to provide a complete definition of empirical temperature.[59] To be suitable for empirical thermometry, a material must have a monotonic relation between hotness and some easily measured state variable, such as pressure or volume, when all other relevant coordinates are fixed. An exceptionally suitable system is the ideal gas, which can provide a temperature scale that matches the absolute Kelvin scale. The Kelvin scale is defined on the basis of the second law of thermodynamics.
See main article: Second law of thermodynamics. As an alternative to considering or defining the zeroth law of thermodynamics, it was the historical development in thermodynamics to define temperature in terms of the second law of thermodynamics which deals with entropy. The second law states that any process will result in either no change or a net increase in the entropy of the universe. This can be understood in terms of probability.
For example, in a series of coin tosses, a perfectly ordered system would be one in which either every toss comes up heads or every toss comes up tails. This means the outcome is always 100% the same result. In contrast, many mixed (disordered) outcomes are possible, and their number increases with each toss. Eventually, the combinations of ~50% heads and ~50% tails dominate, and obtaining an outcome significantly different from 50/50 becomes increasingly unlikely. Thus the system naturally progresses to a state of maximum disorder or entropy.
As temperature governs the transfer of heat between two systems and the universe tends to progress toward a maximum of entropy, it is expected that there is some relationship between temperature and entropy. A heat engine is a device for converting thermal energy into mechanical energy, resulting in the performance of work. An analysis of the Carnot heat engine provides the necessary relationships. According to energy conservation and energy being a state function that does not change over a full cycle, the work from a heat engine over a full cycle is equal to the net heat, i.e. the sum of the heat put into the system at high temperature, qH > 0, and the waste heat given off at the low temperature, qC < 0.[91]
The efficiency is the work divided by the heat input:
efficiency = wcy/qH = (qH + qC)/qH,     (4)
where wcy is the work done per cycle. The efficiency depends only on |qC|/qH. Because qC and qH correspond to heat transfer at the temperatures TC and TH, respectively, |qC|/qH should be some function of these temperatures:
|qC|/qH = f(TH, TC).
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, a heat engine operating between T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and T2, and the second between T2 and T3. This can only be the case if
q13 = (q1 q2)/(q2 q3),
which implies
q13 = f(T1, T3) = f(T1, T2) f(T2, T3).
Since the first function is independent of T2, this temperature must cancel on the right side, meaning f(T1, T3) is of the form g(T1)/g(T3) (i.e. f(T1, T3) = f(T1, T2) f(T2, T3) = g(T1)/g(T2) ⋅ g(T2)/g(T3) = g(T1)/g(T3)), where g is a function of a single temperature. A temperature scale can now be chosen with the property that
|qC|/qH = TC/TH.     (6)
Substituting (6) back into (4) gives a relationship for the efficiency in terms of temperature:
efficiency = 1 + qC/qH = 1 − TC/TH.     (5)
For TC = 0 K the efficiency is 100%, and that efficiency becomes greater than 100% below 0 K. Since an efficiency greater than 100% violates the first law of thermodynamics, this implies that 0 K is the minimum possible temperature. In fact, the lowest temperature ever obtained in a macroscopic system was 20 nK, which was achieved in 1995 at NIST. Subtracting the right hand side of (5) from the middle portion and rearranging gives[21] [91]
qH/TH + qC/TC = 0,
where the negative sign indicates heat ejected from the system. This relationship suggests the existence of a state function, S, whose change characteristically vanishes for a complete cycle if it is defined by
dS = dqrev/T,     (8)
where the subscript indicates a reversible process. This function corresponds to the entropy of the system, which was described previously. Rearranging (8) gives a formula for temperature in terms of fictive infinitesimal quasi-reversible elements of entropy and heat:
T = dqrev/dS.     (9)
For a constant-volume system where entropy S(E) is a function of its energy E, dE = dqrev and (9) gives
1/T = dS/dE,     (10)
i.e. the reciprocal of the temperature is the rate of increase of entropy with respect to energy at constant volume.
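The relations derived above can be checked numerically, as in the sketch below for an ideal reversible Carnot engine; the reservoir temperatures and heat input are arbitrary.

```python
# A brief numerical sketch for an ideal reversible Carnot engine:
# efficiency = 1 - T_C/T_H, and over a full cycle q_H/T_H + q_C/T_C = 0,
# so the entropy change of the working body vanishes.

def carnot_efficiency(T_hot, T_cold):
    """Efficiency of a reversible engine between two reservoirs (kelvins)."""
    return 1.0 - T_cold / T_hot

T_hot, T_cold = 500.0, 300.0     # reservoir temperatures, K (arbitrary)
q_hot = 1000.0                   # heat taken in per cycle, J (arbitrary)
work = carnot_efficiency(T_hot, T_cold) * q_hot
q_cold = -(q_hot - work)         # waste heat, negative by the sign convention above

print(carnot_efficiency(T_hot, T_cold))   # 0.4
print(q_hot / T_hot + q_cold / T_cold)    # ~0, the entropy balance stated above
```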
Statistical mechanics defines temperature based on a system's fundamental degrees of freedom. Eq. (10) is the defining relation of temperature, where the entropy S is defined (up to a constant) by the logarithm of the number of microstates W of the system in the given macrostate:
S = kB ln(W),
with kB the Boltzmann constant.
When two systems with different temperatures are put into purely thermal connection, heat will flow from the higher temperature system to the lower temperature one; thermodynamically this is understood by the second law of thermodynamics: the total change in entropy following a transfer of energy ΔE from system 1 to system 2 is
ΔS = −(dS/dE)1 ⋅ ΔE + (dS/dE)2 ⋅ ΔE = (1/T2 − 1/T1) ⋅ ΔE
and is thus positive if T1 > T2.
From the point of view of statistical mechanics, the total number of microstates in the combined system 1 + system 2 is N1 ⋅ N2.
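A quick numerical check of the entropy expression above: transferring energy ΔE from a hotter system 1 to a colder system 2 gives ΔS = (1/T2 − 1/T1) ⋅ ΔE > 0, as the sketch illustrates with arbitrary numbers.

```python
# A numerical check of the entropy change for a small energy transfer
# from system 1 (temperature T1) to system 2 (temperature T2).

def entropy_change(T1, T2, delta_E):
    """Total entropy change (J/K) when delta_E joules flow from system 1 to system 2."""
    return (1.0 / T2 - 1.0 / T1) * delta_E

print(entropy_change(400.0, 300.0, 1.0))   # +8.3e-4 J/K: heat flowing hot -> cold increases entropy
print(entropy_change(300.0, 400.0, 1.0))   # negative: the reverse flow would decrease entropy
```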
It is possible to extend the definition of temperature even to systems of few particles, like in a quantum dot. The generalized temperature is obtained by considering time ensembles instead of the configuration-space ensembles given in statistical mechanics in the case of thermal and particle exchange between a small system of fermions (N even less than 10) and a single/double-occupancy system. The finite quantum grand canonical ensemble,[92] obtained under the hypothesis of ergodicity and orthodicity,[93] allows expressing the generalized temperature in terms of the ratio of the average times of occupation τ1 and τ2 of the single-occupancy and double-occupancy states.
See main article: Negative temperature. On the empirical temperature scales that are not referenced to absolute zero, a negative temperature is one below the zero-point of the scale used. For example, dry ice has a sublimation temperature of −78.5 °C, which is equivalent to −109.3 °F.[95] On the absolute Kelvin scale this temperature is 194.65 K. No body can be brought to exactly 0 K (the temperature of the ideally coldest possible body) by any finite practicable process; this is a consequence of the third law of thermodynamics.[96] [97] [98]
The internal kinetic theory temperature of a body cannot take negative values. The thermodynamic temperature scale, however, is not so constrained.
For a body of matter, there can sometimes be conceptually defined, in terms of microscopic degrees of freedom, namely particle spins, a subsystem, with a temperature other than that of the whole body. When the body is in its own state of internal thermodynamic equilibrium, the temperatures of the whole body and of the subsystem must be the same. The two temperatures can differ when, by work through externally imposed force fields, energy can be transferred to and from the subsystem, separately from the rest of the body; then the whole body is not in its own state of internal thermodynamic equilibrium. There is an upper limit of energy such a spin subsystem can attain.
Considering the subsystem to be in a temporary state of virtual thermodynamic equilibrium, it is possible to obtain a negative temperature on the thermodynamic scale. Thermodynamic temperature is the inverse of the derivative of the subsystem's entropy with respect to its internal energy. As the subsystem's internal energy increases, the entropy increases for some range, but eventually attains a maximum value and then begins to decrease as the highest energy states begin to fill. At the point of maximum entropy, the temperature function shows the behavior of a singularity, because the slope of the entropy as a function of energy decreases to zero and then turns negative. As the subsystem's entropy reaches its maximum, its thermodynamic temperature goes to positive infinity, switching to negative infinity as the slope turns negative. Such negative temperatures are hotter than any positive temperature. Over time, when the subsystem is exposed to the rest of the body, which has a positive temperature, energy is transferred as heat from the negative temperature subsystem to the positive temperature system.[99] The kinetic theory temperature is not defined for such subsystems.
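The sign change described above can be made concrete with an idealized two-level spin subsystem. The sketch below uses Stirling's approximation for the entropy of N spins with n excited, so that 1/T = dS/dE gives T = ε/(kB ln((N − n)/n)), which turns negative once the population is inverted (n > N/2). The level spacing and spin numbers are illustrative.

```python
# A hedged sketch of how an ideal two-level spin subsystem acquires a negative
# thermodynamic temperature. With N spins, n of them excited, level spacing eps,
# Stirling's approximation gives S ~ k_B [N ln N - n ln n - (N-n) ln(N-n)],
# and 1/T = dS/dE leads to T = eps / (k_B * ln((N - n)/n)).

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def spin_temperature(N, n, eps):
    """Thermodynamic temperature (K) of an idealized two-level spin subsystem."""
    return eps / (K_B * math.log((N - n) / n))

eps = 1.0e-23  # energy spacing per spin, J (arbitrary)
print(spin_temperature(1_000_000, 250_000, eps))   # positive: less than half the spins excited
print(spin_temperature(1_000_000, 750_000, eps))   # negative: population inverted
```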
See main article: Orders of magnitude (temperature).
| | Temperature (Kelvin) | Temperature (Celsius) | Peak emittance wavelength of black-body radiation |
|---|---|---|---|
| Absolute zero (precisely by definition) | 0 K | −273.15 °C | |
| Blackbody temperature of the black hole at the centre of our galaxy, Sagittarius A*[100] | | | |
| Lowest temperature achieved[101] | | | |
| Coldest Bose–Einstein condensate | | | |
| One millikelvin (precisely by definition) | 0.001 K | −273.149 °C | (radio, FM band) |
| Cosmic microwave background (2013 measurement) | | | (millimeter-wavelength microwave) |
| Water triple point (previously by definition) | 273.16 K | 0.01 °C | (long-wavelength IR) |
| Water boiling point | | | (mid-wavelength IR) |
| Iron melting point | | | (far infrared) |
| Incandescent lamp | | | (near infrared) |
| Sun's visible surface | | | (green-blue light) |
| Lightning bolt channel | | | (far ultraviolet light) |
| Sun's core | | 16 million °C | (X-rays) |
| Thermonuclear weapon (peak temperature) | | 350 million °C | (gamma rays) |
| Sandia National Labs' Z machine | | 2 billion °C | (gamma rays) |
| Core of a high-mass star on its last day | | 3 billion °C | (gamma rays) |
| Merging binary neutron star system | | 350 billion °C | (gamma rays) |
| Relativistic Heavy Ion Collider[102] | | 1 trillion °C | (gamma rays) |
| CERN's proton vs nucleus collisions[103] | | 10 trillion °C | (gamma rays) |
"It is impossible by any procedure, no matter how idealized, to reduce the temperature of any system to zero temperature in a finite number of finite operations."