Heat capacity or thermal capacity is a physical property of matter, defined as the amount of heat to be supplied to an object to produce a unit change in its temperature.[1] The SI unit of heat capacity is joule per kelvin (J/K).
Heat capacity is an extensive property. The corresponding intensive property is the specific heat capacity, found by dividing the heat capacity of an object by its mass. Dividing the heat capacity by the amount of substance in moles yields its molar heat capacity. The volumetric heat capacity measures the heat capacity per volume. In architecture and civil engineering, the heat capacity of a building is often referred to as its thermal mass.
The heat capacity of an object, denoted by $C$, is the limit
$$C = \lim_{\Delta T \to 0} \frac{\Delta Q}{\Delta T},$$
where $\Delta Q$ is the amount of heat that must be added to the object in order to raise its temperature by $\Delta T$.
The value of this parameter usually varies considerably depending on the starting temperature $T$ of the object and the pressure $p$ applied to it. Therefore, it should be considered a function $C(p, T)$ of those two variables.
The variation can be ignored in contexts when working with objects in narrow ranges of temperature and pressure. For example, the heat capacity of a block of iron weighing one pound is about 204 J/K when measured from a starting temperature T = 25 °C and P = 1 atm of pressure. That approximate value is adequate for temperatures between 15 °C and 35 °C, and surrounding pressures from 0 to 10 atmospheres, because the exact value varies very little in those ranges. One can trust that the same heat input of 204 J will raise the temperature of the block from 15 °C to 16 °C, or from 34 °C to 35 °C, with negligible error.
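As a rough check of that figure, using a typical handbook value of about 449 J/(kg⋅K) for the specific heat capacity of iron near room temperature:
$$C \approx m\,c \approx 0.4536\ \mathrm{kg} \times 449\ \tfrac{\mathrm{J}}{\mathrm{kg\,K}} \approx 204\ \tfrac{\mathrm{J}}{\mathrm{K}}, \qquad \Delta T = \frac{Q}{C} \approx \frac{204\ \mathrm{J}}{204\ \mathrm{J/K}} = 1\ \mathrm{K}.$$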
At constant pressure, heat supplied to the system contributes to both the work done and the change in internal energy, according to the first law of thermodynamics. The heat capacity is called $C_p$ and is defined as
$$C_p = \left(\frac{\delta Q}{dT}\right)_p.$$
From the first law of thermodynamics follows
$$\delta Q = dU + p\,dV.$$
Expressing the internal energy as a function of the variables $p$ and $T$ gives
$$\delta Q = \left(\frac{\partial U}{\partial T}\right)_p dT + \left(\frac{\partial U}{\partial p}\right)_T dp + p\left[\left(\frac{\partial V}{\partial T}\right)_p dT + \left(\frac{\partial V}{\partial p}\right)_T dp\right].$$
For constant pressure $(dp = 0)$, this yields
$$C_p = \left(\frac{\delta Q}{dT}\right)_p = \left(\frac{\partial U}{\partial T}\right)_p + p\left(\frac{\partial V}{\partial T}\right)_p = \left(\frac{\partial H}{\partial T}\right)_p,$$
where the final equality follows from the appropriate Maxwell relations, and is commonly used as the definition of the isobaric heat capacity.
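One short way to see the final equality uses the definition of enthalpy, $H = U + pV$: at constant pressure,
$$dH = dU + p\,dV + V\,dp = dU + p\,dV \quad (dp = 0), \qquad \text{so} \qquad \left(\frac{\partial H}{\partial T}\right)_p = \left(\frac{\partial U}{\partial T}\right)_p + p\left(\frac{\partial V}{\partial T}\right)_p.$$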
A system undergoing a process at constant volume implies that no expansion work is done, so the heat supplied contributes only to the change in internal energy. The heat capacity obtained this way is denoted $C_V$; its value is always less than that of $C_p$, since at constant pressure part of the supplied heat goes into expansion work.
Expressing the internal energy as a function of the variables $T$ and $V$ gives
$$\delta Q = \left(\frac{\partial U}{\partial T}\right)_V dT + \left(\frac{\partial U}{\partial V}\right)_T dV + p\,dV.$$
For a constant volume $(dV = 0)$, the heat capacity reads
$$C_V = \left(\frac{\delta Q}{dT}\right)_V = \left(\frac{\partial U}{\partial T}\right)_V.$$
The relation between $C_V$ and $C_p$ is then
$$C_p = C_V + \left(\left(\frac{\partial U}{\partial V}\right)_T + p\right)\left(\frac{\partial V}{\partial T}\right)_p.$$
For an ideal gas, this reduces to Mayer's relation,
$$C_p - C_V = nR,$$
and the ratio of the heat capacities is
$$C_p / C_V = \gamma,$$
where $n$ is the amount of substance in moles, $R$ is the universal gas constant, and $\gamma$ is the heat capacity ratio (adiabatic index).
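A brief sketch of how the ideal-gas case follows from the general relation: for an ideal gas the internal energy depends only on temperature, and the equation of state $pV = nRT$ gives
$$\left(\frac{\partial U}{\partial V}\right)_T = 0, \qquad \left(\frac{\partial V}{\partial T}\right)_p = \frac{nR}{p}, \qquad \text{so} \qquad C_p - C_V = (0 + p)\,\frac{nR}{p} = nR.$$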
Using the above two relations, the heat capacities can be deduced as follows:
$$C_V = \frac{nR}{\gamma - 1}, \qquad C_p = \gamma\,\frac{nR}{\gamma - 1}.$$
Following from the equipartition of energy, the isochoric heat capacity of an ideal gas is
$$C_V = nR\,\frac{N_f}{2} = nR\,\frac{3 + N_i}{2},$$
where $N_f$ is the number of degrees of freedom of each particle in the gas and $N_i = N_f - 3$ is the number of internal (non-translational) degrees of freedom. For a monatomic ideal gas, which has only the three translational degrees of freedom ($N_i = 0$), this gives
$$C_V = \frac{3nR}{2}.$$
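For example, a diatomic ideal gas at moderate temperatures (assuming its two rotational degrees of freedom are active and vibration is frozen out) has $N_f = 5$, so
$$C_V = \frac{5nR}{2}, \qquad C_p = C_V + nR = \frac{7nR}{2}, \qquad \gamma = \frac{C_p}{C_V} = \frac{7}{5} = 1.4.$$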
In an isothermal process the internal energy does not change, since the temperature of the system is constant throughout; all of the supplied heat therefore goes into work done by the system. An infinite amount of heat would be required to raise the temperature of the system by one unit, so the heat capacity of the system is infinite, or undefined.
The heat capacity of a system undergoing a phase transition is likewise infinite, because the heat is used to change the state of the material rather than to raise the overall temperature.
The heat capacity may be well-defined even for heterogeneous objects, with separate parts made of different materials; such as an electric motor, a crucible with some metal, or a whole building. In many cases, the (isobaric) heat capacity of such objects can be computed by simply adding together the (isobaric) heat capacities of the individual parts.
However, this computation is valid only when all parts of the object are at the same external pressure before and after the measurement. That may not be possible in some cases. For example, when heating an amount of gas in an elastic container, its volume and pressure will both increase, even if the atmospheric pressure outside the container is kept constant. Therefore, the effective heat capacity of the gas, in that situation, will have a value intermediate between its isobaric and isochoric capacities $C_p$ and $C_V$.
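As a minimal sketch of the additive estimate described above (the component names and values below are hypothetical approximate handbook figures, and the caveat about equal external pressure still applies), the isobaric heat capacity of a composite object can be approximated by summing mass times specific heat capacity over its parts:

# Hypothetical parts list: (mass in kg, specific heat capacity in J/(kg*K))
parts = {
    "aluminium frame": (1.5, 897.0),
    "copper windings": (2.0, 385.0),
    "steel housing":   (5.0, 490.0),
}

# Sum of m * c over all parts estimates the object's total heat capacity in J/K,
# valid only if every part stays at the same external pressure.
total_heat_capacity = sum(mass * c for mass, c in parts.values())
print(f"Estimated heat capacity: {total_heat_capacity:.0f} J/K")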
For complex thermodynamic systems with several interacting parts and state variables, or for measurement conditions that are neither constant pressure nor constant volume, or for situations where the temperature is significantly non-uniform, the simple definitions of heat capacity above are not useful or even meaningful. The heat energy that is supplied may end up as kinetic energy (energy of motion) and potential energy (energy stored in force fields), both at macroscopic and atomic scales. Then the change in temperature will depend on the particular path that the system followed through its phase space between the initial and final states. Namely, one must somehow specify how the positions, velocities, pressures, volumes, etc. changed between the initial and final states; and use the general tools of thermodynamics to predict the system's reaction to a small energy input. The "constant volume" and "constant pressure" heating modes are just two among infinitely many paths that a simple homogeneous system can follow.
The heat capacity can usually be measured by the method implied by its definition: start with the object at a known uniform temperature, add a known amount of heat energy to it, wait for its temperature to become uniform, and measure the change in its temperature. This method can give moderately accurate values for many solids; however, it cannot provide very precise measurements, especially for gases.
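A minimal sketch of that recipe in code, with made-up numbers for illustration:

def estimate_heat_capacity(heat_joules, t_start_kelvin, t_end_kelvin):
    """Estimate C = Q / delta T from a single heating experiment."""
    return heat_joules / (t_end_kelvin - t_start_kelvin)

# Example: 1020 J of heat raises the object's temperature by 5 K.
print(estimate_heat_capacity(1020.0, 298.15, 303.15))  # 204.0 J/K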
The SI unit for heat capacity of an object is joule per kelvin (J/K or J⋅K⁻¹). Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same unit as J/°C.
The heat capacity of an object is an amount of energy divided by a temperature change, which has the dimension L²⋅M⋅T⁻²⋅Θ⁻¹. Therefore, the SI unit J/K is equivalent to kilogram meter squared per second squared per kelvin (kg⋅m²⋅s⁻²⋅K⁻¹).
Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use the so-called English Engineering units, which include the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or degree Rankine (about 0.55556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.06 J)[2][3] as the unit of heat. In those contexts, the unit of heat capacity is 1 BTU/°R ≈ 1900 J/K.[4] The BTU was in fact defined so that the average heat capacity of one pound of water would be 1 BTU/°F. With respect to mass, note the corresponding conversion 1 BTU/(lb⋅°R) ≈ 4187 J/(kg⋅K)[5] and compare the calorie (below).
In chemistry, heat amounts are often measured in calories. Confusingly, two units with that name, denoted "cal" and "Cal", have been commonly used to measure amounts of heat:
the small calorie (or gram-calorie, "cal"), defined as 4.184 J exactly, and
the large calorie (also kilocalorie or food calorie; "kcal" or "Cal"), equal to 1000 small calories, that is, 4184 J exactly.
With these units of heat energy, the units of heat capacity are
1 cal/°C = 4.184 J/K
1 kcal/°C = 4184 J/K
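A small sketch of these conversions, using the constants quoted above:

JOULES_PER_BTU = 1055.06        # British thermal unit, as quoted above
KELVIN_PER_RANKINE = 5.0 / 9.0  # one degree Rankine (or Fahrenheit) expressed in kelvins
JOULES_PER_CAL = 4.184          # small calorie
JOULES_PER_KCAL = 4184.0        # kilocalorie

print(JOULES_PER_BTU / KELVIN_PER_RANKINE)  # 1 BTU/degR  ~ 1899 J/K
print(JOULES_PER_CAL)                       # 1 cal/degC  =  4.184 J/K
print(JOULES_PER_KCAL)                      # 1 kcal/degC = 4184 J/K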
Most physical systems exhibit a positive heat capacity; constant-volume and constant-pressure heat capacities, rigorously defined as partial derivatives, are always positive for homogeneous bodies.[6] However, even though it can seem paradoxical at first,[7][8] there are some systems for which the heat capacity $Q/\Delta T$ is negative: the supplied heat $Q$ and the resulting temperature change $\Delta T$ have opposite signs.
According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy $U_\text{pot}$ and the average kinetic energy $U_\text{kin}$ are locked together in the relation
$$U_\text{pot} = -2\,U_\text{kin}.$$
The total energy $U$ ($= U_\text{pot} + U_\text{kin}$) therefore obeys
$$U = -U_\text{kin}.$$
If the system loses energy, for example, by radiating energy into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system therefore can be said to have a negative heat capacity.[10]
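A minimal sketch of that argument, assuming the kinetic temperature relation of an ideal monatomic gas, $U_\text{kin} = \tfrac{3}{2}NkT$:
$$U = -U_\text{kin} = -\tfrac{3}{2}NkT, \qquad C = \frac{dU}{dT} = -\tfrac{3}{2}Nk < 0.$$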
A more extreme version of this occurs with black holes. According to black-hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.
According to the second law of thermodynamics, when two systems with different temperatures interact via a purely thermal connection, heat will flow from the hotter system to the cooler one (this can also be understood from a statistical point of view). Therefore, if such systems have equal temperatures, they are at thermal equilibrium. However, this equilibrium is stable only if the systems have positive heat capacities. For such systems, when heat flows from a higher-temperature system to a lower-temperature one, the temperature of the first decreases and that of the latter increases, so that both approach equilibrium. In contrast, for systems with negative heat capacities, the temperature of the hotter system will further increase as it loses heat, and that of the colder will further decrease, so that they will move farther from equilibrium. This means that the equilibrium is unstable.
For example, according to theory, the smaller (less massive) a black hole is, the smaller its Schwarzschild radius will be, and therefore the greater the curvature of its event horizon will be, as well as its temperature. Thus, the smaller the black hole, the more thermal radiation it will emit and the more quickly it will evaporate by Hawking radiation.