In analytical chemistry, a standard solution (titrant or titrator) is a solution containing an accurately known concentration of an element or a substance. Standard solutions are generally prepared by dissolving a solute of known mass in a solvent and diluting to a precise volume, or by diluting a solution of known concentration with more solvent.[1]
Standard solutions are used to determine the concentrations of other solutions, such as those in titrations. The concentrations of standard solutions are normally expressed in moles per litre (mol/L, often abbreviated as M, for molarity), moles per cubic decimetre (mol/dm³), kilomoles per cubic metre (kmol/m³), grams per millilitre (g/mL), or in terms related to a particular titration (such as titres).
Preparing standard solutions requires standards containing a known amount of analyte. Analytical standards are categorized as primary or secondary standards.
Primary standards are compounds of known stoichiometry, high purity, and high stability. Standard solutions can be prepared from a primary standard by accurately weighing a known quantity of the compound and diluting it to a precise volume.[2] For example, a weighed sample of 0.15 g of sodium chloride contains 2.6 × 10⁻³ moles of sodium chloride. Diluting this sample in a 50-mL volumetric flask gives a concentration of 0.051 M.
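As a minimal sketch, the calculation above can be reproduced in a few lines of Python; the molar mass of NaCl (58.44 g/mol) is the only value assumed beyond those in the text:

```python
# Sketch of the primary-standard calculation in the worked example.
mass_g = 0.15          # weighed mass of NaCl (g)
molar_mass = 58.44     # molar mass of NaCl (g/mol), standard data
volume_L = 0.050       # 50-mL volumetric flask, in litres

moles = mass_g / molar_mass      # ≈ 2.6e-3 mol
molarity = moles / volume_L      # ≈ 0.051 M

print(f"{moles:.2e} mol NaCl -> {molarity:.3f} M")
```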
Secondary standards are compounds whose concentration is determined by standardization against a primary standard; they are used when a compound does not itself satisfy the requirements of a primary standard.
In titrations, the concentration of an analyte in solution is determined by titrating a standard solution against the analyte solution and locating the equivalence point of the neutralization.[3] For example, to find the concentration of a hydrochloric acid solution, a standard solution of known concentration, such as 0.5 M sodium hydroxide, is titrated against it.
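A hedged sketch of the equivalence-point arithmetic for this example follows; the titrant and aliquot volumes are illustrative assumptions, not values from the text:

```python
# NaOH + HCl -> NaCl + H2O is 1:1, so at the equivalence point
# moles of acid = moles of base, i.e. c_acid * v_acid = c_base * v_base.
c_base = 0.5      # standard NaOH concentration (mol/L), from the text
v_base = 0.0250   # assumed titrant volume at equivalence (L)
v_acid = 0.0200   # assumed aliquot of HCl solution (L)

c_acid = c_base * v_base / v_acid
print(f"HCl concentration: {c_acid:.3f} M")   # 0.625 M with these volumes
```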
Standard solutions are also commonly used to determine the concentration of an analyte species via a calibration curve. A calibration curve is obtained by measuring a series of standard solutions of known concentration; the concentration of an unknown sample can then be read from the curve using linear regression analysis.[4] For example, by comparing the absorbance of a solution of unknown concentration to that of a series of standard solutions of varying concentration, the concentration of the unknown can be determined using Beer's law.
Any form of spectroscopy can be used in this way, so long as the analyte species absorbs appreciably in the measured spectral region. The standard solutions serve as the reference against which the molarity of the unknown species is determined.
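A minimal sketch of such a regression is shown below; the absorbance values are hypothetical, and the fit assumes Beer's law holds (absorbance linear in concentration):

```python
import numpy as np

# Hypothetical calibration data: absorbance measured for standards
# of known concentration (not values from the text).
conc = np.array([0.0, 3.0e-4, 6.0e-4, 9.0e-4, 1.2e-3])   # g/mL
absorbance = np.array([0.002, 0.151, 0.298, 0.452, 0.601])

# Least-squares fit of A = m*c + b.
m, b = np.polyfit(conc, absorbance, 1)

# Invert the fitted line to find the unknown's concentration.
a_unknown = 0.350
c_unknown = (a_unknown - b) / m
print(f"Unknown concentration: {c_unknown:.2e} g/mL")
```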
The matrix effect can degrade a calibration curve through interactions between the sample matrix and the analyte that alter the analyte's response. It can be reduced by adding internal standards to the standard solutions, or by using the standard addition method.[5]
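As an illustration of the standard addition method, the sketch below fits signal against spiked concentration and extrapolates back to zero signal; all numbers are hypothetical:

```python
import numpy as np

# Identical aliquots of the sample are spiked with increasing,
# known amounts of analyte; the x-intercept of the fitted line
# gives the (negative of the) sample's own concentration.
added = np.array([0.0, 1.0e-4, 2.0e-4, 3.0e-4])   # spiked conc. (g/mL)
signal = np.array([0.20, 0.30, 0.41, 0.50])        # measured response

m, b = np.polyfit(added, signal, 1)

# x-intercept is -b/m, so the sample concentration is b/m.
c_sample = b / m
print(f"Sample concentration: {c_sample:.2e} g/mL")
```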
Suppose the concentration of glutamine in an unknown sample needs to be measured. To do so, a series of standard solutions containing glutamine is prepared to create a calibration curve. A table summarizing a method for creating these solutions is shown below:
| Solution | Glutamine added (mL) | Diluted to the mark with | Resulting concentration (g/mL) |
|---|---|---|---|
| 1 (blank) | 0 | Deionized water, 25-mL volumetric flask | 0 |
| 2 | 1 | Deionized water, 25-mL volumetric flask | 3.00 × 10⁻⁴ |
| 3 | 2 | Deionized water, 25-mL volumetric flask | 6.00 × 10⁻⁴ |
| 4 | 3 | Deionized water, 25-mL volumetric flask | 9.00 × 10⁻⁴ |
| 5 | 4 | Deionized water, 25-mL volumetric flask | 1.20 × 10⁻³ |
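The table's concentrations follow from simple dilution arithmetic; the stock concentration of 7.5 × 10⁻³ g/mL is implied by row 2 (3.00 × 10⁻⁴ g/mL × 25 mL ÷ 1 mL) rather than stated in the text:

```python
# Reproducing the dilution series in the table above.
stock = 7.5e-3     # implied glutamine stock concentration (g/mL)
flask_mL = 25.0    # volumetric flask volume (mL)

for v_added in [0, 1, 2, 3, 4]:
    conc = stock * v_added / flask_mL
    print(f"{v_added} mL -> {conc:.2e} g/mL")
```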