Internal standard
In a chemical analysis, an internal standard is a substance that is added in a constant amount to samples, the blank, and calibration standards. Calibration is then performed by plotting the ratio of the analyte signal to the internal standard signal as a function of the analyte concentration of the standards.
An appropriately selected internal standard corrects for random and systematic sources of uncertainty that arise during sample preparation or from instrument fluctuation, because the ratio of the analyte signal to the internal standard signal is independent of these variations. If the measured value of the analyte is erroneously shifted above or below the actual value, the internal standard measurement should shift in the same direction.[1]
A ratio plot compensates well for variation in detector sensitivity, but it can be biased. If the response variability is instead caused by differences in the mass of the analysed sample, and the traditional (non-internal-standard) calibration curve of an analyte is not linear through the origin, relative concentration/relative calibration calculations should be used in place of the ratio plot.[2]
History
The earliest recorded use of the internal standard method dates back to Gouy's
Applications
Nuclear magnetic resonance spectroscopy
In NMR spectroscopy, e.g. of the nuclei 1H, 13C and 29Si, frequencies depend on the magnetic field, which is not the same across all experiments. Therefore, frequencies are reported as relative differences to tetramethylsilane (TMS), an internal standard that George Tiers proposed in 1958 and that the International Union of Pure and Applied Chemistry has since endorsed.[5][6] The relative difference to TMS is called chemical shift.[7]
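The chemical shift is the frequency difference from TMS divided by the spectrometer operating frequency, scaled to parts per million. A minimal sketch, using an assumed 300 MHz spectrometer and an illustrative frequency offset rather than measured data:

```python
# Chemical shift relative to TMS: delta (ppm) = (nu_sample - nu_TMS) / nu_0 * 1e6
# Both values below are hypothetical, chosen only to illustrate the arithmetic.
nu_0 = 300e6        # spectrometer operating frequency in Hz (assumed 300 MHz)
offset_hz = 2190.0  # signal frequency minus TMS frequency in Hz (hypothetical)

delta_ppm = offset_hz / nu_0 * 1e6
print(delta_ppm)  # ~7.3 ppm, a typical aromatic-proton shift
```

Because the shift is a ratio of frequencies, it is independent of the field strength, which is what makes values reported on different instruments comparable.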
TMS works as an ideal standard because it is relatively inert and its identical methyl protons produce a strong upfield signal, isolated from most other protons.[7] It is soluble in most organic solvents and is removable via distillation due to its low boiling point.[1]
In practice, the difference between the signals of common solvents and TMS is known. Therefore, no TMS needs to be added to commercial deuterated solvents, as modern instruments are capable of detecting the small quantities of protonated solvent present. By specifying the lock solvent to be used, modern spectrometers are able to correctly reference the sample; in effect, the solvent itself serves as the internal standard.[1]
Chromatography
In chromatography, internal standards are used to determine the concentration of other analytes by calculating a response factor. The selected internal standard should have a retention time and derivatization behaviour similar to the analyte's. It must be stable and must not interfere with the sample components. This mitigates the uncertainty that can occur in preparatory steps such as sample injection.[1]
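The response-factor calculation can be sketched as follows; all peak areas and concentrations here are hypothetical, chosen only to show the arithmetic. The response factor is determined from a standard of known composition and then used to back-calculate the analyte concentration in an unknown sample:

```python
# Response factor from a standard of known composition (hypothetical values)
area_analyte_std = 5000.0  # analyte peak area in the standard
conc_analyte_std = 10.0    # known analyte concentration in the standard
area_is_std = 4000.0       # internal standard peak area in the standard
conc_is = 10.0             # internal standard concentration (constant in all samples)

rf = (area_analyte_std / conc_analyte_std) / (area_is_std / conc_is)  # 1.25

# Unknown sample: same internal standard amount, different analyte peak area
area_analyte_unk = 3000.0
area_is_unk = 4000.0
conc_unknown = (area_analyte_unk / area_is_unk) * conc_is / rf
print(conc_unknown)  # 6.0
```

Because both areas come from the same injection, variation in the injected volume cancels in the area ratio.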
In gas chromatography-mass spectrometry (GC-MS), deuterated compounds with similar structures to the analyte commonly act as effective internal standards.[8] However, there are non-deuterated internal standards such as norleucine, which is popular in the analysis of amino acids because it can be separated from accompanying peaks.[9][10][11]
Selecting an internal standard for liquid chromatography-mass spectrometry (LC-MS) depends on the employed ionization method. The internal standard needs a comparable ionization response and similar retention behaviour to the analyte, so that both are affected equally by matrix effects; stable isotope-labelled analogues of the analyte are commonly used for this reason.
Inductively coupled plasma
Selecting an internal standard in inductively coupled plasma spectroscopy can be difficult, because signals from the sample matrix can overlap with those belonging to the analyte. Yttrium is a common internal standard that is naturally absent in most samples. It has both a mid-range mass and emission lines that do not interfere with many analytes. The analyte signal is then compared to the intensity of the yttrium signal.[1][14]
Example of internal standard method
One way to visualize the internal standard method is to create one calibration curve that doesn't use the method and one calibration curve that does. Suppose there are known concentrations of nickel in a set of calibration solutions: 0 ppm, 1.6 ppm, 3.2 ppm, 4.8 ppm, 6.4 ppm, and 8 ppm. Each solution also has 5 ppm yttrium to act as an internal standard. If these solutions are measured using ICP-OES, the intensity of the yttrium signal should be consistent across all solutions. If not, the intensity of the nickel signal is likely imprecise as well.
The calibration curve that does not use the internal standard method ignores the uncertainty between measurements. The coefficient of determination (R²) for this plot is 0.9985.
In the calibration curve that uses the internal standard, the y-axis is the ratio of the nickel signal to the yttrium signal. This ratio is largely unaffected by uncertainty in the nickel measurements, since any fluctuation that shifts the nickel signal should shift the yttrium signal in the same way. This results in a higher R² of 0.9993.
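The two calibration curves can be compared numerically. The sketch below invents signal values and per-measurement drift factors to illustrate the effect; these are not the measured data behind the R² values quoted above:

```python
import numpy as np

conc_ni = np.array([0.0, 1.6, 3.2, 4.8, 6.4, 8.0])  # ppm Ni in the standards
# Hypothetical per-measurement drift (e.g. nebulizer or plasma fluctuation)
drift = np.array([1.00, 0.97, 1.03, 0.95, 1.05, 0.98])
signal_ni = 1000.0 * conc_ni * drift  # drifting nickel signal
signal_y = 500.0 * drift              # 5 ppm yttrium, subject to the same drift

def r_squared(x, y):
    """Coefficient of determination for a linear least-squares fit."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

r2_raw = r_squared(conc_ni, signal_ni)               # without internal standard
r2_ratio = r_squared(conc_ni, signal_ni / signal_y)  # ratio to yttrium signal
```

Because the drift multiplies both signals equally, it cancels in the ratio, so the ratio-based curve is essentially perfectly linear while the raw curve shows scatter.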