Internal standard

Source: Wikipedia, the free encyclopedia.

In a chemical analysis, the internal standard method involves adding the same amount of a chemical substance to each sample and calibration solution. The internal standard responds proportionally to changes in the analyte and provides a similar, but not identical, measurement signal. It must also be absent from the sample matrix to ensure there is no other source of the internal standard present. Taking the ratio of analyte signal to internal standard signal and plotting it against the analyte concentrations in the calibration solutions produces a calibration curve. The calibration curve can then be used to calculate the analyte concentration in an unknown sample.[1]
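
To make the procedure concrete, the following sketch (in Python, with entirely hypothetical concentrations and signal values; it is not taken from the cited source) fits a calibration line to the signal ratio and inverts it for an unknown sample.

```python
# Minimal sketch of the internal standard method (hypothetical values).
import numpy as np

# Known analyte concentrations in the calibration solutions (ppm).
analyte_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

# Hypothetical measured signals; the same amount of internal standard was
# added to every solution, so its signal should stay roughly constant.
analyte_signal = np.array([0.02, 1.05, 2.08, 3.02, 4.10])
istd_signal = np.array([1.00, 1.02, 0.99, 1.01, 1.03])

# Calibration curve: signal ratio plotted against analyte concentration.
ratio = analyte_signal / istd_signal
slope, intercept = np.polyfit(analyte_conc, ratio, 1)

# Unknown sample: measure both signals, take their ratio, invert the line.
unknown_ratio = 2.55 / 1.01          # hypothetical unknown-sample signals
unknown_conc = (unknown_ratio - intercept) / slope
print(f"Estimated analyte concentration: {unknown_conc:.2f} ppm")
```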

Selecting an appropriate internal standard accounts for random and systematic sources of uncertainty that arise during sample preparation or from instrument fluctuation. This is because the ratio of the analyte signal to the internal standard signal is independent of these variations. If the measured value of the analyte is erroneously shifted above or below the actual value, the internal standard measurements should shift in the same direction.[1]

A ratio plot compensates well for variations in detector sensitivity, but it can be biased. If the response variability instead arises from differences in the mass of the analysed sample, and the conventional (non-internal-standard) calibration curve of the analyte is not linear through the origin, relative concentration/relative calibration calculations should be used instead of the signal ratio.[2]
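
As a simplified illustration of the reasoning (the straight-line response models and symbols below are assumptions made for this sketch, not taken from the cited source):

```latex
% Illustrative straight-line response models (assumptions for this sketch).
% If both responses pass through the origin and share a common scale factor f
% (for example, a different mass of analysed sample),
%   S_A = k_A (f C_A),   S_IS = k_IS (f C_IS),
% the factor f cancels in the ratio:
\[
\frac{S_{A}}{S_{IS}} \;=\; \frac{k_{A}\,C_{A}}{k_{IS}\,C_{IS}}
\]
% If the analyte response has a non-zero intercept, S_A = k_A (f C_A) + b_A,
% the factor f no longer cancels and the ratio calibration is biased:
\[
\frac{S_{A}}{S_{IS}} \;=\; \frac{k_{A}\,C_{A}}{k_{IS}\,C_{IS}} \;+\; \frac{b_{A}}{k_{IS}\,f\,C_{IS}}
\]
```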

History

The earliest recorded use of the internal standard method dates back to Gouy's work. The method has since been applied in analytical techniques such as nuclear magnetic resonance (NMR) spectroscopy, chromatography, and inductively coupled plasma spectroscopy.

Applications

Nuclear magnetic resonance spectroscopy

In NMR spectroscopy, e.g. of the nuclei 1H, 13C and 29Si, frequencies depend on the magnetic field, which is not the same across all experiments. Therefore, frequencies are reported as relative differences to tetramethylsilane (TMS), an internal standard that George Tiers proposed in 1958 and that the International Union of Pure and Applied Chemistry has since endorsed.[5][6] The relative difference to TMS is called chemical shift.[7]
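
Expressed as a formula, the chemical shift in parts per million is commonly written as follows (the notation here is chosen for illustration):

```latex
% Chemical shift in parts per million, relative to the TMS resonance frequency.
\[
\delta \;=\; \frac{\nu_{\text{sample}} - \nu_{\text{TMS}}}{\nu_{\text{TMS}}} \times 10^{6}
\]
```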

TMS works as an ideal standard because it is relatively inert and its identical methyl protons produce a strong upfield signal, isolated from most other protons.[7] It is soluble in most organic solvents and is removable via distillation due to its low boiling point.[1]

In practice, the differences between the signals of common solvents and TMS are known. Therefore, no TMS needs to be added to commercial deuterated solvents, as modern instruments are capable of detecting the small quantities of protonated solvent present. By specifying the lock solvent to be used, modern spectrometers are able to reference the sample correctly; in effect, the solvent itself serves as the internal standard.[1]

Chromatography

In chromatography, internal standards are used to determine the concentration of other analytes by calculating a response factor. The selected internal standard should have a retention time and derivatization similar to those of the analyte. It must be stable and must not interfere with the sample components. This mitigates the uncertainty that can arise in preparatory steps such as sample injection.[1]
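
A sketch of how a response factor is typically applied (the symbols below are illustrative and not quoted from the cited source):

```latex
% Relative response factor F, determined from a calibration solution with
% known analyte (A) and internal standard (IS) concentrations C and peak areas A:
\[
F \;=\; \frac{A_{\mathrm{A}}/C_{\mathrm{A}}}{A_{\mathrm{IS}}/C_{\mathrm{IS}}}
\]
% Analyte concentration in an unknown sample spiked with the same amount of
% internal standard:
\[
C_{\mathrm{A}} \;=\; \frac{A_{\mathrm{A}}}{A_{\mathrm{IS}}}\cdot\frac{C_{\mathrm{IS}}}{F}
\]
```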

In gas chromatography-mass spectrometry (GC-MS), deuterated compounds with similar structures to the analyte commonly act as effective internal standards.[8] However, there are non-deuterated internal standards such as norleucine, which is popular in the analysis of amino acids because it can be separated from accompanying peaks.[9][10][11]

Selecting an internal standard for liquid chromatography-mass spectrometry (LC-MS) depends on the employed ionization method. The internal standard needs a comparable ionization response and fragmentation pattern to the analyte.[12] LC-MS internal standards are often isotopically analogous to the structure of the analyte, using isotopes such as deuterium (2H), 13C, 15N and 18O.[13]

Inductively coupled plasma

Selecting an internal standard in inductively coupled plasma spectroscopy can be difficult, because signals from the sample matrix can overlap with those belonging to the analyte. Yttrium is a common internal standard that is naturally absent in most samples. It has both a mid-range mass and emission lines that do not interfere with many analytes. The intensity of the analyte signal is then compared to that of the yttrium signal.[1][14]

In inductively coupled plasma-mass spectrometry (ICP-MS), an internal standard can be selected based on how similar its ionization potential, change in enthalpy, and change in entropy are to those of the analyte.[15]

Inductively coupled plasma-optical emission spectroscopy (ICP-OES) internal standards can be selected by observing how the analyte and internal standard signals change under varying experimental conditions. This includes adjusting the sample matrix or instrument settings and evaluating whether the selected internal standard responds in the same way as the analyte.[16]

Example of internal standard method

Spreadsheet for a worked example of plotting nickel concentrations in calibration solutions. The calibration curve at the top does not use the internal standard method; the calibration curve at the bottom does.

One way to visualize the internal standard method is to create one calibration curve that does not use the method and one calibration curve that does. Suppose there are known concentrations of nickel in a set of calibration solutions: 0 ppm, 1.6 ppm, 3.2 ppm, 4.8 ppm, 6.4 ppm, and 8 ppm. Each solution also contains 5 ppm yttrium to act as an internal standard. If these solutions are measured using ICP-OES, the intensity of the yttrium signal should be consistent across all solutions. If it is not, the intensity of the nickel signal is likely imprecise as well.

The calibration curve that does not use the internal standard method cannot compensate for this measurement-to-measurement uncertainty. The coefficient of determination (R²) for this plot is 0.9985.

In the calibration curve that uses the internal standard, the y-axis is the ratio of the nickel signal to the yttrium signal. This ratio is largely unaffected by measurement uncertainty, because any fluctuation that shifts the nickel signal should shift the yttrium signal in the same way. This results in a higher R² of 0.9993.
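
The comparison can be reproduced in outline with a short script. The sketch below uses the nickel and yttrium concentrations from the example but hypothetical signal intensities and drift factors, so its R² values only illustrate the trend rather than reproduce the figures quoted above.

```python
# Worked example: calibration curves with and without an internal standard.
# Concentrations match the example above; the drift factors and derived
# intensities are hypothetical, so the exact R^2 values differ from the
# 0.9985 / 0.9993 quoted in the text.
import numpy as np

ni_conc = np.array([0.0, 1.6, 3.2, 4.8, 6.4, 8.0])   # ppm Ni in each solution
y_conc = 5.0                                          # ppm Y added to every solution

# Hypothetical run-to-run sensitivity drift affecting each measurement.
drift = np.array([1.00, 1.05, 0.96, 1.02, 0.94, 1.03])

ni_signal = drift * 1.0 * ni_conc    # raw nickel intensities (arbitrary units)
y_signal = drift * 0.2 * y_conc      # yttrium intensities, ideally constant

def r_squared(x, y):
    """Coefficient of determination for a straight-line fit of y against x."""
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Without the internal standard, the drift degrades the fit.
print("R^2 without internal standard:", round(r_squared(ni_conc, ni_signal), 4))

# With the internal standard, the drift cancels in the Ni/Y signal ratio.
print("R^2 with internal standard:   ", round(r_squared(ni_conc, ni_signal / y_signal), 4))
```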

References