Floating-point arithmetic
In computing, floating-point arithmetic (FP) is arithmetic on subsets of real numbers formed by a significand (a signed sequence of a fixed number of digits in some base) multiplied by an integer power of that base. Numbers of this form are called floating-point numbers. For example, 12.345 is a floating-point number in base ten with five digits of precision: 12.345 = 12345 × 10^{−3}.
However, unlike 12.345, 12.3456 is not a floating-point number in base ten with five digits of precision—it needs six digits of precision; the nearest floating-point number with only five digits is 12.346. In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common.
Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number.^{[1]}^{: 22 }^{[2]}^{: 10 } For example, in a floating-point arithmetic with five base-ten digits of precision, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.
The term floating point refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude, such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used in systems with very small and very large real numbers that require fast processing times. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.^{[3]}
Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.
The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.
A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
Overview
Floating-point numbers
A number representation specifies some way of encoding a number, usually as a string of digits.
There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In scientific notation, the given number is scaled by a power of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047×10^{5} seconds.
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
 A signed (meaning positive or negative) digit string of a given length in a given base (or radix). This digit string is referred to as the significand, mantissa, or coefficient.^{[nb 1]} The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost (least significant) digit. This article generally follows the convention that the radix point is set just after the most significant (leftmost) digit.
 A signed integer exponent (also referred to as the characteristic, or scale),^{[nb 2]} which modifies the magnitude of the number.
To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative.
Using base-10 (the familiar decimal notation) as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^{5} to give 1.528535047×10^{5}, or 152,853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred.
Symbolically, this final value is:
 s / b^{p−1} × b^{e},
where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, this is the number ten), and e is the exponent.
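As an illustration, the following C sketch (our own; the variable names are not part of any standard API) evaluates this formula for the decimal example above:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double s = 1528535047.0;  /* significand digits (sign ignored) */
    double b = 10.0;          /* base */
    int    p = 10;            /* precision: number of digits in the significand */
    int    e = 5;             /* exponent */
    /* value = s / b^(p-1) * b^e: place the radix point after the first
       digit of the significand, then shift it by the exponent */
    double value = s / pow(b, p - 1) * pow(b, e);
    printf("%.4f\n", value);  /* prints 152853.5047 */
    return 0;
}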
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point^{[nb 3]}), base eight (octal floating point^{[nb 4]}), base four (quaternary floating point^{[nb 5]}), base 256,^{[nb 6]} and base 65,536.^{[nb 7]}
A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45×10^{3} is (145/100)×1000 or 145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2×10^{−1}). However, 1/3 cannot be represented exactly in either binary (0.010101...) or decimal (0.333...), while in base 3 it is trivial (0.1, or 1×3^{−1}). The occasions on which infinite expansions occur depend on the base and its prime factors.
The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, p = 24, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are:
 11001001 00001111 11011010 10100010 0
In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, the final 0 of the third group above. The next bit, at position 24, is called the round bit or rounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding:
 11001001 00001111 11011011
When this is stored in memory using the IEEE 754 encoding, this becomes the significand s. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left to right as follows:
 (Σ_{n=0}^{p−1} bit_{n} × 2^{−n}) × 2^{e} = (1 × 2^{0} + 1 × 2^{−1} + 0 × 2^{−2} + 0 × 2^{−3} + 1 × 2^{−4} + ⋯ + 1 × 2^{−23}) × 2^{1} ≈ 1.5707964 × 2 ≈ 3.1415928
where p is the precision (24 in this example), n is the position of the bit of the significand from the left (starting at 0 and finishing at 23 here) and e is the exponent (1 in this example).
It can be required that the most significant digit of the significand of a nonzero number be nonzero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization. For binary formats (which use only the digits 0 and 1), this nonzero digit is necessarily 1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention, the implicit bit convention, the hidden bit convention,^{[1]} or the assumed bit convention.
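The bit-level consequences of normalization and the hidden bit can be seen from C. The following sketch (our own illustration, assuming IEEE 754 binary32 floats) unpacks the three fields of a float and reattaches the implicit leading 1 for normalized numbers:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 3.14159274f;                   /* single-precision pi */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* reinterpret the bytes */
    uint32_t sign     = bits >> 31;
    uint32_t biasedE  = (bits >> 23) & 0xFF; /* 8-bit exponent field */
    uint32_t fraction = bits & 0x7FFFFF;     /* 23 stored significand bits */
    /* for normalized numbers the full 24-bit significand is the hidden 1
       followed by the 23 stored bits */
    uint32_t significand = (biasedE != 0) ? (1u << 23) | fraction : fraction;
    printf("sign=%u exponent=%d significand=0x%06X\n",
           sign, (int)biasedE - 127, significand); /* sign=0 exponent=1 significand=0xC90FDB */
    return 0;
}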
Alternatives to floating-point numbers
The floating-point representation is by far the most common way of representing an approximation to real numbers in computers. However, there are alternatives:
 Fixed-point representation uses integer hardware operations controlled by a software implementation of a specific convention about the location of the binary or decimal point, for example, 6 bits or digits from the right. The hardware to manipulate these representations is less costly than floating point, and it can be used to perform normal integer operations, too. Binary fixed point is usually used in special-purpose applications on embedded processors that can only do integer arithmetic, but decimal fixed point is common in commercial applications.
 Logarithmic number systems (LNSs) represent a real number by the logarithm of its absolute value and a sign bit. In such systems multiplication, division, and exponentiation are simple to implement, but addition and subtraction are complex. The (symmetric) level-index arithmetic of Clenshaw, Olver, and Turner is a scheme based on a generalized logarithm representation.
 Tapered floating-point representation, which does not appear to be used in practice.
 Some simple rational numbers (e.g., 1/3 and 1/10) cannot be represented exactly in binary floating point, no matter what the precision is. Using a different radix allows one to represent some of them (e.g., 1/10 in decimal floating point), but the possibilities remain limited. Software packages that perform rational arithmetic represent numbers as fractions with integral numerator and denominator, and can therefore represent any rational number exactly. Such packages generally need to use "bignum" arithmetic for the individual integers.
 Interval arithmetic allows one to represent numbers as intervals and obtain guaranteed bounds on results. It is generally based on other arithmetics, in particular floating point.
 Computer algebra systems such as Mathematica, Maxima, and Maple can often handle irrational numbers like π or √3 in a completely "formal" way (symbolic computation), without dealing with a specific encoding of the significand. Such a program can evaluate expressions like "sin(3π)" exactly, because it is programmed to process the underlying mathematics directly, instead of using approximate values for each intermediate calculation.
History
In 1914, the Spanish engineer Leonardo Torres Quevedo published Essays on Automatics,^{[9]} where he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as n × 10^{m}, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "n will always be the same number of digits (e.g., six), the first digit of n will be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form: n; m."
In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer;^{[13]} it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit.^{[14]} The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as 1/∞ = 0, and it stops on undefined operations, such as 0 × ∞.
Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes ±∞ and NaN representations, anticipating features of the IEEE Standard by four decades.^{[15]} In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable.^{[15]}
The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers.^{[16]}
The Pilot ACE had binary floating-point arithmetic, and it became operational in 1950 at the National Physical Laboratory, UK.
The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades afterward, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" capability.
The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations:
 Single precision: 36 bits, organized as a 1bit sign, an 8bit exponent, and a 27bit significand.
 Double precision: 72 bits, organized as a 1bit sign, an 11bit exponent, and a 60bit significand.
The IBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations.
Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well.
In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone.^{[17]}
Among the x87 innovations are these:
 A precisely specified floating-point representation at the bit-string level, so that all compliant computers interpret bit patterns the same way. This makes it possible to accurately and efficiently transfer floating-point numbers from one computer to another (after accounting for endianness).
 A precisely specified behavior for the arithmetic operations: A result is required to be produced as if infinitely precise arithmetic were used to yield a value that is then rounded according to specific rules. This means that a compliant computer program would always produce the same result when given a particular input, thus mitigating the almost mystical reputation that floating-point computation had developed for its hitherto seemingly non-deterministic behavior.
 The ability of exceptional conditions (overflow, divide by zero, etc.) to propagate through a computation in a benign manner and then be handled by the software in a controlled fashion.
Range of floating-point numbers
A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas each component depends linearly on the number of bits in its representation, the floating-point range depends linearly on the significand range and exponentially on the range of the exponent component, which gives the format an outstandingly wider range.
On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^{10} = 1024, the complete range of the positive normal floating-point numbers in this format is from 2^{−1022} ≈ 2 × 10^{−308} to approximately 2^{1024} ≈ 2 × 10^{308}.
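On implementations where double is IEEE 754 binary64 (the common case), these bounds can be checked from C through the standard <float.h> macros; a minimal sketch:

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("smallest positive normal double: %g\n", DBL_MIN);      /* about 2.2e-308 */
    printf("largest finite double:           %g\n", DBL_MAX);      /* about 1.8e+308 */
    printf("significand bits of precision:   %d\n", DBL_MANT_DIG); /* 53 */
    return 0;
}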
The number of normal floating-point numbers in a system (B, P, L, U) where
 B is the base of the system,
 P is the precision of the significand (in base B),
 L is the smallest exponent of the system,
 U is the largest exponent of the system,
is 2(B − 1)B^{P−1}(U − L + 1).
There is a smallest positive normal floating-point number,
 Underflow level = UFL = B^{L},
which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent.
There is a largest floating-point number,
 Overflow level = OFL = (1 − B^{−P})B^{U+1},
which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent.
In addition, there are representable values strictly between −UFL and UFL. Namely, positive and negative zeros, as well as subnormal numbers.
IEEE 754: floating point in modern computers
The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008.
The standard provides for many closely related formats, differing in only a few details. Five of these formats are called basic formats, and others are termed extended precision formats and extendable precision formats. Three formats are especially widely used in computer hardware and languages:
 Single precision (binary32), usually used to represent the "float" type in the C language family. This is a binary format that occupies 32 bits (4 bytes) and its significand has a precision of 24 bits (about 7 decimal digits).
 Double precision (binary64), usually used to represent the "double" type in the C language family. This is a binary format that occupies 64 bits (8 bytes) and its significand has a precision of 53 bits (about 16 decimal digits).
 Double extended, also ambiguously called "extended precision" format. This is a binary format that occupies at least 79 bits (80 if the hidden/implicit bit rule is not used) and its significand has a precision of at least 64 bits (about 19 decimal digits). The C99 and C11 standards of the C language family, in their annex F ("IEC 60559 floating-point arithmetic"), recommend such an extended format to be provided as "long double".^{[18]} A format satisfying the minimal requirements (64-bit significand precision, 15-bit exponent, thus fitting on 80 bits) is provided by the x86 architecture. Often on such processors, this format can be used with "long double", though extended precision is not available with MSVC.^{[19]} For alignment purposes, many tools store this 80-bit value in a 96-bit or 128-bit space.^{[20]}^{[21]} On other processors, "long double" may stand for a larger format, such as quadruple precision,^{[22]} or just double precision, if any form of extended precision is not available.^{[23]}
Increasing the precision of the floatingpoint representation generally reduces the amount of accumulated roundoff error caused by intermediate calculations.^{[24]} Other IEEE formats include:
 Decimal64 and decimal128 floating-point formats. These formats (especially decimal128) are pervasive in financial transactions because, along with the decimal32 format, they allow correct decimal rounding.
 Quadruple precision (binary128). This is a binary format that occupies 128 bits (16 bytes) and its significand has a precision of 113 bits (about 34 decimal digits).
 Half precision, also called binary16, a 16-bit floating-point value. It is being used in the NVIDIA Cg graphics language, and in the openEXR standard (where it actually predates the introduction in the IEEE 754 standard).^{[25]}^{[26]}
Any integer with absolute value less than 2^{24} can be exactly represented in the single-precision format, and any integer with absolute value less than 2^{53} can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
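A quick C check of the 2^{53} boundary (again assuming binary64 doubles):

#include <stdio.h>

int main(void)
{
    double big = 9007199254740992.0;  /* 2^53, exactly representable */
    /* the gap between consecutive doubles at this magnitude is 2,
       so adding 1 is lost to rounding while adding 2 is exact */
    printf("%.1f\n", big + 1.0);      /* prints 9007199254740992.0 */
    printf("%.1f\n", big + 2.0);      /* prints 9007199254740994.0 */
    return 0;
}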
The standard specifies some special values and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).
Comparison of floatingpoint numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floatingpoint numbers are strictly smaller than +∞ and strictly greater than −∞, and they are ordered in the same way as their values (in the set of real numbers).
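These comparison rules can be observed directly in C (a minimal sketch using <math.h>):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double nan = NAN;                   /* a quiet NaN */
    printf("%d\n", nan == nan);         /* 0: a NaN compares unequal even to itself */
    printf("%d\n", -0.0 == 0.0);        /* 1: negative and positive zero compare equal */
    printf("%d\n", 1.0e308 < INFINITY); /* 1: every finite value is below +infinity */
    return 0;
}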
Internal representation
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand or mantissa, from left to right. For the IEEE 754 binary formats (basic and extended) which have extant hardware implementations, they are apportioned as follows:
Type                    Sign  Exponent  Significand  Total bits  Exponent bias  Bits precision  Number of decimal digits
Half (IEEE 754-2008)    1     5         10           16          15             11              ~3.3
Single                  1     8         23           32          127            24              ~7.2
Double                  1     11        52           64          1023           53              ~15.9
x86 extended precision  1     15        64           80          16383          64              ~19.2
Quad                    1     15        112          128         16383          113             ~34.0
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs.
In the IEEE binary interchange formats the leading 1 bit of a normalized significand is not actually stored in the computer datum. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, and quad has 113.
For example, it was shown above that π, rounded to 24 bits of precision, has:
 sign = 0 ; e = 1 ; s = 110010010000111111011011 (including the hidden bit)
The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as
 0 10000000 10010010000111111011011 (excluding the hidden bit) = 40490FDB^{[27]} as a hexadecimal number.
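A small C sketch (our own) confirms this bit pattern by reinterpreting the bytes of the rounded value, assuming float is IEEE 754 binary32:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float pi = 3.14159265358979f;  /* rounds to the nearest binary32 value */
    uint32_t bits;
    memcpy(&bits, &pi, sizeof bits);
    printf("0x%08X\n", bits);      /* prints 0x40490FDB, as derived above */
    return 0;
}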
An example layout for a 32-bit floating-point number is a sign bit, followed by an 8-bit exponent field, followed by a 23-bit significand (fraction) field; the 64-bit ("double") layout is similar.
Other notable floatingpoint formats
In addition to the widely used IEEE 754 standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas.
 The Microsoft Binary Format (MBF) was developed for the Microsoft BASIC language products, starting with Altair BASIC (1975). A 40-bit variant was adopted for other CPUs, notably the MOS 6502 (Apple //, Commodore PET, Atari), Motorola 6800 (MITS Altair 680) and Motorola 6809 (TRS-80 Color Computer). All Microsoft language products from 1975 through 1987 used the Microsoft Binary Format until Microsoft adopted the IEEE 754 standard format in all its products starting in 1988 to their current releases. MBF consists of the MBF single-precision format (32 bits, "6-digit BASIC"),^{[28]}^{[29]} the MBF extended-precision format (40 bits, "9-digit BASIC"),^{[29]} and the MBF double-precision format (64 bits);^{[28]}^{[30]} each of them is represented with an 8-bit exponent, followed by a sign bit, followed by a significand of respectively 23, 31, and 55 bits.
 The Bfloat16 format requires the same amount of memory (16 bits) as the IEEE 754 half-precision format, but allocates 8 bits to the exponent instead of 5, thus providing the same range as an IEEE 754 single-precision number. The trade-off is a reduced precision, as the trailing significand field is reduced from 10 to 7 bits. This format is mainly used in the training of machine learning models, where range is more valuable than precision. Many machine learning accelerators provide hardware support for this format.
 The TensorFloat-32^{[31]} format combines the 8 bits of exponent of the Bfloat16 with the 10 bits of trailing significand field of the half-precision format, resulting in a size of 19 bits. This format was introduced by Nvidia, which provides hardware support for it in the Tensor Cores of its GPUs based on the Nvidia Ampere architecture. The drawback of this format is its size, which is not a power of 2. However, according to Nvidia, this format should only be used internally by hardware to speed up computations, while inputs and outputs should be stored in the 32-bit single-precision IEEE 754 format.^{[31]}
 The Hopper architecture GPUs provide two FP8 formats: one with the same numerical range as halfprecision (E5M2) and one with higher precision, but less range (E4M3).^{[32]}^{[33]}
Type  Sign  Exponent  Trailing significand field  Total bits 

FP8 (E4M3)  1  4  3  8 
FP8 (E5M2)  1  5  2  8 
Half-precision  1  5  10  16 
Bfloat16  1  8  7  16 
TensorFloat32  1  8  10  19 
Single-precision  1  8  23  32 
Representable numbers, conversion and rounding
By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available; it would be rounded to one of the two straddling representable values, 12345678 × 10^{1} or 12345679 × 10^{1}. The same applies to non-terminating expansions: 5/9 has to be rounded to either 0.55555555 or 0.55555556.
When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the rounded value.
Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:
 e = −4; s = 1100110011001100110011001100110011...,
where, as previously, s is the significand and e is the exponent.
When rounded to 24 bits this becomes
 e = −4; s = 110011001100110011001101,
which is actually 0.100000001490116119384765625 in decimal.
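This stored value can be displayed from C by printing the single-precision literal 0.1f with enough digits (a minimal sketch; the exact output shown assumes a correctly rounding printf, as in glibc):

#include <stdio.h>

int main(void)
{
    /* 0.1f is the binary32 number nearest to 0.1 */
    printf("%.27f\n", 0.1f);  /* prints 0.100000001490116119384765625 */
    return 0;
}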
As a further example, the real number π, represented in binary as an infinite sequence of bits is
 11.0010010000111111011010101000100010000101101000110000100011010011...
but is
 11.0010010000111111011011
when approximated by rounding to a precision of 24 bits.
In binary single-precision floating-point, this is represented as s = 1.10010010000111111011011 with e = 1. This has a decimal value of
 3.1415927410125732421875,
whereas a more accurate approximation of the true value of π is
 3.14159265358979323846264338327950...
The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the discretization error and is limited by the machine epsilon.
The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45a70c22_{hex} and 1.45a70c24_{hex}, the ULP is 2×16^{−8}, or 2^{−31}. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, an ULP is exactly 2^{−23} or about 10^{−7} in single precision, and exactly 2^{−52} or about 10^{−16} in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
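One ULP can be measured in C with the standard nextafter functions, which return the adjacent representable value (a minimal sketch):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* spacing of representable numbers just above 1.0 */
    printf("%g\n", nextafterf(1.0f, 2.0f) - 1.0f);  /* 2^-23, about 1.19209e-07 (single) */
    printf("%g\n", nextafter(1.0, 2.0) - 1.0);      /* 2^-52, about 2.22045e-16 (double) */
    return 0;
}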
Rounding modes
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result.^{[nb 8]} In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.)
Alternative rounding options are also available. IEEE 754 specifies the following rounding modes:
 round to nearest, where ties round to the nearest even digit in the required position (the default and by far the most common mode)
 round to nearest, where ties round away from zero (optional for binary floatingpoint and commonly used in decimal)
 round up (toward +∞; negative results thus round toward zero)
 round down (toward −∞; negative results thus round away from zero)
 round toward zero (truncation; it is similar to the common behavior of floattointeger conversions, which convert −3.9 to −3 and 3.9 to 3)
Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error include multi-precision floating-point arithmetic and interval arithmetic. The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity, then it is likely numerically unstable and affected by round-off error.^{[34]}
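In C99 and later, the dynamic rounding mode can be selected through <fenv.h>. A minimal sketch follows (depending on the compiler, strict conformance may require a flag such as -frounding-math or the FENV_ACCESS pragma):

#include <stdio.h>
#include <fenv.h>

int main(void)
{
    volatile double x = 1.0, y = 3.0;  /* volatile discourages constant folding */
    fesetround(FE_DOWNWARD);
    double lo = x / y;                 /* rounded toward minus infinity */
    fesetround(FE_UPWARD);
    double hi = x / y;                 /* rounded toward plus infinity */
    fesetround(FE_TONEAREST);          /* restore the default mode */
    printf("%.17g\n%.17g\n", lo, hi);  /* lo < 1/3 < hi, one ULP apart */
    return 0;
}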
Binary-to-decimal conversion with minimal number of digits
Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include:
 David M. Gay's dtoa.c, a practical opensource implementation of many ideas in Dragon4.^{[35]}
 Grisu3, with a 4× speedup as it removes the use of bignums. Must be used with a fallback, as it fails for ~0.5% of cases.^{[36]}
 Errol3, an always-succeeding algorithm similar to, but slower than, Grisu3. Apparently not as good as an early-terminating Grisu with fallback.^{[37]}
 Ryū, an alwayssucceeding algorithm that is faster and simpler than Grisu3.^{[38]}
 Schubfach, an alwayssucceeding algorithm that is based on a similar idea to Ryū, developed almost simultaneously and independently.^{[39]} Performs better than Ryū and Grisu3 in certain benchmarks.^{[40]}
Many modern language runtimes use Grisu3 with a Dragon4 fallback.^{[41]}
Decimal-to-binary conversion
The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c).^{[35]} Further work has likewise progressed in the direction of faster parsing.^{[42]}
Floating-point operations
For ease of presentation and understanding, decimal radix with 7-digit precision will be used in the examples, as in the IEEE 754 decimal32 format. The fundamental principles are the same in any radix or precision, except that normalization is optional (it does not affect the numerical value of the result). Here, s denotes the significand and e denotes the exponent.
Addition and subtraction
A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method:
123456.7 = 1.234567 × 10^5
101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5
Hence:
123456.7 + 101.7654 = (1.234567 × 10^5) + (1.017654 × 10^2)
                    = (1.234567 × 10^5) + (0.001017654 × 10^5)
                    = (1.234567 + 0.001017654) × 10^5
                    = 1.235584654 × 10^5
In detail:
  e=5;  s=1.234567     (123456.7)
+ e=2;  s=1.017654     (101.7654)

  e=5;  s=1.234567
+ e=5;  s=0.001017654  (after shifting)
--------------------
  e=5;  s=1.235584654  (true sum: 123558.4654)
This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is
e=5; s=1.235585 (final sum: 123558.5)
The lowest three digits of the second operand (654) are essentially lost. This is roundoff error. In extreme cases, the sum of two nonzero numbers may be equal to one of them:
  e=5;  s=1.234567
+ e=−3; s=9.876543

  e=5;  s=1.234567
+ e=5;  s=0.00000009876543 (after shifting)
---------------------------
  e=5;  s=1.23456709876543 (true sum)
  e=5;  s=1.234567         (after rounding and normalization)
In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only a guard bit, a rounding bit and one extra sticky bit need to be carried beyond the precision of the operands.^{[43]}^{[44]}^{: 218–220 }
Another problem of loss of significance occurs when approximations to two nearly equal numbers are subtracted. In the following example e = 5; s = 1.234571 and e = 5; s = 1.234567 are approximations to the rationals 123457.1467 and 123456.659.
  e=5;  s=1.234571
− e=5;  s=1.234567
------------------
  e=5;  s=0.000004
  e=−1; s=4.000000 (after rounding and normalization)
The floating-point difference is computed exactly because the numbers are close—the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is e=−1; s=4.877000, which differs more than 20% from the difference e=−1; s=4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost. This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful.
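The same phenomenon is easy to reproduce in binary64. The following C sketch is our own construction: the naive formula first absorbs the added 1 and then cancels, losing every significant digit, while an algebraically equivalent rewrite avoids the subtraction entirely:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* sqrt(1e16 + 1) - sqrt(1e16) is about 5e-9, but in double
       1e16 + 1.0 rounds back to 1e16, so the naive form gives 0 */
    double naive  = sqrt(1e16 + 1.0) - sqrt(1e16);
    /* multiplying by the conjugate gives a stable, equivalent form */
    double stable = 1.0 / (sqrt(1e16 + 1.0) + sqrt(1e16));
    printf("naive:  %.17g\n", naive);   /* prints 0 */
    printf("stable: %.17g\n", stable);  /* prints approximately 5e-09 */
    return 0;
}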
Multiplication and division
To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.
  e=3;  s=4.734612
× e=5;  s=5.417242
------------------
  e=8;  s=25.648538980104 (true product)
  e=8;  s=25.64854        (after rounding)
  e=9;  s=2.564854        (after normalization)
Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand.
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession.^{[43]} In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and Division algorithm).^{[nb 9]}
Literal syntax
Literals for floating-point numbers depend on languages. They typically use e or E to denote scientific notation. The C programming language and the IEEE 754 standard also define a hexadecimal literal syntax with a base-2 exponent instead of 10. In languages like C, when the decimal exponent is omitted, a decimal point is needed to differentiate a floating-point literal from an integer. Other languages do not have an integer type (such as JavaScript), or allow overloading of numeric types (such as Haskell); in these cases, digit strings such as 123 may also be floating-point literals.
Examples of floating-point literals are:
 99.9
 -5000.12
 6.02e23
 -3e-45
 0x1.fffffep+127 in C and IEEE 754
Dealing with exceptional cases
Floating-point computation in a computer can run into three kinds of problems:
 An operation can be mathematically undefined, such as ∞/∞, or division by zero.
 An operation can be legal in principle, but not supported by the specific format, for example, calculating the square root of −1 or the inverse sine of 2 (both of which result in complex numbers).
 An operation can be legal in principle, but the result can be impossible to represent in the specified format, because the exponent is too large or too small to encode in the exponent field. Such an event is called an overflow (exponent too large), underflow (exponent too small) or denormalization (precision loss).
Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable.
Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them exceptional conditions that could not be otherwise ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored).
The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time some programming language standards (e.g., C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution and use of them by multiple threads has to be handled by a means outside of the standard (e.g. C11 specifies that the flags have threadlocal storage).
IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"):
 inexact, set if the rounded (and returned) value is different from the mathematically exact result of the operation.
 underflow, set if the rounded value is tiny (as specified in IEEE 754) and inexact (or maybe limited to if it has denormalization loss, as per the 1985 version of IEEE 754), returning a subnormal value including the zeros.
 overflow, set if the absolute value of the rounded value is too large to be represented. An infinity or maximal finite value is returned, depending on which rounding is used.
 dividebyzero, set if the result is infinite given finite operands, returning an infinity, either +∞ or −∞.
 invalid, set if a realvalued result cannot be returned e.g. sqrt(−1) or 0/0, returning a quiet NaN.
The default return value for each of the exceptions is designed to give the correct result in the majority of cases such that the exceptions can be ignored in the majority of codes. inexact returns a correctly rounded result, and underflow returns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored.^{[46]} divide-by-zero returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an invalid exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel is given by R_{tot} = 1/(1/R_{1} + 1/R_{2} + ... + 1/R_{n}). If a short circuit develops with R_{1} set to 0, 1/R_{1} will return +∞, which will give a final R_{tot} of 0, as expected.
Overflow and invalid exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an invalid exception flag to be ignored until finding a useful start point.^{[46]}
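A minimal C sketch of this default (non-trapping) behavior, using the standard <fenv.h> flag interface (volatile keeps the compiler from folding the operations away at compile time):

#include <stdio.h>
#include <fenv.h>
#include <math.h>

int main(void)
{
    volatile double zero = 0.0, neg = -1.0;
    feclearexcept(FE_ALL_EXCEPT);
    double x = 1.0 / zero;  /* returns +inf and sets the divide-by-zero flag */
    double y = sqrt(neg);   /* returns a quiet NaN and sets the invalid flag */
    printf("x=%g y=%g\n", x, y);
    if (fetestexcept(FE_DIVBYZERO)) puts("divide-by-zero flag is set");
    if (fetestexcept(FE_INVALID))   puts("invalid flag is set");
    return 0;
}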
Accuracy problems
The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is
 0.100000001490116119384765625 exactly.
Squaring this number gives
 0.010000000298023226097399174250313080847263336181640625 exactly.
Squaring it with rounding to the 24-bit precision gives
 0.010000000707805156707763671875 exactly.
But the representable number closest to 0.01 is
 0.009999999776482582092285156250 exactly.
Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:
#include <math.h>  /* for tan() */

/* Enough digits to be sure we get the correct approximation. */
double pi = 3.1415926535897932384626433832795;
double z = tan(pi/2.0);
will give a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0.
By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225×10^{−15} in double precision, or −0.8742×10^{−7} in single precision.^{[nb 10]}
While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic with a = 1234.567, b = 45.67834, c = 0.0004:
(a + b) + c:
    1234.567    (a)
  +   45.67834  (b)
  ____________
    1280.24534  rounds to 1280.245

    1280.245    (a + b)
  +    0.0004   (c)
  ____________
    1280.2454   rounds to 1280.245  ←  (a + b) + c

a + (b + c):
      45.67834  (b)
  +    0.0004   (c)
  ____________
      45.67874

    1234.567    (a)
  +   45.67874  (b + c)
  ____________
    1280.24574  rounds to 1280.246  ←  a + (b + c)
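The same non-associativity can be reproduced in binary64 from C; the values below are our own, chosen so that the ULP at 10^16 is 2:

#include <stdio.h>

int main(void)
{
    double a = 1e16, b = 1.0, c = 1.0;
    /* a + b rounds back to a (round half to even), so the left sum drops both 1s;
       b + c = 2 is exactly representable at this magnitude, so the right sum keeps it */
    printf("(a + b) + c = %.1f\n", (a + b) + c);  /* 10000000000000000.0 */
    printf("a + (b + c) = %.1f\n", a + (b + c));  /* 10000000000000002.0 */
    return 0;
}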
They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c:
1234.567 × 3.333333 = 4115.223
1.234567 × 3.333333 = 4.115223
4115.223 + 4.115223 = 4119.338
but
1234.567 + 1.234567 = 1235.802
1235.802 × 3.333333 = 4119.340
In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur:
 Cancellation: subtraction of nearly equal operands may cause extreme loss of accuracy.^{[48]}^{[45]} When we subtract two almost equal numbers we set the most significant digits to zero, leaving ourselves with just the insignificant, and most erroneous, digits.^{[1]}^{: 124 } For example, when determining a derivative of a function the following formula is used:
 Q(h) = (f(a + h) − f(a)) / h.
 Intuitively one would want an h very close to zero; however, as h shrinks, f(a + h) − f(a) becomes a subtraction of nearly equal quantities, so the most significant digits cancel and rounding errors dominate. Thus the smallest possible h gives a more erroneous approximation of the derivative than a somewhat larger value would.
 Conversions to integer are not intuitive: converting (63.0/9.0) to integer yields 7, but converting (0.63/0.09) may yield 6. This is because conversions generally truncate rather than round. Floor and ceiling functions may produce answers which are off by one from the intuitively expected value.
 Limited exponent range: results might overflow yielding infinity, or underflow yielding a subnormal number or zero. In these cases precision will be lost.
 Testing for safe division is problematic: Checking that the divisor is not zero does not guarantee that a division will not overflow.
 Testing for equality is problematic. Two computational sequences that are mathematically equal may well produce different floating-point values.^{[49]}
Incidents
 On 25 February 1991, a loss of significance in a MIM-104 Patriot missile battery prevented it from intercepting an incoming Scud missile in Dhahran, Saudi Arabia, contributing to the death of 28 soldiers from the U.S. Army's 14th Quartermaster Detachment.^{[50]}
Machine precision and backward error analysis
Machine precision is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or machine epsilon. Usually denoted Ε_{mach}, its value depends on the particular rounding being used.
With rounding to zero, Ε_{mach} = B^{1−P}, whereas rounding to nearest gives Ε_{mach} = ½B^{1−P}, where B is the base of the system and P is the precision of the significand (in base B).
This is important since it bounds the relative error in representing any non-zero real number x within the normalized range of a floating-point system: |fl(x) − x|/|x| ≤ Ε_{mach}.
Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable.^{[51]} The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.^{[52]}
As a trivial example, consider a simple expression giving the inner product of (length two) vectors x and y, then
 fl(x ⋅ y) = fl(fl(x_1 ⋅ y_1) + fl(x_2 ⋅ y_2)), where fl() indicates correctly rounded floating-point arithmetic
 = fl((x_1 ⋅ y_1)(1 + δ_1) + (x_2 ⋅ y_2)(1 + δ_2)), where δ_n ≤ Ε_{mach}, from above
 = ((x_1 ⋅ y_1)(1 + δ_1) + (x_2 ⋅ y_2)(1 + δ_2))(1 + δ_3)
 = (x_1 ⋅ y_1)(1 + δ_1)(1 + δ_3) + (x_2 ⋅ y_2)(1 + δ_2)(1 + δ_3),
and so
 fl(x ⋅ y) = x̂ ⋅ ŷ,
where
 x̂_1 = x_1(1 + δ_1); x̂_2 = x_2(1 + δ_2); ŷ_1 = y_1(1 + δ_3); ŷ_2 = y_2(1 + δ_3),
where δ_n ≤ Ε_{mach} by definition, which is the inner product of two slightly perturbed (on the order of Ε_{mach}) input data, and so is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002^{[53]} and other references below.
Minimizing the effect of accuracy problems
Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires,^{[54]} which can remove, or reduce by orders of magnitude,^{[55]} such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision.^{[56]}^{[nb 11]}
For example, the following algorithm is a direct implementation to compute the function A(x) = (x−1) / (exp(x−1) − 1), which is well-conditioned at 1.0;^{[nb 12]} however, it can be shown to be numerically unstable and lose up to half the significant digits carried by the arithmetic when computed near 1.0.^{[57]}
double A(double X)
{
        double Y, Z;  // [1]
        Y = X - 1.0;
        Z = exp(Y);
        if (Z != 1.0)
                Z = Y / (Z - 1.0);  // [2]
        return Z;
}
If, however, intermediate computations are all performed in extended precision (e.g. by setting line [1] to C99 long double), then up to full precision in the final double result can be maintained.^{[nb 13]} Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made:
Z = log(Z) / (Z - 1.0);
then the algorithm becomes numerically stable and can compute to full double precision.
To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.
A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to,^{[53]}^{[58]} and the other references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease by orders of magnitude^{[58]} the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e. compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results^{[59]}); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures:^{[60]} notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations; for this reason, financial software tends not to use a binary floating-point number representation.
Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that (x + y)(x − y) = x^{2} − y^{2}, and that sin^{2}θ + cos^{2}θ = 1; however, these facts cannot be relied on when the quantities involved are the result of floating-point computation.
The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like 0.6/0.2-3 == 0 will, on most computers, fail to be true^{[62]} (in IEEE 754 double precision, for example, 0.6/0.2 - 3 is approximately equal to −4.44089209850063×10^{−16}). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon.^{[53]} Values derived from the primary data representation and their comparisons should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to roundoff errors.^{[58]} It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.^{[63]}
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well.
Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like
  3253.671
+    3.141276
____________
  3256.812
The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors.^{[53]}
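A minimal C sketch of Kahan (compensated) summation follows; the compensation variable recovers the low-order digits that plain summation drops. Note that aggressive optimizations such as -ffast-math may reassociate the arithmetic and defeat the compensation:

#include <stdio.h>

double kahan_sum(const double *x, int n)
{
    double sum = 0.0, c = 0.0;  /* c accumulates the lost low-order part */
    for (int i = 0; i < n; i++) {
        double y = x[i] - c;    /* corrected addend */
        double t = sum + y;     /* low digits of y may be lost here */
        c = (t - sum) - y;      /* algebraically zero; captures the loss */
        sum = t;
    }
    return sum;
}

int main(void)
{
    static double xs[10000];
    for (int i = 0; i < 10000; i++) xs[i] = 0.1;
    double naive = 0.0;
    for (int i = 0; i < 10000; i++) naive += xs[i];
    printf("naive: %.15f\n", naive);                /* drifts from the exact sum */
    printf("kahan: %.15f\n", kahan_sum(xs, 10000)); /* correctly rounded sum */
    return 0;
}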
Roundoff error can affect the convergence and accuracy of iterative numerical procedures. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon are:
 First form: t_{i+1} = (√(t_{i}^{2} + 1) − 1) / t_{i}
 Second form: t_{i+1} = t_{i} / (√(t_{i}^{2} + 1) + 1)
 π ≈ 6 × 2^{i} × t_{i}, converging as i → ∞, starting from t_{0} = 1/√3 for a circumscribed hexagon.
Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:
 i    6 × 2^{i} × t_{i}, first form    6 × 2^{i} × t_{i}, second form
 0    3.4641016151377543863            3.4641016151377543863
 1    3.2153903091734710173            3.2153903091734723496
 2    3.1596599420974940120            3.1596599420975006733
 3    3.1460862151314012979            3.1460862151314352708
 4    3.1427145996453136334            3.1427145996453689225
 5    3.1418730499801259536            3.1418730499798241950
 6    3.1416627470548084133            3.1416627470568494473
 7    3.1416101765997805905            3.1416101766046906629
 8    3.1415970343230776862            3.1415970343215275928
 9    3.1415937488171150615            3.1415937487713536668
10    3.1415929278733740748            3.1415929273850979885
11    3.1415927256228504127            3.1415927220386148377
12    3.1415926717412858693            3.1415926707019992125
13    3.1415926189011456060            3.1415926578678454728
14    3.1415926717412858693            3.1415926546593073709
15    3.1415919358822321783            3.1415926538571730119
16    3.1415926717412858693            3.1415926536566394222
17    3.1415810075796233302            3.1415926536065061913
18    3.1415926717412858693            3.1415926535939728836
19    3.1414061547378810956            3.1415926535908393901
20    3.1405434924008406305            3.1415926535900560168
21    3.1400068646912273617            3.1415926535898608396
22    3.1349453756585929919            3.1415926535898122118
23    3.1400068646912273617            3.1415926535897995552
24    3.2245152435345525443            3.1415926535897968907
25                                     3.1415926535897962246
26                                     3.1415926535897962246
27                                     3.1415926535897962246
28                                     3.1415926535897962246

The true value is 3.14159265358979323846264338327...
While the two forms of the recurrence formula are clearly mathematically equivalent, the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits.^{[nb 14]} As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
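The following C sketch, our own reconstruction assuming the starting value t_0 = 1/√3 given above, reproduces both columns of the table in binary64 arithmetic:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double t1 = 1.0 / sqrt(3.0);  /* first form (suffers cancellation) */
    double t2 = t1;               /* second form (stable) */
    for (int i = 1; i <= 24; i++) {
        t1 = (sqrt(t1 * t1 + 1.0) - 1.0) / t1;
        t2 = t2 / (sqrt(t2 * t2 + 1.0) + 1.0);
        printf("%2d  %.19f  %.19f\n", i,
               6.0 * pow(2.0, i) * t1, 6.0 * pow(2.0, i) * t2);
    }
    return 0;
}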
"Fast math" optimization
The aforementioned lack of associativity of floating-point operations means that compilers cannot reorder arithmetic expressions as freely as they can with integer or fixed-point arithmetic, presenting a roadblock to optimizations such as common subexpression elimination and auto-vectorization. The "fast math" option found on many compilers (for example, ICC, GCC, Clang, and MSVC) turns on reassociation along with unsafe assumptions, such as the absence of NaN and infinite values in IEEE 754 arithmetic. Some compilers also offer more granular options that only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.
In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as a library.^{[67]}
In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses.^{[68]} Intel Fortran Compiler is a notable outlier.^{[69]}
A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has a poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.^{[70]}
See also
 Arbitraryprecision arithmetic
 C99 for code examples demonstrating access and use of IEEE 754 features.
 Computable number
 Coprocessor
 Decimal floating point
 Double precision
 Experimental mathematics – utilizes high-precision floating-point computations
 Fixed-point arithmetic
 Floating-point error mitigation
 FLOPS
 Gal's accurate tables
 GNU MPFR
 Half-precision floating-point format
 IEEE 754 – Standard for Binary Floating-Point Arithmetic
 IBM Floating Point Architecture
 Kahan summation algorithm
 Microsoft Binary Format (MBF)
 Minifloat
 Q (number format) for constant resolution
 Quadruple-precision floating-point format (including double-double)
 Significant figures
 Single-precision floating-point format
Notes
 ^ The significand of a floating-point number is also called the mantissa by some authors (not to be confused with the mantissa of a logarithm). Somewhat vague, terms such as coefficient or argument are also used by some. The usage of the term fraction by some authors is potentially misleading as well. The term characteristic (as used e.g. by CDC) is ambiguous, as it was historically also used to specify some form of exponent of floating-point numbers.
 ^ The exponent of a floating-point number is sometimes also referred to as the scale. The term characteristic (for biased exponent, exponent bias, or excess-n representation) is ambiguous, as it was historically also used to specify the significand of floating-point numbers.
 ^ Hexadecimal (base-16) floating-point arithmetic is used in the IBM System/360 and 370 families, as well as in the SDS Sigma 7 (1966) and Xerox Sigma 9 (1970).
 ^ Octal (base-8) floating-point arithmetic is used in the Burroughs B5500 (1964) and B7700 (1972) computers.
 ^ Quaternary (base-4) floating-point arithmetic is used in the Illinois ILLIAC II (1962) computer. It is also used in the Digital Field System DFS IV and V high-resolution site survey systems.
 ^ Base-256 floating-point arithmetic is used in the Rice Institute R1 computer (since 1958).
 ^ Base-65536 floating-point arithmetic is used in the MANIAC II (1956) computer.
 ^ Computer hardware does not necessarily compute the exact value; it simply has to produce the equivalent rounded result as though it had computed the infinitely precise result.
 ^ A notorious example was the original Intel Pentium, which had a floating-point division instruction that, on rare occasions, gave slightly incorrect results. Many computers had been shipped before the error was discovered. Until the defective computers were replaced, patched versions of compilers were developed that could avoid the failing cases. See Pentium FDIV bug.
 ^ But an attempted computation of cos(π) yields −1 exactly. Since the derivative is nearly zero near π, the effect of the inaccuracy in the argument is far smaller than the spacing of the floatingpoint numbers around −1, and the rounded result is exact.
 ^ William Kahan notes: "Except in extremely uncommon situations, extra-precise arithmetic generally attenuates risks due to roundoff at far less cost than the price of a competent error-analyst."
 ^ The Taylor expansion of this function demonstrates that it is well-conditioned near 1: A(x) = 1 − (x−1)/2 + (x−1)^{2}/12 − (x−1)^{4}/720 + (x−1)^{6}/30240 − (x−1)^{8}/1209600 + ... for |x−1| < π.
 ^ If the intermediate computations are instead performed in IEEE double extended precision, then additional, but not full, precision is retained.
 ^ The second form is derived by using the conjugate of the numerator of the first. By multiplying the top and bottom of the first expression by this conjugate, one obtains the second expression.
References
 ^ LCCN 2009939668.
 ^ .
 . Retrieved 20121231.
 ^ FriedrichSchillerUniversität Jena. p. 2. Archived (PDF) from the original on 20180807. Retrieved 20180807. [1](NB. This reference incorrectly gives the MANIAC II's floating point base as 256, whereas it actually is 65536.)
 ^ .
 ^ Savard, John J. G. (2018) [2007], "The Decimal FloatingPoint Standard", quadibloc, archived from the original on 20180703, retrieved 20180716
 ISBN 9780203186046. Retrieved 20190818.
[…] Systems such as the [Digital Field System] DFS IV and DFS V were quaternary floatingpoint systems and used gain steps of 12 dB. […]
(256 pages)
 ^ Lazarus, Roger B. (19570130) [19561001]. "MANIAC II" (PDF). Los Alamos, NM, USA: Los Alamos Scientific Laboratory of the University of California. p. 14. LA2083. Archived (PDF) from the original on 20180807. Retrieved 20180807.
[…] the Maniac's floating base, which is 2^{16} = 65,536. […] The Maniac's large base permits a considerable increase in the speed of floating point arithmetic. Although such a large base implies the possibility of as many as 15 lead zeros, the large word size of 48 bits guarantees adequate significance. […]
 ^ Torres Quevedo, Leonardo. Automática: Complemento de la Teoría de las Máquinas [Automatics: Complement to the Theory of Machines] (PDF), pp. 575–583, Revista de Obras Públicas, 19 November 1914.
 ISBN 978-3-319-50508-4
 ^ Randell 1982, pp. 6, 11–13.
 ^ Randell, Brian. Digital Computers, History of Origins (PDF), p. 545, Digital Computers: Origins, Encyclopedia of Computer Science, January 2003.
 Archived (PDF) from the original on 2022-07-03. Retrieved 2022-07-03. (12 pages)
 ^ Kahan, William Morton (1997-07-15). "The Baleful Effect of Computer Languages and Benchmarks upon Applied Mathematics, Physics and Chemistry. John von Neumann Lecture" (PDF). p. 3. Archived (PDF) from the original on 2008-09-05.
 ^ Severance, Charles (1998-02-20). "An Interview with the Old Man of Floating-Point".
 ^ ISO/IEC 9899:1999  Programming languages  C. Iso.org. §F.2, note 307.
"Extended" is IEC 60559's doubleextended data format. Extended refers to both the common 80bit and quadruple 128bit IEC 60559 formats.
 ^ "IEEE FloatingPoint Representation". 20210803.
 ^ Using the GNU Compiler Collection, i386 and x8664 Options Archived 20150116 at the Wayback Machine.
 ^ "long double (GCC specific) and __float128". StackOverflow.
 ^ "Procedure Call Standard for the ARM 64bit Architecture (AArch64)" (PDF). 20130522. Archived (PDF) from the original on 20130731. Retrieved 20190922.
 ^ "ARM Compiler toolchain Compiler Reference, Version 5.03" (PDF). 2013. Section 6.3 Basic data types. Archived (PDF) from the original on 20150627. Retrieved 20191108.
 Kahan, William Morton (20041120). "On the Cost of FloatingPoint Computation Without ExtraPrecise Arithmetic" (PDF). Archived(PDF) from the original on 20060525. Retrieved 20120219.
 ^ "openEXR". openEXR. Archived from the original on 20130508. Retrieved 20120425.
Since the IEEE754 floatingpoint specification does not define a 16bit format, ILM created the "half" format. Half values have 1 sign bit, 5 exponent bits, and 10 mantissa bits.
 ^ "Technical Introduction to OpenEXR – The half Data Type". openEXR. Retrieved 20240416.
 ^ "IEEE754 Analysis". Retrieved 20240829.
 ^ […] MBF places the decimal point before the assumed bit, while IEEE places the decimal point after the assumed bit. […] ieee_exp = msbin[3] - 2; /* actually, msbin[3]-1-128+127 */ […] _dmsbintoieee(double *src8, double *dest8) […] MS Binary Format […] byte order => m7 | m6 | m5 | m4 | m3 | m2 | m1 | exponent […] m1 is most significant byte => smmmmmmm […] m7 is the least significant byte […] MBF is bias 128 and IEEE is bias 1023. […] MBF places the decimal point before the assumed bit, while IEEE places the decimal point after the assumed bit. […] ieee_exp = msbin[7] - 128 - 1 + 1023; […] (A C sketch of this bias adjustment follows the reference list.)
 ^ ^{a} ^{b} Steil, Michael (2008-10-20). "Create your own Version of Microsoft BASIC for 6502". pagetable.com. Archived from the original on 2016-05-30. Retrieved 2016-05-30.
 ^ "IEEE vs. Microsoft Binary Format; Rounding Issues (Complete)". Microsoft Support. Microsoft. 2006-11-21. Article ID KB35826, Q35826. Archived from the original on 2020-08-28. Retrieved 2010-02-24.
 ^ ^{a} ^{b} Kharya, Paresh (2020-05-14). "TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x". Retrieved 2020-05-16.
 ^ "NVIDIA Hopper Architecture In-Depth". 2022-03-22.
 ^ Micikevicius, Paulius; et al. (2022). "FP8 Formats for Deep Learning". arXiv:2209.05433 [cs.LG].
 Kahan, William Morton (2006-01-11). "How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?" (PDF). Archived (PDF) from the original on 2004-12-21.
 Archived (PDF) from the original on 2014-07-29.
 ^ "Added Grisu3 algorithm support for double.ToString(). by mazong1123 · Pull Request #14646 · dotnet/coreclr". GitHub.
 S2CID 218472153.
 ^ Giulietti, Raffaello. "The Schubfach way to render doubles".
 ^ "abolz/Drachennest". GitHub. 20221110.
 ^ "google/doubleconversion". GitHub. 20200921.
 S2CID 231718830.
 ^ ^{a} ^{b} US patent 3037701A, Huberto M. Sierra, "Floating decimal point arithmetic control means for calculator", issued 1962-06-05
 ^ Kahan, William Morton (1997-10-01). "Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic" (PDF). p. 9. Archived (PDF) from the original on 2002-06-22.
 ^ "D.3.2.1". Intel 64 and IA-32 Architectures Software Developers' Manuals. Vol. 1.
 ^ Christopher Barker: PEP 485  A Function for testing approximate equality
 US Government Accounting Office. GAO report IMTEC-92-26.
 ISBN 978-0-470-86412-8. Retrieved 2013-05-14.
 ^ ISBN 0-89871-355-2.
 ^ ARITH 17, Symposium on Computer Arithmetic (Keynote Address). pp. 6, 18. Archived (PDF) from the original on 2006-03-17. Retrieved 2013-05-23. (NB. Kahan estimates that the incidence of excessively inaccurate results near singularities is reduced by a factor of approx. 1/2000 using the 11 extra bits of precision of double extended.)
 Kahan, William Morton (2011-08-03). Desperately Needed Remedies for the Undebuggability of Large Floating-Point Computations in Science and Engineering (PDF). IFIP/SIAM/NIST Working Conference on Uncertainty Quantification in Scientific Computing, Boulder, CO. p. 33. Archived (PDF) from the original on 2013-06-20.
 Kahan, William Morton; Darcy, Joseph (2001) [1998-03-01]. "How Java's floating-point hurts everyone everywhere" (PDF). Archived (PDF) from the original on 2000-08-16. Retrieved 2003-09-05.
 ^ Archived (PDF) from the original on 2003-08-15.
 Kahan, William Morton (1981-02-12). "Why do we need a floating-point arithmetic standard?" (PDF). p. 26. Archived (PDF) from the original on 2004-12-04.
 Kahan, William Morton (2001-06-04). Bindel, David (ed.). "Lecture notes of System Support for Scientific Computation" (PDF). Archived (PDF) from the original on 2013-05-17.
 ^ "General Decimal Arithmetic". Speleotrove.com. Retrieved 2012-04-25.
 ^ Christiansen, Tom; Torkington, Nathan; et al. (2006). "perlfaq4 / Why is int() broken?". perldoc.perl.org. Retrieved 2011-01-11.
 Kahan, William Morton; Ivory, Melody Y. (1997-07-03). "Roundoff Degrades an Idealized Cantilever" (PDF). Archived (PDF) from the original on 2003-12-05.
 ^ "Auto-Vectorization in LLVM". LLVM 13 documentation.
We support floating point reduction operations when -ffast-math is used.
 ^ "FloatingPointMath". GCC Wiki.
 ^ "55522 – funsafemathoptimizations is unexpectedly harmful, especially w/ shared". gcc.gnu.org.
 ^ "Code Gen Options (The GNU Fortran Compiler)". gcc.gnu.org.
 ^ "Bug in zheevd · Issue #43 · ReferenceLAPACK/lapack". GitHub.
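The Microsoft Binary Format citation above quotes C code for converting between MBF and IEEE 754 exponents; the following minimal sketch restates the single-precision bias adjustment it describes. The helper name is hypothetical; only the arithmetic (−1 for the hidden-bit position, −128 for the MBF bias, +127 for the IEEE bias) comes from the quoted material.

    #include <stdint.h>

    /* Hypothetical helper, not Microsoft's code: convert the exponent byte of
       an MBF single-precision value to the corresponding IEEE 754 single
       exponent. MBF uses bias 128 with the radix point before the hidden bit;
       IEEE 754 uses bias 127 with the radix point after it. */
    static uint8_t mbf_exp_to_ieee_single(uint8_t mbf_exp) {
        return (uint8_t)(mbf_exp - 2);  /* -1 - 128 + 127 == -2, as in the quote */
    }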
Further reading
 (NB. Classic influential treatises on floating-point arithmetic.)
 Sterbenz, Pat H. (1974). Floating-Point Computation. Prentice-Hall Series in Automatic Computation (1st ed.). Englewood Cliffs, New Jersey, USA: Prentice-Hall.
 Golub, Gene H.; van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Johns Hopkins University Press.
 (NB. Edition with source code CD-ROM.)
 (1213 pages) (NB. This is a single-volume edition. This work was also available in a two-volume version.)
 Kornerup, Peter; Matula, David W. (2010). Finite Precision Number Systems and Arithmetic. Cambridge University Press.
 Savard, John J. G. (2018) [2005], "Floating-Point Formats", quadibloc, archived from the original on 2018-07-03, retrieved 2018-07-16
 Muller, Jean-Michel; Brunie, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; et al. (2018). Handbook of Floating-Point Arithmetic (2nd ed.). Birkhäuser. LCCN 2018935254.
External links
 "Survey of FloatingPoint Formats". (NB. This page gives a very brief summary of floatingpoint formats that have been used over the years.)
 Monniaux, David (May 2008). "The pitfalls of verifying floatingpoint computations". ACM Transactions on Programming Languages and Systems. 30 (3). S2CID 218578808. (NB. A compendium of nonintuitive behaviors of floating point on popular architectures, with implications for program verification and testing.)
 OpenCores. (NB. This website contains open source floatingpoint IP cores for the implementation of floatingpoint operators in FPGA or ASIC devices. The project double_fpu contains verilog source code of a doubleprecision floatingpoint unit. The project fpuvhdl contains vhdl source code of a singleprecision floatingpoint unit.)
 Fleegal, Eric (2004). "Microsoft Visual C++ FloatingPoint Optimization". Microsoft Developer Network. Archived from the original on 20170706.