¦ noun [as modifier] Computing denoting a mode of representing numbers as two sequences of bits, one representing the digits in the number and the other an exponent which determines the position of the radix point.

<*programming, mathematics*> A number representation consisting
of a mantissa, M, an exponent, E, and a radix (or
"base"). The number represented is M*R^E where R is the
radix.
In science and engineering, exponential notation or
scientific notation uses a radix of ten so, for example, the
number 93,000,000 might be written 9.3 x 10^7 (where ^7 is
superscript 7).
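The M*R^E representation above can be sketched directly. This is an illustrative helper only (real hardware stores mantissa and exponent as packed bit fields); the function name is my own:

```python
def fp_value(mantissa, radix, exponent):
    """Return the real number represented as mantissa * radix**exponent."""
    return mantissa * radix ** exponent

# 93,000,000 in scientific notation is 9.3 x 10^7:
print(fp_value(9.3, 10, 7))   # 93000000.0

# The same scheme with radix two, as used by binary hardware:
print(fp_value(2, 2, 10))     # 2048
```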
In computer hardware, floating point numbers are usually
represented with a radix of two since the mantissa and
exponent are stored in binary, though many different
representations could be used. The IEEE specifies a
standard representation which is used by many hardware
floating-point systems. Non-zero numbers are normalised so
that the binary point is immediately before the most
significant bit of the mantissa. Since the number is
non-zero, this bit must be a one so it need not be stored. A
fixed "bias" is added to the exponent so that positive and
negative exponents can be represented without a sign bit.
Finally, extreme values of exponent (all zeros and all ones)
are used to represent special values such as zero, positive
and negative infinity, and NaN (not a number).
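The layout described above can be inspected directly. The sketch below, using only the standard library, decodes an IEEE 754 double (1 sign bit, 11 exponent bits with bias 1023, 52 stored fraction bits, implicit leading 1 for normalised numbers); the function name is my own:

```python
import struct

def decode_double(x):
    """Decode an IEEE 754 double into (sign, biased_exponent, fraction)."""
    # Reinterpret the 8 bytes of the double as a big-endian 64-bit integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    biased_exp = (bits >> 52) & 0x7FF          # 11 exponent bits, bias 1023
    fraction = bits & ((1 << 52) - 1)          # 52 fraction bits (leading 1 implicit)
    return sign, biased_exp, fraction

# 1.0 is stored as +1.0 x 2^0: exponent field = 0 + 1023, fraction = 0
print(decode_double(1.0))           # (0, 1023, 0)

# All-zero exponent encodes zero (and subnormals); all-ones encodes infinity/NaN
print(decode_double(0.0))           # (0, 0, 0)
print(decode_double(float("inf")))  # (0, 2047, 0)
```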
In programming languages with explicit typing,
floating-point types are introduced with the keyword "float"
or sometimes "double" for a higher precision type.
See also floating-point accelerator, floating-point unit.
Opposite: fixed-point.
(2008-06-13)


Floating-point arithmetic

In computing, **floating-point arithmetic** (**FP**) is arithmetic that represents real numbers approximately, using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base. For example, 12.345 can be represented as the base-ten floating-point number 12345 x 10^-3, with significand 12345 and exponent -3.

In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common.
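The choice of base matters for which values are exact. A quick sketch using Python's standard `decimal` module: 0.1 has no finite base-two expansion, so a binary double can only approximate it, while a base-ten representation holds it exactly:

```python
from decimal import Decimal

# The binary double closest to 0.1, shown at full precision:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# In base ten, 0.1 is exact, so decimal arithmetic behaves as expected:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                   # False
```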

The term *floating point* refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation.

A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude, such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used in systems that must handle very small and very large real numbers with fast processing times. A consequence of this dynamic range is that the representable numbers are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.
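The non-uniform spacing can be measured directly with `math.nextafter` (Python 3.9+), which returns the next representable double in a given direction. The gap between neighbouring doubles near 1.0 is tiny, but above 2^53 it exceeds 1:

```python
import math

# Gap between consecutive IEEE 754 doubles grows with magnitude:
gap_at_1 = math.nextafter(1.0, math.inf) - 1.0
gap_at_1e16 = math.nextafter(1e16, math.inf) - 1e16

print(gap_at_1)     # 2.220446049250313e-16  (2^-52, the machine epsilon)
print(gap_at_1e16)  # 2.0  (odd integers this large are not representable)
```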

Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.

The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.

A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.


Examples from a text corpus for floating-point

1. The Blue Gene system to be installed in Lausanne will have a peak processing speed of at least 22.8 trillion floating-point operations per second, or 22.8 teraflops.

2. Named after the peregrine falcon, which reaches speeds of up to 340 km per hour, Shaheen is expected to reach 222 teraflops, a measure equaling a trillion floating point operations per second, Ghaslan said.