Producing a denormal is sometimes called **gradual underflow** because it allows the calculation to lose precision slowly, rather than all at once.
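Gradual underflow can be observed directly: halving the smallest positive normal double yields progressively smaller subnormal values instead of flushing immediately to zero. A minimal Python sketch (the loop count is arbitrary, chosen for illustration):

```python
import sys

# Smallest positive normal double: 2**-1022
x = sys.float_info.min

# Halving repeatedly produces subnormal (denormal) values rather than
# jumping straight to zero; each halving gives up one bit of precision.
for _ in range(5):
    x /= 2
    print(x)

# x is still positive here: underflow has been gradual, not abrupt.
print(x > 0.0)
```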

As implemented in the IEEE floating-point standard, denormal numbers are encoded with an exponent field of 0, but are interpreted with the value of the smallest allowed exponent (i.e., as if the field held a 1).
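This encoding can be inspected by unpacking the bits of a double. The sketch below uses Python's standard `struct` module; the helper name `decode_double` is hypothetical, chosen for this example:

```python
import struct

def decode_double(x):
    """Split an IEEE 754 double into (sign, exponent field, fraction field)."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF          # 11-bit biased exponent field
    fraction = bits & ((1 << 52) - 1)        # 52-bit fraction field
    return sign, exponent, fraction

# The smallest positive subnormal double (2**-1074) has an exponent
# field of 0; it is interpreted as if the exponent were the minimum
# (biased value 1), with no implicit leading 1 bit.
print(decode_double(5e-324))   # (0, 0, 1)

# A normal number such as 1.0 has a nonzero exponent field (bias 1023).
print(decode_double(1.0))      # (0, 1023, 0)
```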

Denormal numbers were implemented in the Intel 8087 while the standard was being written, demonstrating that denormals could be supported in a practical implementation. Many implementations of a floating-point unit do not support denormal numbers directly in hardware, but instead trap to some kind of software support. While this may be transparent to the user, it can make calculations that produce or consume denormal numbers much slower than similar calculations on normal numbers.