In a programming language, a data type whose values range over the rational numbers, typically represented as a pair of integers: a numerator and a denominator.
In computer representations of rational numbers, there are problems with the sizes of the numerator (to provide the range required) and of the denominator (to retain the precision). Consequently, floating-point notation is often preferred even though it brings problems of its own. Also, it is harder to provide hardware support for strict rational operations (which never lose precision) than for floating-point (which is a limited-precision rational notation often used for approximating real numbers). While most programming languages provide floating-point notation, only a few provide a rational type.
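The trade-off described above can be sketched with Python's standard-library `fractions.Fraction`, one example of a rational type: arithmetic on rationals is exact, while the floating-point equivalent is only an approximation, and repeated rational operations can make the numerator and denominator grow.

```python
from fractions import Fraction

# Exact rational arithmetic: no precision is lost.
print(Fraction(1, 3) + Fraction(1, 6))   # exactly 1/2

# Floating-point is a limited-precision approximation,
# so nearby results can differ from the exact value.
print(0.1 + 0.2)        # not exactly 0.3
print(0.1 + 0.2 == 0.3)

# The cost of exactness: numerators and denominators grow
# as operations accumulate (here, the sum 1/1 + 1/2 + ... + 1/10).
print(sum(Fraction(1, n) for n in range(1, 11)))
```

The growth of the denominator in the last line illustrates why unbounded-size integers (and hence more complex hardware support) are needed for strict rational operations.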