1. A unit of computational cost associated with matrix and vector operations. The term is widely used in algorithms for numerical linear algebra. A flop is approximately the amount of work required to compute, for example in Fortran, an expression of the form
   A(I,J) = A(I,J) + T*A(K,J)
i.e. a floating point multiplication and a floating point addition, along with the effort involved in subscripting. Thus, for example, Gaussian elimination for a system of order n requires a number of flops of order n³ (see linear algebraic equations). As computer design and architecture change, this unit of cost may well undergo modification.
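A minimal Fortran sketch of this counting, assuming classical elimination without pivoting; the program, array, and counter names are illustrative rather than part of the entry. Each execution of the innermost statement is one flop: one multiplication, one addition, and the associated subscripting.

program flop_count
  implicit none
  integer, parameter :: n = 100
  real :: a(n, n), t
  integer :: i, j, k, flops

  call random_number(a)        ! fill with arbitrary data (no pivoting here)
  flops = 0

  do k = 1, n - 1              ! eliminate entries below the diagonal
    do i = k + 1, n
      t = a(i, k) / a(k, k)    ! multiplier (the division is not counted)
      do j = k + 1, n
        a(i, j) = a(i, j) - t * a(k, j)   ! one flop
        flops = flops + 1
      end do
    end do
  end do

  print *, 'n =', n, '  flops =', flops, '  n**3/3 =', n**3 / 3
end program flop_count

The counter totals the sum of (n-k)² over k, which is n(n-1)(2n-1)/6; for n = 100 that is 328350, close to n³/3 = 333333, confirming the order-n³ cost quoted above.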
2. Short for floating-point operation. See also flops.