'Standard' models of computation—Turing machines, Random Access Machines (RAMs), and even parallel variants like the PRAM—all operate on discrete objects: bits, Booleans, integers; at most, rational numbers (fractions) are considered, in the form of numerator/denominator pairs of integers. These are the entities to be read, stored, processed, and output. A large amount of scientific computation, however, revolves around continuum rather than discrete problems: fluid dynamics, computational materials science, and space mission design are just a few such fields. In fact, most of applied mathematics amounts to solving various classes of ordinary or partial differential equations over the reals. This raises the need for a formal model to describe and analyze the prospects and limits of computations over $\mathbb{R}$. Leaving aside continuous-time (so-called analog, as opposed to clocked) computers, most models are either of an algebraic nature (such as the real-RAM, a.k.a. the Blum-Shub-Smale machine) or produce approximations (as in Domain Theory, interval arithmetic, Recursive Analysis, and Weihrauch's TTE); some even do both (Hotz' analytic machines).
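To make the approximation paradigm concrete, here is a minimal sketch (in Python, with names such as approx_sqrt2 chosen by us) of how a single real number can be handled via rational enclosures of prescribed precision; it is an illustration only, not a faithful rendering of any of the formal models cited above:

```python
from fractions import Fraction

def approx_sqrt2(n):
    """Return a rational interval of width <= 2**-n containing sqrt(2).

    Illustrative sketch of the 'approximation' view of real computation:
    a real number is accessed through arbitrarily precise rational
    enclosures, here obtained by interval bisection over exact fractions.
    """
    lo, hi = Fraction(1), Fraction(2)      # sqrt(2) lies in [1, 2]
    width = Fraction(1, 2 ** n)            # requested output precision
    while hi - lo > width:
        mid = (lo + hi) / 2
        if mid * mid <= 2:                 # exact rational comparison
            lo = mid
        else:
            hi = mid
    return lo, hi

if __name__ == "__main__":
    lo, hi = approx_sqrt2(20)
    print(float(lo), float(hi))            # both agree with sqrt(2) to ~6 decimals
```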

Mere computability investigates the algorithmic solvability of (decision or function) problems in principle. Although this is often already difficult enough to establish (and for many natural problems fails), in practice one usually wants an efficient solution: by devising data structures and algorithms and analyzing their worst-case behavior. Here, exponential asymptotic growth is generally considered inefficient – yet a refined diagonalization argument (the hierarchy theorem due to Hartmanis and Stearns) proves that some decidable problems do require such running times for their solution. This raises the question of the optimality of an algorithm, that is, how close it comes asymptotically to the intrinsic difficulty of the problem it solves. Put differently: is it worthwhile to try improving it, or not?
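One common formulation of this hierarchy theorem, for multi-tape Turing machines and time-constructible $g$ (the exact logarithmic factor depends on the machine model), reads:

$$ f(n)\,\log f(n) \;=\; o\big(g(n)\big) \quad\Longrightarrow\quad \mathrm{DTIME}\big(f(n)\big) \;\subsetneq\; \mathrm{DTIME}\big(g(n)\big). $$

In particular there exist decidable problems solvable in time $O(2^{n})$ but in no polynomial time bound.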

To avoid possible confusion with heuristics or methods: we consider here as an algorithm only one that is formally correct, with an asymptotic running time bound provably obeyed on every possible input.

This 'intrinsic difficulty' of a problem is called its complexity; and the running time of any algorithm solving it constitutes (asymptotically) an upper bound on that complexity. Conversely, if this upper bound can be complemented by a matching lower bound, the algorithm is (asymptotically) optimal.
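A classical (discrete) illustration: any comparison-based sorting algorithm needs $\Omega(n\log n)$ comparisons in the worst case, and merge sort attains a matching $O(n\log n)$ upper bound, so it is optimal in that model. A minimal sketch of ours, not specific to the setting of this text:

```python
def merge_sort(xs):
    """Sort xs with O(n log n) comparisons, matching the information-theoretic
    Omega(n log n) lower bound for comparison-based sorting; hence
    (asymptotically) optimal in that model."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]
```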

Such a lower bound asserts that any conceivable algorithm solving the problem requires asymptotically a certain number of steps. Taking into account all possible (even prospective) algorithms in this way is, understandably, usually quite challenging. Moreover, for such a claim referring to all algorithms to make sense, the underlying model of computation needs to be specified rather precisely.

Some minor details of its definition often become irrelevant when focusing on asymptotic growth, though. Moreover, switching to an asymptotically superior algorithm beats, for sufficiently large inputs, any constant-factor performance improvement gained by, e.g., purchasing faster hardware.
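A small back-of-the-envelope illustration (with hypothetical cost models and constants chosen by us): even a machine running a quadratic-time algorithm 100 times faster is eventually outperformed by an $n\log n$ algorithm on the slower machine.

```python
import math

# Illustrative cost models (hypothetical constants): an O(n log n) algorithm
# on ordinary hardware vs. an O(n^2) algorithm on hardware 100 times faster.
slow_algo_fast_hw = lambda n: n * n / 100
fast_algo_slow_hw = lambda n: n * math.log2(n)

n = 2
while fast_algo_slow_hw(n) >= slow_algo_fast_hw(n):
    n += 1
print(n)   # beyond this input size the better asymptotics win (here: n in the hundreds)
```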

As indicated above, time (i.e. the number of steps) is often the primary criterion for efficiency. However, 'fast' algorithms in this sense may turn out to be impractical because they would exceed other limited resources such as space (i.e. memory consumption), communication, or the number of processors in parallel computing.