Complexity and Computing with Continuous Data

Building on Mathematical Logic, Theoretical Computer Science provides the foundations, concepts, and methods of modern software development: from problem specification via algorithm design and analysis and proofs of optimality (Complexity Theory) to implementation and formal verification.

It usually revolves around discrete data such as bits, integers, or graphs. "Discrete" can be understood to mean "in steps": there is no integer between 21 and 22. Data in engineering and mathematical applications, however, is usually continuous: temperature does not 'jump' from 21 to 22 degrees Celsius, nor electricity from 21 to 22 volts, nor velocity from 21 to 22 km/h.

Floating-point numbers approximate continuous data with discrete data, but in doing so they violate many mathematical laws, such as distributivity. Standardized in 1985 as IEEE 754, they are an anachronistic data type – much as assembly language is an outdated approach to programming, long superseded by more convenient and less error-prone object-oriented high-level programming languages.
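To illustrate the violated laws, the following Python snippet (an illustrative sketch; the function name is ours) searches random IEEE 754 doubles for a concrete counterexample to the distributive law a·(b+c) = a·b + a·c:

```python
import random

def find_distributivity_violation(trials=10_000, seed=0):
    """Search random doubles for a triple with a*(b+c) != a*b + a*c."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b, c = (rng.uniform(-1.0, 1.0) for _ in range(3))
        if a * (b + c) != a * b + a * c:
            return a, b, c  # distributivity fails for these doubles
    return None

example = find_distributivity_violation()
print(example)  # a triple (a, b, c) of doubles violating distributivity
```

Such a triple is typically found within the first few random attempts, because the two sides of the law round differently in almost every case.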

We develop a new and elegant high-level approach to algorithmically processing continuous data:

  • multivalued problem specification in first-order logic over a two-sorted structure
  • algorithm design over exact arithmetic primitives with partial tests
  • algorithm analysis with respect to realistic bit-cost as a function of the output precision
  • proofs of optimality by sensitivity/adversary arguments or by relation to NP-hard problems
  • implementation in the newly devised imperative object-oriented paradigm "Exact Real Computation"
  • formal verification in an extension of Floyd–Hoare Logic
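Two of the ingredients above – exact arithmetic primitives parameterized by output precision, and partial (multivalued) tests – can be sketched in ordinary Python. This is a toy model, not the actual Exact Real Computation framework; the names `Real`, `approx`, and `soft_sign` are invented here for illustration. A real number is represented by a function that, given a precision p, returns a rational within 2⁻ᵖ of it:

```python
from fractions import Fraction

class Real:
    """Toy exact real: approx(p) returns a rational q with |x - q| <= 2**-p."""
    def __init__(self, approx):
        self.approx = approx

    def __add__(self, other):
        # If both summands are accurate to 2**-(p+1), their sum is
        # accurate to 2**-p -- cost grows with the requested precision.
        return Real(lambda p: self.approx(p + 1) + other.approx(p + 1))

def soft_sign(x, p):
    """Partial test: answering +1 guarantees x > -2**-p, answering -1
    guarantees x < 2**-p.  On the overlap |x| < 2**-p either answer is
    legitimate (the test is multivalued), so it always terminates --
    unlike an exact comparison of two equal reals."""
    q = x.approx(p + 2)  # |x - q| <= 2**-(p+2)
    return 1 if q >= 0 else -1

# Example: one third, here given by an exact rational approximation.
third = Real(lambda p: Fraction(1, 3))
one = third + third + third
print(one.approx(50))      # exactly 1, unlike 0.1 + 0.1 + 0.1 in floats
print(soft_sign(one, 50))  # +1
```

Note how `approx` makes the output precision an explicit parameter (the basis of realistic bit-cost analysis), while `soft_sign` replaces the undecidable exact comparison of reals with a total but multivalued test.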