# Seminar on Theoretical Computer Science, Logic, and Real Computation

## 2019

### Axiomatic Exact Real Numbers in Type Theory and Program Extraction

- June 21, 13h00, E3-1 #3444
- Sewon Park (KAIST)

Abstract: We introduce a dependent type theory called ertt which provides a type of exact real numbers axiomatically. The related terms and types are assumed rather than constructed, yet the theory's expressiveness does not go beyond type-2 computability. We also introduce a simple functional programming language called erlc (exact real lambda calculus) and devise a mechanism extracting, from a proof in the type theory, a program in erlc. The type theory, the programming language and the extraction mechanism have been implemented. A constructive intermediate value theorem is proved, and an erlc program finding a root of a real function is extracted from the proof.

### Universal envelopes of discontinuous functions

- June 14, 13h00, E3-1 #3444
- Eike Neumann (Aston/Birmingham)

Abstract: This is a contribution to computable analysis in the tradition of Grzegorczyk, Lacombe, and Weihrauch. The main theorem of computable analysis asserts that any computable function is continuous. The solution operators for many interesting problems encountered in practice turn out to be discontinuous, however. It is hence a natural question how much partial information may be obtained on the solutions of a problem with discontinuous solution operator in a continuous or computable way. We formalise this idea by introducing the notion of continuous envelopes of discontinuous functions. The envelopes of a given function can be partially ordered in a natural way according to the amount of information they encode. We show that for any function between computably admissible represented spaces this partial order has a greatest element, which we call the universal envelope. We develop some basic techniques for the calculation of a suitable representation of the universal envelope in practice. We apply the ideas we have developed to the problem of locating the fixed point set of a continuous self-map of the unit ball in finite-dimensional Euclidean space, and the problem of locating the fixed point set of a nonexpansive self-map of the unit ball in infinite-dimensional separable real Hilbert space.

### Partial differential equations: complexity and exact real computation

- June 7, 14h00, E3-1 #3444
- Svetlana Selivanova (KAIST)

Abstract: In this talk we give an overview of recent achievements and open questions about computational complexity of finding solutions to partial differential equations (PDEs), as well as ways to implement them in exact real computation packages. The talk includes updates on the joint work with Ivan Koswara and Martin Ziegler on linear PDEs and matrix/operator powering.

### How to compute with an infinite time Turing machine?

- May 17, 14h00, E3-1 #3444
- Sabrina Ouazzani (Ecole Polytechnique, Paris)

Abstract: In this talk, we present infinite time Turing machines (ITTM), from the original definition of the model to some new infinite time algorithms.

We will present algorithmic techniques that allow us to highlight some properties of the ITTM-computable ordinals. In particular, we will study gaps in ordinal computation times, that is to say, ordinal times at which no infinite time program halts, and explain some of their properties.

### Mini-Workshop on Exact Real Computation

- May 16, 11h~18h, E3-1 #3444
- Martin, Seokbin, Gyesik, Sewon, Dongseong, Donghyun, Sveta, Ivan, Hyunwoo, Youngjo

### Jointing Functions in Exact Real Computation

- Apr.12, 13h00, E3-1 #3444
- Seokbin Lee (KAIST)

Abstract:
Floating-point numbers, though widely used for numerical calculations, are inexact and suffer from imprecision in computations. Rigorous computability with and of real numbers and functions has been studied since the works of Turing (1937) and Grzegorczyk (1957); *Exact Real Computation* is a modern approach to ameliorating the issues arising from floating-point numerics by introducing high-level abstract data types for real numbers (hiding their internal representation) and continuous functions. It provides *partial* comparisons, capturing the fact that equality of two real numbers is known to be equivalent to the complement of the Halting problem. A solution has been to introduce primitive operations under the Kleene and Priest three-valued logic {0,1,↓}, including a *continuous conditional*.
With these tools in mind, one can express the join of two single-variable functions. We demonstrate that the join of more functions, defined on more real variables, can likewise be expressed explicitly.
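The partial comparison central to this approach can be sketched with ordinary interval arithmetic (a minimal illustration, not the actual primitives of the talk; all names here are made up): a test on two reals, given by nested dyadic interval approximations, answers once the intervals separate, and yields "bottom" (↓) on equal inputs, matching the three-valued logic above.

```python
from fractions import Fraction

def approx_sqrt2(n):
    """Dyadic interval of width <= 2^-n containing sqrt(2), via bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

def partial_less(x_approx, y_approx, max_precision=64):
    """Semi-decide x < y from interval approximations.

    Returns True/False once the intervals separate; returns None ('bottom'
    in the Kleene/Priest sense) if max_precision does not suffice -- and on
    equal inputs no precision ever would.
    """
    for n in range(1, max_precision):
        xlo, xhi = x_approx(n)
        ylo, yhi = y_approx(n)
        if xhi < ylo:
            return True
        if yhi < xlo:
            return False
    return None

const = lambda q: (lambda n: (q, q))
r_lt = partial_less(approx_sqrt2, const(Fraction(3, 2)))   # decided: separates
r_eq = partial_less(const(Fraction(1)), const(Fraction(1)))  # never decided
```

A continuous conditional then commits to any branch whose guard evaluates to true, avoiding non-termination on ties.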

### Is Brownian Motion Computable?

- Apr.12, 13h30, E3-1 #3444
- Hyunwoo Lee (KAIST)

Abstract: 1D Brownian Motion (aka the Wiener Process) is a probability distribution on the space C[0;1] of continuous functions f:[0;1]→R with f(0)=0. Although the joint distribution of f's values at any finite choice of arguments is just a multivariate Gaussian distribution, the question of whether f itself is computable remains open. We discuss several approaches to settling the conjecture positively, and for each point out the corresponding key property to prove.
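The finite-dimensional distributions mentioned above are easy to sample classically; the following sketch (illustrating the object itself, not addressing the computability question) builds a piecewise-linear approximation of a Brownian path from independent Gaussian increments:

```python
import random

def brownian_path(n, seed=0):
    """Sample B(k/n) for k = 0..n: start at 0 and add independent Gaussian
    increments of variance 1/n, matching Brownian motion's
    finite-dimensional distributions on the grid."""
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(n):
        path.append(path[-1] + rng.gauss(0.0, (1.0 / n) ** 0.5))
    return path

path = brownian_path(1024)
```

Linear interpolation between grid points gives an element of C[0;1] with f(0)=0; the open question concerns computing f uniformly with guaranteed error bounds.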

### Computing Haar Averages on SO(3)

- Apr.12, 14h00, E3-1 #3444
- Dongseong Seon (KAIST)

Abstract: According to Haar's Theorem, every compact group G has a unique right and/or left invariant regular Borel probability measure mu. We call HAAR AVERAGE (of G) the functional integrating any continuous function f:G→R with respect to mu. This generalizes, and recovers for the additive group G=[0;1) mod 1 aka the multiplicative complex circle group, the usual Riemann integral. Previous work [CiE'2018, FWAC'18] had established the computability of Haar Averages for a large class of computable compact (metric) groups. Here we focus on the two arguably most practically important continuous compact metric groups: SO(3) of orthogonal real 3×3 matrices of determinant 1, and SU(2) of unitary complex 2×2 matrices of determinant 1.

We present (i) explicit algorithms in the paradigm of Exact Real Computation (ERC, arXiv:1608.05787), (ii) relate their computational bit-cost to the discrete complexity class #P_1, and (iii) prove that cost optimal. Again, this generalizes the well-known 1D real case [Ko91, Theorem 5.32].
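For the circle group G = [0;1) mod 1 mentioned above, the Haar average is just the Riemann integral, and its defining translation invariance can be checked numerically; a minimal sketch with made-up names:

```python
import math

def circle_haar_average(f, n):
    """Riemann-sum approximation of the Haar average on [0,1) mod 1,
    i.e. the ordinary integral of f over one period."""
    return sum(f(k / n) for k in range(n)) / n

f = lambda t: math.cos(2 * math.pi * t) ** 2   # average over the circle: 1/2
g = lambda t: f((t + 0.3) % 1.0)               # a translate of f

a = circle_haar_average(f, 10000)
b = circle_haar_average(g, 10000)              # invariance: same average
```

For SO(3) and SU(2) the analogous invariant averaging is what the talk's algorithms compute, with rigorous error bounds in place of fixed-precision sums.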

### Notes on bridging Type-1 and Type-2

- Mar 22, 13h00, E3-1 #3444
- Franz Brauße (Uni Trier)

Abstract: The Type-2 Theory of Effectivity (TTE) provides a framework which allows one to express and reason about real number computations on digital computers or Turing machines in a rigorous manner, whereas Type-1 computability is concerned with problems on discrete inputs.

I will report on how Type-2 computations can solve an important Type-1 problem: satisfiability of formulas over non-linear real constraints. An implementation is available in the form of an SMT solver [1] which, in a conflict-driven way, computes local linearizations of real functions like sin(x), exp(x), etc.

The second aspect covered is how conceptually very different TTE implementations [2],[3] can be abstracted by an imperative low-level language with intuitive real point semantics, as opposed to sequences of intervals. Efficient execution makes the language suitable as a compilation target for higher-level Type-1 and Type-2 languages.

### On the Implementation of a Compiler for a 'Mathematical' Programming Language

- Mar 15, 13h00, E3-1 #3444
- Fritz Mayer-Lindenberg (TU Hamburg-Harburg)

Abstract: The talk presents the recent implementation of a programming language for finite computations on the reals. Compiler targets are fairly simple processors using a variety of standard and non-standard number codes, and networks of such. Targeted applications are embedded signal processing and robotics systems. Although the language design strives for simplicity, using a single base type, a single structure of brackets within the definitions of algorithms, and a functional style, the compiler is not as simple and uses some non-standard techniques. E.g., mathematical data types such as finite-dimensional real algebras simply redefine the multiply operation for tuples of a certain size. The compiler then automatically substitutes these multiplies in the available linear vector and polynomial operations. Other automatic functions of the compiler include the extension of functions and operations to tuples of numbers, the replacement of real operations by rounded operations during code generation, and certain mathematical features like automatic differentiation. Also, the target systems are highly parallel and need to deal with real time. The compiler works in conjunction with a virtual machine that is used for constant folding and for executing parts of, or simulating entire, applications.

### Complexity Theory of Real Matrix and Polynomial Powering

- Mar 8, 13h00, E3-1 #3444
- Ivan Koswara (KAIST)

Abstract: The problem of powering a large (sparse) real matrix or polynomial is interesting not only for its many numerical applications but also for the new effects it exhibits: in contrast to the integer setting, a real matrix or polynomial may have bounded powers! We relate the computational cost of various variants of the problem to different classical complexity classes between NC and PSPACE.
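The bounded-powers phenomenon can already be seen for a 2×2 rotation matrix: all of its powers have entries bounded by 1, which no nontrivial integer matrix family achieves. A sketch of powering by repeated squaring (illustrative only, not the algorithms analyzed in the talk):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, e):
    """Matrix power by repeated squaring: O(log e) multiplications."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

theta = 2 * math.pi / 7
rot = [[math.cos(theta), -math.sin(theta)],
       [math.sin(theta),  math.cos(theta)]]

P7 = mat_pow(rot, 7)            # rotation by 2*pi: the identity
P = mat_pow(rot, 10 ** 6)       # a huge power, still bounded
norm = max(abs(x) for row in P for x in row)
```

For exponents given in binary, iterating such squarings underlies the connection to space-bounded complexity classes studied in the talk.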

### Fall Detection Using Computer Vision

- Feb 21, 11h00, E3-1 #3444
- Adiba Orzikulova (Inha University)

Abstract: Many elderly people find it hard to get up again after a fall. Falls may result in psychological disorders, stress, severe injuries and, in the worst case, even death. According to statistics, more than 30 percent of adults aged 65 and over live alone, and up to 35% of them are expected to fall at least once per year. It is therefore desirable to have a system that monitors the status of the elderly in real time and contacts their family members and caregivers promptly. Representative existing technology to mitigate this problem includes wearable sensor devices and techniques making use of machine learning. Even though these techniques are efficient and highly reliable, the former has the downside that the device may not be worn at all times, while the latter requires costly resources along with big datasets which cannot be acquired for everyone due to privacy reasons. Our real-time fall detection system for the elderly uses classic computer vision to detect emergency situations in the following manner. First, the background is subtracted to track the moving foreground object. Next, the contours of the moving object, along with three components defining a fall probability, are calculated. If the fall probability reaches a defined threshold, we presume that the person has fallen and send an alarm to the family members of the person in care.
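The pipeline can be caricatured on toy 8×8 grayscale frames (purely illustrative: the thresholds and the aspect-ratio "fall probability" below are hypothetical stand-ins for the three components used in the actual system):

```python
def fall_alarm(frames, motion_thresh=10, fall_prob_thresh=0.5):
    """Toy sketch of the pipeline: background subtraction by differencing
    against the first frame, then a crude 'fall probability' from the
    bounding box of the moving pixels (wide and low suggests lying down)."""
    background = frames[0]
    alarms = []
    for frame in frames[1:]:
        moving = [(r, c) for r, row in enumerate(frame)
                  for c, v in enumerate(row)
                  if abs(v - background[r][c]) > motion_thresh]
        if not moving:
            alarms.append(False)
            continue
        rows = [r for r, _ in moving]
        cols = [c for _, c in moving]
        height = max(rows) - min(rows) + 1
        width = max(cols) - min(cols) + 1
        fall_prob = width / (width + height)   # hypothetical component
        alarms.append(fall_prob > fall_prob_thresh)
    return alarms

bg = [[0] * 8 for _ in range(8)]
standing = [[50 if c == 3 and 1 <= r <= 6 else 0 for c in range(8)]
            for r in range(8)]                 # tall vertical silhouette
fallen = [[50 if r == 6 and 1 <= c <= 6 else 0 for c in range(8)]
          for r in range(8)]                   # wide horizontal silhouette
alarms = fall_alarm([bg, standing, fallen])
```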

## 2018

### ERA program extraction from a proof using abstract types

- Nov 30, 14h00, E3-1 #2450
- Sewon Park (KAIST)

Abstract: Currently, Coq has a standard library for axiomatic real numbers. The library assumes decidability of real number equality and inequality. Hence, using the library, it is in many cases impossible to find the right computational content of a proof. In this talk, I introduce an alternative axiomatization of real numbers which does not assume decidability while the real numbers still remain abstract. The talk shows how a proof of the (constructive) Intermediate Value Theorem using these real numbers is turned into a root-finding program in AERN2, a Haskell-based ERA language.

### Elimination of unknowns for differential and difference equations

- Nov 23, 15h30, E3-1 #4420
- Gleb Pogudin (New York University)

Abstract: Elimination of unknowns is a fundamental tool for studying solutions of equations (linear, polynomial, differential, etc.). The elimination problem for differential (resp., difference) equations can be stated as follows. Given a system of differential (resp., difference equations) in m + n unknowns, determine whether there is a consequence of the system involving only the first m variables and find such a consequence if it exists.

In the talk, I will present solutions to this problem for differential and difference equations by deriving an explicit upper bound for the number of prolongations sufficient for a reduction of this elimination problem to the well-studied elimination problem for systems of polynomial equations. This gives the first elimination algorithm for difference equations and a new elimination algorithm for differential equations. Our approach also allows for randomized computation, which significantly increases efficiency.

I will describe applications of these techniques to discrete mathematics (in particular, to enumerative combinatorics) and modeling (in particular, to parameter identifiability of differential models).

The talk is based on a joint work with A. Ovchinnikov and T. Vo and on a joint work with A. Ovchinnikov and T. Scanlon.

### Constructive Reverse Mathematics: an introduction and recent results

- Oct 26, 16h30, E3-1 #4420
- Hajime Ishihara (JAIST)

Abstract: This talk presents an introduction to constructive reverse mathematics (CRM) with some recent results. The aim of CRM is to classify various theorems in intuitionistic, constructive recursive and classical mathematics by logical principles, function existence axioms and their combinations. We review some results in classical reverse mathematics (Friedman-Simpson program), and axioms of intuitionistic and constructive recursive mathematics with their consequences to motivate CRM. Then we present some results of CRM with and without countable choice, and recent results in CRM without countable choice.

### Program Extraction from Proofs

- Oct 10, 16h30, E3-1 #3445
- Helmut Schwichtenberg (LMU Munich)

Abstract: Proofs can have computational content, and the most direct method to extract it is via a realizability interpretation in the form given by Kreisel. We present a recent case study: Brouwer's fan theorem for uniformly coconvex bars. As an application we prove a constructive version of the Heine-Borel Theorem, for uniformly coconvex coverings. We also discuss some relevant general issues, among them the notions of computability and equality in higher types.

### Generalizing the Signed-Digit Representation

- Aug 23, 15h00, E3-1 #3444
- Donghyun Lim (KAIST)

Abstract:
Among the many (polynomial-time) equivalent ways of encoding real numbers, the signed-digit expansion stands out over, say, the encoding by a sequence of fixed-point approximations:

It renders addition computable in optimal linear, as opposed to quadratic, running time in the streaming model with bit-cost; closely related, it is Hölder-continuous as a partial mapping from Cantor space onto the unit interval (i.e. as a so-called TTE *representation* of it).

We extend and generalize this to arbitrary compact metric spaces, gauging optimality by comparing the space's entropy to the representation's modulus of continuity.

(Joint work with Svetlana Selivanova and Martin Ziegler)
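A sketch of the signed-digit idea, with illustrative names: a finite prefix of digits in {-1, 0, 1} determines the represented real in [-1,1] only up to an interval, and distinct prefixes can approximate the same real; this redundancy is what makes linear-time streaming addition possible.

```python
from fractions import Fraction

def prefix_interval(digits):
    """Interval of reals in [-1,1] consistent with a signed-digit prefix:
    partial sum of d_i * 2^-(i+1), with the unread tail contributing at
    most 2^-n in absolute value."""
    n = len(digits)
    center = sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(digits))
    radius = Fraction(1, 2 ** n)
    return center - radius, center + radius

# Two different prefixes consistent with the same real 1/2 (redundancy):
lo1, hi1 = prefix_interval([1, 0, 0, 0])
lo2, hi2 = prefix_interval([0, 1, 1, 1])
```

The talk's generalization replaces {-1,0,1}-prefixes of the unit interval by codes for arbitrary compact metric spaces, with the interval width playing the role of the representation's modulus of continuity.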

### Logic of Deterrence

- July 30, 11h00, E3-1 #4420
- Christopher Fichtlscherer (University of Hamburg)

Abstract:
“*Deterrence is the art of producing in the mind of the enemy the fear to attack*” - Dr. Strangelove.
Deterrence Theory, as developed by Herman Kahn, Thomas Schelling and others, led to the doctrine of *Mutual Assured Destruction*.
Yet it seems paradoxical, if not contradictory, to credit the arms race for maintaining, rather than jeopardizing, geopolitical stability during the Cold War.

We revisit the game-theoretic perspective on Deterrence Theory in order to define, analyze, and empirically assess a rigorous mathematical notion of stability common in Numerics and Geometry: the amount of perturbation of payoffs that does not change the 'type' of the game. In order to formally classify all 'types' of games, we propose, compare, and generalize different equivalence relations on 2×2 games, inducing from 3 (coarsest) to 84 (finest) classes. Our aim is to capture politically relevant properties of possible outcomes under various types of actors: rational, defensive, offensive, …

### Bit-Complexity and Rigorous Implementation of the Fast Multipole Method for Trummer’s Problem

- June 27, 16h00, E3-1 #4420
- Bastian Dörig (TU Darmstadt)

Abstract: We will introduce a quasilinear (in terms of bit-cost) and implementation-friendly version of the Fast Multipole Method (FMM) in order to solve the real version of Trummer's Problem, which is closely related to the N-Body Simulation Problem with pairwise Coulomb forces. When solving the differential equation for the aforementioned problem numerically, one has to solve Trummer's Problem in each iteration of the Euler method.

### Master Defense: Computing Periods...

- June 12, 14h00, E3-1 #3420
- Junhee Cho (KAIST)

Abstract:
A *period* is the difference between the volumes of two sets bounded
by systems of polynomial inequalities with rational coefficients. Periods are
receiving increasing interest in Algebraic Model Theory as they have finite
descriptions (the polynomials’ coefficients) and include all algebraic reals
as well as some transcendentals. Recent research has located their worst-case
complexity in low levels of the Grzegorczyk Hierarchy. The present work
introduces, analyzes, and evaluates a rigorous algorithm for computing
periods up to error 2^{−n}. Furthermore, we show that computing periods
up to error 2^{−n} is NP-hard.
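A concrete baby case (not the algorithm of the thesis): the area of the quarter disk, i.e. the period π/4, can be bounded rigorously by counting dyadic grid cells, with the gap between the bounds shrinking as the grid is refined.

```python
def quarter_disk_bounds(n):
    """Rigorously bound the area of {x^2+y^2 <= 1} within [0,1]^2 (the
    period pi/4) by classifying 2^-n grid cells: a cell lies fully inside
    iff its far corner does, fully outside iff its near corner does."""
    N = 2 ** n
    inside = outside = 0
    for i in range(N):
        for j in range(N):
            if (i + 1) ** 2 + (j + 1) ** 2 <= N * N:   # far corner inside
                inside += 1
            elif i * i + j * j > N * N:                # near corner outside
                outside += 1
    total = N * N
    return inside / total, (total - outside) / total

lo, hi = quarter_disk_bounds(8)
```

Only the boundary cells remain unclassified, so the bracket width decreases linearly in the grid resolution; the thesis's algorithm achieves error 2^{−n} for general systems of polynomial inequalities.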

### Interval Gaussian Algorithm for Singular Interval Matrices

- May 16, 14h00, E3-1 #3420
- Sewon Park (KAIST)

Abstract: We consider an interval matrix which contains a singular matrix. The aim of the work is, given such an interval matrix, to compute a set of interval vectors, each containing a vector, that together span the singular matrix's kernel. Considering the famous example of Reichmann (1979), the work analyzes (i) when an interval Gaussian algorithm succeeds, (ii) how the widths of the computed intervals can be bounded, and (iii) the bit-cost of the interval computation.

### Global Identifiability of Parametric ODE Systems

- May 16, 9h00, E11 #309
- Prof. Chee K. Yap (New York University)

Abstract: Many real-world processes, such as in biology and chemistry, are modeled using systems of ordinary differential equations with parameters. Given such a system, we say that a parameter is globally identifiable if it can be uniquely recovered from input and output data. The main contribution of this paper is to provide theory, an algorithm, and software for deciding global identifiability. First, we rigorously derive an algebraic criterion for global identifiability (this is an analytic property), which yields a deterministic algorithm. Second, we improve the efficiency by randomizing the algorithm while guaranteeing the probability of correctness. Our implementation and comparison with several current software packages such as DAISY, COMBOS and GenSSI 2.0 show the superiority of our algorithm. (Joint work with H. Hong, A. Ovchinnikov and G. Pogudin)

### Towards Soft Geometric Computation

- May 14, 16h00, E3 #1501
- Prof. Chee K. Yap (New York University)

Abstract: For over two decades, Exact Geometric Computation (EGC) has provided a paradigm in Computational Geometry for the correct implementation of geometric algorithms. It is the most successful approach for addressing numerical nonrobustness, leading to successful libraries and practical algorithms in many areas. We review some pressing reasons to extend this paradigm:

- EGC algorithms may not be Turing computable (e.g., transcendental functions)
- EGC may be too inefficient (e.g., shortest path problems)
- EGC entails numerous/difficult algebraic analyses (e.g., Voronoi diagram of polyhedra)
- Exact computation is unnecessary/inappropriate (e.g., robot motion planning)

This talk describes a research program to develop "soft" (i.e., purely numerical) approaches for addressing the above issues. We illustrate these ideas by looking at recent work in several areas:

- root isolation and clustering (ISSAC'09,'11,'12,'16, SNC'11, CiE'13)
- isotopic approximation of curves and surfaces (SoCG'09,'12, SPM'12, ICMS'14)
- Voronoi diagrams (ISVD'13, SGP'16)
- robot motion planning (SoCG'13, WAFR'14, FAW'15)

Some common themes emerge from the above list: we replace the exact algebraic model of computation (Real RAM or BSS Model) by a model based on numerical interval approximations. The main algorithmic paradigms are subdivision and iteration. We introduce an input resolution parameter (epsilon), but it must be used in a novel “soft” way. Soft versions of classical (“hard”) geometric predicates must be developed.
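The subdivision paradigm with a resolution parameter can be sketched for 1D root isolation (a toy version using a hard sign-change predicate; the point of the soft approach is to replace such exact tests by numerical interval ones):

```python
def subdivision_roots(f, lo, hi, eps):
    """Subdivision sketch: return intervals of width <= eps that may
    contain a root of continuous f, discarding intervals whose endpoint
    signs agree (a toy exclusion predicate; it misses sign-preserving
    roots, which is where softness and interval tests come in)."""
    out, stack = [], [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if f(a) == 0.0 or f(b) == 0.0 or f(a) * f(b) < 0:
            if b - a <= eps:
                out.append((a, b))
            else:
                m = (a + b) / 2
                stack.append((a, m))
                stack.append((m, b))
    return sorted(out)

# isolate sqrt(2) as the unique root of x^2 - 2 on [0, 2]
roots = subdivision_roots(lambda x: x * x - 2, 0.0, 2.0, 1e-6)
```

Here eps plays the role of the input resolution parameter; a soft predicate would additionally be allowed to answer either way near the boundary of its decision region.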

### Functorial Methods applied in the π-Nets Programming Language

- April 27, 9h00, E11 #309
- Prof. Dr. Fritz Mayer-Lindenberg (TU Hamburg-Harburg)

Abstract: The system architecture of networks of FPGA nodes holding soft processors imposes some extra requirements on the programming tools to be applied. There will be many fairly simple processors working in parallel on a mix of standard and non-standard number codes. Memory breaks up into disjoint sections, each accessible to a few processors only. FPGA- and SoC-based systems are known to be difficult to use. Therefore it has been an important additional requirement to end up with a small, easy-to-use abstract language. The present talk gives some keys to how simplicity can be achieved, and has been achieved in the language π-Nets. The crucial support for multiple number types is through a method of functorial substitution starting from a unique abstract type of real number. Functorial substitution is also applied to systematically overload symbols to keep the number of program symbols small, and to integrate some high-level compiler functions reinterpreting the intermediate code in different fashions, one being the well-known method of automatic differentiation.

### A Paradigm for Building and Programming Scalable FPGA based Numeric Processors

- April 25, 9h00, E11 #309
- Prof. Dr. Fritz Mayer-Lindenberg (TU Hamburg-Harburg)

Abstract: This talk presents a method for building high-performance embedded systems from FPGA chips and SoC devices integrating both an FPGA and a processor system. The focus applications are digital signal processing, robotics and embedded high-performance computing, all with emphasis on number crunching. The proposed system architecture is based on networks of small-to-mid size FPGA nodes linked by high-speed interfaces, with each FPGA node holding a sub-network of software-defined processors and being equipped with external memory. A hardware operating system has been defined, implementing the required networking and multi-threaded control functions such that application-specific systems only have to configure the right mixes of computational circuits on top of it, typically from a library residing in an attached Flash memory. Compute circuits attaching to the OS comprise a non-standard floating-point ALU and an ALU supporting complex and quaternionic operations. A sample system presented in some detail links 50 SoC nodes into a highly efficient network. The system architecture is supported by a special parallel programming language targeting the simple processors with their non-standard arithmetics; it is the topic of another talk.

### Tangent cones to quasimetric sub-Riemannian spaces and applications

- April 24, 10h30, E6-1 #4415
- Dr. Svetlana Selivanova (KAIST)

Abstract: A sub-Riemannian space is a manifold with a selected distribution (of "allowed movement directions", represented by the spanning vector fields) of the tangent bundle, which spans, by nested commutators up to some finite order, the whole tangent bundle. Such geometries naturally arise in nonholonomic mechanics, robotics, thermodynamics, quantum mechanics, neurobiology etc. and are closely related to optimal control problems on the corresponding configuration space.

As is well known, there exists an intrinsic Carnot-Caratheodory metric generated by the "allowed" vector fields. Studying Gromov's tangent cone to the corresponding metric space is widely used to construct efficient motion planning algorithms for related optimal control systems. We generalize this construction to weighted vector fields, which provides applications to optimal control theory of systems nonlinear in the control parameters. This construction requires, in particular, an extension of Gromov's theory to quasimetric spaces, since the intrinsic C-C metric does not exist in this case.

### Computing Periods...

- February 27, 17h00, #3420
- Junhee Cho (KAIST)

Abstract:
A *period* is the difference between the volumes of two semi-algebraic sets.
Recent research has located their worst-case complexity in low levels of the Grzegorczyk Hierarchy.
The present work introduces, analyzes, and evaluates three rigorous algorithms for rigorously computing periods:
a deterministic, a randomized, and a 'transcendental' one.

(Joint work with Sewon Park, to be presented at WALCOM 2018)

## 2017

### On computability and complexity of finding solutions of linear hyperbolic systems of PDEs

- December 12, 16h30, #3444
- Svetlana Selivanova (Sobolev Institute, Novosibirsk)

Abstract: We discuss possibilities of applying methods of numerical analysis and computer algebra to investigate computability and complexity of finding solution operators of systems of PDEs. Computability is considered in the sense of Weihrauch's TTE approach. We also report on our recent joint results with Victor Selivanov for the example of symmetric hyperbolic systems.

### Probably Approximately Correct Learning

- Nov 23, 12h00
- E3-1 #2445
- Donghyun Lim (KAIST)

Abstract: PAC learning is a theoretical framework for the rigorous mathematical analysis of machine learning proposed in 1984 by Leslie Valiant. We motivate the approach, introduce the basic concept(s), survey the most important theorem(s), sketch an example proof, and present an example application.

### Unpredictable random number generator based on thread scheduler

- Oct 26, 12h00
- E3-1 #2445
- Hyun Woo Lee (Hongik University)

Abstract: Adrian, Radu, and Sebastian (2008) suggested a software strategy to implement an unpredictable random number generator (URNG) based on the functionality of the operating system's thread scheduler. Based on their work, I build a simpler but powerful URNG which produces unpredictable random bits. Using the NIST statistical test suite, I verify that a C++ implementation of this generator produces high-quality random numbers faster than the strategy of Adrian et al.
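The underlying idea can be sketched in a few lines (a toy Python analogue, not the C++ implementation of the talk; raw scheduler bits are biased and would need post-processing in any real generator):

```python
import threading

def scheduler_bits(n):
    """Extract n raw bits from thread-scheduling nondeterminism: two
    threads race to append their id to a shared log; which thread's
    writes occupy the middle of the log depends on how the scheduler
    interleaved them. Illustration of the idea only."""
    bits = []
    for _ in range(n):
        log = []
        def worker(i):
            for _ in range(20000):
                log.append(i)
        t0 = threading.Thread(target=worker, args=(0,))
        t1 = threading.Thread(target=worker, args=(1,))
        t0.start(); t1.start()
        t0.join(); t1.join()
        bits.append(log[len(log) // 2])   # winner of the race at midpoint
    return bits

bits = scheduler_bits(8)
```

In practice such raw bits must be whitened (e.g. via von Neumann extraction or hashing) before statistical tests like the NIST suite can be passed.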

### Formal Verification in Imperative Multivalued Programming over Continuous Data Types

- Sep 29, 12h00
- E3-1 #3444
- Sewon Park (KAIST)

Abstract: Inspired and guided by the iRRAM C++ library (Müller 2001), we formally specify a programming language for the paradigm of EXACT REAL COMPUTATION: reliably operating on encapsulated CONTINUOUS data types such as (not necessarily algebraic) real numbers — imperatively and exactly (no rounding errors) with primitives computable in the sense of Recursive Analysis and a necessarily modified multivalued=non-extensional semantics of tests. We then propose a complete logical system over two types, integers and real numbers, for rigorous correctness proofs of such algorithms. We extend the rules of Hoare Logic to support the formal derivation of such proofs, and have them verified in the Coq Proof Assistant. Our approach is exemplified with three simple numerical example problems: integer rounding, solving systems of linear equations, and continuous root finding.

(joint work with Gyesik Lee, Norbert Müller, Eike Neumann, Sewon Park, and Norbert Preining)

### Undirected Reachability in LOGSPACE

- Sep. 7, 14h30
- E3-1 #4420
- Ivan Koswara (KAIST)

Abstract: We report the result of Reingold (2004) to decide undirected s-t connectivity in deterministic logarithmic space. We also report results about universal traversal sequences, which formed an important step towards solving the deterministic space complexity of undirected s-t connectivity.
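The starting point that Reingold derandomized is the classical result of Aleliunas et al. that a polynomial-length random walk decides undirected s-t connectivity in randomized logspace, storing only the current vertex and a step counter; universal traversal sequences make the walk's choices explicit and input-independent. A sketch:

```python
import random

def random_walk_reaches(adj, s, t, steps, seed=1):
    """Randomized-logspace-style connectivity test: take `steps` random
    edge-moves from s; on a connected undirected graph, O(n^3) steps
    reach t with high probability. Only v and the counter are stored."""
    rng = random.Random(seed)
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = rng.choice(adj[v])
    return v == t

# a path 0-1-2-3-4 plus a disconnected pair 5-6
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3], 5: [6], 6: [5]}
reached = random_walk_reaches(adj, 0, 4, 5000)       # same component
unreached = random_walk_reaches(adj, 0, 5, 5000)     # different component
```

Reingold's algorithm removes the randomness entirely by walking on an expander-like transformed graph, achieving deterministic logarithmic space.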

### Space-Complexity of Reachability

- August 17, 10h00
- E3-1 #2452
- Ivan Koswara (KAIST)

Abstract: We report on work by Nisan, Szemerédi, and Wigderson (1992) deciding undirected graph reachability in deterministic space O(log^{1.5} n), and on Trifonov's 2005 improvement to space O(log(n)·loglog(n)).

### Surface Realization using FB-LTAG and Possible Issues on Implementation

- August 14, 15h00
- E3-1 #3445
- Seung-Woo Schin (KAIST)

Abstract: This talk will cover one of the surface realization techniques using Feature-Structure-Based Lexicalized Tree Adjoining Grammar (FB-LTAG), suggested by [1]. After that, the talk will cover possible issues in applying [1] to the Basbalo framework. First, the FB-LTAG formalism will be introduced. FB-LTAG will serve as a grammar 'template' that contains both syntactic and semantic information. Then a grammar extraction algorithm, whose output is an FB-LTAG, operating on preprocessed natural language will be presented. The part-of-speech (POS) parse tree of a sentence and RDF triples containing the semantics of the sentence will be used as the preprocessed data for a natural language sentence. The whole algorithm is presented in [2].

In the actual implementation of the framework, a few issues need to be resolved.

1) Sentence preprocessing

For sentence preprocessing, the generation of RDF triples is necessary. For now, the Basbalo framework does not have an ontology dedicated to baseball, which makes it impossible to generate RDF triple data. Therefore, a comprehensive ontology implementation is necessary.

2) Language constraint

According to [3,4], linguistic issues arise upon actual grammar extraction for the Korean language. These constraints are mainly due to morphological differences between Korean and English.

3) Grammar abstractness vs Grammar scalability

The grammars extracted in [3,5] are too abstract, which is expected to increase the difficulty of the actual NLG process; if template semantics are too abstract, then searching for an appropriate template for a given semantics becomes another issue. Therefore, finding the golden mean between scalability and usability will be a key concern in the later generation process.

[1] Claire Gardent. “Syntax and Data-to-Text Generation”. 2014.

[2] Bikash Gyawali. “Surface Realisation from Knowledge Bases”. 2016.

[3] Chung-hye Han, Juntae Yoon, Nari Kim and Martha Palmer. “A Feature-Based Lexicalized Tree Adjoining Grammar for Korean”. 2000.

[4] 이강천, 서정연. “A Korean Sentence Generation System Based on Semantic Head Words” (in Korean). 1998.

[5] Sanghoun Song, Jong-Bok Kim, Francis Bond, Jaehyung Yang. “Development of the Korean Resource Grammar: Towards Grammar Customization”. 2010.

### Exact Real Arithmetic and Beyond: Finding Your Way with Ariadne

- August 3, 16h30
- E3-1 #2452
- Pieter Collins (Maastricht University)

Abstract: Many years ago, I started work on a software package called Ariadne for the verification of hybrid dynamic systems. This is a very hard problem, involving real numbers, functions and sets, nonlinear algebraic and differential equations, constraint satisfiability, and discontinuous behaviour; every calculation needs to be done rigorously for the result to be trustworthy. We would ideally even want to rigorously prove that our software is correct! The tool is therefore based on Computable Analysis to achieve clean syntax and semantics, with generic Interfaces for different Types allowing for various efficient implementations of Rigorous Numerical Algorithms.

I'll start by giving an overview of exact effective computational Real arithmetic, starting with a simple theoretical approach, and then showing how we implement this in code. I'll cover Ariadne's basic conceptualisation, such as Effective, Validated and Approximate objects; Symbolic, Bounds and Ball representations of the Reals based on Dyadic and Floating-point numbers; and Accuracy, Effort, Precision, and Degree parameters. I will then show the Real number module in use. I'll then explain some of the more advanced features, such as the Functional calculus based on Taylor Polynomial Models, and finally give some examples of hybrid system verification in practice. Throughout the talk I will demonstrate features of Ariadne, both in C++ and the Python interpreter.

Finally, I will give some directions for further work, both within the Ariadne framework itself, and in the broader context of rigorous computational mathematics.

Hopefully, after this talk you will be motivated to try Ariadne yourselves to solve some problems, and maybe even contribute to its development!

### Online Resource Leasing

- May 2+4, 16h00-17h30
- E11 #309
- Friedhelm Meyer auf der Heide (Universität Paderborn)

Abstract: Decisions regarding which resources to acquire in order to provide services comprise the core of many optimization problems. Classical problems which fall into this category are for example Facility Location, Steiner Tree, and Set Cover, as well as several scheduling problems. Such problems have been widely studied in both offline and online settings. Traditionally, the decisions are considered to be final: we buy the resources. Once a resource is bought, it can be used any time in the future without inducing further costs. Inspired by the Cloud Computing market, we consider the leasing variants of the above problems: resources are not bought, but leased for a given time interval; the cost for a lease typically grows with the length of the interval, but the cost per time unit decreases with this length. Such a model for leasing was introduced by Meyerson as the parking permit problem. In the lectures, I will present online algorithms for several resource leasing problems. Technically, our algorithmic approach is mostly based on an online variant of primal-dual approximation algorithms.

### Algorithmic Foundations of Swarm Robotics

- April 25+27, 16h00-17h30
- E11 #309
- Friedhelm Meyer auf der Heide (Universität Paderborn)

Abstract: Swarm robotics considers large swarms of relatively simple mobile robots deployed to some 2- or 3-dimensional area. These robots have very limited sensor capabilities; typically they can only observe aspects of their local environment. The objective of swarm robotics is to understand which global behavior of a swarm is induced by local strategies, simultaneously executed by the individual robots. Typically, the decisions of the individual robots are based on local information only. In these lectures, I look at simple models of swarms of identical, anonymous robots: they are points in the plane and 'see' only their neighbors, i.e. the robots within distance one. I will focus on distributed local strategies of such swarms that result in formations like 'gathering in one point' or 'forming a line'. I will present several strategies for such formation problems, both in discrete and continuous time models, and show upper and lower bounds for their runtime and energy consumption.

### On Verified Real Computation

- April 21, 12h00-13h30
- E3-1 #3420
- Sewon Park (KAIST)

Abstract: While Numerical Engineering often focuses on the raw performance of hardware FLOATs and heuristic “recipes” for most/“practical” instances, mission-critical applications and Computer-Assisted Proofs in Pure Mathematics require guaranteed correctness in all conceivable cases. Exact Real Computation is the paradigm of imperative programming over continuous abstract data types with (necessarily) modified, namely multivalued, semantics of tests. We (i) approach the problem of formal verification for such algorithms; and we (ii) devise, implement, and empirically evaluate three (variants of) algorithms for diagonalizing (i.e. computing a spectral decomposition of) possibly degenerate Hermitian matrices.

### Bi-Topological Spaces and the Continuity Problem

- April 17, 16h00-17h30
- E6-1 #1409
- Dieter Spreen (Universität Siegen)

Abstract: The continuity problem is the question when effective (or Markov computable) maps between effectively given topological spaces are effectively continuous. It will be shown that this is always the case if the range of the map is effectively bi-regular. As will be shown, such spaces appear quite naturally in the context of the problem.

### Putting your coin collection on a shelf

- April 3, 16h00-17h30
- E6-1 #1409
- Otfried Cheong (KAIST)

Abstract: Imagine you want to present your collection of n coins on a shelf, taking as little space as possible – how should you arrange the coins?

More precisely, we are given n circular disks of different radii, and we want to place them in the plane so that they touch the x-axis from above, such that no two disks overlap. The goal is to minimize the length of the range from the leftmost point on a disk to the rightmost point on a disk.

On this seemingly innocent problem we will meet a wide range of algorithmic concepts: An efficient algorithm for a special case, an NP-hardness proof, an approximation algorithm with a guaranteed approximation factor, APX-hardness, and a quasi-polynomial time approximation scheme.
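To make the setting concrete, here is a sketch (my own illustration, not one of the talk's algorithms) of the basic placement computation for one fixed left-to-right ordering; choosing the best ordering is exactly where the hardness lies. It uses the standard fact that two disks of radii r and s, both tangent to the x-axis from above, touch each other when their centers are 2·sqrt(r·s) apart horizontally:

```python
from math import sqrt

def placement_length(radii):
    """Greedy placement for a fixed left-to-right order: each disk touches
    the x-axis and is pushed as far left as possible while staying to the
    right of, and not overlapping, every earlier disk.
    Geometric fact: centers of touching disks with radii r, s (both tangent
    to the x-axis) satisfy dx^2 + (r - s)^2 = (r + s)^2, i.e. dx = 2*sqrt(r*s).
    Returns the length of the occupied range for this ordering."""
    xs = []
    for r in radii:
        x = 0.0
        for xj, rj in zip(xs, radii):  # zip truncates to the placed disks
            x = max(x, xj + 2 * sqrt(rj * r))
        xs.append(x)
    left = min(x - r for x, r in zip(xs, radii))
    right = max(x + r for x, r in zip(xs, radii))
    return right - left
```

For two unit disks this gives length 4, as expected for two touching unit circles.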

### Reliable Degenerate Matrix Diagonalization

- March 6, 13h00-14h30
- E3-1 #2452
- Sewon Park (KAIST)

Abstract: We consider the diagonalization of real symmetric square matrices with (not necessarily algebraic entries and) particular emphasis on the degenerate case: for this in general ill-posed and in fact provably uncomputable problem, Recursive Analysis [Ziegler&Brattka'04] has established that some orthonormal basis of eigenvectors is computable in the sense of output approximation up to any desired precision - provided that, in addition to approximations to the matrix' d*(d+1)/2 entries, its number k of DISTINCT eigenvalues is known/provided. The present work explores the practical impact and quantitative efficiency of this qualitative and implicit result: We devise, combine, implement, evaluate, and compare four variants of rigorous and total algorithms guaranteed to (i) extract the coefficients of the matrix' characteristic polynomial, (ii) refine root enclosures of the latter using Pellet's Predicate until separating the $k$ disjoint clusters, to thus derive the individual eigenvalues' multiplicities d[j] for 0<j≤k, (iii) employ (some combination of) Trisection, Newton and/or Gräffe Iterations to improve said real root approximations to the precision required to finally (iv) determine an orthonormal basis of eigenvectors to these eigenvalues. Algorithms (ii) and (iii) are based on, vary, and extend a recent series of contributions by Sagraloff, Sharma, Yap et al. (2013-2016); (iv) builds on own work (2016) using a variant of Gaussian Elimination that avoids total tests for real equality, as these are known to be equivalent to the (complement of the) Halting Problem.

Our implementation builds on the C++ library iRRAM, providing an abstract data type REAL for rapid numerical prototyping with exact arithmetic and a subtly modified yet sound semantics of comparisons. Correctness is asserted by recovering the random eigenvector bases of a family of deliberately degenerate artificial test matrices; and efficiency is evaluated with respect to the three parameters output precision, matrix dimension, and eigenvalue separation. It confirms that the sophisticated combined algorithm of Sagraloff, Sharma, Yap et al. (2013-2016) is indeed practical and asymptotically outperforms the two simpler variants - but can in turn be improved by employing trisection for medium precision.
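As a toy illustration of the clustering idea in step (ii) (a hypothetical helper of my own, not the Pellet-predicate-based algorithm from the talk): given approximations of all eigenvalues to within eps, one merges enclosures whose gaps could be due to approximation error; knowing the number k of distinct eigenvalues is what tells the refinement loop when the clusters are genuinely separated and it may stop:

```python
def cluster_eigenvalues(approx, eps, k):
    """Group approximate eigenvalues (each within eps of a true one) into
    clusters, merging neighbors whose gap is at most 2*eps and hence might
    coincide. Returns k disjoint enclosing intervals if the current
    precision separates exactly k clusters, else None (caller should then
    refine eps and retry) - this is why k must be known in advance."""
    vals = sorted(approx)
    clusters = [[vals[0]]]
    for v in vals[1:]:
        if v - clusters[-1][-1] <= 2 * eps:
            clusters[-1].append(v)   # gap explainable by error: same cluster
        else:
            clusters.append([v])     # provably a distinct eigenvalue
    if len(clusters) != k:
        return None
    return [(min(c) - eps, max(c) + eps) for c in clusters]
```

With k = 2, approximations {1.0, 1.001, 5.0} at eps = 0.01 already separate; asking for k = 3 correctly reports that more precision is needed.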

### Computing statistical long-term properties of dynamical systems

- February 28, 18h00-19h00
- E3-1 #2452
- Holger Thies (The University of Tokyo)

Abstract: In this talk, we give an overview of some recent results by Braverman, Grigo and Rojas on the computability and complexity of long-term properties of dynamical systems. Dynamical systems are used to model a large number of processes in nature whose state evolves over time. Important questions deal with the long-term behavior of such systems as time approaches infinity. We want to discuss computability and computational complexity of such questions in the sense of arbitrarily precise approximations, as studied in the field of computable analysis. While simulating such a system for bounded time is usually computable, long-term properties often cannot be determined in a computable way. Galatolo, Hoyrup and Rojas investigated statistical properties of discrete-time dynamical systems described by invariant measures. They show that a computable dynamical system does not necessarily have a computable invariant measure. On the other hand, Braverman, Grigo and Rojas could show that introducing a small amount of noise renders the system computable. We present some of their results and introduce the tools necessary to understand them.

## 2016

### Analyzing Markov Chains using Belief Propagation

- December 16, 15h00-16h00
- E6-1 #1409
- Eric Vigoda (Georgia Institute of Technology)

For counting independent sets weighted by a parameter λ (known as the hard-core model) there is a beautiful connection between statistical physics phase transitions on infinite, regular trees and the computational complexity of approximate counting on graphs of maximum degree D. For λ below the critical point there is an elegant algorithm devised by Weitz (2006). The drawback of his algorithm is that the running time is exponential in log D. In this talk we’ll describe recent work which shows O(n log n) mixing time of the single-site Markov chain when the girth > 6 and D is at least a sufficiently large constant. Our proof utilizes BP (belief propagation) to design a distance function in our coupling analysis. This is joint work with C. Efthymiou, T. Hayes, D. Stefankovic, and Y. Yin, which appeared in FOCS ’16.
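For concreteness, here is a minimal sketch of the single-site (Glauber) chain for the hard-core model, assuming a graph given as an adjacency dict (the names and representation are my own, not from the paper):

```python
import random

def glauber_step(graph, occupied, lam, rng):
    """One step of the single-site dynamics for the hard-core model:
    pick a uniformly random vertex v; with probability lam/(1+lam)
    propose occupying v (accepted only if no neighbor is occupied),
    otherwise make v unoccupied. By detailed balance, the stationary
    distribution weights each independent set S by lam**len(S)."""
    v = rng.choice(sorted(graph))
    if rng.random() < lam / (1 + lam) and all(u not in occupied for u in graph[v]):
        occupied.add(v)
    else:
        occupied.discard(v)

# example: the 4-cycle; starting from the empty set, the chain keeps
# `occupied` an independent set at every step
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

The mixing-time results in the talk concern how many such steps this chain needs before its state is close to the stationary distribution.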

### Paper review: Construction of real algebraic numbers in Coq

- December 9, 11h00-12h00
- E3-1 #2452
- Gyesik Lee (Hankyong National University)

In this talk, we will review the following paper: Cyril Cohen, "Construction of real algebraic numbers in Coq", February 18, 2012.

Abstract: This paper shows a construction in Coq of the set of real algebraic numbers, together with a formal proof that this set has the structure of a discrete Archimedean real closed field. This construction hence implements an interface of real closed field. Instances of such an interface immediately enjoy quantifier elimination thanks to a previous work. This work also intends to be a basis for the construction of complex algebraic numbers and to be a reference implementation for the certification of numerous algorithms relying on algebraic numbers in computer algebra.

### Locating the geodesic center of a simple polygon

- November 14, 16h
- E3-1 #1501
- Hee-Kap Ahn (POSTECH)

Abstract: Computational geometry deals with basic geometric objects such as points, line segments, polygons, and polyhedra, and aims to develop efficient algorithms and data structures for solving problems by figuring out the geometric, combinatorial, and topological properties of the objects. It has provided numerous methods and algorithms to solve geometric problems in application areas efficiently, such as computer graphics, robotics, geographic information systems, computer vision, and computational biology.

The traditional geometric algorithms, however, are not adequate for real-world data because they are not designed to handle imprecise and uncertain data and therefore they may return a wrong answer due to the imprecision or uncertainty of data.

This talk will provide an introduction to recent results of computational geometry on real-world data and open questions.

### Program Extraction from Proofs

- October 17, 16h00-18h00
- E3-1 #1501
- Gyesik Lee (Hankyong National University)

Abstract: The Curry–Howard correspondence says that there is an isomorphic relation between intuitionistic logic and a constructive type theory. The basic idea is that intuitionistic proofs of mathematical formulas contain constructive information to prove them, and that this information can be represented as concrete terms in a constructive type theory. One can then use this information and these terms to extract computer programs from proofs. The resulting programs are error-free and correct in the sense that they satisfy the prescribed specifications; that is, it is certified that the programs work as expected. This talk will explain the basic idea of the Curry–Howard correspondence and its use in extracting certified programs.

### Open Problems in Optimal Patrolling

- October 10, 16h00-18h00
- E3-1 #1501
- Akitoshi Kawamura (The University of Tokyo)

Abstract : In patrolling problems, several mobile agents move inside a given region and try to cooperate so that every designated point in the region is (perpetually) visited sufficiently often (that is, no point should be left unattended for any time period of specified length). There are many variants of this problem: the agents may have the same or different speeds; the terrain may be a line, a cycle, or more general graphs; the points to be visited may be the whole or a (finite) part of the terrain. Problems of this kind are studied in different contexts with various motivations, but finding an optimal patrolling strategy is not straightforward, even in very simple settings. For example, the optimal patrolling of a line segment by several agents is not always achieved by the simple strategy where each agent moves back and forth in a sub-segment of length proportional to its speed. This talk will introduce some positive and negative results and open questions about properties of and algorithms for optimal patrolling.
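The simple strategy mentioned in the abstract is easy to make precise. A sketch (my own formulation, assuming a straight segment and agents with speeds v_1, …, v_k):

```python
def proportional_idle_time(length, speeds):
    """Idle time of the simple partition strategy on a segment: agent i
    gets a sub-segment of length proportional to its speed v_i, namely
    length * v_i / sum(speeds), and sweeps it back and forth. Every point
    of agent i's piece is then revisited within one full period,
    2 * (piece length) / v_i = 2 * length / (v_1 + ... + v_k),
    which is the same for all agents."""
    return 2 * length / sum(speeds)
```

As the abstract notes, this strategy is not always optimal when speeds differ, which is part of what makes the problem interesting.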

### Introduction to Coq

- October 5, 16h00-18h00
- E3-1 #2452
- Sunyoung Kim (Yonsei University)

Abstract: In this talk, we introduce the proof assistant Coq, which is based on the Calculus of Inductive Constructions (CIC). Since CIC is a variant of the Calculus of Constructions (CoC: a higher-order typed lambda calculus), we briefly review type theory and its central theme, the Curry–Howard isomorphism. We will give easy exercises that can be proved using Coq. The C-CoRN library for Coq takes an approach different from that of Coq's standard library. Finally, we discuss the real numbers in Coq.

### Another New Foundation: A Theory with Combined Concepts from Set Theory, Type Theory and Lesniewski's Mereology

- September 30, 12h30-14h00
- E3-1 #2452
- Jin Hoo Lee (KAIST)

We introduce a new theory which encompasses concepts and ideas from set theory, type theory, and Lesniewski's mereology, and describe its possibility as an alternative foundation for mathematics. In the introduction I will present the motivation for developing the theory and some remarks on the method of presentation. The axioms of the theory and their philosophical background and justification on an intuitive basis are discussed next, followed by realizations of mathematical concepts such as N, Z, Q, R, C, etc. The talk concludes with comparisons between the theory and ZFC and a discussion of its mathematical limitations.

### Root Clustering Problem of Analytic functions

- September 26, 12h00-13h00
- E3-1 #2452
- Sewon Park (KAIST)

Well-known algorithms for the root isolation problem consider polynomial functions with simple roots only. Recent work of Chee Yap et al. (2013) devised an algorithm that can isolate (cluster) the roots, with multiplicities, of any analytic function. This talk will review the work of Chee Yap et al. and discuss an implementation and verification of the suggested algorithm.

### Analytic Functions - From Real Complexity Theory to Implementation

- September 8, 14h30-15h45
- E3-1 #3445
- Holger Thies (University of Tokyo)

We consider the problem of solving systems of ordinary differential equations in exact real arithmetic, i.e., in the sense of arbitrarily precise numerical computations. It is known from real complexity theory that this problem is PSPACE-hard in the general case. However, when restricting the right-hand side functions of the system to be not only polynomial-time computable but also analytic, the solutions will again be polynomial-time computable and analytic. By carefully choosing the representation, this result can even be turned into uniform algorithms on analytic functions. The talk consists of two parts. First we explore analytic functions from the viewpoint of real complexity theory and give running time and space bounds for various operations. We then show how to turn those theorems into algorithms and use them for practical implementations in the C++ programming language based on the iRRAM library. To that end, we also give a representation for multidimensional analytic functions in terms of an abstract data type and an implementation of such a type. We further give implementations of several different algorithms to solve the ODE problem and compare their running times.
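As a toy instance of the power-series approach (my own example, not the iRRAM implementation from the talk): for the ODE y' = y², y(0) = 1, substituting the Taylor series of y into the equation yields a recurrence that computes each coefficient exactly:

```python
from fractions import Fraction

def taylor_solution(n):
    """Taylor coefficients a_0 .. a_n of the solution of y' = y^2, y(0) = 1.
    Writing y = sum a_k x^k and matching coefficients in y' = y^2 gives
        (k+1) * a_{k+1} = sum_{i=0}^{k} a_i * a_{k-i}   (Cauchy product),
    so every coefficient is computed exactly with rational arithmetic.
    (The closed-form solution is y = 1/(1-x), i.e. a_k = 1 for all k.)"""
    a = [Fraction(1)]
    for k in range(n):
        conv = sum(a[i] * a[k - i] for i in range(k + 1))
        a.append(conv / (k + 1))
    return a
```

For analytic right-hand sides, truncating such a series and bounding the tail via the function's growth data is what makes arbitrarily precise evaluation feasible.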

### Recent Progress on Matrix Multiplication

- September 5, 16h00
- E3-1 #1501
- François Le Gall (Kyoto University)

Until a few years ago, the fastest algorithm for matrix multiplication was the “Coppersmith-Winograd algorithm” designed in 1987. Recently, a surge of activity by Andrew Stothers, Virginia Vassilevska-Williams and myself has finally led to improved algorithms. In this talk I will describe the long history of work on the complexity of matrix multiplication, and discuss these very recent improvements.

### Topological View on Computational Shape Optimization

- August 11, 14h00-15h30
- E3-1 #2452
- Dongseong Seon and Chansu Park (both KAIST interns from SNU)

By the sometimes so-called MAIN THEOREM of Recursive Analysis, a computable function must necessarily be continuous. We investigate this condition for example primitives in Shape Optimization: volume, perimeter, and convexity, considered as real-valued objective/constraint functionals on various classes of compact sets equipped with the Hausdorff metric.

### Computational Complexity of N-Body Problem

- August 1, 14h00-15h30
- E3-1 #2452
- Seungwoo Schin (KAIST)

The n-body problem is a classical problem in physics. In this talk, error bounds for the stepwise simulation of the n-body problem will be discussed. As a side result of the error-bound discussion, it can be shown that the n-body reachability problem is in PSPACE. Then, to show PSPACE-hardness, reductions from other problems known to be PSPACE-hard will be given. Specifically, the following problems will be discussed: 1) the ray-tracing problem with rational-coefficient reflecting obstacles (and, as a brainteaser, an equivalent problem in the game of Go, whose equivalence to the given problem is quite straightforward), 2) the 1-body reachability problem with immobile rectangular obstacles, and 3) the 1-body reachability problem with uniformly charged plate obstacles, where the latter two problems are restricted to the impact position.

### Holographic Algorithms

- June 22, 11h30-13h30
- E3-1 #2452
- Iwan Koswara (KAIST)

In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them holographic because “their effect can be viewed as that of producing interference patterns among the solution fragments”. The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram. Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. They have received notable coverage due to speculation that they are relevant to the P versus NP problem and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P. Holographic algorithms have some similarities with quantum computation, but are completely classical.

### The Ford-Fulkerson Algorithm and its Analysis

- June 1, 14h30-16h00
- E3-1 #2452
- 장문수 (KAIST)

### Algorithms for computing the 2D convex hull

- May 27, 14h30-16h00
- E3-1 #2452
- Imran Fareed (KAIST)

### Algorithms for Parallel Computers

- May 26, 16h30-18h00
- E3-1 #2452
- Dongkyu Lee (KAIST)

### Obstructing Visibilities with One Obstacle

- May 25, 12h00-14h00
- E3-1 #2452
- Jiwon Park (KAIST)

An obstacle representation of a graph G is a drawing of G in the plane together with polygons called obstacles; two vertices are adjacent iff the straight line segment connecting them does not intersect any obstacle. The obstacle number of a graph is the smallest number of obstacles that allows an obstacle representation of the graph. Even the class of graphs with obstacle number 1 is not completely characterized. There is a nice characterization of the graphs which have a representation with one convex obstacle: the non-double-covering circular arc graphs. It is also known that every outerplanar graph has a representation with one outside obstacle. And as far as I know, these are all the known results about graphs of obstacle number 1.

In this talk, the following new results are presented:

- A smallest graph of obstacle number 2: it has 8 vertices, and this is tight. (The smallest previously known graph of obstacle number 2 had 10 vertices.)
- Every graph of circumference at most 6 has an outside obstacle representation.
- The class of graphs with an outside obstacle representation and the class of graphs without one are different. (This was one of the main questions about the obstacle number of graphs.)

### The Knuth-Morris-Pratt (KMP) Algorithm for String Matching

- May 18, 16h30-18h00
- E3-1 #2452
- 김영석 (Tony) from KAIST's School of Computing

### Towards Capturing Order-Independent P

- May 11, 16h00-17h30
- E3-1 #3445
- Neil Immerman (University of Massachusetts)

In Descriptive Complexity we characterize the complexity of decision problems by how rich a logical language is needed to describe the problem. Important complexity classes have natural logical characterizations, for example NP is the set of problems expressible in second order existential logic (NP = SOE) and P is the set of problems expressible in first order logic, plus a fixed point operator (P = FO(FP)).

The latter characterization is over ordered graphs, i.e. the vertex set is a linearly ordered set. This is appropriate for computational problems because all inputs to a computer are ordered sequences of bits. Any ordering will do; we are interested in the order-independent properties of graphs. The search for order-independent P is closely tied to the complexity of graph isomorphism. I will explain these concepts and the current effort to capture order-independent P.

### Escape probabilities and stationary distribution of metastable Markov chains: theory and computation

- Apr.15 14h00-16h00
- E3-1 #4443
- Volker Betz (TU Darmstadt)

Metastability is often an obstacle to efficient computation or simulation of average or typical quantities of stochastic dynamical systems. In particular when there is no reversibility, and thus potential theory is not available, few methods exist. In this talk I will argue that escape probabilities are the right object to study in this case, and will show how they determine the metastable dynamics of finite Markov chains. This will lead to an efficient way of computing both the dynamics and the stationary distribution of metastable Markov chains.
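As a minimal illustration of why computing rather than simulating pays off in the metastable regime (my own toy example, not from the talk): for a two-state chain the stationary distribution has a closed form, which stays exact precisely when the transition probabilities are tiny and direct simulation would take forever to switch states:

```python
def stationary_two_state(a, b):
    """Stationary distribution of the 2-state Markov chain with transition
    probabilities a: state 1 -> 2 and b: state 2 -> 1. Balance requires
    pi_1 * a = pi_2 * b, so pi = (b, a) / (a + b). For metastable chains
    (a, b both tiny) this formula is exact, while naive simulation would
    need on the order of 1/a steps just to leave state 1."""
    return (b / (a + b), a / (a + b))
```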

### Computational complexity theory for spaces of integrable functions

- Apr.15 10h30-12h00
- E3-1 #4443
- Florian Steinberg (TU Darmstadt and University of Tokyo)

The framework of second order representations was introduced by Kawamura and Cook in 2010 and provides a rigorous notion of computation over continuous structures. It also provides a notion of time- and space-restricted computation. Choosing a representation means specifying what information a program computing an element of the structure has to provide. We relate the existence of a reasonable choice of information on a compact metric space to a purely metric property of the space: the metric entropy of a compact metric space measures how fast the number of balls needed to cover the space grows as the radius of the balls decreases. We show that the optimal running time for computing the metric is inherently connected to the metric entropy. These results are applied to show optimality of some concrete representations of function spaces.

### Automatic Design of Casts for 3D Printing

- Mar.11 12h30-14h30
- E3-1 #2452
- Jaewoong Han (U-Waterloo and KAIST)

Modern civilization evolved through a series of industrial revolutions. According to Jeremy Rifkin, 3-D printing is the third industrial revolution, minimizing the cost of building and speeding up the design process. In addition, easier access to private 3-D printers will let products be personally customized by the user and for the user.

3-D printers are not yet affordable for everyone, but in the near future people will design their own products with their own 3-D printer at home. As this is a new technology, there is a lack of software applications related to 3-D printing on the market. A user-friendly program that designs and builds a set of moulds for the user’s own product is in demand.

There are a number of complications in using a 3-D printer and designing a mould. It is critical to clean and maintain a 3-D printer properly to ensure the quality of the printed parts. Many techniques apply to designing a mould so as to reduce material use and speed up the printing process.

A complex convex octahedron is chosen as the input product by the user. The software application will be programmed to create eight pieces of mould for the polyhedron. Using OpenSCAD, an open-source software application for creating 3-D computer-aided designs, the program will locate the six vertices, create a bounding box, use an existing library to convert vertices to normal vectors of faces and vice versa, calculate cross products of vectors to form cutting planes, and then divide the bounding box into eight parts. Moreover, further improvements will be made, including increasing the size of the bounding box, adding a sprue for pouring material, attaching connecting sections for each piece of the mould, and upgrading the program's algorithm to deal with more complex geometry.

### Algebraic Specification and Verification - an Introduction to CafeOBJ

- Jan.25 10h30-12h00 and 16h00-17h30, Jan.26 10h30-12h00
- E3-1 #3445
- Norbert Preining (JAIST)

Ensuring correctness of complex systems like computer programs or communication protocols is gaining ever increasing importance. For correctness it does not suffice to consider the finished system, but correctness already on the level of specification is necessary, that is, before the actual implementation starts. To this aim, algebraic specification and verification languages have been conceived. They are aiming at a mathematically correct description of the properties and behavior of the systems under discussion, and in addition often allow to prove (verify) correctness within the same system.

This course will give an introduction to algebraic specification, its history, logic roots, and actuality with respect to current developments. Paired with the theoretical background we give an introduction to CafeOBJ, a programming language that allows both specification and verification to be carried out together. The course should enable participants to understand the theoretical background of algebraic specification, as well as enable them to read and write basic specifications in CafeOBJ.

## 2015

### Shortcuts for the Circle

- Dec 10, 12h30
- E3-1 #2450
- Otfried Cheong (KAIST) (joint work with Mark de Berg, Sang Won Bae, Joachim Gudmundsson, and Christos Levcopoulos)

Let $C$ be the unit circle in $\mathbb{R}^2$. We can view $C$ as a plane graph whose vertices are all the points on $C$, and the distance between any two points on $C$ is the length of the smaller arc between them. We consider a graph augmentation problem on $C$, where we want to place $k \ge 1$ shortcuts on $C$ such that the diameter of the resulting graph is minimized. We analyze for each $k$ with $1 \le k \le 7$ what the optimal set of shortcuts is. Interestingly, the minimum diameter one can obtain is not a strictly decreasing function of $k$. For example, with seven shortcuts one cannot obtain a smaller diameter than with six shortcuts. Finally, we prove that the optimal diameter is $2 + \Theta (1/k^\frac{2}{3})$ for any $k$.

### Worst-Case vs. Average-Case Bit-Complexity Theory of Real Functions

- Dec 3, 14h30
- N1 #111
- Martin Ziegler (KAIST)

While concepts and tools from Theoretical Computer Science are regularly applied to, and significantly support, software development for discrete problems, Numerical Engineering largely employs recipes and methods whose correctness and efficiency are demonstrated empirically. We advertise Real Complexity Theory: a resource-oriented foundation to rigorous computations over continuous universes such as real numbers, vectors, sequences, continuous functions, and Euclidean subsets: in the bit-model by approximation up to given absolute error. We review the underlying notions and some illustrative results – regarding worst-case complexity theory; to then introduce, and initiate the study of, average-case bit-complexity theory over the reals: Like in the discrete case a first, naive notion of polynomial average runtime turns out to lack robustness and is thus refined. Standard examples of explicit continuous functions with increasingly high worst-case complexity are shown to be in fact easy in the mean; while a further example is constructed with both worst and average complexity exponential: for topological/metric reasons, i.e., oracles do not help.

### Well-Separated Pair Decomposition and the Fast Multipole Method

- Nov 26, 12h30
- E3-1 #2450
- Bastian Dörig (TU Darmstadt)

The N-body problem cannot be solved analytically for N>2 and hence is generally approached numerically. Naive algorithms incur cost quadratic in N; and we report on the state-of-the-art achieving subquadratic runtime by means of approximation: the Fast Multipole Method, said to be among the top ten algorithms of the 20th century. We present a rigorous worst-case performance analysis, based on techniques from computational geometry and in terms of bit cost.
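The core approximation idea can already be seen in one dimension (a hedged sketch of my own; the real Fast Multipole Method uses multipole expansions, hierarchical space decompositions, and rigorous error bounds): for evaluation points far from a cluster of sources, the whole cluster may be replaced by its total mass placed at its center of mass:

```python
def exact_potential(x, sources):
    """Potential at x from point masses (m_i, x_i): sum of m_i / |x - x_i|.
    Evaluating this at N points for N sources costs O(N^2)."""
    return sum(m / abs(x - xi) for m, xi in sources)

def monopole_potential(x, sources):
    """Far-field (monopole) approximation: lump the cluster into its total
    mass at its center of mass. Placing the mass at the centroid cancels
    the dipole term, so the error falls off quadratically with the ratio
    of cluster size to distance - the first idea behind subquadratic
    FMM-style algorithms."""
    total = sum(m for m, _ in sources)
    center = sum(m * xi for m, xi in sources) / total
    return total / abs(x - center)
```

Grouping sources into a tree of such clusters, and recursing only when a cluster is too close for the approximation, is what brings the overall cost below quadratic.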

### Symmetric Difference is a Metric for Convex Shapes Modulo Homothety

- Nov 17, 13h00
- E3-1 #3420
- Ji-Won Park (KAIST)

The symmetric difference can be used to measure similarity between two shapes. In this talk, we are concerned with similarity up to homothety, which is a combination of translation and scaling; there is a recent result of Yon et al. about it. However, their definition is not symmetric and thus not a metric. We introduce a variant of the symmetric difference which is a metric for convex shapes modulo homothety (so two convex shapes are regarded as the same iff one is a homothety of the other) and provide a proof.

### On the Computational Complexity of Perturbation Sums in Nuclear Physics

- Nov 3, 12h45
- E3-1 #3420
- Bastian Dörig (TU Darmstadt)

Perturbation Theory, although successful in many areas of physics, had long been considered irrelevant for nuclear states where series tend to diverge. Only recent work was able to mend that based on the Hartree-Fock partitioning. It thus raised the question of efficiently computing these approximating finite but large and multi-indexed sums. Using perturbation diagrams we exhibit their inherent structure and symmetries. We exploit them to derive upper and lower bounds on their complexity with a trade-off between approximation quality and runtime.