Rehearsal for presentation of KAIST Technical Report CS-TR-2024-428 at CCC'2024.
We use the spectrum of the Burau representation to introduce a series of tests for a braid being conjugate to a reducible braid, valid for every braid group. The method can be extended to exchangeable and composite braids and to Markov equivalence. We also determine the exact location of the eigenvalues of the Burau matrix (for a suitable parameter), with those of reducible braids filling a complex disk and those relevant to the exchange move a Descartes oval.
Abstract: A representation-free, logic-based approach to computing with infinite objects will be presented. The logic is intuitionistic first-order logic extended by strictly positive inductive and coinductive definitions. Algorithms can be extracted from proofs via realizability. The approach allows one to deal with objects like the real numbers and nonempty compact sets of reals. More generally, it allows one to deal with compact metric spaces that come equipped with a finite set of contractions whose ranges cover the space, and with continuous maps between such spaces. The computational power of the approach is equivalent to that of type-two theory of effectivity.
Abstract: Governing the Commons: The Evolution of Institutions for Collective Action is an examination of the nature of the commons and of the evolution and development of self-organisation and self-governance of those commons. The book will be of interest to those who seek an understanding of common-pool resources and their self-governance. I am going to present chapter one of the book. Chapter one begins with an examination of the commons and the presentation of three models of the commons. Professor Elinor Ostrom discusses the tragedy of the commons, the prisoner's dilemma game, and the logic of collective action. The Professor then concludes with a comparison of the Leviathan approach and the privatization approach to the governance of the commons, and to these two approaches she offers an empirical alternative. I recommend the book to every human being; it is an interesting and inspiring book. I am looking forward to seeing you at the meeting.
Abstract: iRRAM re-runs the whole computation whenever more precision is needed; in other words, it iterates over everything even if only a few parts of the program need extra precision. If an expensively calculated expression is actually not used in the program, for example because of the choose function, this is a waste of computational resources. Although iRRAM implements LAZY_BOOLEAN, we need more laziness to remove that kind of computation, so Sewon introduced lazy versions of iRRAM's types. I will discuss how I implemented the lazy type with iRRAM, and also briefly discuss lazy evaluation using call-by-need with iRRAM.
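To illustrate the underlying idea (a minimal plain-C++ sketch, independent of iRRAM's actual internals; the class Lazy below is purely hypothetical), a call-by-need value defers its computation until it is forced and caches the result afterwards:

#include <functional>
#include <iostream>
#include <optional>

// A value that is only computed when forced, and memoized afterwards
// (call-by-need).  iRRAM's real lazy types are more involved; this only
// shows the basic mechanism.
template <typename T>
class Lazy {
  std::function<T()> compute_;       // suspended computation
  mutable std::optional<T> cache_;   // memoized result
public:
  explicit Lazy(std::function<T()> f) : compute_(std::move(f)) {}
  const T& force() const {
    if (!cache_) cache_ = compute_();  // evaluated at most once
    return *cache_;
  }
};

int main() {
  Lazy<double> expensive([] {
    std::cout << "evaluating...\n";    // printed only if the value is needed
    double s = 0;
    for (int i = 1; i <= 1000000; ++i) s += 1.0 / i;
    return s;
  });
  bool needed = false;                 // e.g. the branch not taken by a 'choose'
  if (needed) std::cout << expensive.force() << "\n";
  std::cout << "done\n";               // the expensive sum was never computed
  return 0;
}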
The Mandelbrot set is a set of complex numbers whose exterior is computably enumerable. There is a conjecture that the Mandelbrot set itself is also computably enumerable. I will mention some facts about the set and explain what I have done with iRRAM to draw it in the plane.
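For orientation, the standard escape-time test underlying such plots can be sketched as follows; this uses ordinary doubles for brevity, whereas the iRRAM version would work with its exact REAL type and interval-sized pixels:

#include <complex>
#include <iostream>

// Escape-time test: returns true if |z_n| exceeds 2 within max_iter steps,
// which certifies that c lies in the (computably enumerable) exterior.
// Non-escape within max_iter steps proves nothing about membership.
bool escapes(std::complex<double> c, int max_iter = 1000) {
  std::complex<double> z = 0;
  for (int i = 0; i < max_iter; ++i) {
    z = z * z + c;
    if (std::abs(z) > 2.0) return true;
  }
  return false;
}

int main() {
  // Crude ASCII rendering of the region [-2,0.5] x [-1,1].
  for (double y = 1.0; y >= -1.0; y -= 0.1) {
    for (double x = -2.0; x <= 0.5; x += 0.05)
      std::cout << (escapes({x, y}) ? ' ' : '*');
    std::cout << '\n';
  }
}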
Abstract: Games have been part of human life since ancient times. However, there have been only a few attempts to study games using mathematics. Although there is an academic field called game theory, the notion of a game in game theory is somewhat different from the games we actually play in everyday life. Another attempt is narratology, which studies the game as a storytelling medium. However, unlike a movie or a novel, storytelling is not the main property of a game; most gamers concentrate on 'playing' more than on storytelling. In this talk, I introduce a young academic discipline called Ludology, especially mathematical Ludology, which studies and analyzes game 'playing' using mathematics and computer science. In this area, the formal description of a game is regarded as a mathematical model which satisfies the rules of the game. I will talk about the GDL formalism, its applications, the limitations of GDL, and other attempts to formalize a game.
Abstract: During this talk I will present some elements of analysis of the singularities of the Ricci flow.
Abstract: In this lecture I will present the notions of covariant derivative, Riemann curvature tensor and Ricci tensor.
Abstract: This is a brief and not very rigorous talk about the basics of computable analysis. I will cover the concepts of a computable number, sequence and function, and related results. If time allows, I will also explain the concept of a computable manifold, which I have recently been working on.
Abstract: In this second talk I will introduce the notion of tensor and I will present some examples of Riemannian manifold.
Abstract: This talk is the first of a series of talks which aim to present the main results concerning a powerful analytic method for studying the geometry and the topology of manifolds, the Ricci flow. During this first talk I will provide the audience with the notions of Riemannian geometry which are necessary for understanding the Ricci flow.
Abstract: For decades the study of type systems for programming languages has been one of the most active areas of computer science research, with important applications in software engineering, programming language design, high-performance compiler implementation, and security of information networks. This talk aims to introduce the area and will cover a range of related topics from type theory and their applications, including simple/polymorphic type systems, subtyping, and recursive types.
Abstract: After some brief review of basic concepts from the first lecture, I will give an introduction to the table access program Knotscape and its basic features, focusing primarily on its knot location tool.
Abstract: Representing a compact set C ⊂ ℝⁿ is equivalent to defining a characteristic function f:ℝⁿ→{0,1} where f(x)=1 if x ∈ C and f(x)=0 otherwise. When it comes to implementation, however, f is not decidable on the boundary. One solution is to make the characteristic function multivalued, allowing overlap around the boundary controlled by a precision parameter p. For instance, define f:ℝⁿ×ℕ→{0,1} that produces 1 if B(x,2⁻ᵖ⁺¹)∩C≠∅ and 0 if B(x,2⁻ᵖ)∩C=∅, where B(x,r) is the open ball centered at x with radius r. In this talk we show 1) how to plot an image of a compact set given such a characteristic function and 2) how to create a characteristic function corresponding to a compact set that covers the image of a given g:[0,1]ᵐ→ℝⁿ.
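As a concrete instance (a sketch with plain doubles; the set C, the name chi_disk and the plotting loop are chosen purely for illustration), take C to be the closed unit disk in ℝ²:

#include <cmath>
#include <iostream>

// Multivalued characteristic function of the closed unit disk
// C = {(x,y) : x^2 + y^2 <= 1}, following the specification above:
// output 1 is allowed whenever B((x,y), 2^{-p+1}) meets C,
// output 0 is allowed whenever B((x,y), 2^{-p}) misses C.
// (An exact-real version exploits the slack between the two thresholds
// to avoid exact comparisons; with doubles this is immaterial.)
int chi_disk(double x, double y, int p) {
  double d = std::sqrt(x * x + y * y) - 1.0;   // signed distance to C
  return d < std::ldexp(1.0, -p + 1) ? 1 : 0;  // any cutoff in [2^-p, 2^-p+1] is valid
}

int main() {
  // Crude ASCII plot at precision p = 3 (pixel spacing 2^-3).
  int p = 3;
  for (double y = 1.2; y >= -1.2; y -= 0.125) {
    for (double x = -1.2; x <= 1.2; x += 0.125)
      std::cout << (chi_disk(x, y, p) ? '#' : '.');
    std::cout << '\n';
  }
}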
Abstract: The computability of a probability measure has many different definitions, for example via sampling or via valuations. Unfortunately, not all of them are equivalent. In this talk, I'll explain the different kinds of computability of measures and the relations between them. From these relations, I suggest notions of complexity of a probability measure, namely sampling complexity and expectation complexity.
Abstract: Knot theory is a popular topic in topology/pure mathematics. This is an introductory talk on knot theory at a level suitable for graduate students of computer science. The goal is to explain what knots are, how to describe them in terms suitable for computing (diagrams and the DT code), and what link polynomials and braids are (including the Burau representation). At the end I plan to introduce the program KnotScape and explain how it can be used (or not) to draw knots and do various calculations with them.
Abstract: In machine learning and statistical inference algorithms, the conditional distribution plays the main role. For discrete random variables, computing the conditional distribution is quite intuitive, and there is no problem defining computability of such a distribution. For the continuous case, however, it is not an easy problem. Consider the following situation: let X and Y be dependent random variables, each uniformly distributed on [0,1]. What is the value of P(0<X<0.5|Y=1)? With the classical definition we cannot say anything about its value, even though such a probability is meaningful. In this talk, I'll introduce the extended definition of a conditional distribution using the concept of a probability kernel, and also define its Type-2 computability. I'll also show that there exists a conditional distribution which is not computable. Finally, some known results about computability of conditional distributions will be discussed. (This talk is based on the paper “Noncomputable conditional distributions” by Ackerman et al.)
Abstract: A well-optimized arbitrary-precision library, MPFR, works great in practice. Among its many components, the multiplication of two numbers is at the heart of MPFR, as most other components rely on it. Yet its implementation is based on integer multiplication, which incurs a sizable overhead for small numbers, namely below about 212 bits. To reduce this overhead, we explore an alternative implementation that utilizes the Floating Point Unit (FPU), maintaining two or four unevaluated double-precision floating-point numbers as the data structure. In our performance comparison, the two-double approach was roughly 6 times faster than MPFR.
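The core ingredient of such a two-double data structure is the classical error-free transformation of sums and products; the following is a generic sketch of that idea, not MPFR's or our actual implementation:

#include <cmath>
#include <cstdio>

// A number stored as an unevaluated sum hi + lo of two doubles.
struct dd { double hi, lo; };

// Knuth's TwoSum: a + b = s + e exactly, with s = fl(a + b).
dd two_sum(double a, double b) {
  double s = a + b;
  double bb = s - a;
  double e = (a - (s - bb)) + (b - bb);
  return {s, e};
}

// TwoProd via fused multiply-add: a * b = p + e exactly.
dd two_prod(double a, double b) {
  double p = a * b;
  double e = std::fma(a, b, -p);
  return {p, e};
}

// Addition of two double-double numbers (simple, unnormalized variant).
dd dd_add(dd x, dd y) {
  dd s = two_sum(x.hi, y.hi);
  double lo = s.lo + x.lo + y.lo;   // absorb the low-order parts
  return two_sum(s.hi, lo);         // renormalize
}

int main() {
  dd a = two_prod(1.0 / 3.0, 3.0);          // (1/3)*3 together with its rounding error
  dd b = dd_add(a, {1e-30, 0.0});
  std::printf("hi = %.17g, lo = %.3g\n", b.hi, b.lo);
}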
Abstract: We consider the problem of raising small polynomials with real coefficients to an exponential power. That is, we have an arbitrary but fixed polynomial, and we want to raise it to a large exponent; the goal is to compute the coefficients of the resulting polynomial. This problem arises from solving difference schemes, which themselves arise from approximating solutions to partial differential equations, but it is also of independent interest. We investigate the computational complexity of solving this problem. In particular, we show it is in the class GapP, which places it at the same difficulty as computing the inner product of exponential-size vectors.
This is a work in progress, joint with Gleb Pogudin, Svetlana Selivanova and Martin Ziegler.
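For orientation only: for moderate exponents the coefficients can be computed explicitly by repeated squaring over coefficient convolution, as in the sketch below; the complexity results above concern the regime of exponentially large exponents, where such explicit expansion is infeasible.

#include <cstdio>
#include <vector>

using Poly = std::vector<double>;  // coefficients, lowest degree first

// Naive convolution: coefficients of a(x) * b(x).
Poly mul(const Poly& a, const Poly& b) {
  Poly c(a.size() + b.size() - 1, 0.0);
  for (size_t i = 0; i < a.size(); ++i)
    for (size_t j = 0; j < b.size(); ++j)
      c[i + j] += a[i] * b[j];
  return c;
}

// p(x)^n by repeated squaring: O(log n) polynomial multiplications.
Poly power(Poly p, unsigned n) {
  Poly result{1.0};
  while (n > 0) {
    if (n & 1) result = mul(result, p);
    p = mul(p, p);
    n >>= 1;
  }
  return result;
}

int main() {
  Poly p{1.0, 1.0};          // 1 + x
  Poly q = power(p, 10);     // binomial coefficients C(10,k)
  for (double c : q) std::printf("%g ", c);
  std::printf("\n");
}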
Abstract: We work within a dependent type theory extended by an axiomatic type for classical propositions. The type theory has an interpretation in the category of assemblies over Kleene’s second algebra. The dependent type theory is a formal constructive language for multi-functions from computable analysis. For example, we may construct, in the type theory, a type $R$ and a monad $M$ such that
A type is classical if, during the construction of its instances, classical reasoning principles are allowed to be used. For example, any negated type is classical. By constructing a lex modality for double-negation sheaves, we show that the type theory has a universe of classical types. By studying the modality, we devise a translation function that transfers (classical) proofs from a classical language to our constructive one.
This provides us with a formal framework for studying how classical reasoning and constructive reasoning interplay in designing and verifying algorithms for exact real number computation.
* This is a work in progress based on a joint work with Andrej Bauer, Philipp G. Haselwarter and Egbert Rijke
Abstract: In this talk I will give an overview of Ariadne, a software tool originally developed for the verification of hybrid nonlinear dynamic systems. Since this is a challenging problem involving many fundamental problems of continuous mathematics, the computational kernel contains general-purpose numeric, functional and geometric calculi for working with real numbers, continuous functions, and open and compact subsets of Euclidean space. The semantics of the data types and operations is based on the framework of computable analysis, yielding clean and theoretically sound interfaces. Concrete implementations use rigorous numerical methods such as interval arithmetic and Taylor function models, and include functionality for linear algebra, automatic differentiation, solution of algebraic and differential equations, and nonlinear constrained optimization. Throughout the talk I will explain the main concepts used in the implementation, and give examples of the use of Ariadne, from simple real-number arithmetic to systems verification.
Abstract: This is an informal talk, presenting some results on the computational complexity of the problem of finding the solution of an initial value problem for ordinary differential equations with analytic right-hand side function. We consider algorithms based on the power series expansion of the function. We also discuss how we can enrich the power series by some additional information to define a representation for multivariate analytic functions and provide results on the computational complexity of uniformly computing the solution function. We further look at how the algorithms could be implemented in exact real computation frameworks and how their efficiency can be improved for practical applications. Finally, we briefly present some ideas on possible formalizations of the results in the Coq proof assistant based on the Incone library for computable analysis.
Abstract: In this presentation, I propose a practical implementation of a datatype representing every computable real number. Operations on the computable real numbers, such as addition, are also implemented.
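To fix ideas (a generic sketch, not necessarily the datatype proposed in the talk), a computable real can be modelled as a function that, given a precision parameter p, returns an approximation within 2^-p; addition then simply requests one extra bit of precision from each argument:

#include <cstdio>
#include <functional>

// A computable real, modelled as: approx(p) returns a value within 2^-p of x.
// (Sketch only: doubles stand in for arbitrary-precision dyadic rationals,
// so the error analysis below ignores the final rounding of the addition.)
struct CReal {
  std::function<double(int)> approx;
};

CReal from_double(double d) {
  return {[d](int) { return d; }};
}

// (x + y): ask each argument for error <= 2^-(p+1), so the sum is within 2^-p.
CReal add(CReal x, CReal y) {
  return {[x, y](int p) { return x.approx(p + 1) + y.approx(p + 1); }};
}

int main() {
  CReal third = {[](int p) {            // approximations of 1/3 = 0.010101..._2
    double v = 0, step = 0.5;
    for (int i = 0; i <= p; ++i, step /= 2)
      if (i % 2 == 1) v += step;
    return v;
  }};
  CReal s = add(third, from_double(1.0));
  std::printf("1 + 1/3 ~ %.10f (p = 30)\n", s.approx(30));
}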
Abstract: The definition of continuity of a function based on topology is well established and widespread in modern mathematics. However, there is no unanimous definition of continuity for multivalued functions. One must seek a definition that satisfies as many desirable properties as possible: recovery of continuity of single-valued functions, closure under composition and restriction, quantitativeness, etc. This talk discusses several different approaches toward formalizing continuity of multivalued functions, with computable analysis as the main area of application. First, an impossibility result for the topological approach is shown. Second, Matthias Schröder's work with the lower Vietoris topology is presented. Third, the sequential continuity approach is discussed.
Abstract: I develop a library called Incone that is meant to be used for the formalization of algorithms from computable analysis in the type-theory-based proof assistant Coq. The algorithms encountered in this setting often use potentially diverging while loops, which can make their formalization a pain. That such algorithms are difficult to handle is a well-known problem in type theory and has led to the development of a number of methods for easing the process. Incone adapts and extends some of these ideas from type theory to provide tools optimized for the setting typically encountered in computable analysis. I will illustrate the basics of this machinery using the task of finding the multiplicative inverse of a real number as a simple example. The talk will not require any background in type theory or formal proofs, but it does assume familiarity with computable analysis.

A rough outline can be given as follows. In a first step one considers a bound on the number of loop iterations to be given as an additional argument, thus obtaining a primitive recursive procedure and moving the unbounded search performed by the while loop to a proof of an existential statement on the specification side. Subsequently, multivalued semantics are used to formulate a stronger correctness statement for the modified algorithm that can be proven in a more elementary way, without the need for inductive reasoning. From the adapted procedure and the elementary proof, Incone can fully automatically extract a directly executable algorithm producing the values of the function, together with a proof of its correctness, by use of Coq's dependent type system.

In computable analysis one is often interested in more than just the values of the function, though. A common set of information about a function is a name in the function space representation (or an algorithm computing such a name). Thus, I will discuss a simple set of efficient continuity information that can be added and is sufficient to allow for an automatic extraction of (an algorithm computing) such a name. This procedure is also available in Incone and I will illustrate its use on the above example of inversion. If there is enough time I will additionally consider multiplication of real numbers as an example where the approach suggests that the user make the algorithm more amenable to parallelization.

None of the automatic procedures mentioned in this talk inspect the details of the algorithms, and thus they remain functional if the traditional notion of computability is replaced by other concepts; in particular they are stable under relativization. If I still have some time after all of this, I will explain to what extent manual specification of the continuity information can be avoided, and the relations to work about precompletions of representations and completions of represented spaces.
The talk is based on joint work with Holger Thies and Michal Konecny. It may feature additional examples that I developed in detail when discussing with Tatsuji Kawai.
Abstract: We develop a framework to define and verify the correctness of exact real arithmetic operations using the Coq proof assistant. Our work is built on top of the Incone library, a formalization of ideas from computable analysis in Coq. Incone provides a generalized notion of a represented space similar to the one known from computable analysis to assign computational content to infinite objects such as real numbers and functions. Our framework allows us to verify operations on real numbers without explicitly mentioning the underlying representation and therefore makes it possible to easily exchange and compare different representations. Further, we can use Coq’s code extraction feature to generate efficient Haskell programs for exact real computation. (This talk is based on joint work with Michal Konečný and Florian Steinberg)
Abstract: Every compact metric space has a quantitative standard representation, defined in terms of fast Cauchy sequences. This representation is quantitatively admissible, meaning that every other representation can be translated to it efficiently, not just continuously. Let $(X, \delta_X)$ and $(Y, \delta_Y)$ be represented compact metric spaces. We can interleave $\delta_X$ and $\delta_Y$ to construct a representation $\delta$ for $X \times Y$. However, $X \times Y$ is also a compact metric space, so it has a standard representation $\sigma$ constructed from scratch. Intuitively, $\delta$ seems better than $\sigma$ in that $\delta$ has more transparent structure; but can this benefit be stated formally? One argument could be that the projection $(x,y) \mapsto x$ is computable with respect to $\delta$, but not $\sigma$. This computability argument, however, does not seem so secure.
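For concreteness, the interleaving referred to above is the standard construction: $(p \otimes q)(2n) := p(n)$ and $(p \otimes q)(2n+1) := q(n)$, with $\delta(p \otimes q) := (\delta_X(p), \delta_Y(q))$; the projection $(x,y) \mapsto x$ is then realized simply by discarding the odd-indexed symbols of a $\delta$-name.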
Abstract: Some applications, such as numerical analysis, require more precision than 64-bit floating point provides. One popular solution extends the mantissa and exponent, as implemented in GMP. Thanks to the Fast Fourier Transform it runs fast, namely in O(n log n loglog n). For relatively small sizes, however, it is known that hardware-backed methods are better. We begin to build the latter method to find the break-even point where the performance of the two methods meets.
Abstract: The category of assemblies over Kleene’s second algebra has both an object for the classical real numbers and an object for the constructive Cauchy real numbers. Taking the ‘constructive’ dependent type theory introduced in the previous talk as our language for the category, we present constructions of the classical Dedekind real numbers and the constructive Cauchy real numbers. One interesting point of the constructions is that they do not require a rule for general quotients or setoids. We will see that the computational content of a function on constructive real numbers is the exact real arithmetic we are familiar with. On the other hand, a classical real number acts as a specification of a constructive real number.
This talk includes a short review of the category of assemblies and the dependent type theory. Hence, the previous talk is not necessary for attending this one.
Abstract: Multivalued computation is essential in computable analysis. For example, deciding the soft sign of a real number up to some tolerance factor is necessarily multivalued. The category of assemblies can express multivalued computation internally: for any two represented sets A and B, there is an object in the category whose points are precisely the computable multivalued functions from A to B. We devise a dependent type theory as a language for the category. The language can express multivalued functions, existential statements and disjunctions, and it has a realizability interpretation in the category of assemblies.
In this lecture we introduce some of the main normed spaces used in the theory of PDEs: spaces of finitely many times continuously differentiable functions and of Sobolev functions, their properties and examples.
We also briefly overview “Part III: Methods of solving PDEs”, emphasizing the motivation for developing a general computational framework.
We state more formally and generally the classical solution theorems established last time; we then proceed to ``weak'' solutions and computability/noncomputability examples.
This lecture mainly focuses on examples of the (hyperbolic) transport and wave equations. We simultaneously solve them and prove basic theorems on the well-posedness of the related initial-value (IVP) and boundary-value (BVP) problems.
At the same time we gain some intuition about the kinds of issues that naturally arise for PDEs, including:
- characteristic surfaces and relation to ODEs;
- functional spaces to consider (from “classical” to “weak” solutions);
- domains of existence and uniqueness;
- dependence on the number of variables and unknown functions;
- (non)homogeneity;
- also computability or noncomputability in different functional classes/settings.
This smoothly takes us from the introductory part to Part II: Main functions and theorems.
Brief contents of Part II:
1. Classical solutions to PDEs
2. Analytic PDEs: Cauchy-Kovalevskaya theorem and Hadamard's examples
3. Weak solutions and Sobolev spaces
4. Connection to computability: results about computability/noncomputability of the same problem in different normed spaces
In this lecture we continue the introductory part of the course by giving many examples of different PDEs and systems of PDEs arising from applications, and fitting them into the general classifications (linear/ quasilinear/ nonlinear; hyperbolic/ parabolic/ elliptic, etc.).
We explain what a well-posed problem is and give examples of well-posed and ill-posed problems for PDEs of different types, and we explain that computability and computable dependence are (much stronger) effective versions of solvability and continuous dependence.
This is the first lecture of an introductory mini-course on PDEs in relation to computability and complexity theory within the recently fast-developing exact real computation approach.
The course gives a brief overview of the main concepts and tools of PDEs and recent results on their computational properties, as well as open questions.
Brief contents of Part I:
0. Motivations from computability, short reminder about ODEs (concepts and facts needed in the course)
1. Reminder of basic definitions (PDEs, their solutions etc) with examples
2. Possibility of describing the same process as a first-order system and as a higher-order equation
3. Main types of PDEs (hyperbolic, parabolic, elliptic)
4. Well-posed initial-value and boundary-value problems, their properties and examples
5. Connection to computability: solvable vs computable; continuous dependence vs computable dependence
The course will consist of 5-6 lectures (depending on how fast we go), followed by
Part II: Main functions and theorems
Part III: Methods of solving PDEs
A detailed program and slides will be distributed to listeners by email.
Abstract: In both the theoretical and practical parts of computer science, randomization is a powerful technique. Many difficult problems can be solved in a much simpler way by algorithms that 'roll dice', giving efficient and correct solutions with high probability. For these kinds of randomized algorithms, we must be able to sample from a fixed probability distribution, and we consider this problem for continuous objects in the sense of computable analysis. First, we prove that any Borel probability distribution on a second-countable T_0 space can be simulated by infinite fair coin flips; this result extends [Schroder&Simpson'05]. Second, we give an algorithm to sample a random function from C[0,1] with the Wiener measure (aka Brownian motion), and characterize the conditions under which this algorithm is computable. Joint work with Martin Ziegler, Willem Fouche, Donghyun Lim, Sewon Park and Matthias Schröder.
Abstract: Grassmannians are the sets of subspaces of a given dimension of a vector space, such as Euclidean space, and they are equipped with the operations Minkowski sum (join), intersection (meet), orthogonal complement, and projection. Under the paradigm of Exact Real Computation, and acknowledging that testing for equality is undecidable in this paradigm, we give specifications and devise algorithms for representing Grassmannians and these operations, verifying their correctness and avoiding problems with computability. The algorithms are also realized and implemented in iRRAM. Joint work with Donghyun Lim, Sewon Park and Martin Ziegler.
Abstract: We establish complexity bounds for computing vector inner product and matrix powers with real polynomial-time computable entries, in the rigorous sense of computable (and complexity) analysis. We prove that for exponentially long vectors the inner product belongs to the complexity class FP^GapP, while raising a matrix of exponential dimension to exponential power is in PSPACE provided the powers are uniformly bounded. For the special case of 2-band circulant matrices naturally arising in partial differential equations, we devise a polynomial-time algorithm based on efficient polynomial (binomial) powering. Joint work with Gleb Pogudin, Svetlana Selivanova and Martin Ziegler.
Abstract: A power series with positive radius of convergence constitutes the germ of an analytic function via analytic continuation. The latter process is famous in Pure Mathematics, but infamous in Numerics for its instability. Exact Real Computation [arXiv:1608.05787v4], realized in the iRRAM C++ library, provides real numbers as an abstract data type devoid of rounding errors. Joint work with Hyunwoo Lee and Martin Ziegler.
Abstract: Har-mo-ny is a commercial reachability game whose objective is to reach the goal state by successively applying legal moves starting from the initial state. State space search is a general method applicable to this type of game. We present heuristics that guide the search so as to reduce the number of states explored for the game of Har-mo-ny. The heuristics are implemented and evaluated.
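As a generic illustration of heuristic-guided state space search (a toy state space with moves +1 and *2, not Har-mo-ny's actual rules or the heuristics of the talk), greedy best-first search looks as follows:

#include <cmath>
#include <cstdio>
#include <functional>
#include <queue>
#include <set>
#include <utility>
#include <vector>

// Toy reachability game: states are integers, legal moves are +1 and *2,
// the initial state is `start`, and the goal is to reach `target`.
// The heuristic |target - s| guides a greedy best-first search, which
// typically expands far fewer states than uninformed breadth-first search.
int solve(int start, int target) {
  auto h = [&](int s) { return std::abs(target - s); };
  using Node = std::pair<int, int>;                    // (heuristic value, state)
  std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
  std::set<int> seen;
  open.push({h(start), start});
  int expanded = 0;
  while (!open.empty()) {
    Node cur = open.top();
    open.pop();
    int s = cur.second;
    if (!seen.insert(s).second) continue;              // already expanded
    ++expanded;
    if (s == target) return expanded;
    for (int next : {s + 1, s * 2})                    // legal moves
      if (next <= 4 * target && !seen.count(next))
        open.push({h(next), next});
  }
  return -1;                                           // unreachable
}

int main() {
  std::printf("goal reached after expanding %d states\n", solve(1, 1234));
}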
Abstract: A world without the internet is unimaginable, as it has permeated our lives in great depth. Within a couple of decades, information and the number of users have increased dramatically, along with cybercrime. Among the many types of cybercrime, we deal with the case in which an attacker can execute arbitrary code on the client side when the client connects to the attacker's web server. In particular, combined with the widely used Flash Player, the damage is expected to be even bigger. We analyze and implement one vulnerability of this kind.
Abstract: We introduce a dependent type theory called ertt, which axiomatically has a type of exact real numbers. Related terms and types are assumed in the type theory, but its expressiveness does not go beyond Type-2 computability. We also introduce a simple functional programming language called erlc (exact real lambda calculus) and devise a proof extraction mechanism from proofs in the type theory to programs in erlc. The type theory, the programming language and the extraction mechanism are implemented. A constructive intermediate value theorem is proved, and an erlc program finding a root of a real function is extracted from the proof.
Abstract: This is a contribution to computable analysis in the tradition of Grzegorczyk, Lacombe, and Weihrauch. The main theorem of computable analysis asserts that any computable function is continuous. The solution operators for many interesting problems encountered in practice turn out to be discontinuous, however. It hence is a natural question how much partial information may be obtained on the solutions of a problem with discontinuous solution operator in a continuous or computable way. We formalise this idea by introducing the notion of continuous envelopes of discontinuous functions. The envelopes of a given function can be partially ordered in a natural way according to the amount of information they encode. We show that for any function between computably admissible represented spaces this partial order has a greatest element, which we call the universal envelope. We develop some basic techniques for the calculation of a suitable representation of the universal envelope in practice. We apply the ideas we have developed to the problem of locating the fixed point set of a continuous self-map of the unit ball in finite-dimensional Euclidean space, and the problem of locating the fixed point set of a nonexpansive self-map of the unit ball in infinite-dimensional separable real Hilbert space.
Abstract: In this talk we give an overview of recent achievements and open questions about computational complexity of finding solutions to partial differential equations (PDEs), as well as ways to implement them in exact real computation packages. The talk includes updates on the joint work with Ivan Koswara, Gleb Pogudin and Martin Ziegler on linear PDEs and matrix/operator powering.
Abstract: In this talk, we present infinite time Turing machines (ITTM), from the original definition of the model to some new infinite time algorithms.
We will present algorithmic techniques that allow us to highlight some properties of the ITTM-computable ordinals. In particular, we will study gaps in ordinal computation times, that is to say, ordinal times at which no infinite time program halts. We will explain some properties of these gaps.
Abstract: Floating-point numbers, though used widely for numerical calculations, are inexact and suffer from imprecision in computations. Rigorous computability with and of real numbers and functions has been studied since the works of Turing (1937) and Grzegorczyk (1957); Exact Real Computation is a modern approach to ameliorating the issues arising from floating-point numerics by introducing high-level abstract data types for real numbers (hiding their internal representation) and continuous functions. It provides *partial* comparisons, capturing that equality of two real numbers is known to be equivalent to the complement of the Halting problem. A solution has been to introduce primitive operations under the Kleene and Priest three-valued logic {0,1,↓}, including a continuous conditional. With these tools one can express the join of two single-variable functions. We demonstrate that the join of more functions defined on more real variables can also be expressed explicitly in a similar way.
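To make the three-valued primitives concrete, here is a plain-C++ sketch over approximations with explicit error bounds; it is not the actual Exact Real Computation API, and the names lt and choose are illustrative only:

#include <cstdio>

// Kleene/Priest truth values: a comparison may be undecided ('↓', here Bot)
// when the available approximations are too coarse to separate the numbers.
enum class K { False, True, Bot };

// Partial comparison "x < y", given approximations ax, ay with
// |x - ax| <= err and |y - ay| <= err.
K lt(double ax, double ay, double err) {
  if (ax + err < ay - err) return K::True;    // certified x < y
  if (ay + err < ax - err) return K::False;   // certified y < x
  return K::Bot;                              // undecided at this precision
}

// 'choose': select any branch whose guard is certified True (at least one must
// eventually be, after refining precision).  This multivalued primitive
// underlies the continuous conditional and the join of functions.
int choose(K c0, K c1) {
  if (c0 == K::True) return 0;
  if (c1 == K::True) return 1;
  return -1;                                  // neither certified yet: refine
}

int main() {
  double x = 0.4999, err = 0.001;             // a coarse approximation of some real x
  // Join of f (valid where x < 0.51) and g (valid where x > 0.49), with f = g
  // on the overlap: there both guards may fire, and either branch is acceptable.
  int branch = choose(lt(x, 0.51, err), lt(0.49, x, err));
  std::printf("chosen branch: %d\n", branch);
}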
Abstract: 1D Brownian Motion (aka the Wiener Process) is a probability distribution on the space C[0;1] of continuous functions f:[0;1]→R with f(0)=0. Although the pointwise and correlated distribution of f's values at any finite choice of arguments is just a multivariate Gaussian distribution, the question of whether f itself is computable remains open. We discuss several approaches to settling the conjecture positively, and for each point out the corresponding key property to prove.
Abstract: According to Haar's Theorem, every compact group G has a unique right- and/or left-invariant regular Borel probability measure mu. We call the HAAR AVERAGE (of G) the functional integrating any continuous function f:G→R with respect to mu. This generalizes, and recovers for the additive group G=[0;1) mod 1 (aka the multiplicative complex circle group), the usual Riemann integral. Previous work [CiE'2018, FWAC'18] had established the computability of Haar averages for a large class of computable compact (metric) groups. Here we focus on the two arguably most practically important continuous compact metric groups: SO(3) of orthogonal real 3×3 matrices of determinant 1, and SU(2) of unitary complex 2×2 matrices of determinant 1.
We present (i) explicit algorithms in the paradigm ERC [arXiv:1608.05787], (ii) relate their computational bit-cost to the discrete complexity class #P_1, and (iii) prove that to be optimal. Again, this generalizes the well-known 1D real case [Ko91, Theorem 5.32].
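For orientation, on SO(3) the Haar average has the familiar explicit form in (ZYZ) Euler angles: ∫_{SO(3)} f dμ = (1/8π²) ∫₀^{2π} ∫₀^{π} ∫₀^{2π} f(R_z(α)·R_y(β)·R_z(γ)) · sin β dα dβ dγ, where R_z and R_y denote rotations about the z- and y-axes; the factor 1/8π² normalizes the total mass to 1.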
Abstract: The Type-2 Theory of Effectivity (TTE) provides a framework which allows one to express and reason about real number computations on digital computers or Turing machines in a rigorous manner, whereas Type-1 computability is concerned with problems on discrete inputs.
I will report on how Type-2 computations can solve an important Type-1 problem: Satisfiability of formulas over non-linear real constraints. An implementation is available in form of an SMT solver [1] which in a conflict-driven way computes local linearizations of real functions like sin(x), exp(x), etc.
The second aspect covered is how conceptually very different TTE implementations [2],[3] can be abstracted by an imperative low-level language with intuitive real-point semantics, as opposed to sequences of intervals. Efficient execution makes the language suitable as a target for compiling higher-level Type-1 and Type-2 languages.
Abstract: The talk presents the recent implementation of a programming language for finite computations on the reals. Compiler targets are fairly simple processors using a variety of standard and non-standard number codes, and networks of such processors. Targeted applications are embedded signal processing and robotics systems. Although the language design strives for simplicity, using a single base type, a single structure of brackets within the definitions of algorithms, and a functional style, the compiler is not as simple and uses some non-standard techniques. E.g., mathematical data types such as finite-dimensional real algebras simply redefine the multiply operation for tuples of a certain size; the compiler then automatically substitutes the multiplies in the available linear vector and polynomial operations. Other automatic functions of the compiler extend functions and operations to tuples of numbers, replace the real operations by rounded operations during code generation, and provide certain mathematical features like automatic differentiation. Also, the target systems are highly parallel and need to deal with real time. The compiler works in conjunction with a virtual machine that is used for constant folding and for executing parts of, or simulating entire, applications.
Abstract: The problem of powering a large (sparse) real matrix or polynomial is interesting due to its many numerical applications, but also because of the new effects it exhibits: as opposed to the integer setting, a real matrix or polynomial may have bounded powers! We relate the computational cost of various variants of the problems to different classical complexity classes between NC and PSPACE.
Abstract: It is hard for many elderly people to get up again after they fall. Falls may result in psychological disorders, stress, severe injuries and, in the worst case, even death. According to statistics, more than 30 percent of adults aged 65 and over live alone, and up to 35% of them are expected to experience a fall each year. Therefore, it is necessary to have a real-time monitoring system for elderly status that contacts their family members and caregivers in time. Representative existing technologies to mitigate this problem include wearable sensor devices and techniques making use of machine learning. Even though these techniques are efficient and highly reliable, the former has the downside of not being worn at all times, while the latter requires costly resources along with big datasets, which cannot be acquired for everyone due to privacy reasons. Our real-time fall detection system for the elderly uses classic computer vision to detect emergency situations in the following manner. First, the background is subtracted to track the foreground moving object. Next, the contours of the moving object are extracted and three components defining a fall probability are calculated. If the fall probability reaches a defined threshold, we presume that the person has fallen and send an alarm to the family members of the person in care.
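An illustrative OpenCV skeleton of such a pipeline is sketched below; it is not the actual system: a single crude bounding-box score stands in for the three fall-probability components, and the threshold value is hypothetical.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// Sketch: background subtraction -> contour of the moving object -> a crude
// fall score from the bounding box, with an alarm when the score is high.
int main() {
  cv::VideoCapture cap(0);                         // camera stream
  auto bg = cv::createBackgroundSubtractorMOG2();
  const double fall_threshold = 0.7;               // hypothetical threshold
  cv::Mat frame, mask;
  while (cap.read(frame)) {
    bg->apply(frame, mask);                        // 1) subtract the background
    cv::threshold(mask, mask, 200, 255, cv::THRESH_BINARY);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours) {
      if (cv::contourArea(c) < 500) continue;      // ignore small blobs
      cv::Rect box = cv::boundingRect(c);          // 2) the tracked moving object
      // 3) one possible fall indicator: a wide, low bounding box (person lying down).
      double score = double(box.width) / double(box.width + box.height);
      if (score > fall_threshold)
        std::cout << "possible fall detected -- notify caregivers" << std::endl;
    }
    cv::imshow("foreground", mask);
    if (cv::waitKey(30) == 27) break;              // Esc to quit
  }
}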
Abstract: Currently, Coq has a standard library for axiomatic real numbers. The library assumes decidability of real number (in)equality. Hence, using the library, it is in many cases impossible to obtain the right computational content from a proof. In this talk, I introduce an alternative axiomatization of real numbers in which decidability is not assumed, while the real numbers still remain abstract. The talk shows how a proof of a (constructive) Intermediate Value Theorem using these real numbers is turned into a root-finding program in AERN2, a Haskell-based exact real arithmetic language.
Abstract: Elimination of unknowns is a fundamental tool for studying solutions of equations (linear, polynomial, differential, etc.). The elimination problem for differential (resp., difference) equations can be stated as follows. Given a system of differential (resp., difference) equations in m + n unknowns, determine whether there is a consequence of the system involving only the first m variables, and find such a consequence if it exists.
In the talk, I will present solutions to this problem for differential and difference equations by deriving an explicit upper bound on the number of prolongations sufficient to reduce this elimination problem to the well-studied elimination problem for systems of polynomial equations. This gives the first elimination algorithm for difference equations and a new elimination algorithm for differential equations. Our approach also allows for randomized computation to significantly increase efficiency.
I will describe applications of these techniques to discrete mathematics (in particular, to enumerative combinatorics) and modeling (in particular, to parameter identifiability of differential models).
The talk is based on a joint work with A. Ovchinnikov and T. Vo and on a joint work with A. Ovchinnikov and T. Scanlon.
Abstract: This talk presents an introduction to constructive reverse mathematics (CRM) with some recent results. The aim of CRM is to classify various theorems in intuitionistic, constructive recursive and classical mathematics by logical principles, function existence axioms and their combinations. We review some results in classical reverse mathematics (Friedman-Simpson program), and axioms of intuitionistic and constructive recursive mathematics with their consequences to motivate CRM. Then we present some results of CRM with and without countable choice, and recent results in CRM without countable choice.
Abstract: Proofs can have computational content, and the most direct method to extract it is via a realizability interpretation in the form given by Kreisel. We present a recent case study: Brouwer's fan theorem for uniformly coconvex bars. As an application we prove a constructive version of the Heine-Borel Theorem, for uniformly coconvex coverings. We also discuss some relevant general issues, among them the notions of computability and equality in higher types.
Abstract:
Among the many (polynomial-time) equivalent ways of encoding real numbers, the signed-digit expansion stands out beyond, say, the encoding by a sequence of fixed-point approximations:
It renders addition computable in optimal linear, as opposed to quadratic, running time in the streaming model with bit-cost; closely related, it is Hölder-continuous as a partial mapping from Cantor space onto (i.e. as a so-called TTE representation of) the unit interval.
We extend and generalize this to arbitrary compact metric spaces, gauging optimality by comparing the space's entropy to the representation's modulus of continuity.
(Joint work with Svetlana Selivanova and Martin Ziegler)
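For reference, the signed-digit expansion mentioned above represents x ∈ [-1,1] as x = Σ_{i≥1} d_i·2^{-i} with digits d_i ∈ {-1,0,+1}; this redundancy (every real has many expansions) is what allows an output digit of a sum to be committed after inspecting only a bounded number of input digits, which yields the linear-time addition in the streaming model.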
Abstract: “Deterrence is the art of producing in the mind of the enemy the fear to attack” - Dr. Strangelove. Deterrence Theory, as developed by Herman Kahn, Thomas Schelling and others, led to the doctrine of Mutual Assured Destruction. Yet it seems paradoxical, if not contradictory, to credit the arms race with maintaining, rather than jeopardizing, geopolitical stability during the Cold War.
We revisit the Game-theoretic perspective on Deterrence Theory in order to define, analyze, and empirically assess a rigorous mathematical notion of stability common in Numerics and Geometry: the distance/perturbation of payoffs that do not change the 'type' of the game. In order to formally classify all 'types' of games, we propose, compare, and generalize different equivalence relations on 2×2 games, inducing from 3 (coarsest) to 84 (finest) classes. Our aim is to capture politically relevant properties of possible outcomes under various types of actors like rational, defensive, offensive, …
Abstract: We will introduce a quasilinear (in terms of bit-cost) and implementation-friendly version of the Fast Multipole Method (FMM) in order to solve the real version of Trummer’s Problem, which is closely related to the N-Body Simulation Problem with pairwise Coulomb forces. When solving the differential equation for the aforementioned problem numerically, one has to solve Trummer’s Problem in each iteration of the Euler Method.
Abstract: A period is the difference between the volumes of two sets bounded by systems of polynomial inequalities with rational coefficients. Periods are receiving increasing interest in Algebraic Model Theory as they have finite descriptions (the polynomials’ coefficients) and include all algebraic reals as well as some transcendentals. Recent research has located their worst-case complexity in low levels of the Grzegorczyk Hierarchy. The present work introduces, analyzes, and evaluates a rigorous algorithm for computing periods up to error 2^{−n}. Furthermore, we show that computing periods up to error 2^{−n} is NP-hard.
Abstract: We consider an interval matrix which contains a singular matrix. The aim of the work is, given such an interval matrix, to compute a set of interval vectors, each containing a vector, such that these vectors span the singular matrix's kernel. Considering the famous example of Reichmann (1979), the work analyzes (i) when an interval Gaussian algorithm succeeds, (ii) how the width of the computed intervals is bounded, and (iii) the bit-cost of the interval computation.
Abstract: Many real-world processes, such as those in biology and chemistry, are modeled using systems of ordinary differential equations with parameters. Given such a system, we say that a parameter is globally identifiable if it can be uniquely recovered from input and output data. The main contribution of this paper is to provide theory, an algorithm, and software for deciding global identifiability. First, we rigorously derive an algebraic criterion for global identifiability (which is an analytic property), yielding a deterministic algorithm. Second, we improve the efficiency by randomizing the algorithm while guaranteeing the probability of correctness. Our implementation and comparison with several current software packages such as DAISY, COMBOS and GenSSI 2.0 show the superiority of our algorithm. (Joint work with H.Hong, A.Ovchinnikov and G.Pogudin)
Abstract: For over two decades, Exact Geometric Computation (EGC) has provided a paradigm in Computational Geometry for the correct implementation of geometric algorithms. It is the most successful approach for addressing numerical nonrobustness, leading to successful libraries and practical algorithms in many areas. We review some pressing reasons to extend this paradigm:
This talk describes a research program to develop "soft" (i.e., purely numerical) approaches for addressing the above issues. We illustrate these ideas by looking at recent work in several areas:
Some common themes emerge from the above list: we replace the exact algebraic model of computation (Real RAM or BSS Model) by a model based on numerical interval approximations. The main algorithmic paradigms are subdivision and iteration. We introduce an input resolution parameter (epsilon), but it must be used in a novel “soft” way. Soft versions of classical (“hard”) geometric predicates must be developed.
Abstract: The system architecture of networks of FPGA nodes holding soft processors imposes some extra requirements on the programming tools to be applied. There will be many fairly simple processors working in parallel on a mix of standard and non-standard number codes. Memory breaks up into disjoint sections, each accessible to only a few processors. FPGA- and SoC-based systems are known to be difficult to use. Therefore it has been an important additional requirement to end up with a small, easy-to-use abstract language. The present talk gives some keys to how simplicity can be achieved, and has been achieved in the language π-Nets. The crucial support for multiple number types is through a method of functorial substitution starting from a unique abstract type of real number. Functorial substitution is also applied to systematically overload symbols to keep the number of program symbols small, and to integrate some high-level compiler functions reinterpreting the intermediate code in different fashions, one being the well-known method of automatic differentiation.
Abstract: This talk presents a method for building high-performance embedded systems from FPGA chips and SoC devices integrating both an FPGA and a processor system. The focused applications are digital signal processing, robotics and embedded high-performance computing, all with emphasis on number crunching. The proposed system architecture is based on networks of small-to-mid-size FPGA nodes linked by high-speed interfaces, with each FPGA node holding a sub-network of software-defined processors and being equipped with external memory. A hardware operating system has been defined implementing the required networking and multi-threaded control functions, such that application-specific systems only have to configure the right mixes of computational circuits on top of it, typically from a library residing in an attached Flash memory. Compute circuits attaching to the OS comprise a non-standard floating-point ALU and an ALU supporting complex and quaternionic operations. A sample system presented in some detail links 50 SoC nodes into a highly efficient network. The system architecture is supported by a special parallel programming language targeting the simple processors with their non-standard arithmetics; it is the topic of another talk.
Abstract: A sub-Riemannian space is a manifold with a selected distribution (of “allowed movement directions”, represented by the spanning vector fields) of the tangent bundle, which spans, by nested commutators up to some finite order, the whole tangent bundle. Such geometries naturally arise in nonholonomic mechanics, robotics, thermodynamics, quantum mechanics, neurobiology etc., and are closely related to optimal control problems on the corresponding configuration space.
As is well known, there exists an intrinsic Carnot-Caratheodory metric generated by the “allowed” vector fields. Studying Gromov's tangent cone to the corresponding metric space is widely used to construct efficient motion planning algorithms for related optimal control systems. We generalize this construction to weighted vector fields, which provides applications to optimal control theory for systems nonlinear in the control parameters. Such a construction requires, in particular, an extension of Gromov's theory to quasimetric spaces, since the intrinsic C-C metric does not exist in this case.
Abstract: A period is the difference between the volumes of two semi-algebraic sets. Recent research has located their worst-case complexity in low levels of the Grzegorczyk Hierarchy. The present work introduces, analyzes, and evaluates three rigorous algorithms for rigorously computing periods: a deterministic, a randomized, and a 'transcendental' one.
(Joint work with Sewon Park, to be presented at WALCOM 2018)
Abstract: We discuss possibilities of applying methods of numerical analysis and computer algebra to investigate computability and complexity of finding solution operators of systems of PDEs. Computability is considered in the sense of K. Weihrauch’s TTE approach. We also report on our recent joint results with V. Selivanov for the example of symmetric hyperbolic systems.
Abstract: PAC learning is a theoretical framework for the rigorous mathematical analysis of machine learning proposed in 1984 by Leslie Valiant. We motivate the approach, introduce the basic concept(s), survey the most important theorem(s), sketch an example proof, and present an example application.
Abstract: Adrian, Radu, and Sebastian suggested a software strategy for implementing an Unpredictable Random Number Generator (URNG) based on the functionality of the operating system’s thread scheduler (2008). Based on their work, I build a simpler but powerful URNG which produces unpredictable random bits. Using the NIST statistical test suite, I check that a C++ implementation of this generator can produce high-quality random numbers faster than the strategy of Adrian and the two others.
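A minimal sketch of the general idea, harvesting unpredictability from scheduling and timing jitter (not the cited generators, and without the post-processing/whitening a real URNG would need):

#include <chrono>
#include <cstdio>
#include <thread>

// Race two busy threads and keep the least significant bit of the measured
// wall-clock duration; how long the race takes depends on how the OS
// scheduler interleaves the threads with the rest of the system.
unsigned jitter_byte() {
  unsigned byte = 0;
  for (int bit = 0; bit < 8; ++bit) {
    auto worker = [] {
      volatile unsigned long c = 0;
      for (int i = 0; i < 20000; ++i) c = c + 1;
    };
    auto t0 = std::chrono::high_resolution_clock::now();
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    auto dt = std::chrono::high_resolution_clock::now() - t0;
    byte = (byte << 1) | (unsigned)(dt.count() & 1);   // keep the noisiest bit
  }
  return byte;   // raw bits only; a real URNG would still whiten them
}

int main() {
  for (int i = 0; i < 8; ++i) std::printf("%02x ", jitter_byte());
  std::printf("\n");
}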
Abstract: Inspired and guided by the iRRAM C++ library (Müller 2001), we formally specify a programming language for the paradigm of EXACT REAL COMPUTATION: reliably operating on encapsulated CONTINUOUS data types such as (not necessarily algebraic) real numbers — imperatively and exactly (no rounding errors) with primitives computable in the sense of Recursive Analysis and a necessarily modified multivalued=non-extensional semantics of tests. We then propose a complete logical system over two types, integers and real numbers, for rigorous correctness proofs of such algorithms. We extend the rules of Hoare Logic to support the formal derivation of such proofs, and have them verified in the Coq Proof Assistant. Our approach is exemplified with three simple numerical example problems: integer rounding, solving systems of linear equations, and continuous root finding.
(joint work with Gyesik Lee, Norbert Müller, Eike Neumann, Sewon Park, and Norbert Preining)
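As a small, self-contained instance of such a multivalued operation (a sketch over an approximation oracle, not the formalized language of the talk), integer rounding within tolerance 1 needs only one coarse approximation and no exact comparison:

#include <cmath>
#include <cstdio>
#include <functional>

// A real number x given as an oracle: x(p) returns some value within 2^-p of x.
using Real = std::function<double(int)>;

// Multivalued rounding: return SOME integer n with |x - n| < 1.
// One approximation q with |x - q| <= 1/4 suffices, since
// |x - round(q)| <= |x - q| + |q - round(q)| <= 1/4 + 1/2 < 1.
// No exact (undecidable) equality test is needed, and the answer is
// legitimately non-unique when x is close to a half-integer.
long round_within_1(const Real& x) {
  double q = x(2);              // error at most 2^-2 = 1/4
  return std::lround(q);
}

int main() {
  Real pi = [](int) { return 3.14159; };   // stand-in oracle for pi
  std::printf("an integer within distance 1 of pi: %ld\n", round_within_1(pi));
}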
Abstract: We report the result of Reingold (2004) for deciding undirected s-t connectivity in deterministic logarithmic space. We also report results about universal traversal sequences, which formed an important step towards settling the deterministic space complexity of undirected s-t connectivity.
Abstract: We report on work by Nisan, Szemerédi, and Wigderson (1992) deciding undirected graph reachability in deterministic space O(log^{1.5} n), and on Trifonov's 2005 improvement to space O(log(n)*loglog(n)).
Abstract: This talk will cover one of the surface realization techniques using Feature-Structure-Based Lexicalized Tree Adjoining Grammar (FB-LTAG), suggested by [1]. After that, the talk will cover possible issues in applying [1] to the Basbalo framework. First, the FB-LTAG formalism will be introduced. FB-LTAG will serve as a grammar 'template' that contains both syntactic and semantic information. Then a grammar extraction algorithm, which produces FB-LTAG grammars from preprocessed natural language, will be presented. A part-of-speech (POS) parse tree of a sentence and RDF triples that contain the semantics of the sentence will be used as the preprocessed data for a natural-language sentence. The whole algorithm is presented in [2].
In the actual implementation of the framework, a few issues need to be resolved.
1) Sentence preprocessing
For sentence preprocessing, generation of RDF triples is necessary. For now, the Basbalo framework does not have an ontology dedicated to baseball, which makes it impossible to generate RDF triple data. Therefore, a comprehensive ontology implementation is necessary.
2) Language constraint
According to [3,4], linguistic issues need to be addressed in actual grammar extraction for the Korean language. Such constraints are mainly due to morphological differences between Korean and English.
3) Grammar abstractness vs Grammar scalability
The grammar extracted in [3,5] is too abstract, which is expected to increase the difficulty of the actual NLG process; if template semantics are too abstract, then searching for an appropriate template for a given semantics becomes another issue. Therefore, finding the golden mean between scalability and usability will be a key operation in the later generation process.
[1] Claire Gardent. “Syntax and Data-to-Text Generation”. 2014.
[2] Bikash Gyawali. “Surface Realisation from Knowledge Bases”. 2016.
[3] Chung-hye Han, Juntae Yoon, Nari Kim and Martha Palmer. “A Feature-Based Lexicalized Tree Adjoining Grammar for Korean”. 2000.
[4] 이강천, 서정연. “의미 중심어에 기반한 한국어 문장 생성 시스템” (A Korean Sentence Generation System Based on Semantic Head Words). 1998.
[5] Sanghoun Song, Jong-Bok Kim, Francis Bond, Jaehyung Yang. “Development of the Korean Resource Grammar: Towards Grammar Customization”. 2010.
Abstract: Many years ago, I started work on a software package called Ariadne for the verification of hybrid dynamic systems. This is a very hard problem, involving real numbers, functions and sets, nonlinear algebraic and differential equations, constraint satisfiability, and discontinuous behaviour, and every calculation needs to be done rigorously for the result to be trustworthy. We would ideally even want to rigorously prove that our software is correct! The tool is therefore based on Computable Analysis to achieve clean syntax and semantics, with generic interfaces for different types allowing for various efficient implementations of rigorous numerical algorithms.
I'll start by giving an overview of exact effective computational real arithmetic, starting with a simple theoretical approach and then showing how we implement it in code. I'll cover Ariadne's basic conceptualisation, such as Effective, Validated and Approximate objects; Symbolic, Bounds and Ball representations of the Reals based on Dyadic and Floating-point numbers; and Accuracy, Effort, Precision, and Degree parameters. I will then show the Real number module in use. I'll then explain some of the more advanced features, such as the functional calculus based on Taylor polynomial models, and finally give some examples of hybrid system verification in practice. Throughout the talk I will demonstrate features of Ariadne, both in C++ and in the Python interpreter.
Finally, I will give some directions for further work, both within the Ariadne framework itself, and in the broader context of rigorous computational mathematics.
Hopefully, after this talk you will be motivated to try Ariadne yourselves to solve some problems, and maybe even contribute to its development!
Abstract: Decisions regarding which resources to acquire in order to provide services comprise the core of many optimization problems. Classical problems which fall into this category are, for example, Facility Location, Steiner Tree, and Set Cover, as well as several scheduling problems. Such problems have been widely studied in both offline and online settings. Traditionally, the decisions are considered to be final: we buy the resources. Once a resource is bought, it can be used any time in the future without inducing further costs. Inspired by the Cloud Computing market, we consider the leasing variants of the above problems: resources are not bought, but leased for a given time interval; the cost for a lease typically grows with the length of the interval, but the cost per time unit decreases with this length. Such a model for leasing was introduced by Meyerson as the parking permit problem. In the lectures, I will present online algorithms for several resource leasing problems. Technically, our algorithmic approach is mostly based on an online variant of primal-dual approximation algorithms.
Abstract: Swarm robotics considers large swarms of relatively simple mobile robots deployed to some 2- or 3-dimensional area. These robots have very limited sensor capabilities; typically they can only observe aspects of their local environment. The objective of swarm robotics is to understand which global behavior of a swarm is induced by local strategies, simultaneously executed by the individual robots. Typically, the decisions of the individual robots are based on local information only. In these lectures, I look at simple models of swarms of identical, anonymous robots: they are points in the plane and 'see' only their neighbors, i.e. the robots within distance one. I will focus on distributed local strategies of such swarms that result in formations like 'gathering in one point' or 'forming a line'. I will present several strategies for such formation problems, both in discrete and continuous time models, and show upper and lower bounds for their runtime and energy consumption.
Abstract: While Numerical Engineering often focuses on the raw performance of hardware FLOATs and heuristic “recipes” that work for most/“practical” instances, mission-critical applications and Computer-Assisted Proofs in Pure Mathematics require guaranteed correctness in all conceivable cases. Exact Real Computation is the paradigm of imperative programming over continuous abstract data types with (necessarily) modified, namely multivalued, semantics of tests. We (i) approach the problem of formal verification for such algorithms; and we (ii) devise, implement, and empirically evaluate three (variants of) algorithms for diagonalizing (i.e. computing a spectral decomposition of) possibly degenerate Hermitian matrices.
Abstract: The continuity problem is the question when effective (or Markov computable) maps between effectively given topological spaces are effectively continuous. It will be shown that this is always the case if the range of the map is effectively bi-regular. As will be shown, such spaces appear quite naturally in the context of the problem.
Abstract: Imagine you want to present your collection of n coins on a shelf, taking as little space as possible – how should you arrange the coins?
More precisely, we are given n circular disks of different radii, and we want to place them in the plane so that they touch the x-axis from above and no two disks overlap. The goal is to minimize the length of the range from the leftmost point on a disk to the rightmost point on a disk.
On this seemingly innocent problem we will meet a wide range of algorithmic concepts: An efficient algorithm for a special case, an NP-hardness proof, an approximation algorithm with a guaranteed approximation factor, APX-hardness, and a quasi-polynomial time approximation scheme.
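The key geometric fact behind the problem is that two disks of radii $r_1, r_2$ resting on the x-axis overlap exactly when their centres are horizontally closer than $2\sqrt{r_1 r_2}$. The following C++ sketch uses this to brute-force all left-to-right orders with greedy left-packing; it is exponential and only meant to make the constraint concrete, not one of the algorithms from the talk.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Two disks of radii a and b that both touch the x-axis from above overlap
// iff the horizontal distance of their centres is less than 2*sqrt(a*b).
double sep(double a, double b) { return 2.0 * std::sqrt(a * b); }

// Span of the greedy left-packing for one fixed placement order.
double spanFor(const std::vector<double>& r) {
    std::vector<double> x(r.size());
    for (size_t i = 0; i < r.size(); ++i) {
        x[i] = 0.0;
        for (size_t j = 0; j < i; ++j)
            x[i] = std::max(x[i], x[j] + sep(r[i], r[j]));
    }
    double lo = 1e100, hi = -1e100;
    for (size_t i = 0; i < r.size(); ++i) {
        lo = std::min(lo, x[i] - r[i]);
        hi = std::max(hi, x[i] + r[i]);
    }
    return hi - lo;
}

int main() {
    std::vector<double> radii = {1.0, 0.5, 0.25, 2.0};   // the coin collection
    std::sort(radii.begin(), radii.end());
    double best = 1e100;
    do { best = std::min(best, spanFor(radii)); }        // try every order
    while (std::next_permutation(radii.begin(), radii.end()));
    std::cout << "shortest shelf found: " << best << "\n";
}
```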
Abstract: We consider the diagonalization of real symmetric square matrices with (not necessarily algebraic) entries and particular emphasis on the degenerate case: a problem in general ill-posed and in fact provably uncomputable. Recursive Analysis [Ziegler&Brattka'04] has established that some orthonormal basis of eigenvectors is computable in the sense of output approximation up to any desired precision - provided that, in addition to approximations to the d*(d+1)/2 matrix entries, its number k of DISTINCT eigenvalues is known/provided. The present work explores the practical impact and quantitative efficiency of this qualitative and implicit result: We devise, combine, implement, evaluate, and compare four variants of rigorous and total algorithms guaranteed to (i) extract the coefficients of the matrix' characteristic polynomial, (ii) refine root enclosures of the latter using Pellet's Predicate until separating the $k$ disjoint clusters, to thus derive the individual eigenvalues' multiplicities d[j] for 0 < j ≤ k, (iii) employ (some combination of) Trisection, Newton and/or Gräffe Iterations to improve said real root approximations to the precision required to finally (iv) determine an orthonormal basis of eigenvectors to these eigenvalues. Algorithms (ii) and (iii) are based on, vary, and extend a recent series of contributions by Sagraloff, Sharma, Yap et al. (2013-2016); (iv) builds on own work (2016) using a variant of Gaussian Elimination that avoids total tests for real equality, as these are known to be equivalent to the (complement of the) Halting Problem.
Our implementation builds on the C++ library iRRAM, providing an abstract data type REAL for rapid numerical prototyping with exact arithmetic and a subtly modified yet sound semantics of comparisons. Correctness is asserted by recovering the random eigenvector bases of a family of deliberately degenerate artificial test matrices; and efficiency is evaluated with respect to the three parameters output precision, matrix dimension, and eigenvalue separation. It confirms that the sophisticated combined algorithm of Sagraloff, Sharma, Yap et al. (2013-2016) is indeed practical and asymptotically outperforms the two simpler variants - but can in turn be improved by employing trisection for medium precision.
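For a flavour of one ingredient named above, here is a toy double-precision C++ sketch of a single Gräffe (root-squaring) step, which replaces a polynomial by one whose roots are the squares of the original roots and thereby drives root magnitudes apart; the actual implementation performs such steps in exact iRRAM arithmetic with rigorous error control.

```cpp
#include <iostream>
#include <vector>

// One Graeffe (root-squaring) step: from the coefficients a[0..n] of p,
// return the coefficients of a polynomial whose roots are the squares of the
// roots of p (up to an irrelevant overall sign), via q(x^2) = p(x) * p(-x).
std::vector<double> graeffe(const std::vector<double>& a) {
    int n = (int)a.size() - 1;                  // degree
    std::vector<double> b(n + 1, 0.0);
    for (int k = 0; k <= n; ++k)
        for (int i = 0; i <= n; ++i) {
            int j = 2 * k - i;
            if (j >= 0 && j <= n)
                b[k] += (j % 2 == 0 ? 1.0 : -1.0) * a[i] * a[j];
        }
    return b;
}

int main() {
    // p(x) = x^2 - 3x + 2 has roots 1 and 2.
    std::vector<double> p = {2.0, -3.0, 1.0};   // a[i] is the coefficient of x^i
    std::vector<double> q = graeffe(p);         // roots 1 and 4
    for (double c : q) std::cout << c << " ";   // prints: 4 -5 1
    std::cout << "\n";
}
```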
Abstract: In this talk, we give an overview of some recent results by Braverman, Grigo and Rojas on the computability and complexity of long-term properties of dynamical systems. Dynamical systems are used to model a large number of processes in nature whose state evolves over time. Important questions deal with the long-term behavior of such systems as time approaches infinity. We want to discuss computability and computational complexity of such questions in the sense of arbitrarily precise approximations, as studied in the field of computable analysis. While simulating such a system for bounded time is usually computable, long-term properties often cannot be determined in a computable way. Galatolo, Hoyrup and Rojas investigated statistical properties of discrete-time dynamical systems described by invariant measures. They show that a computable dynamical system does not necessarily have a computable invariant measure. On the other hand, Braverman, Grigo and Rojas could show that introducing a small amount of noise renders the system computable. We present some of their results and introduce the tools necessary to understand them.
For counting independent sets weighted by a parameter λ (known as the hard-core model) there is a beautiful connection between statistical physics phase transitions on infinite, regular trees and the computational complexity of approximate counting on graphs of maximum degree D. For λ below the critical point there is an elegant algorithm devised by Weitz (2006). The drawback of his algorithm is that its running time is exponential in log D, i.e., the exponent of the polynomial grows with log D. In this talk we’ll describe recent work which shows O(n log n) mixing time of the single-site Markov chain when the girth > 6 and D is at least a sufficiently large constant. Our proof utilizes BP (belief propagation) to design a distance function in our coupling analysis. This is joint work with C. Efthymiou, T. Hayes, D. Stefankovic, and Y. Yin which appeared in FOCS ’16.
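For concreteness, the single-site (Glauber) dynamics for the hard-core model resamples a uniformly random vertex conditional on its neighbours: a blocked vertex becomes unoccupied, and an unblocked one becomes occupied with probability λ/(1+λ). A minimal C++ simulation on a toy 6-cycle (made-up parameters, far below the degree bound of the result above) might look as follows.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Heat-bath (Glauber) dynamics for the hard-core model: the stationary
// distribution assigns an independent set I probability proportional to lambda^|I|.
int main() {
    const int n = 6;                             // a 6-cycle as a toy graph
    std::vector<std::vector<int>> adj(n);
    for (int v = 0; v < n; ++v) {
        adj[v].push_back((v + 1) % n);
        adj[v].push_back((v + n - 1) % n);
    }
    const double lambda = 0.5;
    std::vector<int> occupied(n, 0);             // start from the empty set
    std::mt19937 rng(2024);
    std::uniform_int_distribution<int> pick(0, n - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    long long occSum = 0, steps = 200000;
    for (long long t = 0; t < steps; ++t) {
        int v = pick(rng);                       // choose a random vertex
        bool blocked = false;
        for (int u : adj[v]) blocked = blocked || occupied[u];
        // Resample v: unoccupied if blocked, else occupied w.p. lambda/(1+lambda).
        occupied[v] = (!blocked && unif(rng) < lambda / (1.0 + lambda)) ? 1 : 0;
        for (int u = 0; u < n; ++u) occSum += occupied[u];
    }
    std::cout << "average number of occupied vertices: "
              << double(occSum) / double(steps) << "\n";
}
```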
In this talk, we will review the following paper: Cyril Cohen, "Construction of real algebraic numbers in Coq", February 18, 2012.
Abstract: This paper shows a construction in Coq of the set of real algebraic numbers, together with a formal proof that this set has the structure of a discrete Archimedean real closed field. This construction hence implements an interface of real closed fields. Instances of such an interface immediately enjoy quantifier elimination thanks to previous work. This work also intends to be a basis for the construction of the complex algebraic numbers and to be a reference implementation for the certification of numerous algorithms relying on algebraic numbers in computer algebra.
Abstract: Computational geometry deals with basic geometric objects such as points, line segments, polygons, and polyhedra, and aims to develop efficient algorithms and data structures for solving problems by figuring out the geometric, combinatorial, and topological properties of the objects. It has provided numerous methods and algorithms to solve geometric problems in application areas efficiently, such as computer graphics, robotics, geographic information systems, computer vision, and computational biology.
The traditional geometric algorithms, however, are not adequate for real-world data because they are not designed to handle imprecise and uncertain data and therefore they may return a wrong answer due to the imprecision or uncertainty of data.
This talk will provide an introduction to recent results of computational geometry on real-world data and open questions.
Abstract: The Curry–Howard correspondence says that there is an isomorphic relationship between intuitionistic logic and constructive type theory. The basic idea is that intuitionistic proofs of mathematical formulas contain the constructive information needed to prove them, and that this information can be represented as concrete terms in a constructive type theory. One can then use this information and these terms to extract computer programs from proofs. The resulting programs are error-free and correct in the sense that they satisfy the prescribed specifications; that is, it is certified that the programs work as expected. This talk will explain the basic idea of the Curry–Howard correspondence and its use in extracting certified programs.
Abstract: In patrolling problems, several mobile agents move inside a given region and try to cooperate so that every designated point in the region is (perpetually) visited sufficiently often (that is, no point should be left unattended for any time period of specified length). There are many variants of this problem: the agents may have the same or different speeds; the terrain may be a line, a cycle, or more general graphs; the points to be visited may be the whole or a (finite) part of the terrain. Problems of this kind are studied in different contexts with various motivations, but finding an optimal patrolling strategy is not straightforward, even in very simple settings. For example, the optimal patrolling of a line segment by several agents is not always achieved by the simple strategy where each agent moves back and forth in a sub-segment of length proportional to its speed. This talk will introduce some positive and negative results and open questions about properties of and algorithms for optimal patrolling.
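For the segment example above, a short calculation makes the benchmark explicit. For a segment of length $L$ and agents with speeds $v_1, \dots, v_k$, the partition strategy gives agent $i$ a sub-segment of length $\ell_i = L\,v_i / \sum_j v_j$; moving back and forth, the agent revisits every point of its sub-segment within time $2\ell_i / v_i = 2L / \sum_j v_j$, so no point is ever left unattended for longer than $2L / \sum_j v_j$. The surprise alluded to above is that for some speed profiles there exist schedules beating this bound.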
Abstract: In this talk, we introduce the proof assistant Coq, which is based on the Calculus of Inductive Constructions (CIC). Since CIC is a variant of the Calculus of Constructions (CoC: a higher-order typed lambda calculus), we briefly consider type theory and its central theme, the Curry-Howard isomorphism. We will give easy exercises that can be proved using Coq. The C-CoRN library for Coq takes an approach different from the standard library of Coq. Finally, we discuss the real numbers in Coq.
We introduce a new theory which encompasses concepts and ideas from set theory, type theory, and Lesniewski's mereology, and we describe its potential as an alternative foundation for mathematics. In the introduction I will present the motives for the development of the theory and some remarks on the method of presentation. The axioms of the theory, their philosophical background, and their justification on the basis of an intuitive view are discussed next. We then discuss realizations of mathematical concepts such as N, Z, Q, R, C, etc. The paper concludes with comparisons between the theory and ZFC, and with the theory's mathematical limitations.
Well-known algorithms for the root isolation problem consider polynomial functions with simple roots only. Recent work of Chee Yap et al. (2013) devised an algorithm which can isolate (cluster) the roots, with multiplicities, of any analytic function. This talk will review the work of Chee Yap et al. and discuss an implementation and verification of the suggested algorithm.
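One building block of such clustering algorithms is Pellet's test: if $|a_k| r^k > \sum_{i \neq k} |a_i| r^i$, then, by Rouché's theorem (comparing the polynomial with its $k$-th term on the circle of radius $r$), the disk of radius $r$ about the expansion point contains exactly $k$ roots counted with multiplicity. Below is a hedged double-precision C++ sketch of this predicate; the actual algorithms use a "soft", exact-arithmetic version and Taylor shifts to arbitrary centres.

```cpp
#include <complex>
#include <cmath>
#include <iostream>
#include <vector>

// Pellet's test at the origin: if |a_k| r^k > sum_{i != k} |a_i| r^i, then
// sum_i a_i z^i has exactly k roots (with multiplicity) in |z| <= r.
// For a disk centred at c one would first Taylor-shift to p(c + z).
bool pelletHolds(const std::vector<std::complex<double>>& a, int k, double r) {
    double lhs = std::abs(a[k]) * std::pow(r, k), rhs = 0.0;
    for (int i = 0; i < (int)a.size(); ++i)
        if (i != k) rhs += std::abs(a[i]) * std::pow(r, i);
    return lhs > rhs;
}

int main() {
    // p(z) = (z - 0.1)(z - 0.2)(z - 10) = z^3 - 10.3 z^2 + 3.02 z - 0.2
    std::vector<std::complex<double>> p = {-0.2, 3.02, -10.3, 1.0};
    std::cout << pelletHolds(p, 2, 1.0) << "\n";  // 1: exactly 2 roots in |z| <= 1
    std::cout << pelletHolds(p, 3, 1.0) << "\n";  // 0: not all 3 roots are that close
}
```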
We consider the problem of solving systems of ordinary differential equations in exact real arithmetic, i.e., in the sense of arbitrarily precise numerical computations. It is known from real complexity theory that this problem is PSPACE-hard in the general case. However, when restricting the right-hand-side functions of the system to be not only polynomial-time computable but also analytic, the solutions will again be polynomial-time computable and analytic. By carefully choosing the representation, this result can even be turned into uniform algorithms on analytic functions. The talk consists of two parts. First we explore analytic functions from the viewpoint of real complexity theory and give running time and space bounds for various operations. We then show how to turn those theorems into algorithms and use them for practical implementations in the C++ programming language based on the iRRAM library. To that end, we also give a representation for multidimensional analytic functions in terms of an abstract data type and an implementation of such a type. We further give implementations of several different algorithms to solve the ODE problem and compare their running times.
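As a minimal illustration of the power-series viewpoint, the following C++ sketch computes Taylor coefficients of the solution of the toy initial value problem $y' = y^2$, $y(0) = 1$ via the convolution recurrence $(n+1)\,c_{n+1} = \sum_{i=0}^{n} c_i c_{n-i}$. It uses doubles and a fixed truncation, whereas the implementations discussed in the talk work with exact arithmetic and rigorous tail bounds.

```cpp
#include <iostream>
#include <vector>

// Taylor-coefficient recurrence for  y' = y^2,  y(0) = 1, whose exact
// solution 1/(1 - x) has all Taylor coefficients equal to 1.
int main() {
    const int N = 10;
    std::vector<double> c(N + 1, 0.0);
    c[0] = 1.0;                                   // initial value y(0)
    for (int n = 0; n < N; ++n) {
        double conv = 0.0;                        // coefficient of x^n in y(x)^2
        for (int i = 0; i <= n; ++i) conv += c[i] * c[n - i];
        c[n + 1] = conv / (n + 1);
    }
    // Evaluate the truncated series at x = 0.5 (exact value: 2).
    double x = 0.5, y = 0.0, pw = 1.0;
    for (int n = 0; n <= N; ++n) { y += c[n] * pw; pw *= x; }
    std::cout << "truncated series at x = 0.5: " << y << "\n";
}
```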
Until a few years ago, the fastest algorithm for matrix multiplication was the “Coppersmith-Winograd algorithm” designed in 1987. Recently, a surge of activity by Andrew Stothers, Virginia Vassilevska-Williams and myself has finally led to improved algorithms. In this talk I will describe the long history of work on the complexity of matrix multiplication, and discuss these very recent improvements.
By the sometimes so-called MAIN THEOREM of Recursive Analysis, a computable function must necessarily be continuous. We investigate this condition for example primitives in Shape Optimization: volume, perimeter, and convexity, considered as real-valued objective/constraint functionals on various classes of compact sets equipped with the Hausdorff metric.
The N-body problem is a classical problem in physics. In this talk, the error bound of stepwise simulation of the n-body problem will be discussed. As a side result of the error-bound discussion, it can be shown that the n-body reachability problem is in PSPACE. Then, to show PSPACE-hardness, reductions from other problems that are known to be PSPACE-hard will be given. Specifically, the following problems will be discussed: 1) the ray-tracing problem with rational-coefficient reflecting obstacles (and, as a brainteaser, an equivalent problem in the game of Go, whose equivalence to the given problem is straightforward), 2) the 1-body reachability problem with immobile rectangular obstacles, and 3) the 1-body reachability problem with plate obstacles carrying a uniformly distributed charge, where the latter two problems are discussed with a restriction on the impact position.
In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them holographic because “their effect can be viewed as that of producing interference patterns among the solution fragments”. The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram. Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. They have received notable coverage due to speculation that they are relevant to the P versus NP problem and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P. Holographic algorithms have some similarities with quantum computation, but are completely classical.
An obstacle representation of a graph G is a drawing of G in the plane together with polygons called obstacles, such that two vertices are adjacent iff the straight line segment connecting them does not intersect any obstacle. The obstacle number of a graph is the smallest number of obstacles which allows an obstacle representation of the graph. Even the class of graphs with obstacle number 1 is not completely characterized. There is a nice characterization of the graphs which have a representation with one convex obstacle: the non-double-covering circular arc graphs. It is also known that every outerplanar graph has a representation with one outside obstacle. As far as I know, these are all the known results about graphs of obstacle number 1.
In this talk, the following new results are presented:
In Descriptive Complexity we characterize the complexity of decision problems by how rich a logical language is needed to describe the problem. Important complexity classes have natural logical characterizations, for example NP is the set of problems expressible in second order existential logic (NP = SOE) and P is the set of problems expressible in first order logic, plus a fixed point operator (P = FO(FP)).
The latter characterization is over ordered graphs, i.e. the vertex set is a linearly ordered set. This is appropriate for computational problems because all inputs to a computer are ordered sequences of bits. Any ordering will do; we are interested in the order-independent properties of graphs. The search for order-independent P is closely tied to the complexity of graph isomorphism. I will explain these concepts and the current effort to capture order-independent P.
Metastability is often an obstacle to efficient computation or simulation of average or typical quantities of stochastic dynamical systems. In particular when there is no reversibility, and thus potential theory is not available, few methods exist. In this talk I will argue that escape probabilities are the right object to study in this case, and will show how they determine the metastable dynamics of finite Markov chains. This will lead to an efficient way of computing both the dynamics and the stationary distribution of metastable Markov chains.
The framework of second order representations was introduced by Kawamura and Cook in 2010 and provides a rigorous notion of computation over continuous structures. It also provides a notion of time- and space-restricted computation. Choosing a representation means specifying what information a program computing an element of the structure has to provide. We relate the existence of a reasonable choice of information on a compact metric space to a purely metric property of the space: the metric entropy of a compact metric space measures how fast the number of balls needed to cover the space grows as the radius of the balls decreases. We show that the optimal running time achievable over the space is inherently connected to its metric entropy. These results are applied to show optimality of some concrete representations of function spaces.
Modern civilization evolved through a series of industrial revolutions. According to Jeremy Rifkin, 3-D printing is the third industrial revolution: it minimizes the cost of manufacturing and speeds up the design process. In addition, easier access to a private 3-D printer will allow products to be personally customized by the user and for the user.
3-D printers are not yet affordable for everyone, but in the near future people will design their own products with their own 3-D printer at home. As it is a new technology, there is a lack of software applications related to 3-D printing in the market. A user-friendly program that designs and builds a set of moulds for the user's own product is in demand.
There are a number of complications when using a 3-D printer and designing a mould. It is critical to clean and maintain a 3-D printer properly to ensure the quality of the printed parts. Many techniques can be applied when designing a mould to reduce material use and speed up the printing process.
A convex octahedron is chosen as the input product by the user. The software application will be programmed to create eight pieces of mould for the polyhedron. Using OpenSCAD, an open-source software application for creating 3-D computer-aided designs, the program will locate the six vertices, create a bounding box, use an existing library to convert vertices to face normal vectors and vice versa, calculate cross products of vectors to form cutting planes, and then divide the bounding box into eight parts. Moreover, further improvements will be made, including increasing the size of the bounding box, adding a sprue for pouring material, attaching a connecting section to each piece of the mould, and upgrading the algorithm of the program to deal with more complex geometry.
Ensuring correctness of complex systems like computer programs or communication protocols is gaining ever-increasing importance. For correctness it does not suffice to consider the finished system; correctness is needed already at the level of specification, that is, before the actual implementation starts. To this end, algebraic specification and verification languages have been conceived. They aim at a mathematically correct description of the properties and behavior of the systems under discussion, and in addition often allow one to prove (verify) correctness within the same system.
This course will give an introduction to algebraic specification, its history, logical roots, and relevance to current developments. Paired with the theoretical background, we give an introduction to CafeOBJ, a programming language that allows both specification and verification to be carried out together. The course should enable participants to understand the theoretical background of algebraic specification, as well as to read and write basic specifications in CafeOBJ.
Let $C$ be the unit circle in $\mathbb{R}^2$. We can view $C$ as a plane graph whose vertices are all the points on $C$, and the distance between any two points on $C$ is the length of the smaller arc between them. We consider a graph augmentation problem on $C$, where we want to place $k \ge 1$ shortcuts on $C$ such that the diameter of the resulting graph is minimized. We analyze for each $k$ with $1 \le k \le 7$ what the optimal set of shortcuts is. Interestingly, the minimum diameter one can obtain is not a strictly decreasing function of $k$. For example, with seven shortcuts one cannot obtain a smaller diameter than with six shortcuts. Finally, we prove that the optimal diameter is $2 + \Theta (1/k^\frac{2}{3})$ for any $k$.
While concepts and tools from Theoretical Computer Science are regularly applied to, and significantly support, software development for discrete problems, Numerical Engineering largely employs recipes and methods whose correctness and efficiency is demonstrated empirically. We advertise Real Complexity Theory: a resource-oriented foundation to rigorous computations over continuous universes such as real numbers, vectors, sequences, continuous functions, and Euclidean subsets: in the bit-model by approximation up to given absolute error. We review the underlying notions and some illustrative results – regarding worst-case complexity theory; to then introduce, and initiate the study of, average-case bit-complexity theory over the reals: Like in the discrete case a first, naive notion of polynomial average runtime turns out to lack robustness and is thus refined. Standard examples of explicit continuous functions with increasingly high worst-case complexity are shown to be in fact easy in the mean; while a further example is constructed with both worst and average complexity exponential: for topological/metric reasons, i.e., oracles do not help.
The N-body problem cannot be solved analytically for N>2 and hence is generally approached numerically. Naive algorithms incur cost quadratic in N; and we report on the state-of-the-art achieving subquadratic runtime by means of approximation: the Fast Multipole Method, said to be among the top ten algorithms of the 20th century. We present a rigorous worst-case performance analysis, based on techniques from computational geometry and in terms of bit cost.
The symmetric difference can be used to measure similarity between two shapes. In this talk, similarity up to homothety, which is a combination of translation and scaling, is considered, and there is a recent result of Yon et al. about it. However, their measure is not symmetric and thus not a metric. We introduce a variant of the symmetric difference which is a metric for convex shapes modulo homothety (so two convex shapes are regarded as the same iff one is a homothety of the other) and provide a proof.
Perturbation Theory, although successful in many areas of physics, had long been considered irrelevant for nuclear states, where the series tend to diverge. Only recent work was able to mend this, based on the Hartree-Fock partitioning. This raises the question of efficiently computing the resulting approximating sums, which are finite but large and multi-indexed. Using perturbation diagrams we exhibit their inherent structure and symmetries, and we exploit them to derive upper and lower bounds on their complexity, with a trade-off between approximation quality and runtime.