Jürgen Herzberger was one of the pioneers in the field of interval analysis. He wrote his PhD thesis on a subject in this field, and his habilitation thesis likewise addressed questions from it. He is the author or co-author of several books and of many papers published in international journals. He organized several conferences and is the editor or co-editor of the corresponding proceedings.

In this talk I will present a short survey of his contributions to interval analysis.

This lecture aims at giving a consistent presentation of several classical numeration systems from a dynamical viewpoint. Following [1], we consider numeration systems in a broad sense, that is, representation systems of numbers that also include, e.g., continued fraction expansions. They may be positional or not, and may provide unique expansions or be redundant. The dynamical systems emerging from these representations give some insight into their understanding: they allow one to obtain information on the statistics of digits and on the mean behaviour, and they provide simple algorithmic generation processes. These number systems include various numerations on rational integers, real or complex numbers, or polynomials. Special attention will be paid to β-numeration (one expands a positive real number with respect to a base β > 1), to rational base numerations, to canonical number systems (one expands elements of a number field), and to multidimensional continued fractions and generalizations of Euclid's algorithm, in the sense of [2,3] or based on lattice reduction algorithms. We will allude to some of their applications in computer arithmetic and cryptography, and to implementation issues.

- G. Barat, V. Berthé, P. Liardet, J. Thuswaldner. Dynamical directions in numeration, Ann. Inst. Fourier (Grenoble) 56 (2006), 1987–2092.
- A. J. Brentjes. Multidimensional continued fraction algorithms, Mathematisch Centrum, Amsterdam, 1981.
- F. Schweiger. Multi-dimensional continued fractions, Oxford Science Publications, Oxford Univ. Press, Oxford, 2000.
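As an illustration of the algorithmic generation processes mentioned above, the greedy β-expansion of a real number in [0, 1) is produced by iterating the β-transformation T(x) = βx mod 1; the digit emitted at each step is ⌊βx⌋. A minimal Python sketch (the function name is ours):

```python
import math

def beta_expansion(x, beta, n):
    """First n digits of the greedy beta-expansion of x in [0, 1):
    x = sum(d_k / beta**k), obtained by iterating T(x) = beta*x mod 1."""
    digits = []
    for _ in range(n):
        y = beta * x
        d = math.floor(y)   # digit = integer part of beta*x
        digits.append(d)
        x = y - d           # keep the fractional part (one step of T)
    return digits
```

For β = 2 this is the usual binary expansion, e.g. `beta_expansion(0.375, 2, 4)` yields the digits of 0.011 in base 2.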

Interval analysis is a set of techniques for dealing with a wide range of problems in many application fields. Over the last ten years, significant contributions have been made in robotics: in solving kinematic and static models, in computing manipulator workspaces, and in robot identification, control, and design. Our team has obtained interesting results by combining the quality of numerical analysis algorithms with the efficiency of the constraint programming techniques developed for the interval case, even when the formulation of the problem had to be adapted to take advantage of the properties of interval analysis.

The time has now come to transfer our expertise. However, while the principles of interval analysis may seem accessible, the community knows that designing and efficiently implementing an interval algorithm is difficult. It therefore becomes essential to make our work available, for scientific dissemination, to students as well as to non-interval experts, researchers, or engineers who use such algorithms as sub-modules to solve often larger problems.

While each of us has powerful software, it is often only available in a low-level language, so it is worthwhile to propose a higher-level interface that makes it accessible to non-computer scientists.

We therefore offer Int4sci, a Scilab interface for interval arithmetic and some basic modules of interval analysis. We justify the choice of this language and of the underlying low-level library, as well as the syntax and design of the interface.

Finally, we show how this toolkit can be extended and how to contribute to this project.
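To make the underlying arithmetic concrete, here is a toy interval type in Python (our own sketch, not Int4sci's API; Python ≥ 3.9 for `math.nextafter`). Each operation encloses the exact result by widening the endpoints outward; a real library instead controls the FPU rounding mode:

```python
import math

class Interval:
    """Toy closed interval [lo, hi] with outward rounding via nextafter."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # round the lower bound down and the upper bound up
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # the product interval is spanned by the four endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def __contains__(self, x):
        return self.lo <= x <= self.hi
```

For example, `Interval(1.0, 2.0) + Interval(3.0, 4.0)` encloses every sum in [4, 6]; the enclosure property is what a higher-level interface must preserve while hiding such details from the user.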

We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By *accurate* we mean that the computed answer has relative error less than 1, i.e. has some leading digits correct. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: most of our results will use what we call the *Traditional Model (TM)*, in which the computed result of op(`a`,`b`), a binary operation like `a`+`b`, is given by op(`a`,`b`)·(1+δ), where all we know is that |δ| ≤ ɛ ≪ 1. Here ɛ is a constant also known as the machine epsilon.

We will see a common reason that the following disparate problems all permit accurate and efficient algorithms using only the four basic arithmetic operations: finding the eigenvalues of a suitably discretized scalar elliptic PDE, finding eigenvalues of arbitrary products, inverses, or Schur complements of totally nonnegative matrices (such as Cauchy and Vandermonde matrices), and evaluating the Motzkin polynomial. Furthermore, in all these cases the high accuracy is *deserved*, i.e. the answer is determined much more accurately by the data than the conventional condition number would suggest.

In contrast, we will see that evaluating even the simple polynomial
`x`+`y`+`z` accurately is impossible in the
TM, using only the basic arithmetic operations. We give a set of
necessary and sufficient conditions to decide whether a high accuracy
algorithm exists in the TM, and describe progress toward a decision
procedure that will take any problem and either provide a high
accuracy algorithm or a proof that none exists.
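The obstruction can be made tangible with a three-line floating-point experiment (an illustration of cancellation, not of the proof): in IEEE double precision, naive evaluation of `x`+`y`+`z` can lose every leading digit.

```python
# The exact sum is 1.0, but rounding x + y to 1e16 destroys it:
x, y, z = 1e16, 1.0, -1e16
naive = (x + y) + z
print(naive)  # 0.0 -- relative error 1, no leading digit correct
```

Here `x + y` rounds to `1e16` (the summand 1.0 falls below the spacing of representable numbers near 10^16), so the subsequent subtraction returns 0.0 instead of 1.0.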

When no accurate algorithm exists in the TM, it is natural to
extend the set of available accurate operations op() by a library of
additional operations, such as `x`+`y`+`z`,
dot products, or indeed any enumerable set which could then be used to
build further accurate algorithms. We show how our accurate algorithms
and decision procedure for finding them can be extended in this
case.
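Such an extended accurate operation can be built from error-free transformations; a standard example (classical, not specific to this work) is Knuth's TwoSum, which returns the exact rounding error of a single addition:

```python
import math

def two_sum(a, b):
    """Knuth's branch-free TwoSum: returns (s, e) with s = fl(a + b)
    and a + b = s + e exactly in IEEE arithmetic (barring overflow)."""
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)   # the exact rounding error
    return s, e

s, e = two_sum(1e16, 1.0)                   # s = 1e16, e = 1.0: nothing lost
accurate = math.fsum([1e16, 1.0, -1e16])    # correctly rounded sum: 1.0
```

Python's `math.fsum`, which recovers the correct answer 1.0 here, is built from exactly such transformations and is an instance of an accurate library operation for `x`+`y`+`z`.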

Finally, we consider other models of arithmetic, and address the relationship between (im)possibility in the TM and the (in)efficiency of algorithms operating on numbers represented as bit strings.

This is joint work with Ioana Dumitriu, Olga Holtz and Plamen Koev.

It is shown how functions that are defined by an evaluation program can be approximated locally by a piecewise-linear model. In contrast to the standard approach in algorithmic or automatic differentiation, we explicitly account for the occurrence of nonsmooth but Lipschitz continuous elements, in particular the absolute value function and thus also the min and max operations. Under reasonable assumptions the resulting piecewise-linear and continuous model approximates the underlying Lipschitz function with a local error of second order. It can be generated by a minor extension of ADOL-C and other automatic differentiation tools. The local piecewise linear model can be used to compute generalized Jacobians and Newton steps. In the overdetermined case one obtains a generalization of the Gauss-Newton method with similar computational complexity.
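As a toy instance of such a model (our own example, not from the talk), take f(x) = x² + |x − 1|: linearizing the smooth part at a base point x₀ while keeping the absolute value as a genuine kink gives a piecewise-linear model whose error is exactly Δx²:

```python
def f(x):
    return x * x + abs(x - 1.0)

def pl_model(x0, dx):
    """Piecewise-linear model of f at x0: the smooth term x**2 is
    linearized (2*x0*dx), the absolute value is carried along as a kink.
    Error: f(x0 + dx) - pl_model(x0, dx) = dx**2, i.e. second order."""
    return f(x0) + 2.0 * x0 * dx + abs(x0 - 1.0 + dx) - abs(x0 - 1.0)
```

At the kink x₀ = 1 the model reproduces the nonsmooth behaviour on both sides of the base point, whereas a classical derivative-based linearization would commit a first-order error for one sign of Δx.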

To achieve the highest levels of assurance, floating-point algorithms can be formally verified using computer proof assistants. In principle, such frameworks permit one to perform virtually any mathematical proof in a completely reliable way. However, this generality comes at a cost: in contrast to some of the other tools commonly used in formal verification (for example model checkers or SAT solvers), the formal verification process can be labor-intensive and require considerable expertise. We will describe how we have applied formal verification to several algorithms used at Intel, in the process building infrastructure that can be used to make other similar verifications easier. We will also discuss the possible use of more highly automated tools and processes to make such verifications more efficient in terms of user interaction.

We will present recent work on two intensely studied problems in dynamical systems. The first deals with the discrete quadratic map; here the long-standing question concerns the regions of stability when varying the parameter of the map. The second topic concerns Hilbert's 16th problem, which asks for the possible number (and configuration) of limit cycles for planar polynomial vector fields.

Both problems have been thoroughly studied for more than a century, and with a formidable variation of techniques. We will show how validated numerics can contribute to gaining further insight into these problems. This work is joint with Daniel Wilczak and Tomas Johnson, respectively.
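For orientation, the discrete quadratic map x ↦ x² + c can be explored with a plain iteration (none of the validated machinery of the talk is involved); for c = −1 the critical orbit lands on the attracting period-2 cycle {0, −1}:

```python
def orbit(c, x0, n):
    """First n iterates of the quadratic map x -> x*x + c, starting at x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] * xs[-1] + c)
    return xs

# c = 0: the critical orbit is attracted to the fixed point 0;
# c = -1: it falls immediately onto the period-2 cycle {0, -1}.
```

Validated numerics replace such pointwise experiments with rigorous enclosures of whole orbits, which is what makes statements about stability regions provable.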

In this talk, a numerical method to verify the invertibility of infinite-dimensional linear operators and to estimate the norm of the inverse will be given. This self-validating method is an improvement of the one for second-order elliptic boundary value problems introduced by the author and co-workers [1,2].

By using a projection and its constructive a priori error estimates in suitable Hilbert spaces, a computable sufficient condition assuring the invertibility of the linear operator is formulated as an inequality, based upon the verification method originally developed by M. T. Nakao since 1988. Under this invertibility criterion, an efficient approach to bounding the norm of the inverse operator is also discussed. The method is derived from a fixed-point formulation and the projection error, and can be applied to non-self-adjoint operators as is.

As useful applications of this result, confirming the actual effectiveness of the method, the following problems will be considered:

- a computer-assisted existence proof of solutions for nonlinear second-order elliptic problems,
- some eigenvalue-excluding methods for self-adjoint and non-self-adjoint eigenvalue problems, which give important information for clarifying the eigenvalue distribution,
- a computer-assisted method to enclose an eigenpair with guaranteed accurate error bounds for the Orr-Sommerfeld equation.

This work is based on joint research with Kaori Nagatou, Michael Plum and Mitsuhiro T. Nakao.

- M. T. Nakao, K. Hashimoto and Y. Watanabe. A numerical method to verify the invertibility of linear elliptic operators with applications to nonlinear problems, Computing, 75, 1–14 (2005).
- M. T. Nakao and Y. Watanabe. An efficient approach to the numerical verification for solutions of elliptic differential equations, Numerical Algorithms, 37, 311–323 (2004).
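A finite-dimensional caricature of such an invertibility criterion is the classical Neumann-series bound: if an approximate inverse C satisfies r = ‖I − CA‖ < 1, then A is invertible and ‖A⁻¹‖ ≤ ‖C‖/(1 − r). The sketch below (our own illustration; the talk's method works with projections in Hilbert spaces, not with matrices) checks this in the ∞-norm:

```python
def inv_norm_bound(A, C):
    """If r = ||I - C A||_inf < 1 for an approximate inverse C, then A is
    invertible and ||A^{-1}||_inf <= ||C||_inf / (1 - r); returns that
    bound, or None when the criterion fails to apply."""
    n = len(A)
    # residual R = I - C A, computed entrywise
    R = [[(1.0 if i == j else 0.0)
          - sum(C[i][k] * A[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    norm = lambda M: max(sum(abs(v) for v in row) for row in M)  # inf-norm
    r = norm(R)
    if r >= 1.0:
        return None          # no conclusion: C is too poor an inverse
    return norm(C) / (1.0 - r)

A = [[2.0, 1.0], [1.0, 3.0]]
C = [[0.6, -0.2], [-0.2, 0.4]]   # approximate inverse of A
bound = inv_norm_bound(A, C)     # close to 0.8 = ||A^{-1}||_inf
```

In the operator setting, the role of the residual norm is played by the projection error estimates, which is what makes the condition computable for infinite-dimensional problems.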

We present a hybrid symbolic-numeric algorithm for certifying that a polynomial or a rational function with rational coefficients is non-negative for all real values of the variables, by computing a representation for it as a fraction of two polynomial sums-of-squares (SOS) with rational coefficients. We can either perform high-precision Newton iterations on the numerical SOS computed by SDP solvers in Matlab, or use the high-precision SDP solver in Maple, to obtain the SOS to the necessary precision; the numerical SOS can then be converted into an exact rational SOS by orthogonal projection or rational coefficient vector recovery [1,4,5]. Sums-of-squares rational lower bound certificates for the radius of positive semidefiniteness of a multivariate polynomial also offer an alternative SOS proof for those positive definite polynomials that are not SOS themselves but have a positive distance to the nearest polynomial with a real root [3]. Moreover, we show that a random linear transformation of the variables allows, with probability one, certifying the positive semidefiniteness of a multivariate polynomial by representing it as an SOS over the variety defined by the partial derivatives of the polynomial with respect to each variable except one [2].

This is joint work with Feng Guo, Sharon E. Hutton, Erich L. Kaltofen, Bin Li, Mohab Safey El Din and Zhengfeng Yang.

- Feng Guo. SDPTools: A high precision SDP solver in Maple, MM-Preprints, 28:66–73, 2009.
- Feng Guo, Mohab Safey El Din, and Lihong Zhi. Global optimization of polynomials using generalized critical values and sums of squares. In Stephen Watt, editor, ISSAC 2010, 107–114, ACM Press, 2010.
- Sharon E. Hutton, Erich Kaltofen, and Lihong Zhi. Computing the radius of positive semidefiniteness of a multivariate real polynomial via a dual of Seidenberg's method. In Stephen Watt, editor, ISSAC 2010, 227–234, ACM Press, 2010.
- Erich Kaltofen, Bin Li, Zhengfeng Yang, and Lihong Zhi. Exact certification of global optimality of approximate factorizations via rationalizing sums-of-squares with floating point scalars. In David Jeffrey, editor, ISSAC 2008, 155–163, ACM Press, New York, N.Y., 2008.
- Erich Kaltofen, Bin Li, Zhengfeng Yang, and Lihong Zhi. Exact certification in global polynomial optimization via sums-of-squares of rational functions with rational coefficients. Accepted for publication in J. Symbolic Comput., 2009.
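The rational-coefficient recovery step can be illustrated on a toy univariate example (our own sketch; the actual algorithms work with full SOS Gram matrices): round the numerically computed coefficients to nearby rationals of bounded denominator, then verify the certificate exactly in rational arithmetic.

```python
from fractions import Fraction

def recover_rational_sos(p_coeffs, s_numeric, max_den=1000):
    """Toy single-square case: round the coefficients of a numerical
    square root s(x) of p(x) to rationals with denominator <= max_den,
    expand s(x)**2 exactly, and accept only if p = s**2 holds exactly.
    Coefficient lists are constant-first."""
    s = [Fraction(c).limit_denominator(max_den) for c in s_numeric]
    sq = [Fraction(0)] * (2 * len(s) - 1)
    for i, a in enumerate(s):
        for j, b in enumerate(s):
            sq[i + j] += a * b          # exact polynomial multiplication
    return s if sq == [Fraction(c) for c in p_coeffs] else None

# p(x) = 1 + 2x + x**2 = (1 + x)**2; a noisy "SDP output" for its root:
cert = recover_rational_sos([1, 2, 1], [0.9999999, 1.0000002])
```

The exact verification step is what turns a numerical SOS, which only certifies non-negativity up to solver tolerance, into a rigorous proof.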