RESOURCES

Upcoming Talks

Previous years: 2008, 2007, 2006

- December 22: Jean-Eric Pin. “New mathematical trends in automata theory”

Abstract: In this lecture, I will try to illustrate on a few selected examples several mathematical tools used in automata theory. These tools include Model Theory and games, algebra and topology. Examples will include expressive power of fragments of logic or temporal logic and classical problems of language theory.

- December 8: Kousha Etessami. “Analysis of Recursive Markov Chains, Recursive Markov Decision Processes, and Recursive Stochastic Games”

Abstract: Recursive Markov Chains (RMCs) are a natural abstract model of procedural probabilistic programs and other systems involving both recursion and probability. RMCs define a class of denumerable Markov chains with a rich theory generalizing that of multi-type Branching (stochastic) Processes and Stochastic Context-Free Grammars, and they are tightly related to probabilistic Pushdown Systems. Recursive Markov Decision Processes (RMDPs) and Recursive Simple Stochastic Games (RSSGs) extend RMCs with a controller and two adversarial players, respectively.
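As orientation, in schematic notation not taken verbatim from the papers below: the central quantities in this line of work are the termination probabilities of an RMC, which form the least nonnegative solution of a monotone system of polynomial equations,

```latex
\mathbf{x} = P(\mathbf{x}),
\qquad
\mathbf{x}^{*} = \lim_{k \to \infty} P^{k}(\mathbf{0}),
```

where $P$ is a vector of multivariate polynomials with nonnegative coefficients derived from the RMC's transition probabilities and call structure, and the least fixed point $\mathbf{x}^{*}$ collects the probabilities of terminating at each exit. Much of the algorithmic analysis concerns computing or approximating this least fixed point.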

In a series of recent papers we have studied central algorithmic analysis and verification questions for RMCs, RMDPs, and RSSGs, providing some strong upper and lower bounds on the complexity of key algorithmic problems.

In this talk I will provide a broad survey of this work, indicate some of the main techniques involved in our analyses, discuss potential application domains, and indicate some of the many directions for future research.

This talk describes joint work with Mihalis Yannakakis (Columbia U.) contained in a series of recent papers that appeared at STACS'05, TACAS'05, ICALP'05, QEST'05, and STACS'06.

- December 6: Sandra Quickert. “Descriptive set theory and parity games”

Abstract: In this talk I will introduce basic concepts in descriptive set theory which have proven to be very useful in computer science. Namely, there is a very tight relation between parity games and the so-called Hausdorff-Kuratowski difference hierarchy. The main aim of this talk is to give a clear picture of this hierarchy and of how it relates to parity games. Finally, I will give some applications of this relationship to the mu-calculus.

- December 1: Stefan Kiefer, Uni Stuttgart. “Abstraction Refinement with Craig Interpolation and Symbolic Pushdown Systems”

Abstract: Counterexample-guided abstraction refinement (CEGAR) has proven to be a powerful method for software model checking. In this talk, the concept is investigated in the context of sequential (possibly recursive) programs whose statements are given as BDDs. We examine how Craig interpolants can be computed efficiently in this setting and propose a new, special type of interpolant. Moreover, we show how to treat multiple counterexamples in one refinement cycle. We have implemented this approach as an extension of the model checker Moped and report on experiments.
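Schematically, and well short of the BDD-level detail of the talk, a CEGAR loop with interpolant-based refinement has the following shape. The toy instantiation and all names below are illustrative, not taken from Moped:

```python
def cegar(model_check, feasible, interpolant, preds):
    """Counterexample-guided abstraction refinement, schematically:
    model-check the abstraction; if its counterexample is infeasible in
    the concrete program, mine it for a new predicate and repeat."""
    while True:
        cex = model_check(preds)
        if cex is None:
            return "safe", preds
        if feasible(cex):
            return "unsafe", cex
        preds = preds | {interpolant(cex)}

# Toy instantiation for a program with one transition x' = x + 1 from x = 0:
# the abstract trace [x=0, x=2] is spurious, and a Craig interpolant of its
# unsatisfiable trace formula, here "x <= 1", refines the abstraction.
def model_check(preds):
    return None if "x <= 1" in preds else ["x=0", "x=2"]

result, preds = cegar(model_check,
                      feasible=lambda cex: False,       # x cannot jump 0 -> 2
                      interpolant=lambda cex: "x <= 1",
                      preds=set())
# result == "safe", preds == {"x <= 1"}
```

The interpolant plays exactly the role in the loop that the talk discusses: it is a formula separating the reachable prefix of the spurious trace from its bad suffix, and tracking it rules that trace out of the next abstraction.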

- November 29: Goran Frehse, Verimag. “Forward/Backward Abstraction Refinement with PHAVer”

http://www-verimag.imag.fr/~frehse/frehse_vita.htm

Abstract: Embedded controllers can exhibit complex behavior because instantaneous, event-driven changes may interact with the continuous, time-driven evolution of the physical environment in ways that are difficult to predict. Because of their mixed discrete-continuous nature, such systems are called hybrid, and they are used to model a wide range of safety-critical systems, such as control systems in aviation or automotive and chemical process applications. A common way to formally verify the safety of the design of such a system is to compute the set of reachable states and check whether it contains any states that are deemed forbidden. Because hybrid systems are notoriously difficult to analyze, one is usually forced to use overapproximation techniques. In reachability analysis, such overapproximations are employed in two ways: to find conservative bounds on the solutions of the dynamic equations and to efficiently represent the set of reachable states.

Our verification tool PHAVer computes reachability for hybrid systems with affine dynamics and a wide range of nondeterminism in a formally sound way, i.e., sound both algorithmically and numerically. It does so by partitioning the state space and computing piecewise constant bounds on the derivatives for each partition. The reachable states can then be computed efficiently with geometric operations on polyhedra. While PHAVer computes with exact arithmetic and solves the dynamic equations by brute-force overapproximation, it outperforms other tools, including some that use nonconservative approximation techniques, on several benchmarks and is at present arguably the most powerful verification tool for hybrid systems.

In this talk, we present a significant improvement to PHAVer's reachability algorithm that can yield dramatic reductions in time and memory consumption. Initially, we compute reachability with very large partitions, thus quickly but grossly overapproximating the behavior. We intersect the reachable states with the forbidden states and propagate this information back to the initial states with a computation of the backward reachable states. By iterating back and forth between initial and forbidden states at decreasing partition sizes we obtain an efficient partitioning of the parts of the state space that are relevant to the reachability of the forbidden states, while using very coarse partitions in parts that are not. The benefit of this approach is illustrated with benchmark examples and a voltage-controlled oscillator circuit that was previously beyond the capabilities of verification tools.

PHAVer is available at http://www.cs.ru.nl/~goranf/.
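The back-and-forth restriction between initial and forbidden states can be illustrated on a purely discrete toy, where the cells of a partition are abstracted as graph nodes. All names here are illustrative and have nothing to do with PHAVer's actual API or polyhedral machinery:

```python
from collections import deque

def reachable(succ, sources):
    """Standard BFS: all nodes reachable from `sources` in successor map `succ`."""
    seen = set(sources)
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def relevant_region(succ, init, bad):
    """Only cells that are both forward-reachable from `init` and
    backward-reachable from `bad` can lie on a violating run; all other
    cells can keep their coarse partition in the next refinement round."""
    fwd = reachable(succ, init)
    pred = {}
    for u, vs in succ.items():
        for v in vs:
            pred.setdefault(v, set()).add(u)
    bwd = reachable(pred, bad & fwd)
    return fwd & bwd

# Toy partition graph: cells 'b' and 'd' are forward-reachable but cannot
# reach the forbidden cell 'e', so they are pruned from the refinement region.
succ = {'a': ['b', 'c'], 'b': ['d'], 'c': ['e'], 'd': [], 'e': []}
region = relevant_region(succ, init={'a'}, bad={'e'})   # {'a', 'c', 'e'}
```

The point of the intersection is exactly the one made above: refinement effort is spent only where both directions agree it might matter.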

- November 24: Laurent Doyen, Uni Brussels. “On the implementability of Timed and Hybrid Systems”

Abstract: Timed and hybrid systems are dynamical systems with both discrete and continuous components. A paradigmatic example of a hybrid system is a digital embedded control program for an analog plant environment, like a furnace or an airplane: the controller state moves discretely between control modes, and in each control mode, the plant state evolves continuously according to physical laws. Several verification and control problems have been studied for those systems. However, the question of implementability has received attention only recently.

When a high-level description of a controller has been proven correct, it is valuable to ensure that an implementation of that model can be obtained in a systematic way that preserves correctness. We study this question in the context of timed systems and introduce a new semantics for timed automata, the Almost ASAP semantics, which is implementable, decidable, and tool-supported. With this semantics, we can derive implementations of timed systems that are correct by construction.

On the other hand, we attack the problem of implementability of hybrid systems using a slight generalization of the well-known games of incomplete information. We give a new method to solve such games based on fixed-point computation that does not involve determinization, in contrast to the classical method proposed by Reif. So far, we have applied this method to finite games and discrete-time hybrid games. In the future, we hope to extend our approach to general hybrid systems in continuous time.

This is joint work with Martin De Wulf and Jean-Francois Raskin.

- November 17: Joel Ouaknine, Oxford University Computing Laboratory. “SAT-based Acceleration of Decision Procedures: An Application to Presburger Arithmetic”

Abstract: We present a new SAT-based framework for accelerating decision procedures for quantifier-free theories, and illustrate our approach by applying it to Presburger arithmetic (i.e., integer linear arithmetic). Given a quantifier-free Presburger formula f, our algorithm invokes a SAT solver to produce proofs of unsatisfiability of approximations of f. These proofs are in turn used to generate abstractions of f as inputs to the decision procedure. The SAT-encodings of the approximations of f are obtained by instantiating the variables of the formula over finite domains. The satisfying integer assignments provided by the decision procedure are then used to selectively increase domain sizes and generate fresh SAT-encodings of f. The efficiency of this approach derives from the ability of SAT solvers to extract small unsatisfiable cores, leading to small abstracted formulas. We present experimental results which suggest that our algorithm is considerably more efficient than directly invoking the decision procedure on the original formula.
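As a drastically simplified sketch of the outer refinement loop, with the SAT solver and unsat-core extraction replaced by brute-force enumeration, and the interaction with the Presburger decision procedure omitted entirely, one might write the following. All names are illustrative:

```python
from itertools import product

def check_over_domain(constraints, nvars, bound):
    """Brute-force stand-in for the SAT check: search [-bound, bound]^nvars
    for an assignment satisfying every constraint (a predicate on the tuple)."""
    for assignment in product(range(-bound, bound + 1), repeat=nvars):
        if all(c(assignment) for c in constraints):
            return assignment
    return None

def solve(constraints, nvars, max_bound=16):
    """Outer loop: retry with ever larger finite domains.  The algorithm in
    the talk instead SAT-encodes each finite-domain instantiation, uses the
    small unsatisfiable cores to abstract the formula for the Presburger
    decision procedure, and grows domains guided by its integer models."""
    bound = 1
    while bound <= max_bound:
        model = check_over_domain(constraints, nvars, bound)
        if model is not None:
            return model
        bound *= 2
    return None

# x - y == 5 has no model in [-1, 1]^2 or [-2, 2]^2, so the domain bound
# is doubled twice before a model appears.
model = solve([lambda a: a[0] - a[1] == 5], nvars=2)
```

The toy keeps only the "instantiate over finite domains, enlarge on failure" skeleton; the efficiency claimed in the abstract comes precisely from the parts elided here, namely the SAT solver's ability to return small unsatisfiable cores.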

- November 11: Sean McDirmid, LAMP. “SuperGlue: Component Programming with Object-oriented Signals.”

Abstract: Components that process state, such as many kinds of user-interface widgets, are important because they make programs more usable with their ability to handle continuously changing data. Unfortunately, the code that glues together such state-processing components is often difficult to write because it is exposed to many complicated event handling details. This paper introduces SuperGlue as a declarative language for gluing together state-processing components through time-varying values known as signals, which hide event handling details from glue code. To support the construction of realistic programs with an unbounded number of signal connections, SuperGlue code can abstract over signal connections with extensible object types. With this combination of signals and objects, programmers can build large state-processing programs with substantially less glue code when compared to conventional approaches. For example, the SuperGlue implementation of an email client is around half the size of an equivalent Java implementation.
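The signal idea can be illustrated with a small Python toy. This is emphatically not SuperGlue's syntax or object model, which is declarative; it only shows how a time-varying value hides event-handler plumbing from glue code:

```python
class Signal:
    """A minimal time-varying value: connections see the current value
    immediately and are re-run whenever the value changes."""
    def __init__(self, value):
        self._value = value
        self._sinks = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        for sink in self._sinks:
            sink(v)

    def connect(self, sink):
        sink(self._value)          # push the current value right away
        self._sinks.append(sink)

# Glue code: one declarative connection replaces hand-written event-handler
# registration between a model field and a (hypothetical) label widget.
class Label:
    text = ""

unread = Signal(0)
label = Label()
unread.connect(lambda n: setattr(label, "text", f"{n} unread"))
unread.value = 3                   # label.text is now "3 unread"
```

The glue line never mentions events; updates flow through the connection whenever the model side changes, which is the property the abstract attributes to signals.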

- November 3: Alexandre Bergel. “Classbox/J: Controlling the Scope of Change in Java”

Abstract: Unanticipated changes to complex software systems can introduce anomalies such as duplicated code, suboptimal inheritance relationships and a proliferation of run-time downcasts. Refactoring to eliminate these anomalies may not be an option, at least in certain stages of software evolution. Classboxes are modules that restrict the visibility of changes to selected clients only, thereby offering more freedom in the way unanticipated changes may be implemented, and thus reducing the need for convoluted design anomalies. In this paper we demonstrate how classboxes can be implemented in statically-typed languages like Java. We also present an extended case study of Swing, a Java GUI package built on top of AWT, and we document the ensuing anomalies that Swing introduces. We show how Classbox/J, a prototype implementation of classboxes for Java, is used to provide a cleaner implementation of Swing using local refinement rather than subclassing.

- October 27: Hugo Gimbert, Warsaw University. “Positional payoff games”

Abstract: Games are a natural tool for modeling interactions between selfish agents. We focus on the class of full-information infinite-duration payoff games with finitely many states.

The case where only two agents interact and their interests are strictly opposed corresponds to the class of antagonistic payoff games, which is a common tool in the area of system modelling and verification. In this framework, the most popular games are undoubtedly parity, mean-payoff and discounted games. They share the useful property that both players have optimal positional strategies. Such games are called positional games.

In the first part of the talk, we address and solve the problem of characterizing those games that are positional, and give some applications. Then, we give positive and negative results about extending this characterization to the more general case of Nash equilibria in multiplayer games.
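What "positional" buys algorithmically can be seen on the discounted case, one of the positional classes named above. The following toy value iteration is purely illustrative (not from the talk); note that the extracted optimal strategy is a single fixed choice per state, with no dependence on the history of the play:

```python
def discounted_game(succ, reward, owner, lam=0.5, iters=200):
    """Value iteration for a discounted payoff game on a finite graph:
    v(u) = reward(u) + lam * best over successors, where owner[u]
    ('max' or 'min') says whose turn it is at u."""
    v = {u: 0.0 for u in succ}
    for _ in range(iters):
        new = {}
        for u in succ:
            best = max if owner[u] == 'max' else min
            new[u] = reward[u] + lam * best(v[w] for w in succ[u])
        v = new
    # Extract a positional (memoryless) optimal strategy: one fixed
    # successor per state, independent of the play's history.
    strat = {}
    for u in succ:
        best = max if owner[u] == 'max' else min
        strat[u] = best(succ[u], key=lambda w: v[w])
    return v, strat

# Tiny one-player instance: from 'a' the max player can enter either the
# rewarding self-loop at 'b' or the worthless self-loop at 'c'.
succ = {'a': ['b', 'c'], 'b': ['b'], 'c': ['c']}
reward = {'a': 0.0, 'b': 1.0, 'c': 0.0}
owner = {u: 'max' for u in succ}
values, strat = discounted_game(succ, reward, owner)
# values['b'] approaches 1/(1-0.5) = 2.0, and strat['a'] == 'b'
```

The characterization question of the talk is, in effect, which payoff functions beyond the discounted, mean-payoff and parity cases still admit such history-free optimal strategies.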

- October 20: Carlo A. Furia. “Compositionality and Integration for Real-Time Systems”

Abstract: One of the main obstacles to applying formal methods to large real systems is that they do not scale up: they become practically difficult to use as the size of the modeled system grows considerably. In software engineering, the traditional “old-fashioned” prescriptions for the problem of scaling come under the notions of “modularization” and “integration”.

In this talk, I will discuss an approach to the application of *modularization and integration* of *formal methods* for the specification and verification of *real-time systems*. The reference framework is that of descriptive (a.k.a. declarative) specifications written in TRIO, an expressive metric temporal logic, with verification performed using human-assisted deductive techniques.

I will first present a *compositional framework* based on an ad hoc operator to express rely/guarantee formulas, a simple “syntactic” inference rule to compose modular rely/guarantee specifications, and a methodology to guide the verification process. The framework supports deductive verification of modular specifications written using TRIO's encapsulation features, and is applicable to both discrete-time and continuous-time models.
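Purely as orientation, and not the TRIO-specific rule presented in the talk: classical rely/guarantee composition rules have the following schematic shape, where $\langle R, G\rangle\,M$ means "module $M$ guarantees $G$ as long as its environment satisfies the rely $R$":

```latex
\frac{\langle R_1, G_1 \rangle\, M_1
      \qquad \langle R_2, G_2 \rangle\, M_2
      \qquad R \wedge G_2 \Rightarrow R_1
      \qquad R \wedge G_1 \Rightarrow R_2}
     {\langle R,\; G_1 \wedge G_2 \rangle\, M_1 \parallel M_2}
```

Each module's rely is discharged by the overall rely together with the other module's guarantee; the apparent circularity of this discharge is exactly what a sound composition rule, such as the one in the talk, must handle with care.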

Then, I will discuss how to