
Upcoming Talks


The synthesis problem is concerned with automatically constructing a correct
system from a specification. In LTL synthesis the specification is given as a set
of properties written in Linear Temporal Logic (LTL). The constructed system is
a reactive system that takes input values and provides output values that are
correct with respect to the specification.

In my talk I will start with an introduction to the theory necessary to solve
the synthesis problem for LTL. Then I will focus on explaining how synthesis
can be used to repair faulty finite state systems. Finally, I will
discuss our efforts to synthesize an arbiter for a commercial bus from its
formal specifications.
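
As a toy illustration (ours, not the commercial specification from the talk), a two-client arbiter can be specified by the LTL properties
\[
G\,\neg(g_1 \wedge g_2), \qquad G\,(r_1 \rightarrow F\,g_1), \qquad G\,(r_2 \rightarrow F\,g_2),
\]
and the synthesized reactive system must read the request inputs $r_1, r_2$ and produce grant outputs $g_1, g_2$ so that every input sequence satisfies mutual exclusion and eventual service.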

Atomic blocks allow programmers to delimit sections of code as
`atomic', leaving the language's implementation to enforce atomicity.
Existing work has shown how to implement atomic blocks over
word-based transactional memory that provides scalable
multi-processor performance without requiring changes to the basic
structure of objects in the heap. However, these implementations
perform poorly because they interpose on all accesses to shared memory
in the atomic block, redirecting updates to a thread-private log which
must be searched by reads in the block and later reconciled with the heap
when leaving the block.
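
To make that cost concrete, here is a minimal sketch (ours, not the actual Microsoft Research implementation) of the kind of thread-private logging the paragraph describes:

    // Illustrative sketch only: a thread-private write log in the style
    // described above, not the implementation from the talk.
    import java.util.ArrayList;
    import java.util.List;

    final class WriteLog {
        private static final class Entry {
            final Object obj; final int field; final Object value;
            Entry(Object obj, int field, Object value) {
                this.obj = obj; this.field = field; this.value = value;
            }
        }

        private final List<Entry> entries = new ArrayList<>();

        // Every write inside an atomic block is redirected to the log.
        void write(Object obj, int field, Object value) {
            entries.add(new Entry(obj, field, value));
        }

        // Every read inside an atomic block must search the log first:
        // exactly the per-access overhead that 'direct access' avoids.
        Object read(Object obj, int field, Object heapValue) {
            for (int i = entries.size() - 1; i >= 0; i--) {
                Entry e = entries.get(i);
                if (e.obj == obj && e.field == field) return e.value;
            }
            return heapValue; // not written in this block: use the heap
        }
    }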

In this talk I'll describe work we've been doing at Microsoft Research
to improve the performance of atomic blocks: (1) we introduce a new
`direct access' implementation that avoids searching thread-private
logs, (2) we develop compiler optimizations to reduce the amount of
logging (e.g. when a thread accesses the same data repeatedly in an
atomic block), (3) we use runtime filtering to detect duplicate log
entries that are missed statically, and (4) we also add a series of
GC-time techniques to compact the logs generated by long-running
atomic blocks. The combination of these techniques lets our
implementation support short-running scalable concurrent benchmarks
with less than 50% overhead over a non-thread-safe baseline. We
support long atomic blocks containing millions of shared memory
accesses with a 2.5-4.5x slowdown.

I'll describe some interesting recent observations about the semantics
of atomic blocks and their interaction with the language's memory
model: what does it mean for a programmer to use atomic blocks
“correctly”… and what impact do different decisions have on the implementation?

In my diploma project I performed a series of case studies in modeling
and automated verification of authentication protocols and key exchange
protocols used as components of various web services. Verification was performed
using the AVISPA toolset, and the modeling done in its input language
HLPSL. I have modeled the Diffie-Hellman and Challenge-Response
protocols as steps towards the modeling of the ISO 9798-3 protocol. This
protocol has not been previously modeled with the AVISPA toolset. The
description of how to combine Diffie-Hellman with Challenge-Response in
order to obtain the ISO 9798-3 protocol comes from a technical report by
A. Datta on deriving the JFK protocol. I have also modeled the protocols
present in Microsoft WSE 3.0, based on a paper by A. Gordon on semantics
for web service authentication. With further details provided by A. Gordon
I have shown that the AVISPA toolset is able to verify web-service-specific
authentication protocols, although it uses a different formalism
for describing protocol models. Finally, I have extracted a model of the
authentication protocol used in the Globus Toolkit from its documentation.
The verification checked whether any man-in-the-middle, replay, or
multiple-session attacks were possible on the protocols. All of the models were
successfully verified using the OFMC backend of the AVISPA toolset. The
conclusion I drew from these case studies is that all of the models are safe
against these kinds of attacks.
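
In Alice-Bob notation, the combination can be sketched as the signed Diffie-Hellman exchange below (a simplified reading, not the HLPSL model itself):
\[
\begin{array}{l}
A \rightarrow B : g^{x} \\
B \rightarrow A : g^{y},\ \mathit{sig}_B(g^{y}, g^{x}, A) \\
A \rightarrow B : \mathit{sig}_A(g^{x}, g^{y}, B)
\end{array}
\]
The Diffie-Hellman half establishes the shared key $g^{xy}$, while the challenge-response half, in which each party signs the fresh exponentials together with the peer's identity, provides mutual authentication.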

Object invariants describe the consistency of object-oriented data
structures and are central to reasoning about the correctness of
object-oriented software. Yet, reasoning about object invariants in the
presence of object references, methods, and subclassing is difficult. This
talk describes a methodology for specifying and verifying object-oriented
programs. It also presents an extension of the basic methodology to
multi-threaded programs, which allows one to prove the absence of data
races. The methodology allows sound, modular reasoning.
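
As a flavor of such a methodology (our Java-style sketch, not the talk's notation): the key idea is that an object's invariant may only be broken inside an explicitly marked window, so the verifier knows exactly where it must be re-established.

    // Illustrative sketch: an explicit 'expose' window for invariant updates.
    final class Interval {
        private int lo, hi;              // invariant: lo <= hi
        private boolean exposed = false; // true while the invariant may break

        void shift(int d) {
            exposed = true;              // unpack: invariant may be violated
            lo += d;                     // the intermediate state between
            hi += d;                     // these two writes can break lo <= hi
            exposed = false;             // pack: invariant must hold again
            assert lo <= hi;
        }
    }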

The generation of invariants is a cornerstone in system verification,
and for this reason, it has been an outstanding research problem for a
long time. Numerous methods have been developed for particular classes
of invariant properties, most of them of linear nature, such as
intervals (e.g., 0 ≤ x ≤ 1), linear inequalities (e.g., x ≤ y) and
linear congruences (e.g., x = y (mod 2)). This talk addresses the
problem of automatically generating polynomial non-linear invariants
(e.g., x=y^2). In the first place, the abstract domain of ideals will
be introduced for the generation of invariant conjunctions of
polynomial equalities of bounded degree. It can deal with polynomial
assignments (e.g., x := x + xy), as well as polynomial equality guards
(such as x = 0) and polynomial disequality guards (such as x != 0).
Moreover, it will be shown how these techniques can be applied to the
verification of imperative programs as well as to other systems, such
as Petri nets and hybrid systems. Finally, some ideas will be given on
the extension of these techniques for the generation of polynomial
inequalities as invariants.
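
As a small worked example (ours): the loop
\[
x := 0;\ y := 0;\ \ \textbf{while}\ (\ast)\ \ \{\, x := x + 2y + 1;\ \ y := y + 1 \,\}
\]
maintains the polynomial equality $x = y^2$, since $(y+1)^2 = y^2 + 2y + 1$; an ideal-based domain can discover such equalities automatically, whereas purely linear domains cannot express them.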

Developing correct multithreaded software is very challenging, due to the
potential for unintended interference between threads. We present type systems
for verifying two key non-interference properties in multithreaded software:
race-freedom and atomicity. Verifying atomicity is particularly valuable since
atomic procedures can be understood according to their sequential semantics,
which significantly simplifies subsequent (formal and informal) correctness
arguments. We will describe our experience applying these type systems and
corresponding type inference algorithms to standard multithreaded benchmarks and
other applications, and illustrate some defects revealed by this approach.
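
The distinction between the two properties deserves a standard illustration (ours, not from the talk): a procedure can be free of data races and still fail to be atomic.

    // Race-free but not atomic vs. atomic: a classic illustration.
    final class Account {
        private int balance;
        private final Object lock = new Object();

        int read()      { synchronized (lock) { return balance; } }
        void set(int v) { synchronized (lock) { balance = v; } }

        // Race-free: every access to 'balance' holds the lock. Not atomic:
        // another thread may run between read() and set(), losing an update.
        void deposit(int amount) {
            int b = read();
            set(b + amount);
        }

        // Atomic: the whole read-modify-write is one critical section, so
        // it can be understood by its sequential semantics.
        void depositAtomic(int amount) {
            synchronized (lock) { balance += amount; }
        }
    }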

The remarkable relations between topological and combinatorial complexity of regular languages were first observed by Klaus Wagner. In his seminal paper published in 1977 he described a hierarchy which coincided with the Wadge hierarchy restricted to regular omega-languages, and at the same time, was a refinement of the Mostowski-Rabin? index hierarchy. We follow his steps and generalize these results to the case of languages of infinite trees recognised by deterministic parity automata.

We investigate the existence of continuous reductions by means of Wadge games redefined in terms of tree automata. Using these games, we build canonical automata representing the Wadge degrees of all deterministically recognizable tree languages. Then we describe a procedure calculating the canonical form of a given automaton. This procedure works within the time of determining the productive states of the automaton (the exact complexity is unknown, but not worse then exponential).

Thus we obtain a complete description of the Wadge hierarchy of deterministically recognizable tree languages, as well as a procedure to calculate the exact position of a given language in this hierarchy.

Consensus is a widely-studied fundamental problem in theoretical and practical
distributed computing. Roughly speaking, it allows processes to agree on a
common output. We are interested in the implications that various timeliness
and failure detector assumptions have on the performance of consensus
algorithms that exploit them. We present a general framework, GIRAF, for
expressing such assumptions, and reasoning about the performance of indulgent
algorithms (algorithms that tolerate an arbitrary period of asynchrony). We
investigate several indulgent models using GIRAF and give upper and lower
bounds for the number of communication rounds needed to reach consensus in
these models.
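
Schematically, such round-based frameworks let an algorithm be expressed as what to send and how to update state each round, while the model (timeliness and failure detector assumptions) determines which messages arrive on time. The interface below is a hypothetical illustration, not GIRAF's actual API.

    import java.util.List;

    // Hypothetical round-by-round view of an indulgent consensus algorithm.
    interface RoundBasedConsensus<V, M> {
        M firstMessage(V input);                // message for round 1
        M onRound(int round, List<M> received); // next message, computed from
                                                // whatever arrived on time
        boolean hasDecided();
        V decision();                           // valid once hasDecided()
    }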

API evolution is the process of migrating an inter-library interface
from one version to another. Such a migration requires checking that
all libraries which interact through the interface have been updated.
Libraries can be updated one by one if there is a transition period
during which both updated and non-updated libraries can communicate
through some transitional version of the interface. Static type
checking can verify that all libraries have been updated, and thus
that a transition period may end and the interface be moved forward
safely. Anti-deprecation is a novel type-checking feature that allows
static checking for more kinds of interface evolution.
Anti-deprecation, along with the more familiar deprecation, is
formally studied as an extension to Featherweight Java. This formal
study unearths weaknesses in two widely used deprecation checkers.
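
One plausible reading in Java terms (the @AntiDeprecated annotation below is hypothetical; only @Deprecated exists in the JDK): deprecation manages removing a method from an interface, while anti-deprecation manages adding one.

    import java.lang.annotation.*;

    // Hypothetical marker for a method newly added to an interface:
    // implementations should provide it, but callers must not rely on it
    // until the transition period ends.
    @Retention(RetentionPolicy.SOURCE)
    @Target(ElementType.METHOD)
    @interface AntiDeprecated {}

    interface Stream {
        @Deprecated
        int readByte();            // on its way out: still implemented,
                                   // but no new callers allowed

        @AntiDeprecated
        int readChunk(byte[] buf); // on its way in: implement it now,
                                   // call it only after the transition
    }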

We study infinite stochastic games played by $n$ players on a finite graph
with goals specified by sets of infinite traces.
The games are {\em concurrent} (each player simultaneously and independently
chooses an action at each round), {\em stochastic} (the next
state is determined by a probability distribution depending on the current
state and the chosen actions), {\em infinite} (the game continues for an
infinite number of rounds), {\em nonzero-sum} (the players' goals
are not necessarily conflicting), and undiscounted.
We show that if each player has an upward-closed objective,
then there exists an $\varepsilon$-Nash equilibrium in memoryless strategies,
for every $\varepsilon > 0$; exact Nash equilibria, however, need not exist.
Upward-closure of an objective means that if a set $Z$ of infinitely
repeating states is winning, then all supersets of $Z$ of infinitely repeating
states are also winning.
Memoryless strategies are strategies that are independent of history of plays
and depend only on the current state.
We also study the complexity of finding values (payoff profile) of an
$\varepsilon$-Nash equilibrium.
We show that the values of an $\varepsilon$-Nash equilibrium in nonzero-sum
concurrent games with upward-closed objectives for all players can be computed by
computing $\varepsilon$-Nash equilibrium values of nonzero-sum concurrent games with
reachability objectives for all players, followed by a polynomial-time procedure.
As a consequence we establish that values of an $\varepsilon$-Nash equilibrium
can be computed in TFNP (total functional NP), and hence in EXPTIME.

We study observation-based strategies for two-player turn-based games
on graphs with omega-regular objectives. An observation-based
strategy relies on imperfect information about the history of a play,
namely, on the past sequence of observations. Such games occur in the
synthesis of a controller that does not see the private state of the
plant. Our main results are twofold. First, we give a fixed-point
algorithm for computing the set of states from which a player can win
with a deterministic observation-based strategy for any omega-regular
objective. The fixed point is computed in the lattice of antichains
of state sets. This algorithm has the advantages of being directed by
the objective and of avoiding an explicit subset construction on the
game graph. Second, we give an algorithm for computing the set of
states from which a player can win with probability 1 with a
randomized observation-based strategy for a Büchi objective. This
set is of interest because in the absence of perfect information,
randomized strategies are more powerful than deterministic ones. We
show that our algorithms are optimal by proving matching lower bounds.
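
The data structure behind the algorithm can be sketched as follows (our illustration): downward-closed sets of state sets are represented by their maximal elements only, so the fixed-point iteration never stores two ⊆-comparable sets.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    // Illustrative sketch: an antichain of subset-maximal state sets.
    final class Antichain {
        private final List<Set<Integer>> maximal = new ArrayList<>();

        // Insert s unless a stored set subsumes it; drop sets s subsumes.
        void add(Set<Integer> s) {
            for (Set<Integer> t : maximal) {
                if (t.containsAll(s)) return; // s adds nothing new
            }
            maximal.removeIf(s::containsAll); // subsets of s are redundant
            maximal.add(s);
        }

        List<Set<Integer>> elements() { return maximal; }
    }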

Several verification methods involve reasoning about multi-valued
systems, in which an atomic proposition is interpreted at a state as a
lattice element, rather than a Boolean value.
We develop an automata-theoretic framework
for reasoning about multi-valued objects, and describe its
application. Our framework is based on {\em lattice automata} on
finite and infinite words, which assign to each input word a lattice
element. We study the expressive power of lattice automata, their
closure properties, the blow-up involved in related constructions, and
decision problems for them.
Lattice automata exhibit interesting features from a theoretical point
of view. For example, we show that
while determinization of lattice automata involves a blow-up that
depends on the size of the lattice, such a blow-up can be avoided when
we complement lattice automata. Thus, complementation is easier than
determinization. In addition to studying the theoretical aspects of
lattice automata, we describe how they can be used for efficient
reasoning about a multi-valued extension of LTL.
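
For intuition (our paraphrase, ignoring initial and acceptance values): each transition of a lattice automaton carries a lattice element, a run evaluates to the meet of the values along it, and the value assigned to a word is the join over all runs:
\[
\mathit{val}(r) = \bigwedge_i \ell(q_i, \sigma_{i+1}, q_{i+1}), \qquad
\mathcal{A}(w) = \bigvee_{r \text{ a run on } w} \mathit{val}(r).
\]
Over the two-element lattice this collapses to ordinary nondeterministic automata.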

We present a static analysis based on separation logic that supports
counter-example guided abstraction refinement. We handle both
refinement of the heap shape information and of the arithmetic portion
of the abstract state. One interesting aspect of our approach is the
generation of counter-example programs (possibly involving loops)
instead of traces. Our analysis is also modular in that any static
analysis tool can be used to perform refinement of the arithmetic
portion of the abstraction.

Abstract: DDP is the analysis approach of pursuing inter-related goals speculatively. Demand-driven (DD) algorithms in general take the approach of pursuing goals that depend on each other for their solutions. DDP enriches the base DD approach by using pruning to limit the number of actively pursued goals. In this talk I will: overview DDP, describe DDP's genesis in an application where linear-time algorithms are too slow, present experimental evidence about DDP's precision vs. CPU-time envelope, and outline possible future directions for new applications, e.g. supporting whole-program optimization.

Abstract: Parity conditions and Streett/Rabin conditions are central in game theory.
There are two main versions of these games: the so-called “weak” ones deal with the set of occurring states, while the classical games deal with the set of states occurring infinitely often.
Between these two models, Chatterjee and Henzinger defined a hybrid model, called “finitary”. As in classical games, “bad” events (odd priorities or requests) that occur finitely often are not considered. But for those bad events that occur infinitely often, the corresponding good event (smaller even priority or corresponding answer) must occur within a bounded delay. Chatterjee and Henzinger proved that these games are determined for parity and Streett winning conditions, and proposed algorithms to compute winning regions and strategies.
We present here a faster way to solve such finitary games. We solve finitary parity games in polynomial time (O(n³)), and finitary Streett games in time O(4^k·k²·n²·m²).
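
Formally (our paraphrase of the finitary parity condition): a play $\rho$ is winning if the delays between odd priorities and the next smaller even priority stay bounded in the limit,
\[
\limsup_{k \to \infty} \mathit{dist}_k(\rho) < \infty,
\]
where $\mathit{dist}_k(\rho)$ is the number of steps from the $k$-th odd priority in $\rho$ to the next occurrence of a smaller even priority.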

Abstract: During the system-level design process of a real-time embedded system, a designer typically faces various questions related to the system performance: Do the timing properties of a certain architecture meet the design requirements? What is the on-chip memory demand? What are the different CPU and bus utilizations? Which resource acts as a bottleneck? These and other similar questions are generally hard to answer for embedded system architectures that are highly heterogeneous, parallel and distributed and thus inherently complex. In the first part of this talk I will give an introduction to the field of system-level performance analysis and its challenges, and I will introduce the approach of Modular Performance Analysis with Real-Time Calculus that we developed at ETH Zurich. In the second part of this talk, I will introduce the theory of Real-Time Interfaces, an approach to combine the areas of Interface-Based Design and Real-Time Calculus for interface-based design of real-time systems.
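
To give the flavor of the calculus (standard Real-Time Calculus bounds, quoted here for intuition): arrival curves $\alpha(\Delta)$ upper-bound the events that can arrive in any time window of length $\Delta$, service curves $\beta(\Delta)$ lower-bound the available resource, and, for example, the service remaining after a stream has been processed and the backlog (buffer demand) are bounded by
\[
\beta'(\Delta) = \sup_{0 \le \lambda \le \Delta} \big(\beta(\lambda) - \alpha(\lambda)\big),
\qquad
\mathit{backlog} \le \sup_{\lambda \ge 0} \big(\alpha(\lambda) - \beta(\lambda)\big).
\]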

Abstract: In this talk we consider the model checking problem for Metric Temporal Logic (MTL), a version of Linear Temporal Logic with timing constraints, interpreted over dense time. We focus on the `safety' fragment of MTL, including invariance and bounded liveness, and we allow infinitely precise or `punctual' specifications (for example, that something will become true in exactly one time unit). Our main result is to show EXPSPACE-completeness of model checking when the timing constraints in the logic are encoded in binary, and PSPACE-completeness when they are encoded in unary. Our upper and lower complexity bounds involve a connection between MTL formulas and reversal-bounded Turing machines.
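For example (our rendering in standard MTL notation), the punctual specification mentioned above is $F_{=1}\,p$ (p holds in exactly one time unit), and a typical bounded-liveness property in the safety fragment is $G\,(p \rightarrow F_{\le 1}\,q)$ (every p is followed by a q within one time unit).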
Starting with the foundational work of Alur, Feder and Henzinger, existing decision procedures for MTL crucially involve restricting the precision of the timing constraints, or restricting the precision of the semantics by adopting an integer-time model. In both cases the model checking problem is EXPSPACE-complete. We found it surprising that by staying with safety properties one can accommodate dense time and punctual specifications, with no complexity penalty.
This is joint work with Patricia Bouyer, Nicolas Markey and Joel Ouaknine.

Abstract: We describe a new program termination analysis designed to handle imperative programs whose termination depends on the mutation of the program's heap. We first describe how an abstract interpretation can be used to construct a finite number of relations which, if each is well-founded, implies termination. We then give an abstract interpretation based on separation logic formulae which tracks the depths of pieces of heaps. Finally, we combine these two techniques to produce an automatic termination prover. We show that the analysis is able to prove the termination of loops extracted from Windows device drivers that could not be proved terminating before by other means; we also discuss a previously unknown bug found with the analysis.
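
The first step rests on a disjunctive well-foundedness principle in the style of Podelski and Rybalchenko's transition invariants: if the transitive closure of the program's transition relation $R$ is covered by finitely many relations, each well-founded, then the program terminates,
\[
R^{+} \subseteq T_1 \cup \dots \cup T_n \ \wedge\ \text{each } T_i \text{ well-founded} \ \Longrightarrow\ R \text{ is terminating.}
\]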

Abstract: In this talk, the basic mathematical requirements needed to formally construct and prove complex software systems are presented. It will be illustrated with an example dealing with distributed computations.

Abstract: It is often useful to revert a long-running computation to an earlier state. Transient faults, for example, can often be repaired by re-executing the code in which they occur. While relatively straightforward to describe in a sequential setting through the capture and application of continuations, it is less clear how to ascribe a meaningful language-level semantics for safe re-execution in the presence of concurrency. In this talk, we present a language-based checkpointing and recovery mechanism for Concurrent ML. Stabilizers permit the construction of first-class lightweight per-thread monitors that can be used to efficiently record and restore globally consistent checkpoints.
We have explored the use of stabilizers for transient fault recovery, for expressing open-nested multi-threaded transactions, and as a foundation for speculative threading. Experimental results on realistic server-side applications indicate that the overheads of using stabilizers are small, on the order of four to six percent, leading us to conclude that they are a viable facility for defining safe modular checkpoints for complex concurrent programs. This is joint work with Lukasz Ziarek and Philip Schatz.

Abstract: State explosion is the curse of concurrency. Thread-modular verification à la Flanagan/Qadeer is an appealing approach that has shown its practical usefulness in fighting the state explosion problem for verifying concurrent systems. The method trades the gain of PTIME efficiency for the possibility of inconclusive responses due to a coarse abstraction. The method is given as an inference system which does not quantify that abstraction. Not knowing what the abstraction consists of and what loss of precision it entails is unsatisfying, at least conceptually. We investigate the formalization of the method as an abstract interpretation. We are able to identify the abstraction; its definition involves Cartesian products of sets. This result seems to be the first step in a systematic study of similar abstractions of the state explosion problem.
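
Schematically (our rendering, not necessarily the authors' exact formulation): for states consisting of a shared part $g$ and one local part per thread, the abstraction keeps only the per-thread projections and forgets correlations between threads,
\[
\alpha(S) = \{\, (g, l_1, \dots, l_n) \mid \forall i:\ (g, l_i) \in \pi_i(S) \,\},
\]
where $\pi_i$ projects a state set onto the shared part paired with thread $i$'s local part.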

Abstract: This talk introduces a concurrency control abstraction, called preemptible atomic regions (PARs). PARs are a restricted form of software transactional memory that provides a convincing alternative to mutual exclusion monitors in commodity uniprocessor real-time systems. I will report on the implementation of a real-time virtual machine with built-in support for atomicity. The predictability requirements of real-time systems and the rich source language (Java) present a number of challenges for an efficient implementation of atomicity. Furthermore, interoperability with existing code requires smooth integration with existing implementations of monitors that support priority inheritance or priority ceiling emulation. PARs have been evaluated on several real-time programs, including a fielded avionics application. The results show that atomicity can improve the response time of high-priority tasks without noticeably decreasing overall throughput.

Abstract: In this talk I shall present a method for verifying programs handling counter variables and singly-linked lists. The main difficulty is due to the fact that lists may share elements and even have cycles. I will present a logic used to specify both shape and arithmetic properties of the program state. The decidability of the validity problem in the specification logic is proved next. The semantics of the program is given in terms of abstract graph transformations. This approach allows one to model the program as a counter automaton, and use existing techniques to compute invariants and decide termination. Finally, some experimental results are presented.
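
As a schematic illustration (ours, not the talk's construction): a list traversal can be abstracted by a counter that tracks how many nodes lie ahead, so termination of the heap loop reduces to termination of a counter program.

    // Illustrative only: from a heap-manipulating loop to a counter program.
    final class CounterAbstraction {
        static final class Node { Node next; }

        // Concrete loop: terminates iff the list reachable from x is acyclic.
        static void traverse(Node x) {
            while (x != null) x = x.next;
        }

        // Counter abstraction: c counts the nodes ahead of x on an acyclic
        // list; every iteration strictly decreases c, so the loop terminates.
        static void traverseAbstract(int c) {
            while (c > 0) c = c - 1;
        }
    }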

Abstract: Many software model checkers are based on predicate abstraction. Values of variables in branching conditions are represented abstractly using predicates. The strength of this approach is its path-sensitive nature. However, if the control flow depends heavily on the values of memory cells on the heap, the approach does not work well, because it is difficult to find `good' predicate abstractions to represent the heap. In contrast, shape analysis can lead to a very compact representation of data structures stored on the heap. In this work, we combine shape analysis with predicate abstraction, and integrate it into the software model checker BLAST. Because shape analysis is expensive, we do not apply it globally. Instead, we ensure that shapes are computed and stored locally, only where necessary for proving the verification goal. To achieve this, we extend lazy abstraction refinement, which so far has been used only for predicate abstractions, to shapes. This approach not only increases the precision of model checking and shape analysis taken individually, but also increases the efficiency of shape analysis (we do not compute shapes where they are not necessary). We implemented the technique by extending BLAST with calls to TVLA, and evaluated it on several C programs manipulating data structures, with the result that the combined tool can now automatically verify programs that are not verifiable using either shape analysis or predicate abstraction on its own.

Abstract: Cryptographic protocols are small message-passing distributed programs relying on cryptographic primitives to pass information between different sites communicating over an insecure network. Unlike the ordinary analysis of distributed applications, the main challenge in the analysis of these protocols is the handling of an attacker who tries to make the protocol fail. We have proposed a generic intruder model that is parameterized by the operations the intruder can perform, and we have recently proved that decision procedures for the existence of an attack on a protocol are modular. But, maybe more interestingly, it turns out that the proposed intruder model is very generic, and in the second part of the talk we will present how the problem of composing functions can be reduced to checking whether a protocol trace is an attack. A possible domain of application is the definition of brokers for “functional” Web Services, such as an interface to the scalapack library (work in progress).

Abstract: We address the problem of refining abstract domains to remove spurious error traces, which is a central step in the Counterexample-guided Abstraction Refinement (CEGAR) paradigm. We are interested in handling general abstract domains, and particularly in localizing abstractions to be coarse enough for scalability and precise enough for proving interesting properties. One of the potential advantages is the ability to work with different abstract domains at different control flow locations, which is necessary in order to scale analyses with expensive abstract domains. We discuss some of the challenges in handling non-relational domains and describe a refinement procedure for general relational domains. The procedure optimizes the refinement according to preference orders based on the set of control flow locations where refinement occurs and the abstract values that appear along the refined trace.
This is joint work with John Field, Tom Henzinger, Ramalingam, and Mooly Sagiv.