Upcoming Talks

## 2007

Abstract:

We present a method for generating polynomial invariants for a subfamily of imperative loops operating on numbers, called the P-solvable loops. The method uses algorithmic combinatorics and algebraic techniques.

The approach is shown to be complete for some special cases. By completeness we mean that it generates a set of polynomial invariants from which, under additional assumptions, any polynomial invariant can be derived.

These techniques are implemented in a new software package, Aligator, written in Mathematica, and have been successfully applied to many programs implementing interesting algorithms over numbers.
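The following sketch illustrates the kind of loop such techniques target (the function name and the runtime check are illustrative, not Aligator's actual output): a loop summing cubes satisfies the polynomial invariant 4*s == n^2*(n+1)^2 at every iteration.

```python
def sum_of_cubes(N):
    s, n = 0, 0
    while n < N:
        n += 1
        s += n ** 3
        # polynomial invariant relating the loop variables,
        # checked at runtime here purely for illustration
        assert 4 * s == n * n * (n + 1) * (n + 1)
    return s

print(sum_of_cubes(10))  # 3025
```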

Title: Parameterized procedure contracts for modular verification

Abstract:

Modular verification promises high scalability compared to whole-program analyses. However, this scalability comes at a price: the user must annotate the program with procedure contracts. We present a new algorithm for automatic inference of procedure contracts. Instead of annotating each procedure with a fixed pre- and postcondition pair, the user provides templates that describe sets of possible procedure contracts. The inference algorithm then uses a theorem prover to find instantiations of these templates for each procedure such that the program is correct. In contrast to existing techniques, our algorithm remains modular and checks each procedure in isolation. The inference helped us significantly reduce the number of annotations needed for checking properties of device drivers. This is joint work with Shuvendu Lahiri and Shaz Qadeer.
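A toy illustration of the template idea (all names are hypothetical, and the real algorithm queries a theorem prover rather than enumerating over a finite domain): the user supplies a postcondition template `result == C * x` with an unknown parameter `C`, and the inference finds an instantiation under which the procedure meets its contract.

```python
def double(x):
    return x + x

def infer_parameter(candidates, domain):
    """Find a C such that the template postcondition result == C * x
    holds for double() on every input in `domain` (stand-in for a prover)."""
    for C in candidates:
        if all(double(x) == C * x for x in domain):
            return C
    return None

C = infer_parameter(candidates=range(5), domain=range(-10, 11))
print(C)  # 2
```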

Abstract:

(Joint work with Iman Narasamdya)

The problem of translation validation in optimizing compilers can be formulated as follows. Given two programs P1 and P2 such that P2 is obtained from P1 by applying some transformation (for example, performed by an optimizing compiler), prove automatically that P2 and P1 are equivalent.

The problem may turn out to be simpler than the general problem of program equivalence since P1 and P2 may be similar to each other and this similarity can be exploited in addition to properties of P1 and P2.

We propose an approach for proving program equivalence based on inter-program invariants: a generalization of program invariants for the case of two programs. To this end, we first present the standard theory of invariants in an abstract form and then develop a similar theory for the case of two programs.
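A small illustration of the idea (the programs and the invariant are hypothetical examples, not from the talk): P2 is P1 after strength reduction, and the inter-program invariant `j == 4 * i`, relating one variable from each program, is enough to prove their outputs equal.

```python
def p1(n):
    """Original program: recomputes 4*i every iteration."""
    out, i = [], 0
    while i < n:
        out.append(4 * i)
        i += 1
    return out

def p2(n):
    """Transformed program: multiplication replaced by repeated addition."""
    out, i, j = [], 0, 0
    while i < n:
        out.append(j)   # inter-program invariant: j == 4 * i
        i += 1
        j += 4
    return out

print(p1(5) == p2(5))  # True
```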

• 2007-12-13 - 14:15 Uwe Waldmann, Max-Planck-Institut für Informatik: “SPASS+T”

Abstract:

SPASS+T is an extension of the superposition-based theorem prover SPASS that enlarges the reasoning capabilities of SPASS using an arbitrary SMT procedure, built-in simplification rules and standard axioms for arithmetic. We give an overview of the architecture of SPASS+T and discuss some of its inference rules, an advanced coupling between SPASS and the SMT procedure that has recently been implemented, and other new developments.

This research is joint work with Virgile Prevosto and Stephan Zimmer.

• 2007-12-11 - 16:15 Geoffrey Washburn: “Generalizing Parametricity Using Information Flow”

Abstract:

Using type meta-data to determine the behavior of programs is called type-directed programming. Type-directed programming allows many operations on data to be expressed easily and concisely, such as serialization, iterators, and structural equality. However, the ability to dispatch based upon type meta-data at runtime means that abstract types are no longer black boxes, and that authors of abstract data types cannot rely on type abstraction to hide implementation details from clients. One consequence is that it is no longer possible to freely interchange alternate implementations of an abstract data type.

Currently, in languages without type-directed operations, authors of abstract data types can use Reynolds's Parametricity Theorem to reason about the properties of abstract data types and the relationship between different implementations of the same abstract data type. In this talk, I will discuss a generalization of the Parametricity Theorem that allows the same kinds of reasoning in the presence of type-directed operations. The key is to augment the type and kind system of the language with information-flow annotations; this change makes explicit the dependency relationships that are implicit in the standard Parametricity Theorem.

This research was joint work with Stephanie Weirich of the University of Pennsylvania.

• 2007-12-06 - 16:15 Bernd Schoeller: “Enhancing Contracts for Proofs”

Abstract:

The design of the Eiffel programming language and the Design by Contract methodology have their roots in the work of Tony Hoare and the invention of axiomatic semantics for program verification. But even with these strong ties, it is sometimes infeasible, and sometimes impossible, to prove the functional correctness of simple Eiffel applications and libraries.
This talk will explore why this is so, analyzing sequential Eiffel and its definition by the ECMA Eiffel standard. It will continue by suggesting two simple extensions to Design by Contract, models and frames, that are faithful to the spirit of modularity and object-orientation, and show how these extensions help to make formal verification of object-oriented programs possible, and in many cases feasible.

Abstract: Analysis of discrete embedded systems often assumes that the embedded systems, like controllers, are finite-state and that there are finitely many of them in the whole system, for instance in a car. This assumption cannot, in general, be maintained if multiple systems such as cars are supposed to interact, for example in order to autonomously form convoys to reduce energy and space consumption (as studied in the California PATH Project).

A rather natural model for such systems, in which participants such as cars freely appear and disappear, may be interconnected via links, and communicate via synchronous or asynchronous message exchange, is given by graph-labelled transition systems complemented by a first-order variant of temporal logic.

The talk surveys and discusses three aspects of specification and analysis of so-called Dynamic Communication Systems (DCS). Firstly, there is a new model description language for DCS, which we will discuss in particular in comparison to calculi like the pi-calculus. Secondly, we employ a particular first-order temporal logic. It solves a couple of issues shared by most earlier proposals, which interestingly converged syntax-wise but differ significantly in semantics. And thirdly, we demonstrate the application of an abstraction technique by Ken McMillan in this setting, which is particularly suitable for a fragment of our logic corresponding to so-called scenarios.

Abstract: In the model checking framework for the verification of state-based systems, temporal logics such as linear time temporal logic (LTL) or computation tree logic (CTL) are very popular for specifying properties of the system under consideration. These logics are expressive enough to describe important classes of properties like reachability, safety, or liveness conditions.

In the context of software model checking, most of the systems are infinite due to unbounded data structures or recursion. It has turned out that many of the verification techniques can be transferred to the case of recursive programs over finite data types using pushdown automata as a model for the programs. For these models other properties like pre-post-conditions for procedures become interesting. Such conditions cannot be expressed in the usual specification logics because relating entry and exit points of a procedure in a computation is a non-regular condition.

In the last few years, initiated by the work of Alur, Etessami, and Madhusudan, some logics have been developed for this purpose. The aim of this talk is to give an overview of these logics and the main results from this area.

Abstract: The intrinsic problem in deciding satisfiability of formulas of a temporal logic often is to capture the regeneration of least fixpoint constructs correctly. This is particularly difficult for mu-calculi, which can feature arbitrarily nested least and greatest fixpoint constructs. Niwinski and Walukiewicz present a tableau system for the modal mu-calculus which introduces the notion of a trace (an internal path through a tableau branch). Satisfiability is then characterised by a global condition on tableau branches regarding the (non-)existence of such traces.

Recently, Dax, Hofmann, and I have reformulated their tableau system for the linear time mu-calculus and compared two approaches to deciding these global conditions. One uses a reduction to the inclusion problem for nondeterministic Büchi automata. The other borrows a technique from the static analysis of functional programs, namely the size change termination principle. There, the problem is to decide whether or not a certain simplified functional program terminates for all inputs. While this can also be done using Büchi automata, there is another method which searches for an idempotent morphism in a finite category generated by the sequents and rules of the tableau system. These problems are in fact interreducible, i.e., the morphism search can also solve the inclusion problem for Büchi automata.

This approach is, however, limited to the linear time mu-calculus. The satisfiability problem for the modal mu-calculus is known to be EXPTIME-complete and therefore most probably not polynomially reducible to these PSPACE-complete problems of Büchi inclusion and morphism search. The right correspondence for the modal mu-calculus would be the problem of deciding whether or not a certain simplified logic program produces one output. To the best of our knowledge there is no known method like the morphism search for logic programs. However, satisfiability for the modal mu-calculus can be decided using complementation for nondeterministic Büchi automata and the emptiness problem for parity tree automata, in a way that generalises the automata-theoretic method for the linear time mu-calculus.

In this talk we want to introduce the linear time and modal mu-calculus, present the correspondence between the former and the size change termination principle, show how to decide satisfiability for the modal mu-calculus, and then discuss how algorithms for that can solve typical static analysis problems about logic programs.

Abstract: Markov decision processes are a fundamental model in the area of system design and control optimization. In this talk, I will concentrate on a class of infinite-state Markov decision processes generated by stateless pushdown automata (pBPA). These processes combine non-deterministic and probabilistic choice with recursive calls. The reachability problem for pBPA is as follows: Given a set T of configurations and an initial configuration s, decide whether there is a suitable strategy such that the probability of reaching a configuration of T by a path initiated in s is equal to (or different from) a given rational number. This problem is the most basic one in the area of controller synthesis and verification.

The main aim of this talk is to present results on decidability and complexity of the reachability problem for pBPA and to illustrate some methods used for solving this problem. I will start with a discussion of general properties of infinite-state Markov decision processes that are connected with the reachability problem. Then I will present polynomial-time algorithms for the qualitative version of the reachability problem for pBPA. I will conclude with a presentation of open problems and some possibilities for further research.

This talk is based on joint work with Vaclav Brozek, Vojtech Forejt, and Antonin Kucera.
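For a *finite-state* Markov decision process, the quantitative reachability value can be computed by standard value iteration; the sketch below (with an illustrative three-state MDP) shows that baseline, while the pBPA processes in the talk are infinite-state and require different techniques.

```python
# mdp[s] = list of actions; each action is a list of (probability, successor).
mdp = {
    0: [[(0.5, 1), (0.5, 0)], [(1.0, 2)]],  # state 0 has two actions
    1: [[(0.3, 2), (0.7, 0)]],
    2: [[(1.0, 2)]],                         # target state, absorbing
}
target = {2}

def max_reach(mdp, target, iters=1000):
    """Value iteration for the maximal probability of reaching `target`."""
    v = {s: (1.0 if s in target else 0.0) for s in mdp}
    for _ in range(iters):
        v = {s: (1.0 if s in target else
                 max(sum(p * v[t] for p, t in a) for a in mdp[s]))
             for s in mdp}
    return v

v = max_reach(mdp, target)
print(round(v[0], 6))  # 1.0: the strategy taking state 0's second action wins surely
```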

Abstract: Algorithms are different from programs and should not be described with programming languages. For example, algorithms are usually best described in terms of mathematical objects like sets and graphs instead of the primitive objects like bytes and integers provided by programming languages. Until now, the only simple alternative to programming languages has been pseudo-code.

+CAL is an algorithm language based on TLA+. A +CAL algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker or reasoned about formally. (No knowledge of TLA+ is assumed).

+CAL makes pseudo-code obsolete.

• 2007-10-08: Monday, 15:00, in BC 355. Tobias Nipkow

Reflecting Quantifier Elimination: From Dense Linear Orders to Presburger Arithmetic

This talk presents reflected quantifier elimination procedures for both integer and real linear arithmetic. Reflection means that the algorithms are expressed as recursive functions on recursive data types inside some logic (in our case HOL), are verified in that logic, and can then be applied to the logic itself. After a brief overview of reflection we will discuss a number of quantifier elimination algorithms for the following theories:

- Dense linear orders without endpoints. We formalize the standard DNF-based algorithm from the literature.
- Linear real arithmetic. We present both a DNF-based algorithm extending the case of dense linear orders and an optimized version of the algorithm by Ferrante and Rackoff.
- Presburger arithmetic. Again we show both a naive DNF-based algorithm and the DNF-avoiding one by Cooper.

We concentrate on the algorithms and their formulation in Isabelle/HOL, using the concept of “locales” to allow modular definitions and verification.

This is joint work with Amine Chaieb.
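The core step of the DNF-based algorithm for dense linear orders without endpoints can be sketched very compactly (symbolic names below are illustrative): over such an order, ∃x (⋀ lᵢ < x ∧ ⋀ x < uⱼ) is equivalent to the x-free conjunction ⋀ lᵢ < uⱼ, by density; with no upper (or lower) bounds the result is the empty conjunction, i.e. true, since there are no endpoints.

```python
def eliminate(lowers, uppers):
    """Eliminate x from  /\ l < x  /\  x < u : return the x-free
    constraints l < u as (l, u) pairs over symbolic terms."""
    return [(l, u) for l in lowers for u in uppers]

# exists x. (a < x & b < x & x < c)   becomes   a < c & b < c
print(eliminate(["a", "b"], ["c"]))  # [('a', 'c'), ('b', 'c')]
```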

• 2007-09-27: 16:15 Ruzica Piskac: Decision Procedures for Multisets with Cardinality Constraints

Applications in software verification and interactive theorem proving often involve reasoning about sets of objects, and cardinality constraints on such collections also arise in these applications. Multisets arise in these applications for reasons analogous to sets: abstracting the contents of a linked data structure with duplicate elements leads to multisets, and interactive theorem provers such as Isabelle provide theories of multisets and prove a number of theorems about them. However, the decidability and complexity of multiset constraints are much less understood than the decidability of constraints on sets.

The first contribution of this paper is a polynomial-space algorithm for deciding expressive quantifier-free constraints on multisets with cardinality operators. Our decision procedure reduces in polynomial time constraints on multisets to constraints in an extension of linear arithmetic with certain unbounded summation expressions. We prove bounds on solutions of resulting constraints and describe a polynomial-space decision procedure for these constraints.

The second contribution of this paper is a proof that adding quantifiers to a constraint language containing subset and cardinality operators yields undecidable constraints. The result follows by reduction from Hilbert's 10th problem because quantified constraints can define both addition and multiplication of natural numbers.
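The atoms such constraints are built from (multiset union, intersection, sub-multiset, cardinality) can be evaluated concretely with Python's `Counter` as the multiset; the two properties checked below are illustrative validity instances, not the paper's decision procedure.

```python
from collections import Counter

A = Counter("aab")   # multiset {a, a, b}
B = Counter("abc")   # multiset {a, b, c}

card = lambda m: sum(m.values())   # cardinality counts multiplicities

# |A (+) B| = |A| + |B|: disjoint union always adds cardinalities
assert card(A + B) == card(A) + card(B)

# A & B is the pointwise-minimum intersection, a sub-multiset of A
assert all((A & B)[x] <= A[x] for x in (A & B))

print(card(A + B))  # 6
```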

Abstract: Counterexample-guided abstraction refinement (CEGAR) is a key technique for the verification of computer programs. In this talk, a new CEGAR-based algorithm for the mu-calculus is presented. It differs from existing algorithms in that it is based on a more expressive abstract model. Moreover, it applies refinement only locally (to a single abstract state), i.e., the lazy abstraction technique for safety properties is adapted to the mu-calculus. In addition, it separates refinement determination from the (3-valued) model checking. Beyond that, three different heuristics for refinement determination are discussed, and the practical relevance of mu-calculus verification is illustrated via different applications.

Abstract: One of the main challenges in the verification of software systems is the analysis of statically unbounded data structures with dynamic memory allocation, such as linked data structures. We describe a novel shape analysis for verifying data structures. Symbolic Shape Analysis uses logical formulae to represent sets of states in a heap manipulating program. As in three-valued shape analysis, an abstract domain is specified by predicates over objects on the heap. Symbolic Shape Analysis improves upon existing shape analyses in terms of degree of automation. It uses decision procedures in order to automatically construct an abstract transformer and to automatically refine the abstraction based on spurious counter-examples. The construction of the abstract transformer is analogous to the corresponding construction for classical predicate abstraction, except that predicates over objects on the heap take the place of state predicates and boolean heaps (sets of bitvectors) take the place of boolean states (bitvectors).

We have implemented Symbolic Shape Analysis in the Bohne tool as part of the Jahob system for verifying data structure consistency. Bohne verifies data structure operations and shows that 1) the operations preserve data structure invariants and 2) the operations satisfy their interfaces specified in terms of changes to the set of objects stored in the data structure. We have used Bohne to verify implementations of linked data structures such as (doubly-linked, sorted) lists, trees with and without parent pointers, as well as uses of data structure interfaces in client code.

Title: Static Analysis of XML Programs: a Modal Logic Perspective

The wide adoption of data-centric architectures motivates research on static type systems able to ensure a high level of safety and efficiency for programs manipulating XML data. One of the main challenges is caused by the emergence of powerful querying mechanisms (such as XPath), for which major analysis tasks have a very high computational complexity, and even rapidly become undecidable depending on the considered features. I will present a new modal logic designed for reasoning on finite trees, along with an efficient satisfiability-testing algorithm. The logic is expressive enough to support regular tree types together with multidirectional navigation and finite recursion in trees. The fully implemented system makes it possible to effectively solve problems that are boolean combinations involving XPath queries and XML types (e.g. query containment, equivalence, static typing of an annotated query…). I will show how the logic solver can be used for the static analysis of programs and mention some important applications and extensions of these results.

Internet applications currently have a choice between stream and datagram transport abstractions. Datagrams efficiently support small transactions and streams are suited for long-running conversations, but neither abstraction adequately supports applications like HTTP that exhibit a mixture of transaction sizes, or applications like FTP and SIP that use multiple transport instances. Structured Stream Transport (SST) enhances the traditional stream abstraction with a hierarchical hereditary structure, allowing applications to create lightweight child streams from any existing stream. Unlike TCP streams, these lightweight streams incur neither 3-way handshaking delays on startup nor TIME-WAIT periods on close. Each stream offers independent data transfer and flow control, allowing different transactions to proceed in parallel without head-of-line blocking, but all streams share one congestion control context. SST supports both reliable and best-effort delivery in a way that semantically unifies datagrams with streams and solves the classic “large datagram” problem, where a datagram's loss probability increases exponentially with fragment count. Finally, an application can prioritize its streams relative to each other and adjust priorities dynamically through out-of-band signaling. A user-space prototype shows that SST is TCP-friendly to within 2%, and performs comparably to a user-space TCP and to within 10% of kernel TCP on a WiFi network.

Abstract: Fairness is a convenient and popular tool when modelling and specifying concurrent systems. A large variety of fairness notions exists in the literature. However, in contrast to safety and liveness, there has been no fully satisfactory abstract characterisation of fairness.

We give and justify a characterisation of fairness that is in line with the characterisations of safety and liveness given by Lamport, Alpern, and Schneider. We provide independent characterisations in terms of language-theory, game theory, and general topology.

Our characterisation of fairness gives rise to a notion of a “fairly correct” system: a system is fairly correct if there exists a (possibly strong) fairness assumption under which it is correct. Many distributed, especially fault-tolerant, systems are only fairly correct with respect to their actual specification since often, fully correct solutions are too expensive or do not exist.

We motivate this relaxation of correctness and compare it with the probabilistic notion of an “almost correct” system, that is, a system that is correct with probability 1 for a given probability measure. While the two notions “fairly correct” and “almost correct” share many properties, they do not coincide in general.

The comparison of the two notions directly leads to pleasing results for the problem of automatically checking whether a finite-state system is fairly correct.

Abstract: We consider the problem of giving a consistent global semantics to systems with both discrete-time and continuous-time components. Our solution is based on a notion of ideal sampling, where the discrete-time components “watch” periodically the behavior of the continuous-time components. Based on this notion, we identify a simply definable and reasonably expressive MTL fragment whose formulas preserve their meaning when switching their interpretation from discrete to continuous time, and vice versa.

Some relevant features of the resulting framework are then analyzed. In particular, we assess the expressiveness of the MTL fragment, we show how the MTL fragment can be extended, and we compare the notion of discretization introduced by the framework against other notions based on digitization.

Finally, we present a verification technique based on the integration framework. The technique performs validity checking of MTL formulas over continuous-time (with some restrictions) through reduction to discrete-time.

• 2007-06-18 Monday, 16:00 Hai Huu Nguyen: Verification of Shape and Size Properties

We propose a verification system that is concise, precise and expressive for ensuring safety properties of pointer-based programs. Our approach uses user-definable shape predicates to allow programmers to describe a wide range of data structures with their associated size and bag properties. To support automatic verification, we design a new entailment checking procedure that can handle well-founded inductive predicates using unfold/fold reasoning. Our entailment checking is sound and terminating, but incomplete.

We have built a prototype of our verification system. The system expects pre- and postconditions, as well as loop invariants, to be provided by the user. It can presently handle a number of interesting data structures, such as height-balanced trees, ordered trees and lists, priority queues,…

I will present a set of simple techniques that enable software systems to survive otherwise fatal errors. The goal is to enable systems to execute through such errors, potentially with degraded functionality, to continue to serve the needs of their users.

Failure-oblivious computing makes it possible to ignore memory addressing errors. A system that uses failure-oblivious computing checks each memory access to discard out of bounds writes and manufacture values for out of bounds reads. Our experimental results show that this technique eliminates buffer-overflow security vulnerabilities and enables widely used servers such as Apache, Pine, and Sendmail to continue to execute successfully through otherwise fatal memory errors.
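The failure-oblivious idea can be sketched for a single array (the real technique instruments compiled C code; the class and parameter names here are illustrative): out-of-bounds writes are silently discarded, and out-of-bounds reads manufacture a value instead of crashing.

```python
class FailureObliviousArray:
    def __init__(self, data, manufactured=0):
        self.data = list(data)
        self.manufactured = manufactured

    def __getitem__(self, i):
        if 0 <= i < len(self.data):
            return self.data[i]
        return self.manufactured   # manufacture a value for bad reads

    def __setitem__(self, i, v):
        if 0 <= i < len(self.data):
            self.data[i] = v
        # out-of-bounds writes are silently discarded

a = FailureObliviousArray([1, 2, 3])
a[99] = 7                 # would be a buffer overflow; discarded instead
print(a[99], a.data)      # 0 [1, 2, 3]
```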

It is also possible to eliminate resource consumption errors such as memory leaks, file handle leaks, infinite loops, and infinite recursions. Our experimental results show that our techniques eliminate resource consumption errors in widely used programs such as Squid, Pine, and xinetd.

All of these techniques are simple to implement and deploy. They do, however, perturb the standard programming language semantics and introduce the possibility of taking the software down unanticipated execution paths. As such, they represent a significant departure from standard approaches. I will briefly discuss the benefits and risks of adopting such techniques.

• 2007-06-05 Viktor Schuppan: “Recent Results on Model Checking and Satisfiability of Linear Temporal Logic”

Abstract:

We give an overview of three recent projects on model checking and satisfiability of linear temporal logic.

In the first project we augment a finite state automaton such that existence of a fair cycle in the original automaton corresponds to reachability of a bad state in the augmented automaton. This enables us to use methods and tools for verification of safety properties to check \omega-regular properties.

The previous technique in fact makes it possible to find the shortest fair cycle in an automaton. In the second part we show how to encode a linear temporal logic property into a Büchi automaton such that a shortest fair path in the product of a model and a property corresponds to a shortest counterexample to the property in the model.

Finally we have made first steps towards a satisfiability modulo theory approach for linear temporal logic. Notable features of the approach include an extension of the pure literal rule from propositional to linear temporal logic and a method to extract an unsatisfiable core from a conjunction of linear temporal logic formulas.
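The first project's construction can be sketched on an explicit graph (this is a simplified illustration of the liveness-to-safety idea, not the authors' circuit-level augmentation): the automaton is augmented with an optional saved state and a flag recording whether a fair state has been seen since saving; a fair cycle exists iff the augmented system can reach a configuration that returns to its saved state with the flag set.

```python
from collections import deque

def has_fair_cycle(edges, fair):
    """Reachability-based fair-cycle check: nondeterministically guess the
    cycle start by saving the current state; the 'bad' configuration is a
    return to the saved state having visited a fair state since saving."""
    states = {s for e in edges for s in e}
    start = [(s, None, False) for s in states]   # saved=None: no guess yet
    seen, work = set(start), deque(start)
    while work:
        s, saved, f = work.popleft()
        succs = []
        if saved is None:
            succs.append((s, s, False))          # guess: the cycle starts here
        for a, b in edges:
            if a != s:
                continue
            if saved is None:
                succs.append((b, None, False))   # keep running without a guess
            else:
                nf = f or b in fair
                if b == saved and nf:
                    return True                  # closed a cycle through a fair state
                succs.append((b, saved, nf))
        for t in succs:
            if t not in seen:
                seen.add(t)
                work.append(t)
    return False

print(has_fair_cycle([(1, 2), (2, 1)], fair={2}))    # True
print(has_fair_cycle([(1, 2), (2, 1)], fair=set()))  # False
```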

• 2007-06-04 Philipp Haller: “Actors that Unify Threads and Events”

Abstract:

There is an impedance mismatch between message-passing concurrency and virtual machines, such as the JVM. VMs usually map their threads to heavyweight OS processes. Without a lightweight process abstraction, users are often forced to write parts of concurrent applications in an event-driven style which obscures control flow, and increases the burden on the programmer. In this talk we show how thread-based and event-based programming can be unified under a single actor abstraction. Using advanced abstraction mechanisms of the Scala programming language, we implemented our approach on unmodified JVMs. Our programming model integrates well with the threading model of the underlying VM.
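The unification can be pictured with Python generators (a loose analogy with hypothetical names, not the Scala library's API): each actor suspends at `yield`, much like Scala's `react`, so one scheduler can multiplex many actors without dedicating an OS thread to each.

```python
import collections

class Scheduler:
    """Event-based actor scheduler: actors are generators that suspend
    at `yield`; one loop delivers all pending messages."""
    def __init__(self):
        self.pending = collections.deque()   # (actor, message) to deliver

    def spawn(self, gen):
        next(gen)                            # run the actor to its first yield
        return gen

    def send(self, actor, msg):
        self.pending.append((actor, msg))

    def run(self):
        while self.pending:
            actor, msg = self.pending.popleft()
            try:
                actor.send(msg)              # resume the actor with the message
            except StopIteration:
                pass                         # actor terminated

sched = Scheduler()
received = []

def accumulator():
    total = 0
    while True:
        msg = yield                          # suspend: no thread blocks here
        total += msg
        received.append(total)

a = sched.spawn(accumulator())
for m in (1, 2, 3):
    sched.send(a, m)
sched.run()
print(received)  # [1, 3, 6]
```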

• 2007-05-31 Andreas Rossberg: “Typed Open Programming – A higher-order typed approach to dynamic modularity”

Abstract: Open programming – the development of programs that support dynamic exchange of higher-order values and components – is a domain currently dominated by untyped or weakly-typed languages. We present an approach for reconciling open programming with strong static typing.

We first present the design of a concrete programming language, Alice ML, that consists of a conventional functional language extended with a set of orthogonal features like higher-order modules, dynamic type checking, higher-order pickling, and concurrency. On top of these, a flexible system of dynamic components and a simple but expressive notion of distribution are realised. The central concept in this design is the “package”, a first-class value embedding a module along with its interface type, which is dynamically checked whenever the module is extracted.

The standard model of type abstraction, being based on existential quantification, is invalidated by the presence of primitives for dynamic type inspection. To maintain abstraction safety in our approach, we hence develop an alternative formalism using dynamic generation of type names and a novel notion of abstraction kinds classifying abstract types. In order to recover ML-style after-the-fact type abstraction and modular type translucency, we further extend the system with higher-order notions of coercions over types and (dependent) kinds.

• 2007-05-29 Patrick Lam, MIT: “Statically Verifying Software Safety Properties with Tracematches”

Tracematches enable developers to declaratively specify safety properties for Java programs. These properties can be either domain-specific properties (e.g. compiler passes are always executed in the correct order) or generic properties (e.g. linked lists are never modified during iteration). While it is possible to dynamically monitor tracematches, such an approach 1) carries runtime overhead and 2) does not suggest reasonable recovery actions. We investigate a static analysis that promises to reduce runtime overhead and, in some cases, can guarantee that a program will never violate certain safety properties (eliminating the need for recovery actions). Our approach propagates tracematch states through methods to ensure that unsafe states are never triggered during an execution, thereby enabling the elimination of runtime checks. Pointer analysis turns out to be important in the static analysis of tracematches, and I will discuss the various analyses needed to make our overall static analysis work.

Joint work with Eric Bodden and Laurie Hendren
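A tracematch-style property can be pictured as a finite-state machine over program events (the encoding below is an illustrative sketch, not the Java/AspectJ syntax): here, the generic property that a list must not be modified while it is being iterated. Dynamically running such a monitor is exactly the overhead the static analysis aims to make unnecessary.

```python
VIOLATION = "violation"
TRANSITIONS = {
    ("idle", "iter_start"): "iterating",
    ("iterating", "next"): "iterating",
    ("iterating", "modify"): VIOLATION,   # modified during iteration
    ("iterating", "iter_end"): "idle",
    ("idle", "modify"): "idle",           # modifying outside iteration is fine
}

def monitor(events):
    """Run the property automaton over an event trace; False on violation."""
    state = "idle"
    for e in events:
        state = TRANSITIONS.get((state, e), state)
        if state == VIOLATION:
            return False
    return True

print(monitor(["modify", "iter_start", "next", "iter_end"]))  # True
print(monitor(["iter_start", "next", "modify", "next"]))      # False
```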

• 2007-05-25 11:00, Eshcar Hillel, Technion: “Highly-Concurrent Data Structures”

Abstract:

With multi-core and multiprocessing architectures becoming common, modern applications require concurrent data structures for their computations. To improve the throughput of a concurrent application, it is critical to have low contention and a high level of concurrency. That is, operations that access separate parts of the data structure should not interfere with each other, allowing more operations to complete and increasing throughput.

Software transactional memory (STM) is a leading methodology for alleviating the difficulty of designing concurrent applications. Existing STM implementations, however, introduce a great degree of interference among concurrent operations, causing slow-down and contention even among operations that access widely separated data sets.

This talk explains what highly-concurrent data structures are, and presents two methodologies for designing highly-concurrent implementations. In both methodologies, the operations are implemented by acquiring virtual locks on nodes according to some order that decreases the length of delay chains and increases concurrency. Then, the operation applies its changes atomically.

We present LSTM, a highly-concurrent dynamic STM, in which operations proceed without interference or contention unless they access overlapping data sets.

Additionally, we present a novel approach for highly-concurrent doubly-linked lists, based on dynamically maintaining a legal coloring of the nodes in the list.

This is joint work with Hagit Attiya (Computer Science Department, Technion).

• 2007-05-15 16:15 BC 355 Pierre Geneves, EPFL: “Logic-Based Static Analysis of XML Paths and Types”

Abstract:

The wide adoption of Web standards and the importance of designing reliable data-centric applications have made it crucial to provide the next generation of programming languages with a high level of safety, scalability, and efficiency for this purpose. In particular, the emergence of XPath/XQuery-like mechanisms for querying and manipulating XML data suggests equipping the host languages with suitably precise and efficient static type systems.

One of the main challenges is caused by XPath, for which typical static analysis tasks (such as typing of an annotated query, or query containment) have a very high complexity, and rapidly become undecidable depending on the considered language features.

I will detail results obtained for a large XPath fragment, using a new logic of finite trees. This logic currently offers the best balance known between expressivity and complexity. It is expressive enough to support regular tree types together with multidirectional navigation and finite recursion in trees.

I will demonstrate how the satisfiability solver for this logic can be used for the static analysis of programs.

I will also mention important applications and extensions of these results: type inference, type projection for scalable runtime processing, and static analysis techniques for security and data access control.

Abstract: We are investigating an automata-theoretic type system for behavioral specification of open systems that allows expressing the assumptions and guarantees that interacting components make about each other's behavioral properties.

The assumptions made by a component, and the guarantees it makes if its assumptions are satisfied, are captured by its interface. We present the notions of compatibility and refinement between interfaces. Our framework is amenable to compositional refinement, which allows interfaces to be used as types that summarize the behavior of more concrete components.

In order to work with open systems, we have to analyze two-player games instead of conventional (one-player) transition systems. In this context the synthesis question is relevant in addition to the verification question, and we are able to compute the most general environment conditions under which a given set of components are compatible with each other. The benefit of being able to work with open systems is that we can analyze large systems incrementally, without needing extensive summary information to close the system at each stage.

We have applied this formalism to hardware systems, to represent and analyze safety constraints on signal values; to recursive software components, to represent and analyze constraints on allowed method invocation patterns; to embedded software systems, to represent and analyze resource usage constraints; and to the specification and verification of the method invocation behavior of web service applications constructed from asynchronously interacting multi-threaded distributed components. The most interesting verification and synthesis questions in these contexts correspond to several interesting two-player games, for which we have efficient algorithms. We have implemented our algorithms for checking compatibility of interfaces in our tool Chic. We are also investigating other applications of our component interface description methodology, such as automated partitioning of software into units to achieve high test coverage with few false alarms.

Abstract: Register allocation is one of the most important optimizations in modern compilers. It maps the variables of a program to the processor's registers in order to accelerate the program's execution.

Graph coloring, as introduced by Chaitin, has been the most popular and most successful register allocation technique since its introduction in the early 1980s. It reduces the assignment problem to coloring the so-called interference graph, which is constructed from the program being compiled. However, since graph coloring is NP-complete and each undirected graph can (theoretically) occur as an interference graph during register allocation, register allocation is also NP-complete.

This changes drastically if we require the program to be in static single assignment (SSA) form, a widely used program representation. It turns out that the interference graphs of SSA programs all fall into a special class of graphs, the so-called chordal graphs. Their most appealing feature is that they can be colored in quadratic time.

We present the theoretical properties of the interference graphs of SSA programs, outline the correspondences to non-SSA programs and show how the properties of chordal graphs help to build simpler and more efficient register allocators.
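The quadratic-time bound comes from the perfect elimination orders that chordal graphs admit; a greedy coloring along such an order is optimal. A minimal sketch (the interference graph below is a made-up example, not one from the talk):

```python
# Sketch: coloring a chordal interference graph in polynomial time.
# For chordal graphs, maximum cardinality search (MCS) yields a
# reverse perfect elimination order, along which greedy coloring is optimal.

def mcs_order(graph):
    """Maximum cardinality search: repeatedly pick the unvisited node
    with the most visited neighbors."""
    order, visited = [], set()
    weight = {v: 0 for v in graph}
    while len(order) < len(graph):
        v = max((u for u in graph if u not in visited), key=lambda u: weight[u])
        order.append(v)
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                weight[w] += 1
    return order

def greedy_color(graph, order):
    """Assign each node the smallest color unused by its already-colored neighbors."""
    color = {}
    for v in order:
        taken = {color[w] for w in graph[v] if w in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Hypothetical (chordal) interference graph of an SSA program:
interference = {
    'a': {'b', 'c'},
    'b': {'a', 'c'},
    'c': {'a', 'b', 'd'},
    'd': {'c'},
}
colors = greedy_color(interference, mcs_order(interference))
```

In MCS order, each node's already-colored neighbors form a clique, so the greedy choice is forced to be optimal; the number of colors used equals the number of registers needed.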

Abstract: We extend combinatorial imperfect information games to a multi-player setting, where information is exchanged with respect to hierarchical constraints. We show that this class of games can be used for model checking first-order logic on structures, where elements are built inductively. We focus on structures with relations given by automata where these games have finite arenas. Assuming that players alternate their moves, it is possible to decide the winning coalition in such a game. We discuss how this game-based representation can be exploited algorithmically.

Abstract: We consider the termination problem of programs manipulating tree-like dynamic data structures. Our approach is based on an abstract-check-refine loop. We use abstract regular tree model checking to infer invariants of the program. Then, we translate the program to a counter automaton which simulates it. If the counter automaton can be shown to terminate using existing techniques, the program terminates. If not, we analyze the possible counterexample given by a counter-automata termination checker and either conclude that the program does not terminate, or else refine the abstraction and repeat. We show that the spuriousness problem for lasso-shaped counterexamples is decidable in some non-trivial cases. We applied the method successfully to several interesting case studies.

Joint work with Peter Habermehl, Tomas Vojnar and Adam Rogalewicz.

Abstract:
Deadlocks are undesirable states of concurrent systems, where a set of
processes is in a circular wait state in which each process is blocked
trying to gain access to a resource held by the next process in the
chain. Solutions can be classified into three categories: detection,
prevention, and avoidance. Deadlock detection takes an optimistic
approach, assuming that deadlocks are infrequent, detecting and
correcting them at run-time. Deadlock prevention is a pessimistic
approach where the possibility of a deadlock is broken statically at
the price of restricting concurrency. Deadlock avoidance takes a
middle route. At run-time each allocation request is examined to
ensure that it cannot lead to a deadlock. This method is possible by
requesting spawned processes to announce their maximum resource
utilization.
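
The classic embodiment of this idea is Dijkstra's Banker's algorithm; the following toy sketch shows the centralized safety check (the resource figures are illustrative, and this is textbook material rather than the distributed scheme presented in the talk):

```python
# Sketch of the avoidance idea in the style of the Banker's algorithm:
# a state is safe iff some completion order of all processes exists in
# which each can obtain its declared maximum and then release everything.

def is_safe(available, allocation, maximum):
    """Check whether every process can eventually run to completion."""
    work = list(available)
    need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, al in enumerate(allocation):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, al)]
                finished[i] = True
                progress = True
    return all(finished)

# Two resource types; three processes with declared maxima (made-up numbers).
available  = [3, 3]
allocation = [[0, 1], [2, 0], [3, 0]]
maximum    = [[4, 3], [3, 2], [8, 0]]
safe = is_safe(available, allocation, maximum)
```

An allocation request is granted only if the state reached by granting it still passes this check; otherwise the requester is blocked until it does.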

This talk focuses on distributed real-time and
embedded systems (DREs). Deadlock detection and prevention can easily
be adapted to distributed systems. However, detection is not
applicable to real-time systems, since the worst-case running time is
long; moreover, in embedded systems actions cannot be undone. Deadlock
prevention limits concurrency unnecessarily, so an efficient
distributed deadlock avoidance scheme can have a big practical impact
on the performance of DREs. However, it has been known since the
mid-1990s that the general solution for deadlock avoidance is
impractical, since it requires maintaining global states and global
atomicity of actions. The communication costs involved simply outweigh
the benefits gained from avoidance over prevention.

I will present an efficient distributed deadlock avoidance scheme that
requires no inter-site communication, based on a combination of static
analysis and run-time protocols. This solution assumes that the
possible sequences of calls are available for analysis at design
time. I will also present extensions of the basic scheme that
guarantee liveness, and an efficient distributed priority inheritance
protocol. Finally, I will discuss the trade-offs involved in implementing
these protocols and the importance of accurate static analysis.

• April 2: Shaz Qadeer, MSR: “Iterative context-bounding: a new approach for finding errors in large multithreaded programs”

Abstract:
Multithreaded programs are difficult to get right. The interaction of concurrently executing threads leads to a huge number of program behaviors.
Programmers, often unable to account for all possible interactions among threads, make errors which are difficult to find by traditional testing methods. Model checking attempts to improve over testing by systematically enumerating program behaviors. However, systematic enumeration is unable to scale to large programs because the number of possible executions grows exponentially with the execution depth.

In this lecture, I will present iterative context-bounding, a new approach for effectively searching the state space of a multithreaded program.
In an execution of a multithreaded program, a context switch occurs when a thread temporarily stops execution and a different thread starts. Iterative context-bounding gives priority to executions with fewer context switches, exploring all executions with no context switches followed by all executions with one context switch and so on.

For a fixed number of context switches, the total number of executions in a program is polynomial in the number of steps taken by each thread. This theoretical upper bound makes it practically feasible to scale systematic exploration to large programs without sacrificing the ability to go deep in the state space. Our evaluation of iterative context-bounding on large real-world programs provides empirical evidence that a small number of context switches is sufficient to expose nontrivial concurrency bugs. We have implemented iterative context-bounding in the CHESS model checker. CHESS has uncovered 9 previously unknown bugs in our benchmarks, each exposed by an execution with at most 2 context switches.
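
To convey the search order, here is a toy enumerator over abstractly modeled threads; it counts only preemptive switches, and it is of course a far cry from CHESS, which systematically executes real program code:

```python
# Toy enumerator for schedules of abstractly modeled threads, in the
# spirit of iterative context-bounding (thread bodies are just lists
# of step labels; the real tool drives actual program executions).

def interleavings(threads, bound):
    """Yield all interleavings (as tuples of thread ids) that use at
    most `bound` preemptive context switches."""
    def rec(pcs, last, switches, schedule):
        if all(pc >= len(t) for pc, t in zip(pcs, threads)):
            yield tuple(schedule)
            return
        for tid, t in enumerate(threads):
            if pcs[tid] >= len(t):
                continue  # thread tid has finished
            # A switch is preemptive only if the previously running
            # thread still had steps left.
            preempt = (last is not None and last != tid
                       and pcs[last] < len(threads[last]))
            s = switches + (1 if preempt else 0)
            if s > bound:
                continue
            pcs2 = list(pcs)
            pcs2[tid] += 1
            yield from rec(pcs2, tid, s, schedule + [tid])
    yield from rec([0] * len(threads), None, 0, [])

# Two hypothetical threads with two steps each.
threads = [["a1", "a2"], ["b1", "b2"]]
no_preemption = list(interleavings(threads, 0))  # serial schedules only
everything = list(interleavings(threads, 4))     # all interleavings
```

Raising the bound from 0 upward reproduces the iterative search: the serial schedules first, then schedules with one preemption, and so on.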

Abstract:
Performance analysis plays an important role in the design of embedded real-time systems. Modular Performance Analysis (MPA) is an analytical system-level method based on Real-Time Calculus for analysing complex distributed real-time systems. It allows for obtaining hard worst-case bounds on memory, energy, and timing requirements. It incorporates interface theory and supports component-based design of systems. Using an MPEG-2 decoder example, we will show how to determine the task deadlines for an EDF scheduler using MPA and show the existence of a schedulability region.

• March 29: Simon Perathoner: “Influence of different system abstractions on the performance analysis of distributed real-time systems”

Abstract:
System-level performance analysis techniques play a fundamental role in the design process of embedded systems. Several different abstractions have been proposed so far to address the problem of accurate performance analysis of distributed embedded systems in early design stages. However, the various formal analysis approaches are highly heterogeneous in terms of modelling scope and analysis accuracy, and there is a lack of literature on their evaluation and comparison. Thus, it is very difficult for a designer to ascertain which abstraction is most suitable for the analysis of a given system. In order to alleviate this problem, we define a first set of benchmark systems for performance analysis. We apply different analysis methods to the benchmarks in order to highlight the specific effects of the various abstractions. Moreover, we point out several pitfalls for the analysis accuracy of individual approaches and investigate the reasons for pessimistic performance predictions.

Abstract: Pattern matching is part of the enduring appeal of typed functional programming languages such as ML and Haskell. However, traditionally pattern matching only works on concrete data structures, and thus encourages programmers to break abstraction boundaries. In this talk I will give an informal introduction to “active patterns” in F# 2.0, which allow for extensible pattern matching over abstract types. The design encompasses both views (as proposed by Wadler) and ad-hoc patterns (previously proposed for Haskell and called ‘unapply’ methods in Scala), but remains a relatively modest extension to the language.

Background: F# is a .NET functional programming language.

Abstract:
Markov decision processes (MDPs) are controllable discrete-event systems
with stochastic transitions. Performance of an MDP is evaluated by a payoff
function, which associates a real value with each execution of the system.
The controller of an MDP seeks to maximize the expected performance.

Most classical payoff functions, such as the mean payoff, the discounted
payoff, the limsup payoff or the parity payoff, share a non-trivial common
property: in any MDP equipped with one of these payoff functions,
the controller has an optimal strategy which is positional, i.e., it can
control the MDP optimally without memorizing information and in a
deterministic way. From an algorithmic point of view, MDPs that admit
optimal positional strategies are good modelling tools, since values and
optimal strategies are in general computable, often in polynomial time.

These observations motivate our interest in the class of payoff functions
which ensure the existence of positional optimal strategies. In this talk, we

Abstract:
Verification and synthesis are the two basic approaches to guarantee that a
system is correct. While verification requires the programmer to provide both
the specification and the implementation, synthesis automatically transforms
the specification into an implementation that is correct by construction.
In this talk, we investigate synthesis in the distributed setting of partial
designs, where the implementation consists of several processes that interact
according to a given architecture. In the verification of partial designs, we
distinguish processes whose implementation is known from those whose
implementation is sought.
We will explore the influence of the architecture on the decidability of
distributed synthesis and the verification of partial designs for synchronous
and asynchronous systems.

Abstract:
The talk will present a stochastic modelling approach for the analysis of biochemical reaction networks. The temporal evolution of the chemical species' populations is described in terms of a stochastic process which has an elegant and compact representation based on stochastic automata networks (SANs). The idea will be illustrated by a simple example and benefits as well as drawbacks of the approach will be discussed.
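
As background (this is a generic simulation of the underlying continuous-time Markov chain, not the SAN-based representation of the talk), a minimal Gillespie-style simulation of a single hypothetical degradation reaction A → ∅ looks like this:

```python
import random

# Minimal Gillespie-style simulation of one degradation reaction
# A -> 0 with rate constant k (illustrative only).

def gillespie_degradation(a0, k, t_end, seed=0):
    """Simulate the population of species A over [0, t_end]."""
    rng = random.Random(seed)
    t, a = 0.0, a0
    trajectory = [(0.0, a0)]
    while a > 0:
        propensity = k * a                 # total reaction rate
        t += rng.expovariate(propensity)   # exponential waiting time
        if t > t_end:
            break
        a -= 1                             # one molecule of A degrades
        trajectory.append((t, a))
    return trajectory

traj = gillespie_degradation(a0=100, k=0.1, t_end=50.0)
```

With several coupled reactions, each step would instead pick the next reaction with probability proportional to its propensity; the SAN representation discussed in the talk encodes the same process compactly rather than simulating it.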

• February 15: Laura Kovacs, RISC: “Polynomial Invariant Generation by Algebraic Techniques for Program Verification in Theorema”

Abstract:
An approach utilizing combinatorics, algebraic methods and logic is
presented for generating polynomial loop invariants for a family of
imperative programs operating on numbers. The approach has been
implemented in the Theorema system, which is well suited to such an
integration, given that it is built on top of the computer algebra
system Mathematica and has theorem provers for first-order
logic as well as for mechanizing induction. These invariant
assertions are then used for generating the necessary verification
conditions as first-order logical formulae, based on Hoare logic and
the weakest precondition strategy. The approach has been
successfully tried on many programs implementing interesting number
theoretic algorithms. It is also shown that for a subfamily of
loops, called P-solvable loops, the approach is complete in
generating polynomial equations as invariants.

Abstract: Concurrent programs are often designed such that certain functions
executing within critical threads must terminate. Examples of such
cases can be found in operating systems, web servers, e-mail clients,
etc. Unfortunately, no known automatic program termination prover
supports a practical method of proving the termination of threads. In
this paper we describe such a procedure. As is often true in this
setting, the procedure's scalability is achieved through the use of
environment models that abstract away the surrounding threads. The
procedure's accuracy is due to a novel method of incrementally
constructing environment abstractions. Our method finds the
conditions that a thread requires of its environment in order to
establish termination, by looking at the conditions necessary to prove
that certain paths through the thread represent well-founded relations
if executed in isolation from the other threads. The paper closes with
a description of experimental results using an implementation of our
procedure on Windows device drivers and a description of a previously
unknown bug found with the tool.

• February 13: Ashutosh K. Gupta, EPFL: “Proving Termination of Program Loops”

Abstract:
Proving termination of program loops is an important task in software
verification. Ranking functions provide an effective tool for this
task, and their synthesis can be efficiently automated for several
classes of ranking functions.
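
As an illustration of what a linear ranking function certifies (a toy example, not one of the synthesis algorithms discussed in the talk): for the loop `while x > y: y = y + 1`, the function f(x, y) = x - y is bounded below whenever the guard holds and decreases by at least one per iteration, which can be spot-checked on sample states:

```python
# Toy check that f(x, y) = x - y witnesses termination of the
# hypothetical loop:  while x > y: y = y + 1

def loop_step(x, y):
    return x, y + 1  # body of the loop

def is_ranking_witness(states, f, lower_bound=0):
    """Check on sample states that f is bounded below when the guard
    x > y holds, and decreases by at least 1 across each iteration."""
    for x, y in states:
        if x > y:  # loop guard holds, one more iteration runs
            if f(x, y) < lower_bound:
                return False
            x2, y2 = loop_step(x, y)
            if f(x2, y2) > f(x, y) - 1:
                return False
    return True

samples = [(10, 0), (5, 4), (3, -7), (0, 0)]
ok = is_ranking_witness(samples, lambda x, y: x - y)
```

A synthesis algorithm establishes these two conditions symbolically, for all reachable states rather than a finite sample.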

In this work, we improve the existing algorithms for the synthesis
of linear ranking functions, which are widely used in practice, in
two ways. Firstly, we increase their applicability by devising
heuristics that provide ranking functions that are particularly
useful for software verification. Secondly, we propose an extension
to n-depth linear ranking functions, which are strictly more
powerful than the (plain) linear ranking functions.

The abilities of a ranking function synthesis algorithm are inherently
limited by the class of generated functions: a failure to compute a
ranking function does not imply that the subject program does not
terminate. To address this limitation, we propose an algorithm for
proving non-termination of program loops. This algorithm can be
applied if no ranking function can be synthesized.

We implemented the proposed algorithms and experimentally evaluated
their practical applicability for (dis-)proving termination.

Abstract:
The standard counterexample-guided abstraction-refinement (CEGAR) approach uses finite transition systems as abstractions of concrete systems. We present an approach to represent and refine abstractions of infinite-state systems that uses regular languages instead of finite transition systems. The advantage of using languages over transition systems is that we can store more fine-grained information in the abstraction and thus reduce the number of abstract states. Based on this language-based approach for CEGAR, we present new abstraction-refinement algorithms for hybrid system verification. We evaluated our approach by verifying various non-linear hybrid systems.
(This is joint work with Stefan Ratschan and Zhikun She.)

• February 7: “How Should We Program the Internet?” by John Field, IBM Research

Abstract:
Increasingly, software is being built as loosely-coupled collections of distributed components interacting over the internet, glued together by systems software such as databases, messaging systems, web servers, and browsers. Moreover, many such applications are also “inter-organizational”, combining components written in and running in distinct administrative domains. While the trend toward internetworked applications is inexorable, the programming models we are using to build such applications were largely designed for monolithic, freestanding applications. In this talk, I will discuss some of the reasons why programming models should evolve to better support internetworked applications, and enumerate some of the distinguishing features of such applications. I will then describe recent work on the Reactor model, a simple kernel programming model which is intended to explore integration of front-end “presentation logic”, back-end “business logic”, and data access in a single distributed language that supports both synchronous and asynchronous component composition.

Abstract:
Nowadays more and more real-life systems are (automatically) controlled by computer programs, and it is of capital importance to know whether the programs governing these systems are correct. This gave rise to the theory of verification. In order to handle real-life systems, various mathematical models have been introduced: timed automata, hybrid systems, etc. Verification addresses the following question: “Given some mathematical model, can we verify whether it satisfies a given property (e.g., reachability or safety)?”
The verification question concerns only models of closed systems, where every transition is controlled. If we want to distinguish between actions of a controller and actions of an environment, we have to consider control on those formalisms. In this context the control problem addresses the following question: “Given some mathematical model, can we force the model to satisfy a given property whatever the environment does?”
In this talk, we will focus on a particular mathematical model, namely o-minimal hybrid systems. O-minimal hybrid systems enjoy rich continuous dynamics; nevertheless they admit finite bisimulations. This result was first proved in 2000 by G. Lafferriere, G. J. Pappas and S. Sastry. In this talk, we will show how finite words can be used to symbolically encode trajectories of o-minimal hybrid systems. We will see that this word-encoding technique can help when studying o-minimal hybrid systems (i) to obtain a new proof of the existence of a finite bisimulation; (ii) to study the control (reachability) problem.

(Based on joint work with Patricia Bouyer, Fabrice Chevalier, Christian Michaux, Cedric Riviere, and Christophe Troestler.)

• January 25: Sumit Gulwani, MSR: “Program Verification using Probabilistic Techniques”

Abstract:
Probabilistic algorithms have played a significant role in several areas of computer science. Such algorithms are usually more precise, efficient, and even simpler than their deterministic counterparts. In this talk, I will present two sampling-based probabilistic techniques for discovering and proving program invariants.

One of these techniques, called random interpretation, can be seen as combining the complementary strengths of abstract interpretation and random testing. The key idea of this technique is to compute and manipulate program invariants efficiently by representing them by a small random sample of the set of program states that they represent (as opposed to manipulating program invariants symbolically).
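
A toy sketch of the affine-join operation at the heart of random interpretation (heavily simplified; the branch states below are made up for illustration):

```python
import random

# Toy sketch of the affine-join idea behind random interpretation:
# instead of merging two branch states symbolically, take a random
# affine combination of them, working modulo a large prime.

def affine_join(state1, state2, prime=2**31 - 1):
    """Combine two program states with a random affine weight."""
    w = random.randrange(prime)
    return {v: (w * state1[v] + (1 - w) * state2[v]) % prime
            for v in state1}

# Hypothetical program:  if (...) { x = 1; y = 9; } else { x = 4; y = 6; }
s1 = {'x': 1, 'y': 9}
s2 = {'x': 4, 'y': 6}
joined = affine_join(s1, s2)

# The linear invariant x + y == 10 holds in both branches, so it
# survives the join exactly (mod p) ...
holds = (joined['x'] + joined['y']) % (2**31 - 1) == 10
# ... while a non-invariant like x == 1 almost surely does not.
```

Because the join is affine, any linear relation that holds in both incoming states is preserved exactly, while a relation holding in only one branch survives the random choice of weight only with negligible probability.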

The other technique can be seen as combining the complementary strengths of forward and backward program analysis. This technique finds correctness proofs of a pre/post condition pair by providing an invariant at each program point that can be locally verified. This is done by first initializing the (guesses for the) invariants at all program points to anything, and then iteratively selecting a random program point and updating its invariant randomly to make it more locally consistent with respect to the invariants at the neighboring points.

This is joint work with George Necula (UC-Berkeley) and Nebojsa Jojic (MSR).

• January 22: Sumit Gulwani, MSR: “Static Analysis for Heap-Manipulating Low-Level Software”

Abstract:
Analysis of heap-manipulating programs in languages like C or low-level code is challenging because of the presence of recursive data structures and pointer arithmetic. In this talk, I will describe an abstract-interpretation-based approach for static analysis of such programs (which can be used for checking memory safety, memory leaks, and user-specified assertions).

Our abstract domain allows representation of must and may equalities among pointer expressions. The integer variables used in the pointer expressions can be existentially as well as universally quantified and can have constraints over some base domain. However, quantification of only a special form, namely $\exists \forall$ quantification, is allowed. This choice has been made to balance expressiveness with efficient automated deduction. The existential quantification is over some ghost variables of the program that need to be made explicit to express the program invariants required to prove some program properties. The universal quantification allows us to talk about properties of collections of memory locations.
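
For instance, a hypothetical invariant of the permitted $\exists \forall$ shape (an illustration, not an example from the talk) for a loop that has validated the first $n$ cells of a buffer might read: $\exists n.\ p = \mathit{buf} + n \,\wedge\, \forall i.\ 0 \le i < n \Rightarrow \mathit{valid}(\mathit{buf} + i)$. Here the ghost variable $n$ is existentially quantified, while the universal quantifier ranges over the collection of already-processed memory locations.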

The transfer functions for performing operations over this abstract domain are based on the transfer functions for the underlying quantifier-free base constraint domain (e.g., the conjunctive domain of linear arithmetic combined with uninterpreted functions). To our knowledge, this is the first abstract interpreter that can automatically deduce first-order logic invariants in programs (without requiring any explicit predicates).

This is joint work with Ashish Tiwari (SRI).

Abstract:
Despite their popularity and importance, pointer-based programs remain
a major challenge for program verification. In this paper, we propose
an automated verification system that
is concise, precise and expressive for ensuring the safety of
pointer-based programs. Our approach uses {\em user-definable}
shape predicates to allow programmers to describe a wide range of
data structures with their associated size properties. To support
automatic verification, we design a new entailment checking procedure
that can handle {\em well-founded} inductive predicates using {\em
unfold/fold} reasoning. To improve expressivity, we support a {\em
non-deterministic set of states} for proof search, {\em intersection
types} for methods and {\em coercion rules} for related shape
predicates. We have proven the soundness and termination of our
verification system, and have built a prototype system.