In reverse chronological order
with Pierre Wolper and François Cantin.
Abstract: In this talk, we consider the problem of computing the real convex hull of a finite set of n-dimensional integer vectors. The starting point is a finite-automaton representation of the initial set of vectors. The proposed method consists of computing a sequence of automata representing approximations of the convex hull and using extrapolation techniques to compute the limit of this sequence. The convex hull can then be directly computed from this limit in the form of an automaton-based representation of the corresponding set of real vectors. The technique is quite general and has been implemented. Also, our result fits in a wider scheme whose objective is to improve the techniques for converting automata-based representations of constraints to formulas.
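The automaton-based construction itself is beyond a short sketch, but the object being computed is easy to illustrate. Below is a minimal baseline, assuming an explicitly listed 2-D integer point set rather than an automaton representation: Andrew's monotone-chain algorithm for the convex hull.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D integer points.
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the duplicated endpoints where the two chains meet
    return lower[:-1] + upper[:-1]

# Example: integer vectors whose hull is the unit square
print(convex_hull([(0, 0), (1, 0), (0, 1), (1, 1), (0, 0)]))
# [(0, 0), (1, 0), (1, 1), (0, 1)]
```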
Building on the work started in Flyspeck I, we continued the formalization of the proof of the Kepler Conjecture by eliminating a further 92.5% of the tame graphs enumerated in Flyspeck I. This is done by showing that the basic linear program associated with an eliminated tame graph is infeasible. This work has been carried out using the Isabelle mechanical theorem prover, extended with the HOL Computing Library.
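The elimination step rests on showing that a linear program is infeasible. As a toy illustration of that notion (not the Isabelle/HOL machinery used in Flyspeck), Fourier-Motzkin elimination can decide infeasibility of a small rational constraint system:

```python
from fractions import Fraction

def infeasible(constraints, nvars):
    """Decide whether a system of linear constraints a.x <= b has no
    rational solution, by Fourier-Motzkin variable elimination.
    Each constraint is (coeffs, b) meaning sum(coeffs[i]*x[i]) <= b."""
    sys = [([Fraction(c) for c in a], Fraction(b)) for a, b in constraints]
    for v in range(nvars):
        pos = [c for c in sys if c[0][v] > 0]
        neg = [c for c in sys if c[0][v] < 0]
        sys = [c for c in sys if c[0][v] == 0]
        for (ap, bp) in pos:
            for (an, bn) in neg:
                # positive combination (-an[v])*pos + ap[v]*neg cancels x[v]
                coef = [-an[v] * ap[i] + ap[v] * an[i] for i in range(nvars)]
                sys.append((coef, -an[v] * bp + ap[v] * bn))
    # only trivial constraints 0 <= b remain; infeasible iff some b < 0
    return any(b < 0 for _, b in sys)

# x >= 1 and x <= 0, i.e. -x <= -1 and x <= 0: infeasible
print(infeasible([([-1], -1), ([1], 0)], 1))  # True
```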
Abstract: I will present the development and application of microfluidic large-scale integration (MLSI) to biology. MLSI allows for the integration of thousands of micromechanical valves on a microfluidic platform, which in turn can be applied to characterizing complex biological systems. I will discuss experiments which allowed us to measure the first comprehensive binding energy landscape of a transcription factor, as well as the functional evolvability of a transcription factor family. My laboratory is also currently working on developing a high-throughput platform for live-cell imaging, which will also be presented.
We propose a general, formal definition of malware in the language of modal logic. Our definition is general thanks to its abstract formulation, which is independent of — but nonetheless generally applicable to — the manifold concrete manifestations of malware. From our formulation of malware, we derive equally general and formal definitions of benware (not malware), anti-malware (“antibodies” against malware), and medware (“medicine” for affected software). We provide theoretical tools and practical techniques for the detection, comparison, and classification of malware and its derivatives. Our general defining principle is causation of (in)correctness.
This is joint work with Julian C. Bradfield (U. Edinburgh).
(joint work with Tim Harris) (Slides)
Abstract: We present a hierarchical method for verifying implementations of software transactional memory (STM) and we show how to use this method to verify the Bartok STM. Our approach is distinguished by providing an end-to-end proof that relates the behavior of a transactional program using atomic blocks down to a detailed model of the STM implementation.
Our approach is split over work at two levels. First, the high-level part provides algorithm-level semantics for the program-STM combination. This semantics reflects a generic, write-in-place, lazy-invalidate STM where the individual operations on the STM API are assumed to be atomic: transactional reads and writes, transaction commit and rollback. At this level, the internals of the STM implementation are left unspecified. Instead, intuitive conditions on when and how the STM invalidates and rolls back transactions are identified, constituting a sufficient condition for serializability, for which a manual proof is provided.
Second, in the low-level part of our approach, executions of the Bartok STM implementation are modeled and related to serializable executions at the algorithm-level. A verification-oriented model implementation of an STM in C# based on the Bartok STM code is provided for this purpose. Through several steps of Lipton-style reduction and abstraction, an implementation-level execution is gradually transformed to an algorithm-level one with larger atomic actions that satisfies the sufficient conditions for serializability. A novel feature of our approach is the formulation of the correctness of these transformations as assertions in sequential programs. We verify these automatically using the Boogie tool.
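As rough intuition for the protocol being verified (a deliberately simplified, single-threaded model; this is not the Bartok STM code or the paper's proof artifacts), here is a write-in-place STM with an undo log and commit-time read validation:

```python
class STM:
    """Toy write-in-place STM model: writes go directly to memory with an
    undo log; commit validates that no location read by the transaction
    has changed version, rolling back otherwise (lazy invalidation)."""
    def __init__(self):
        self.mem, self.ver = {}, {}

    def begin(self):
        return {"reads": {}, "undo": []}

    def read(self, tx, loc):
        # remember the version at first read for later validation
        tx["reads"].setdefault(loc, self.ver.get(loc, 0))
        return self.mem.get(loc, 0)

    def write(self, tx, loc, val):
        tx["undo"].append((loc, self.mem.get(loc, 0), self.ver.get(loc, 0)))
        self.mem[loc] = val
        self.ver[loc] = self.ver.get(loc, 0) + 1

    def commit(self, tx):
        written = {u[0] for u in tx["undo"]}
        for loc, v in tx["reads"].items():
            if loc not in written and self.ver.get(loc, 0) != v:
                self.rollback(tx)
                return False
        return True

    def rollback(self, tx):
        for loc, val, v in reversed(tx["undo"]):
            self.mem[loc], self.ver[loc] = val, v

stm = STM()
t1 = stm.begin()
stm.read(t1, "x")         # t1 reads x = 0
t2 = stm.begin()
stm.write(t2, "x", 7)     # t2 updates x in place
assert stm.commit(t2)
print(stm.commit(t1))     # t1's read is stale -> False (t1 rolled back)
```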
Using our method, we detected a previously unnoticed omission in the published Bartok STM pseudo-code and known bugs in earlier versions of the implementation. We were also able to verify the correctness of the most recent version.
(joint work with Tayfun Elmas and Shaz Qadeer) (Slides)
Abstract: We present a proof calculus and method for the static verification of assertions and procedure specifications in shared-memory concurrent programs. The key idea in our approach is to use atomicity as a proof tool and to simplify the verification of assertions by rewriting programs to consist of larger atomic actions. We propose a novel, iterative proof style in which alternating use of abstraction and reduction is exploited to compute larger atomic code blocks in a sound manner. This makes possible the verification of assertions in the transformed program by simple sequential reasoning within atomic blocks, or significantly simplified application of existing concurrent program verification techniques such as the Owicki-Gries or rely-guarantee methods. Our method facilitates a clean separation of concerns where, at each phase of the proof, the user worries only about either the sequential properties or the concurrency control mechanisms in the program. We implemented our method in a tool called QED. We demonstrate the simplicity and effectiveness of our approach on a number of benchmarks including ones with intricate concurrency protocols.
Abstract: We describe the design and implementation of an automatic invariant generator for imperative programs. While automatic invariant generation through constraint solving has been extensively studied from a theoretical viewpoint as a classical means to program verification, in practice, existing tools do not scale even to moderately sized programs. This is because the constraints that need to be solved even for small programs are already too difficult for the underlying (non-linear) constraint solving engines. To overcome this obstacle, we propose to strengthen static constraint generation with information obtained from static abstract interpretation and dynamic execution of the program. The strengthening comes in the form of additional linear constraints that trigger a series of simplifications and make solving more scalable. We demonstrate the practical applicability of the approach by an experimental evaluation on a popular collection of challenging benchmark programs and comparisons with related tools based on abstract interpretation and software model checking.
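A minimal sketch of the dynamic-execution strengthening idea (the candidate grammar and all names below are invented for illustration): run the program, observe concrete states, and keep only the simple linear candidates that survive every trace; the survivors become extra constraints for the static solver.

```python
from itertools import combinations

def candidate_invariants(traces, varnames):
    """Filter simple candidate invariants (v == c, v1 == v2, v1 <= v2)
    against concrete execution traces; survivors can be fed to a static
    constraint solver as additional linear constraints."""
    cands = []
    for v in varnames:
        vals = {t[v] for t in traces}
        if len(vals) == 1:
            cands.append(f"{v} == {vals.pop()}")
    for v1, v2 in combinations(varnames, 2):
        if all(t[v1] == t[v2] for t in traces):
            cands.append(f"{v1} == {v2}")
        elif all(t[v1] <= t[v2] for t in traces):
            cands.append(f"{v1} <= {v2}")
    return cands

# states observed at the loop head of a hypothetical counting loop
traces = [{"i": 0, "n": 10}, {"i": 3, "n": 10}, {"i": 10, "n": 10}]
print(candidate_invariants(traces, ["i", "n"]))  # ['n == 10', 'i <= n']
```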
Title: An Improved Algorithm for Three-Color Parity Games (Watch the video)
Abstract: Three-color parity games capture the disjunction of a Buchi and a co-Buchi condition. The most efficient known algorithm for these games is the progress measure algorithm by Jurdzinski. In this talk we present an acceleration technique that, while leaving the worst-case complexity unchanged, often leads to considerable speed-ups in games arising in practice. As an application, we consider games played in discrete real time, where players should be prevented from stopping time by always choosing moves with delay zero. The time progress condition can be encoded as a three-color parity game and checked using a symbolic implementation of the above-mentioned algorithm.
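The progress-measure algorithm itself is too long to sketch here, but a core building block shared by most parity-game solvers is the attractor computation, shown below for an explicit game graph (the encoding is an assumption of this sketch):

```python
def attractor(vertices, edges, owner, target, player):
    """Vertices from which `player` can force the play into `target`:
    player-owned vertices need one successor in the set, opponent-owned
    vertices need all of their successors in it."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succ_in = [w for w in edges[v] if w in attr]
            if owner[v] == player and succ_in:
                attr.add(v)
                changed = True
            elif owner[v] != player and len(succ_in) == len(edges[v]):
                attr.add(v)
                changed = True
    return attr

# a (player 0) -> b (player 1) -> c; c loops; player 0 attracts to {c}
V = ["a", "b", "c"]
E = {"a": ["b"], "b": ["c"], "c": ["c"]}
own = {"a": 0, "b": 1, "c": 0}
print(sorted(attractor(V, E, own, {"c"}, 0)))  # ['a', 'b', 'c']
```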
Abstract: Current predicate abstraction-based model checkers can be classified into two categories, according to how theorem provers are used: The whole-program approach is implemented by transforming the program into one large SMT formula that is satisfiable iff there exists a program execution that reaches a certain error location [CBMC, SATabs, Calysto]. The single-block approach is implemented by transforming each program operation into a formula (weakest precondition) that is used to determine the truth value of predicates in the successor state [SLAM, BLAST]. Neither of the two approaches can be identified as the clear winner. We identify advantages and disadvantages of both approaches, and outline a unifying model-checking approach that is based on summarization [CPAchecker].
Abstract: We present Brahms, an algorithm for sampling random nodes in a large dynamic system prone to malicious behavior. Brahms stores small membership views at each node, and yet overcomes Byzantine attacks by a linear portion of the system. Brahms is composed of two components. The first is an attack-resilient gossip-based membership protocol. The second component extracts independent uniformly random node samples from the stream of node ids gossiped by the first. We evaluate Brahms using rigorous analysis, backed by simulations, which show that our theoretical model captures the protocol’s essentials. We study two representative attacks, and show that with high probability, an attacker cannot create a partition between correct nodes. We further prove that each node’s sample converges to an independent uniform one over time. To our knowledge, no such properties were proven for gossip protocols in the past.
Based on joint work with Edward Bortnikov, Maxim Gurevich, Gabriel Kliot, and Alexander Shraer, published in PODC 2008.
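The second component can be made concrete. Below is a hedged sketch of one sampler cell, using a seeded SHA-256 hash as a stand-in for the min-wise independent permutations the protocol assumes; the Byzantine-resistant membership layer is not modeled:

```python
import hashlib

class Sampler:
    """One Brahms-style sampler cell: keeps the id minimizing a fixed,
    randomly seeded hash over the gossiped id stream; over time the kept
    id converges to a uniform sample of the unique ids observed."""
    def __init__(self, seed):
        self.seed, self.best, self.best_h = seed, None, None

    def _h(self, node_id):
        data = f"{self.seed}:{node_id}".encode()
        return hashlib.sha256(data).hexdigest()

    def next(self, node_id):          # feed one gossiped id
        h = self._h(node_id)
        if self.best_h is None or h < self.best_h:
            self.best, self.best_h = node_id, h

    def sample(self):
        return self.best

cells = [Sampler(seed) for seed in range(4)]   # independent samples
for node in ["n1", "n2", "n3", "n1", "n2"]:    # gossip stream with repeats
    for c in cells:
        c.next(node)
print([c.sample() for c in cells])             # one uniform pick per cell
```

Repeated ids do not bias the sample: the minimum of a fixed hash over a stream depends only on the set of unique ids seen, which is what makes the cell attack-resilient against flooding with duplicates.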
Abstract: When a specification cannot be realized by any system, it is often hard to see why. I will discuss how to help a user debug an unrealizable specification by constructing a counterstrategy. The counterstrategy explains unrealizability: if the environment adheres to it, no behavior of the system can fulfill the specification. Counterstrategies may be complex and I will show several ways to simplify them. I will also describe a heuristic to construct a system-independent counterstrategy, i.e., a single input trace that shows unrealizability. The final strategy is presented to the user in explicit form as a graph and implicitly as an interactive game. I will show that our approach leads to simple explanations for unrealizable specifications given in “Generalized Reactivity(1)”.
Interface theories have been proposed to support incremental design and independent implementability. Incremental design means that the compatibility checking of interfaces can proceed for partial system descriptions, without knowing the interfaces of all components. Independent implementability means that compatible interfaces can be refined separately, maintaining compatibility. We show that these interface theories provide no formal support for component reuse, meaning that the same component cannot be used to implement several different interfaces in a design. We add a new operation to interface theories in order to support such reuse. For example, different interfaces for the same component may refer to different aspects such as functionality, timing, and power consumption. We give both stateless and stateful examples for interface theories with component reuse. To illustrate component reuse in interface-based design, we show how the stateful theory provides a natural framework for specifying and refining the PCI bus.
Abstract: The unsettled question of the exact complexity of solving parity games has led to the invention of several (quite different) algorithms for this problem. They are often compared w.r.t. the analytical terms describing their worst-case runtime behaviour. This is of course hardly meaningful for practical purposes: solving parity games is for example a crucial ingredient of decision procedures for branching time temporal logics.
In this talk I will give a brief introduction to the theory of parity games, present existing algorithms and try to answer the question “which one is the best in practice?” I will present the PGSolver library - a collection of algorithms and heuristics for solving parity games - describe the “meta”-algorithm of this library that incorporates several optimisations useful for all other algorithms, report on experiments, etc.
This is joint work with Oliver Friedmann.
Abstract: Compilers often use transformations to optimize the performance of a program. Soundness of these transformations is critical for the correctness of the compiler. The subject has been widely studied in the sequential-program setting. Interestingly, many of these compiler transformations, proven to be sound in a sequential setting, are incorrect in a concurrent setting. Moreover, the soundness of a compiler transformation in a concurrent setting may depend on the underlying memory model.
This work builds a formalism to verify soundness of compiler transformations in a concurrent setting. This includes a denotational semantics for concurrent programs, a definition of memory-model transformations based on rewrite rules, and a definition of memory models. The formalism is then used to build a verification tool, Transformation Verifier (TRAVER). The tool, TRAVER, can semi-automatically verify or falsify a given compiler transformation on a given memory model. Moreover, the formalism and the tool help us to discover interesting relations between the soundness of compiler transformations and the relaxation of memory models.
This work was done in an internship at Microsoft Research Redmond this summer. This is joint work with Sebastian Burckhardt and Madan Musuvathi.
Abstract: Understanding the processes involved in multi-cellular pattern formation is a central problem of developmental biology, hopefully leading to many new insights, e.g. in the treatment of various diseases. Defining suitable computational techniques for development modelling, able to perform in-silico simulation experiments, however, is an open and challenging problem.
In this talk we present a modelling technique based on discrete Petri nets, equipped with a notion of (maximal) parallelism. We created a large (hundreds of nodes) model of the developing vulva of the C. elegans worm. In spite of the simplicity of the formalism and the size of the network, we were able to correctly simulate 46 out of 48 experiments on this intricate process. We recorded the statistical accuracy of the results and the agreement between simulated protein levels and biological evidence. Furthermore, we used our model to test in-silico two insightful biological theories on the regulation of the process, involving recently discovered microRNA interactions.
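A toy version of the execution model (a greedy maximal-step firing rule over an explicitly encoded net; the actual vulval-development model and its biological annotations are far larger):

```python
def maximal_step(marking, transitions):
    """Fire one maximal-parallelism step of a Petri net: greedily select a
    maximal multiset of enabled transitions, then add their output tokens
    simultaneously. transitions: list of (consume, produce) dicts mapping
    place -> token count."""
    m = dict(marking)
    fired = []
    progress = True
    while progress:                      # keep selecting until none is enabled
        progress = False
        for i, (consume, _) in enumerate(transitions):
            if all(m.get(p, 0) >= n for p, n in consume.items()):
                for p, n in consume.items():
                    m[p] -= n            # reserve input tokens immediately
                fired.append(i)
                progress = True
    for i in fired:                      # then produce tokens of all fired
        for p, n in transitions[i][1].items():
            m[p] = m.get(p, 0) + n
    return m, fired

# t0: a -> b, t1: a -> c; with two tokens on a, both fire in one step
m, fired = maximal_step({"a": 2}, [({"a": 1}, {"b": 1}), ({"a": 1}, {"c": 1})])
print(m, fired)
```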
Abstract: We develop foundations for proving properties relating two programs. Our formalization is based on a suitably adapted notion of program invariant for a single program. First, we give an abstract formulation of the theory of program invariants based on the notion of an assertion function: a function that assigns assertions to program points. Then, we develop this abstract notion further so that it can be used to prove properties between two programs. We describe two applications of the theory. One application is in translation validation for optimizing compilers, and the other is in the certification of smart-card applications in the framework of the Common Criteria. The latter application is part of an industrial project conducted at the Verimag laboratory.
The C Object System is a C library which implements high-level concepts available in Objective-C, CLOS and other object-oriented programming languages: a uniform object model (class, meta-class and property meta-class), generic functions, multimethods, delegation, exceptions, contracts and closures. It relies on the expressiveness of the C programming language to extend its syntax and to implement the aforementioned concepts as first-class objects. COS aims at satisfying several general principles like simplicity, flexibility, extensibility, reusability, efficiency and portability which are rarely met in a single programming language. Its design is tuned to provide, in the context of multimethods, an efficient and portable implementation of message dispatch and message forwarding, which are the cornerstones of code flexibility and extensibility. Hence, COS concepts should significantly simplify adaptive programming as well as the development of large-scale software and distributed systems.
This presentation assumes knowledge of the C programming language. Knowledge of another object-oriented programming language like Objective-C, C++, C#, Java, D or Eiffel would be an advantage.
Abstract: This talk presents the key concepts of the Zonnon language project (Prof J.Gutknecht, Dr E.Zouev, ETH, 2003-2005) and the major principles and technical solutions in its implementation for the .NET platform.
The Zonnon language is a member of the Pascal, Modula-2 and Oberon family; it is modern, compact, and easy to learn and use. It supports modularity (with importing units and exporting unit members), an OO approach based on the definition/implementation paradigm and refinement of definitions, and concurrency based on the notion of active objects and syntax-based communication protocols.
The Zonnon compiler is written in C#. The front end uses conventional compilation techniques (recursive descent parser with full semantic checks). The back end is implemented using the CCI (Common Compiler Infrastructure) framework from Microsoft as a code generation utility. The compiler is fully integrated into the Visual Studio environment using CCI's integration support.
The presentation assumes some knowledge of the C# language, .NET and Visual Studio, and of the basic principles of compiler construction.
Abstract: This is a rehearsal of my talk at ASE next week.
We present and evaluate a framework and tool for combining multiple program analyses which allows the dynamic (on-line) adjustment of the precision of each analysis depending on the accumulated results.
For example, the explicit tracking of the values of a variable may be switched off in favor of a predicate abstraction when and where the number of different variable values that have been encountered has exceeded a specified threshold.
The method is evaluated on verifying the SSH client/server software and shows significant gains compared with predicate abstraction-based model checking.
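The switch-over idea can be sketched for a single variable (the threshold and state encoding are illustrative inventions, not the tool's actual interfaces):

```python
class ValueTracker:
    """Explicit-value analysis for one variable that degrades to a
    predicate abstraction once too many distinct values are seen."""
    def __init__(self, predicate, threshold=3):
        self.values, self.predicate, self.threshold = set(), predicate, threshold
        self.abstract = None             # None until we switch over

    def observe(self, v):
        if self.abstract is None:
            self.values.add(v)
            if len(self.values) > self.threshold:
                # switch: keep only whether the predicate held so far
                self.abstract = all(self.predicate(x) for x in self.values)
                self.values = None
        else:
            self.abstract = self.abstract and self.predicate(v)

    def state(self):
        if self.abstract is None:
            return ("explicit", sorted(self.values))
        return ("predicate", self.abstract)

t = ValueTracker(lambda x: x >= 0, threshold=3)
for v in [1, 2, 3]:
    t.observe(v)
print(t.state())        # still explicit: ('explicit', [1, 2, 3])
t.observe(4)
print(t.state())        # switched over: ('predicate', True)
```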
Abstract: This thesis presents NC(T), an extension of the DPLL(T) scheme for decision procedures for quantifier-free first-order logics. In DPLL(T), a general Boolean DPLL engine is instantiated with a theory solver for the theory T. The DPLL engine is responsible for computing Boolean implications and detecting Boolean conflicts, while the theory solver detects implications and conflicts in T, and the communication between the two parts is done through a standardized interface. The Boolean reasoning is done on a set of constraints represented as clauses, meaning that formulas have to be converted to conjunctive normal form before they can be processed. The conversion results in the addition of variables and a general loss of structure. NC(T) removes this restriction by extending the Boolean engine to support the detection of implications and conflicts on non-clausal constraints, using techniques working on graphical representations of formulas in negation normal form. Conversion to negation normal form preserves the size and structure of the input formula and does not introduce new variables.
The above scheme NC(T) has been implemented as a tool called FSTP, where the theory T under consideration is the quantifier–free theory of uninterpreted function and predicate symbols with equality. We describe our implementation and give early experimental results.
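The structure-preserving normal form at the heart of NC(T) is easy to demonstrate: negation normal form only pushes negations inward, so no clauses and no fresh variables are introduced (the tuple encoding of formulas below is an assumption of this sketch):

```python
def nnf(f):
    """Negation normal form: push negations down to the atoms. Formulas
    are tuples ('and', a, b), ('or', a, b), ('not', a), or an atom
    string. Size and structure are preserved; no new variables appear."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    g = f[1]                       # op == 'not'
    if isinstance(g, str):
        return ("not", g)          # negation of an atom: already NNF
    if g[0] == "not":
        return nnf(g[1])           # eliminate double negation
    dual = "or" if g[0] == "and" else "and"   # De Morgan
    return (dual, nnf(("not", g[1])), nnf(("not", g[2])))

print(nnf(("not", ("and", "p", ("not", "q")))))
# ('or', ('not', 'p'), 'q')
```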
The MOBIUS project [http://mobius.inria.fr] develops techniques for certifying the resource usage of Java applications running on mobile phones. A particularly interesting kind of resource on mobile phones is network access, e.g. in the form of sending text messages. For a number of reasons, these resources must be tightly controlled. Yet, as the cost of sending text messages may vary depending on the recipients, tracking exact resource usage requires more elaborate data structures than just simple counters.
In this talk, we present a Java resource management library for tracking and controlling at runtime resources such as sending text messages, trapping attempts to abuse resources. The library induces a notion of dynamic resource safety, classifying applications that never attempt to abuse resources. Such applications can do without runtime resource tracking, i.e. they can have their resource managers erased without altering their behaviour.
We supplement the resource management library by a type system for certifying the absence of attempts to abuse resources, complementing dynamic with static resource safety. The type system relies on an expressive constraint language, and type checking is undecidable in general. In the last part of the talk we explore decision procedures for fragments of the constraint language, yielding fragments of the type system with decidable type checking.
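A hedged sketch of the runtime idea (class and method names here are invented, and this is not the MOBIUS library's API): a manager that tracks per-recipient budgets and traps any attempt to exceed them.

```python
class ResourceExhausted(Exception):
    pass

class SmsManager:
    """Illustrative runtime resource manager: per-recipient message
    budgets, trapping attempts to exceed them. An application that never
    triggers the trap is 'dynamically resource safe' in the talk's sense,
    and could run identically with this manager erased."""
    def __init__(self, budgets):
        self.budgets = dict(budgets)     # recipient -> messages allowed

    def send(self, recipient, text):
        left = self.budgets.get(recipient, 0)
        if left <= 0:
            raise ResourceExhausted(f"no budget left for {recipient}")
        self.budgets[recipient] = left - 1
        return f"sent to {recipient}: {text}"

mgr = SmsManager({"+411234567": 1})
print(mgr.send("+411234567", "hi"))
try:
    mgr.send("+411234567", "again")      # second send is trapped
except ResourceExhausted as e:
    print("trapped:", e)
```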
Abstract: The Java programming language has become a viable platform for real-time systems with applications in avionics, shipboard computing, audio processing, industrial control and the financial sector. High performance real-time Java virtual machines (RT JVMs) are now available from multiple vendors.
One of the main challenges in using a high-level programming language, such as Java or C#, to program hard real-time systems is dealing with heap-allocated data structures. Traditional techniques such as pre-allocation and object pooling are ill-suited to modern software engineering practices. In this talk I describe two approaches that we have experimented with in the context of the Ovm real-time Java virtual machine: region-based allocation and real-time garbage collection. I will demonstrate that for tasks which can tolerate latencies on the order of 1 millisecond, real-time collectors are perfectly adequate, but that in order to obtain sub-millisecond latencies, other approaches are required. The talk will also give an overview of new results in non-blocking concurrent garbage collection.
Abstract: To increase reliability, developers have long used assertions (logical statements that are expected to be true) as lightweight specifications of crucial properties of code. Assertions have predominantly served two purposes: documentation and runtime checking.
We envision a far broader role of assertions where they are the cornerstone of a wide range of analyses that uniformly apply across the spectrum of software design, development, and maintenance, and synergistically promise to bring about a significant increase in reliability.
This talk gives an overview of our ongoing work on assertion-driven development and analyses, and explains the details of how we use assertions for error recovery. Our key insight is to turn violated assertions into recovery routines by using an assertion as a basis of repairing an erroneous program state. Experimental results show that our repair algorithm efficiently handles complex data structures and enables systems to recover from potentially crippling errors.
Biography: Sarfraz Khurshid is an Assistant Professor in the Electrical and Computer Engineering department at the University of Texas at Austin, where he leads the Software Verification and Testing Group. He obtained his PhD in Computer Science from MIT in 2004. He received a BSc in Mathematics and Computer Science from Imperial College London, and read Part III of the Mathematical Tripos at Trinity College Cambridge. His current research focuses on software testing, specification languages, code conformance, model checking, data structure repair, and applications of heuristics in program analysis.
14:00 Damien Zufferey, CSIsat: Interpolation for LA+EUF, CAV rehearsal (15min)
Abstract: We present CSIsat, an interpolating decision procedure for the quantifier-free theory of rational linear arithmetic and equality with uninterpreted function symbols. Our implementation combines the efficiency of linear programming for solving the arithmetic part with the efficiency of a SAT solver to reason about the boolean structure.
This is joint work with Dirk Beyer (SFU) and Rupak Majumdar (UCLA).
We study the application of modular static analysis to prove program safety and to detect program errors. In particular, we shall consider imperative programs that rely on numerical invariants.
To handle the challenges of disjunctive analyses, we introduce the notion of affinity to characterize how closely related a pair of disjuncts is. Finding related elements in the conjunctive (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) base domain. We have implemented a static analyzer based on disjunctive polyhedral analysis where the relational domain and the proposed operators can progressively enhance precision at a reasonable cost. Our second objective is to support either a proof of the absence of bugs in the case of a valid program or bug finding in the case of a faulty program.
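The affinity idea can be sketched in the simplest base domain, intervals (the affinity measure below, a negated gap, is an illustrative stand-in for the paper's domain-specific definition):

```python
def affinity(d1, d2):
    """Affinity of two intervals: overlapping or adjacent intervals score
    highest; distant ones score low (negated gap between them)."""
    (l1, u1), (l2, u2) = d1, d2
    return -max(0, max(l1, l2) - min(u1, u2))

def hull(d1, d2):
    """Convex hull of two intervals in the base domain."""
    return (min(d1[0], d2[0]), max(d1[1], d2[1]))

def merge_to(disjuncts, k):
    """Keep a powerset element within k disjuncts: repeatedly merge the
    pair with the highest affinity, i.e. the smallest precision loss."""
    ds = list(disjuncts)
    while len(ds) > k:
        pairs = [(affinity(ds[i], ds[j]), i, j)
                 for i in range(len(ds)) for j in range(i + 1, len(ds))]
        _, i, j = max(pairs)
        merged = hull(ds[i], ds[j])
        ds = [d for n, d in enumerate(ds) if n not in (i, j)] + [merged]
    return ds

# [0,2] and [3,5] are close and get merged; [100,101] is kept apart
print(merge_to([(0, 2), (3, 5), (100, 101)], 2))
# [(100, 101), (0, 5)]
```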
We propose a dual static analysis that is designed to track concurrently two over-approximations: the success and the failure outcomes. Due to the concurrent computation of outcomes, we can identify two significant input conditions: a never-bug condition that implies safety for inputs that satisfy it and a must-bug condition that characterizes inputs that lead to true errors in the execution of the program.
Abstract: The work presented in this talk is motivated by the export of property-based formal techniques to the validation of analog circuits. We develop a temporal logic STL/PSL for describing properties of analog and mixed-signal behaviors, which relies on the real-time logic MITL with atomic propositions being predicates over real-valued variables.
In the first part of the presentation, I describe a compositional construction of timed automata that accept exactly models of MITL formulae. This method is based on a translation of MITL formulae to temporal testers, which can be seen as timed transducers that output at every time point whether the formula is satisfied at that instant. This approach can be used to model check real-time properties on timed systems.
Since the formal verification of non-trivial continuous systems remains very difficult, we resort to a lighter validation technique, namely property-based monitoring. In the second part of the talk, I describe the Analog Monitoring Tool (AMT), which checks whether analog and mixed-signal simulation traces satisfy STL/PSL specifications that define their correct behavior. Finally, I illustrate this approach with a realistic industrial DDR2 memory interface case study.
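A minimal offline monitor for one STL/PSL-style property over a sampled trace (the discrete sampling and the specific property are assumptions of this sketch; AMT handles dense-time analog semantics):

```python
def monitor(trace, p, q, d):
    """Offline monitor for the discrete-time bounded-response property
    'always (p -> eventually within d samples, q)' on a sampled trace.
    Returns (True, None) on success or (False, i) for the first
    violating sample index."""
    for i, s in enumerate(trace):
        if p(s) and not any(q(t) for t in trace[i:i + d + 1]):
            return False, i          # violation witnessed at sample i
    return True, None

# samples of a hypothetical analog signal: whenever v > 1.0, it must
# drop below 0.5 within 2 samples
trace = [1.2, 0.8, 0.3, 1.5, 1.4, 0.4]
print(monitor(trace, lambda v: v > 1.0, lambda v: v < 0.5, 2))
# (True, None)
```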
Abstract: The interrelations of a set of software objects are usually manifold and complex. Common object-oriented programming languages provide constructs for structuring objects according to shared properties and behavior, but fail to provide abstraction mechanisms for the interactions of objects. Roles are a promising solution to these issues, as they focus on the behavior of an object in a certain context. Combining multiple roles yields collaborations, an interesting abstraction and reuse unit. However, existing approaches towards roles in programming languages require vast extensions of the underlying language or even propose new languages.
We propose a programming technique that enables role-based programming with commonly available language constructs. Thus, programmers can express roles and collaborations by simply using a library, and hence without the need to change the language, its compiler, or its tools. We explain our proposal on a language-independent level. Moreover, we provide an implementation in the form of a library for the Scala programming language. Finally, we apply our approach to design patterns and analyze to what extent they can be expressed and reused with roles.
Abstract: We introduce a new decidable logic for reasoning about infinite arrays of integers. The logic is in the $\exists^* \forall^*$ first-order fragment and allows (1) Presburger constraints on existentially quantified variables, (2) difference constraints as well as periodicity constraints on universally quantified indices, and (3) difference constraints on values. In particular, using our logic, one can express constraints on consecutive elements of arrays (e.g. $\forall i ~.~ 0 \leq i < n \rightarrow a[i+1]=a[i]-1$) as well as periodic facts (e.g. $\forall i ~.~ i \equiv_2 0 \rightarrow a[i] = 0$). The decision procedure follows the automata-theoretic approach: we translate formulae into a special class of Buchi counter automata such that any model of a formula corresponds to an accepting run of the automaton, and vice versa. The emptiness problem for this class of counter automata is shown to be decidable, as a consequence of earlier results on counter automata with a flat control structure and transitions based on difference constraints. We show interesting program properties expressible in our logic, and give an example of invariant verification for programs that handle integer arrays.
Joint work with Peter Habermehl (LSV, Paris) and Tomas Vojnar (FIT, Brno)
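The two example properties from the abstract can at least be checked on concrete finite arrays (a finite-instance check for intuition, not the automata-based decision procedure from the talk):

```python
def holds(array, guard, value_constraint):
    """Check a universally quantified array property on a concrete array:
    for every index i satisfying the guard, the value constraint holds."""
    n = len(array)
    return all(value_constraint(array, i)
               for i in range(n) if guard(i, n))

a = [5, 4, 3, 2, 1]
# forall i . 0 <= i < n-1 -> a[i+1] = a[i] - 1
print(holds(a, lambda i, n: i < n - 1, lambda a, i: a[i + 1] == a[i] - 1))
b = [0, 9, 0, 7, 0, 5]
# forall i . i = 0 (mod 2) -> a[i] = 0
print(holds(b, lambda i, n: i % 2 == 0, lambda a, i: a[i] == 0))
```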
Abstract: Process algebras are formalisms for abstract modeling of systems for the purpose of qualitative verification and quantitative evaluation. The purpose of verification is to show that the system behaves correctly, whereas the quantitative analysis gives an approximation of how well the system performs. Originally, process algebras were only developed for qualitative modeling, but gradually they have been extended with time, probabilities, and Markovian (exponential) and generally-distributed stochastic time. The extensions up to stochastic time typically conservatively extended previously well-established theories. However, mostly due to the nature of the underlying (non-)Markovian performance models, the stochastic process algebras were built from scratch. These extensions were typically carried out as orthogonal extensions of untimed process theories with exponential delays or stochastic clocks.
In the talk, we discuss several issues: (1) Is it possible to show that the abstraction using the weak behavioral equivalence in Markovian process theories (and other modeling formalisms as well) is performance preserving and is such an approach compositional? (2) How can we do performance analysis using discrete-time and probabilistic choices? (3) What is the relationship between discrete real and generally-distributed stochastic time in process theories? (4) Is it possible, and if so, how, to extend timed process theories with stochastic time? (5) Reversely, is it possible, and if so, how, to embed discrete real time in generally distributed process theories?
During this midterm presentation of my Master project I will discuss the solution I am developing to check the soundness of Business Process Models.
More and more enterprises rigorously model their business processes so that they can rapidly implement them on their IT infrastructure. Such a formal model of a business process also facilitates its simulation, optimization, and quality control. The more often a model is used for these purposes, the more important it is that it is free of errors, e.g. deadlocks. The absence of many errors can be verified automatically. This master project aims to provide a soundness checker for those models. A process is considered sound when it is free of control-flow errors. This work takes concrete form as a plug-in for IBM WebSphere Business Modeler.
During this presentation I will explain how the control flow is abstracted from the business model. Then I will explain how the properties that we defined are checked on the state space of this abstraction in order to determine the soundness of the process, with an aside on the techniques used to mitigate the state-explosion problem. I will finally discuss some intermediate results and conclude by giving the directions and agenda of the future work.
Abstract: This talk is about an open-source, hard real-time operating system called Tiptoe, which aims at providing a process model that is fully compositional and (constant-time) predictable in a temporal and spatial sense. The goal is to have Tiptoe processes read sensors, compute something, allocate and free memory, write actuators, but also access disks and networks, all in real time, without affecting each other's real-time behavior. Moreover, the Tiptoe system is meant to predict in constant time remaining resource capacities such as the available CPU time, memory, and I/O bandwidth for end-to-end real-time guarantees on all relevant process activities. The strong temporal and spatial isolation of Tiptoe processes will enable more principled and scalable real-time and embedded software engineering. We have already obtained encouraging research results in our prototype implementation with a compositional and (constant-time) predictable, real-time process scheduler and memory management system, which, unlike existing approaches, also guarantees low bounds on memory fragmentation in real time. The talk will focus on the process model and scheduler but will also provide some general insight into the challenges of designing a compositional real-time operating system.
Probabilistic finite automata as acceptors for languages over finite words have been studied by many researchers. In this talk, we discuss probabilistic automata for recognizing languages over infinite words. The idea is to resolve the choices in a nondeterministic omega-automaton by probabilities and to require positive probability for the accepting runs. Surprisingly, probabilistic Buechi automata are more expressive than nondeterministic omega-automata. However, a certain subclass of probabilistic Buechi automata can be identified that has exactly the power of the omega-regular languages. Concerning efficiency, probabilistic omega-automata are not comparable with their nondeterministic counterparts. There are examples of omega-regular languages that have probabilistic Buechi automata of polynomial size, while any nondeterministic omega-automaton with Streett, Rabin, or Buechi acceptance is of exponential size.
The talk will introduce the formal notion of probabilistic automata with Buechi and other acceptance conditions and discuss some basic properties, such as expressiveness, efficiency, composition operations and decision problems.
In this talk, I will present a number of type systems and algorithms designed to guarantee several desirable properties of a typical client-server interaction.
In particular, we want the server to be ready, at any time, to handle arbitrarily many requests; all requests should eventually be answered; the processing of a request should eventually terminate; and, given a protocol governing the negotiation of resources between the client and the server, both parties should obey it.
Having given an informal description of these properties as well as of the language being used (the pi-calculus), I will describe and compare a number of type systems enforcing these properties in processes:
I will start by describing and demonstrating an existing implementation of a type system written by Naoki Kobayashi, “TyPiCal”.
Secondly, I will describe the work António Ravara and I have been doing on the topic.
Finally, I will cover a number of other type systems (Deng and Sangiorgi's type system for termination, and Kobayashi and Sangiorgi's lock-freedom analysis), comparing their expressive power.
Abstract: With the inherent problems in writing correct and efficient concurrent code, a recent concurrent programming paradigm called 'transactional memory' has gained popularity. Model checking software transactional memories (STMs) is difficult because of the unbounded number, length, and delay of concurrent transactions and the unbounded size of the memory. We reduce this verification problem to a finite state problem in two steps. First, we show that every STM that enjoys certain structural properties either violates a safety or liveness requirement on some program with two threads and two shared variables, or satisfies the requirement on all programs. Second, we use a model checker to prove the requirement for the STM applied to a most general program with two threads and two variables. In the safety case, the model checker constructs a simulation relation between two carefully constructed finite-state transition systems, one representing the given STM applied to a most general program, and the other representing a most liberal safe STM applied to the same program. In the liveness case, the model checker analyzes fairness conditions on the given STM transition system. We illustrate the use of the method by proving the correctness of several STMs, including two-phase locking, DSTM, TL2, and optimistic concurrency control.
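To make the setting concrete, here is a minimal sketch of one of the STMs mentioned, two-phase locking, as a toy Python class. The class and method names are invented for illustration; a real STM additionally handles threads, retries, and read versioning, and reads here do not see the transaction's own buffered writes:

```python
class Conflict(Exception):
    """Raised when a transaction hits a lock held by another."""

class TwoPhaseLockingSTM:
    """Toy eager two-phase locking: each transaction locks a variable
    on first access and holds all its locks until commit or abort
    (growing phase, then a single shrinking step), which makes every
    committed history serializable."""
    def __init__(self):
        self.store = {}   # shared memory: var -> value
        self.locks = {}   # var -> owning transaction

    def read(self, txn, var):
        self._acquire(txn, var)
        return self.store.get(var, 0)

    def write(self, txn, var, value):
        self._acquire(txn, var)
        txn.writes[var] = value          # buffered until commit

    def commit(self, txn):
        self.store.update(txn.writes)    # publish all writes, then
        self._release(txn)               # release every lock at once

    def abort(self, txn):
        txn.writes.clear()
        self._release(txn)

    def _acquire(self, txn, var):
        owner = self.locks.get(var)
        if owner is not None and owner is not txn:
            self.abort(txn)              # requester aborts (could retry)
            raise Conflict(var)
        self.locks[var] = txn

    def _release(self, txn):
        for var in [v for v, o in self.locks.items() if o is txn]:
            del self.locks[var]

class Txn:
    def __init__(self):
        self.writes = {}

stm = TwoPhaseLockingSTM()
t1, t2 = Txn(), Txn()
stm.write(t1, 'x', 1)
try:
    stm.read(t2, 'x')        # t1 still holds the lock on x
except Conflict:
    pass                     # t2 aborts
stm.commit(t1)
print(stm.read(t2, 'x'))     # prints 1: t2 now sees the committed value
```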
Abstraction is routinely used to compute sound but approximate solutions to undecidable or intractable program analysis and verification problems. However, abstraction alone is often insufficient, and extrapolation techniques such as acceleration, learning, or widening are necessary to expedite and ensure termination of the analysis. Unlike abstraction, the basic principles behind the design and analysis of extrapolation operators are not well understood. We present a framework for defining and analyzing extrapolation operators for automata. Heuristic observations used to compute approximations are encoded as binary relations. An extrapolation operator is generated using such a relation and the quotient construction for automata. Further operators are generated by composing relations and operators. The theoretical benefit of this framework is that extrapolation operators and their properties such as precision, convergence, termination, and completeness can be rigorously analyzed. On the practical side, analysis tools can be used to automatically generate and refine the extrapolation operators to obtain successively better approximations of a fixed point.
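The quotient construction at the heart of such operators can be sketched for NFAs in a few lines. The encoding below (transition maps, and a class map standing in for the heuristic binary relation) is an illustrative assumption, not the paper's formalization:

```python
def quotient(delta, initial, finals, cls):
    """Extrapolation by quotienting: `cls` maps each state to the
    representative of its class under the heuristic relation.  Merging
    states can only add behaviours, so the quotient automaton accepts a
    superset of the original language (an over-approximation that can
    reach a fixed point faster)."""
    q_delta = {}
    for (s, a), targets in delta.items():
        q_delta.setdefault((cls[s], a), set()).update(cls[t] for t in targets)
    return q_delta, cls[initial], {cls[f] for f in finals}

def accepts(delta, initial, finals, word):
    """Standard NFA acceptance, used to compare the two automata."""
    states = {initial}
    for a in word:
        states = set().union(*(delta.get((s, a), set()) for s in states))
    return bool(states & finals)

# Automaton accepting exactly {"a", "aa"}.  Merging states 1 and 2
# (a widening guess: "all states after the first 'a' look alike")
# yields an automaton for a+, the classic acceleration effect.
delta = {(0, 'a'): {1}, (1, 'a'): {2}}
q_delta, q0, qf = quotient(delta, 0, {1, 2}, {0: 0, 1: 1, 2: 1})
print(accepts(q_delta, q0, qf, 'aaa'))  # True: the quotient over-approximates
```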
Alternating tree register automata (Ranko Lazic, University of Warwick)
A data tree is an unranked ordered tree whose every node is labelled by a letter from a finite alphabet and an element (“datum”) from an infinite set. Logics and automata over data trees and their algorithmic properties have recently attracted considerable attention. That is partly because data trees can serve as elegant models of XML documents.
At PODS '06, Bojanczyk et al. studied first-order logic over data trees and obtained a decidable fragment. Our results presented at LICS '07 show that alternating automata with registers are a powerful competitor, to which variants of logics such as CTL are easily translatable.
I shall present two decidability results, and corresponding complexity lower bounds which are fascinatingly high.
(This is joint work with Marcin Jurdzinski.)
Abstract: This paper proposes a free-for-all execution scheme for read-write code, whether wait-free, t-resilient, or one that utilizes set-consensus. It employs two novel simulation techniques: Extended-BG-Simulation and Simulation-by-Value. Its architecture consists of three layers: at the bottom, real processors just signal their participation; at the middle layer, virtual Extended-BG-simulators execute the active codes free-for-all; and at the top layer, the original processors cooperatively use Simulation-by-Value to simulate the virtual BG-simulators.
The Extended-BG-simulation removes a drawback of the BG-simulation: in the BG-simulation, each simulator may be interested in the output of a particular code, but the simulation only guarantees an output of some code, rather than of a specific one. The extended simulation removes this drawback. Simulation-by-Value, which subsumes the BG-simulation, allows us to show that if at some point during a run disagreement happens to be low, it can be kept low afterwards.
The combination of the two simulations allows for a clean separation of concerns: it is the BG-simulators that are programmed to decide which original code to execute next among all active codes. The processors just pick up BG-simulators that will make progress, from a prefix of an ordered sequence of possibly infinitely many simulators, with all the rest effectively faulty.
Among the many interesting ramifications of the new execution scheme, the most important are that the use of k-set consensus is tantamount to k-concurrency, while resiliency is wait-freedom with some level of "synchrony" just at the first step.
Abstract: Conventional specifications for object-oriented (OO) programs must adhere to behavioral subtyping in support of class inheritance and method overriding. However, this requirement inherently weakens the specifications of overridden methods in superclasses, leading to imprecision during program reasoning. To address this, we advocate a fresh approach to OO verification that focuses on the distinction and relation between specifications that cater to calls with static dispatching from those for calls with dynamic dispatching. We formulate a novel specification subsumption that can avoid code re-verification, where possible.
Using a predicate mechanism, we propose a flexible scheme for supporting class invariants and lossless casting. Our aim is to lay the foundation for a practical verification system that is precise, concise, and modular for sequential OO programs. We exploit the separation logic formalism to achieve this.
Abstract: In 1957, William Craig introduced the concept of interpolation, providing a link between proof theory and model theory. Given an inconsistent pair (A, B) of first-order logic formulas, a Craig interpolant C is a formula that is implied by A, is inconsistent with B, and contains only non-logical symbols that occur both in A and in B. Intuitively, the interpolant C is an abstraction of A.
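For propositional logic the definition can be made executable by brute force: the strongest interpolant of (A, B) is the projection of A onto the shared variables (existentially quantifying the A-local ones). The sketch below is exponential and only meant to make the definition concrete; the function names and the truth-table representation of C are invented for illustration:

```python
from itertools import product

def interpolant(A, B, a_vars, b_vars):
    """Strongest propositional Craig interpolant of an inconsistent
    pair (A, B): project A onto the shared variables, then verify that
    the result is inconsistent with B.  A and B are predicates over
    dicts mapping variable names to booleans; C is returned as its
    truth table over the shared variables."""
    shared = [v for v in a_vars if v in b_vars]
    table = set()
    for bits in product((False, True), repeat=len(a_vars)):
        env = dict(zip(a_vars, bits))
        if A(env):                       # A implies C by construction
            table.add(tuple(env[v] for v in shared))
    C = lambda env: tuple(env[v] for v in shared) in table
    for bits in product((False, True), repeat=len(b_vars)):
        env = dict(zip(b_vars, bits))
        assert not (C(env) and B(env)), "A and B were jointly satisfiable"
    return shared, table

# Example: A = p & q and B = ~p & r; the interpolant is simply p.
shared, table = interpolant(
    lambda e: e['p'] and e['q'],
    lambda e: (not e['p']) and e['r'],
    ['p', 'q'], ['p', 'r'])
print(shared, table)  # ['p'] {(True,)}
```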
Recently, Craig interpolation has been rediscovered in the context of automated verification. For instance, interpolants are used to approximate inductive invariants of symbolic transition systems. For this purpose, it is necessary to extract interpolants from the formulas that arise during the program analysis. Many of the highly optimized decision procedures for the logics used in program verification are based on a translation of the word-level structure to a propositional formula. Unfortunately, these solvers do not generate word-level interpolants, and bit-level interpolants are often hard to use in the original context.
Therefore, we propose to lift propositional resolution proofs obtained from bit-flattened formulas to word-level proofs, enabling the computation of word-level interpolants. We present an algorithm that demonstrates the feasibility of this approach for equality logic.
Abstract: Probabilistic models are attracting increasing interest among computer scientists dealing with distributed systems. Their main successful use is in the study of protocols involving random trials (such as security protocols). Risk analysis and failure models are another area of application. For some cases (e.g., probabilistic protocols), mixed nondeterministic/probabilistic models are required; well-known examples are Markov decision processes, also known as probabilistic automata. For other cases, however (e.g., some failure models), purely probabilistic models are needed. Now, there is a deep difficulty with such models (when untimed versions of them are considered), namely: interleaving semantics for distributed models artificially creates nondeterminism.
In the first part of my talk I shall in particular discuss the very basic case of the product of two Markov chains. Markov chains are the simplest probabilistic model beyond coin tossing; they are as basic as automata are for computer scientists. Nevertheless, constructing a purely probabilistic product for them raises surprising difficulties. I will show that the right way is to see trajectories of the product as partial orders of events obtained by synchronizing trajectories of the two chains on shared states. And I will explain how to construct a probability for these trajectories, that offers a Markov property as well as a special form of probabilistic independence for the local excursions of the two chains.
In the second part, I shall consider the general case of randomizing (safe) Petri nets with true concurrency semantics, or event structures. I shall discuss the construction of probabilities and I shall also present a few results of probabilistic nature, namely a concurrent law of large numbers and renewal theory.
Abstract: We present a new method for underapproximation in model-checking based on the following idea:
For three naturals k,m,n such that k<m<n, a circuit with m inputs and n outputs is called k-universal if for every subset U of k outputs, all 2^k possible valuations for U are reachable under some assignment to the inputs.
Now consider a design M with n inputs that we wish to model-check. Connecting the inputs of M to the outputs of a k-universal circuit gives us a new design M' with m (<n) inputs, which underapproximates M.
This is an attractive feature for underapproximation in model-checking: on one hand the combined design M' has fewer inputs than M, and on the other hand it is expected to have enough “freedom” to find an error state if one exists.
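The definition of k-universality can be checked directly by brute force for tiny circuits. The sketch below (function names invented, exponential in m and useless at industrial scale) is only meant to make the definition concrete:

```python
from itertools import combinations, product

def is_k_universal(circuit, m, n, k):
    """Brute-force check of k-universality: for every subset U of k of
    the n outputs, all 2^k valuations of U must be reachable under some
    assignment to the m inputs.  `circuit` maps an m-tuple of bits to
    an n-tuple of bits."""
    outputs = {tuple(circuit(bits)) for bits in product((0, 1), repeat=m)}
    for U in combinations(range(n), k):
        seen = {tuple(out[i] for i in U) for out in outputs}
        if len(seen) < 2 ** k:
            return False
    return True

# Hypothetical toy circuit with m=2 inputs and n=3 outputs,
# out = (x, y, x XOR y): it is 2-universal but not 3-universal,
# since the third output is determined by the first two.
toy = lambda b: (b[0], b[1], b[0] ^ b[1])
print(is_k_universal(toy, 2, 3, 2))  # True
print(is_k_universal(toy, 2, 3, 3))  # False
```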
We show how to construct k-universal circuits efficiently, while making sure that k is large.
We also report initial experimental results with bounded model checking of industrial designs.
Joint work with Ofer Strichman.
Abstract: Software vendors collect bug reports from customers to improve the quality of their software. These reports should include the inputs that make the software fail, to enable vendors to reproduce the bug. However, vendors rarely include these inputs in reports because they may contain private user data. We describe a solution to this problem that provides software vendors with new input values that satisfy the conditions required to make the software follow the same execution path until it fails, but are otherwise unrelated to the original inputs. These new inputs allow vendors to reproduce the bug while revealing less private information than existing approaches. Additionally, we provide a mechanism to measure the amount of information revealed in an error report. This mechanism allows users to make informed decisions on whether or not to submit reports. We implemented a prototype of our solution and evaluated it with real errors in real programs. The results show that we can produce error reports that allow software vendors to reproduce bugs while revealing almost no private information.
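The core idea, generating a fresh input that satisfies the same path condition as the failing one, can be illustrated with a toy random search. The real system derives the condition via symbolic execution and queries a constraint solver rather than sampling; every name below is invented for illustration:

```python
import random
import string

def sanitized_input(original, path_condition, tries=100_000):
    """Search for a fresh input of the same length that satisfies the
    same path condition (so the vendor can replay the failing path)
    but is otherwise unrelated to the user's original input."""
    rng = random.Random(0)
    alphabet = string.ascii_uppercase + string.digits
    for _ in range(tries):
        candidate = ''.join(rng.choice(alphabet) for _ in original)
        if candidate != original and path_condition(candidate):
            return candidate
    return None  # give up: a real solver would decide satisfiability

# Hypothetical bug: the program crashes on any 6-character input that
# starts with "AB"; the remaining four characters are private data.
crashes = lambda s: len(s) == 6 and s.startswith('AB')
report = sanitized_input('ABm7q2', crashes)
print(report)  # some other "AB...." input: same path, no secrets
```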
Abstract: We introduce Phase Type Processes (PTPs), a novel stochastic modeling approach that can express probabilistic and nondeterministic choices as well as random delays following phase-type distributions, a generalization of exponential distributions. Action-labeled transitions are used to react to external stimuli and are clearly separated from phase-type transitions. The semantics of PTPs is defined in terms of path probabilities with respect to schedulers that resolve nondeterministic choices based on the timed process history. The main emphasis of this work is to discuss parallel composition in the context of a partial memoryless property. Moreover, the contribution includes various notions of equivalence for PTPs, among others stochastic extensions of trace, failure-trace, testing, and bisimulation equivalence.
This is joint work with Christel Baier.
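A phase-type distribution is the time to absorption of a finite continuous-time Markov chain, which a short Monte-Carlo sketch can make concrete. The encoding of the chain (rate dictionaries, `None` as the absorbing state) is an assumption made for illustration:

```python
import random

def sample_phase_type(init, transitions, rng):
    """Draw one sample from a phase-type distribution, i.e. the time to
    absorption of a continuous-time Markov chain.  `init` maps states
    to initial probabilities; `transitions[s]` maps successor states
    (None = absorbing) to rates."""
    def pick(weighted):
        r = rng.random() * sum(weighted.values())
        for item, w in weighted.items():
            r -= w
            if r <= 0:
                return item
        return item
    state, t = pick(init), 0.0
    while state is not None:
        rates = transitions[state]
        t += rng.expovariate(sum(rates.values()))  # exponential sojourn
        state = pick(rates)                        # jump proportional to rate
    return t

# Erlang(2, rate 1): two exponential phases in sequence; mean is 2.
trans = {0: {1: 1.0}, 1: {None: 1.0}}
rng = random.Random(42)
samples = [sample_phase_type({0: 1.0}, trans, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # close to the Erlang(2, 1) mean of 2.0
```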
Abstract: Whereas formal verification of timed systems has become a very active field of research, the idealized mathematical semantics of timed automata cannot be faithfully implemented on digital, finitely precise hardware. In particular, the properties proved on a timed automaton might not be preserved in any of its implementations.
In this talk, I'll briefly recall the “robust model-checking” approach to checking implementability [DDR04,DDMR04] on the one hand, and the channel-automata technique for timed model-checking [BMOW07] on the other hand. I'll then explain how both techniques can be combined for robust model-checking of timed properties.
[DDR04] Martin De Wulf, Laurent Doyen and Jean-François Raskin. Almost ASAP Semantics: From Timed Models to Timed Implementations. In HSCC'04, LNCS 2993, pages 296-310. Springer, 2004.
[DDMR04] Martin De Wulf, Laurent Doyen, Nicolas Markey and Jean-François Raskin. Robustness and Implementability of Timed Automata. In FORMATS'04, LNCS 3253, pages 118-133. Springer, 2004.
[BMOW07] Patricia Bouyer, Nicolas Markey, Joël Ouaknine and James Worrell. The Cost of Punctuality. In LICS'07, pages 109-118. IEEE Comp. Soc. Press, 2007.
[BMR08] Patricia Bouyer, Nicolas Markey and Pierre-Alain Reynier. Robust Analysis of Timed Automata via Channel Machines. In FoSSaCS'08, LNCS. Springer, to appear.
Abstract: The omega-rational languages play an important role in automata theory, and the Wagner hierarchy is nowadays known to be the most refined classification of these languages. But the algebraic approach to automata theory also reveals a relevant correspondence between these omega-rational languages and algebraic structures called finite omega-semigroups. Therefore, within this algebraic framework, we give a complete description of the algebraic counterpart of the Wagner hierarchy by building an appropriate hierarchy of finite pointed omega-semigroups. We adopt a hierarchical game approach, translating the Wadge theory from the omega-rational-language context to the omega-semigroup context.
Abstract: For certain specifications, a correct system is very hard to obtain, inefficient, unnecessary or even impossible. Therefore, we consider a generic relaxation of correctness: fair correctness. The task of fair model checking is to check whether a given model of a concurrent or reactive system is fairly correct with respect to a given specification.
Courcoubetis and Yannakakis proposed an algorithm for this problem. Their proof of correctness is based on arguments of probability theory. We developed an alternative proof by analysing the algorithm from a game-theoretic point of view. Our proof of correctness has a modularised structure, which makes it easier to find extensions that improve the algorithm and to figure out limitations of the approach the algorithm follows.
We address extensions serving the following purposes:
In the case of classical model checking, this question has already been solved: many model checkers create a counterexample if the model is not correct; that is a run of the model that violates the specification. Such counterexamples support the developers in locating errors in the model.
Abstract: The minimal coverability set (MCS) of a Petri net is a finite representation of the downward closure of its reachable markings. The minimal coverability set allows one to decide several important problems, such as coverability, semi-liveness, place boundedness, etc. The classical algorithm to compute the MCS constructs the Karp&Miller tree. Unfortunately, the K&M tree is often huge, even for small nets. An improvement of the K&M algorithm is the Minimal Coverability Tree (MCT) algorithm, which was introduced 15 years ago and has since been implemented in several tools such as Pep. Unfortunately, we show in this paper that the MCT is flawed: it might compute an under-approximation of the reachable markings. We propose a new solution for the efficient computation of the MCS of Petri nets. Our experimental results show that this new algorithm behaves much better in practice than the K&M algorithm.
Joint work with Jean-François Raskin and Laurent Van Begin. Presented at ATVA'07.
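For readers unfamiliar with the baseline, the classical Karp&Miller construction (the algorithm the paper improves on, not the paper's new one) can be sketched as follows. The encoding of a net as pre/post vectors over places is an illustrative choice:

```python
OMEGA = float('inf')  # stands for an unbounded place

def fire(marking, pre, post):
    """Fire a transition if enabled; OMEGA absorbs token arithmetic."""
    if all(m >= p for m, p in zip(marking, pre)):
        return tuple(m - p + q for m, p, q in zip(marking, pre, post))
    return None

def karp_miller(initial, transitions):
    """Classical Karp&Miller tree exploration: whenever a new marking
    strictly dominates an ancestor on its path, the strictly growing
    places are accelerated to OMEGA, which forces termination.
    Returns the set of (possibly OMEGA-)markings in the tree."""
    nodes = []                          # (marking, parent index)
    work = [(tuple(initial), None)]
    seen = set()
    while work:
        marking, parent = work.pop()
        m, anc = list(marking), parent
        while anc is not None:          # acceleration against ancestors
            a, anc_parent = nodes[anc]
            if all(x <= y for x, y in zip(a, m)) and a != tuple(m):
                m = [OMEGA if y > x else y for x, y in zip(a, m)]
            anc = anc_parent
        marking = tuple(m)
        if marking in seen:
            continue
        seen.add(marking)
        idx = len(nodes)
        nodes.append((marking, parent))
        for pre, post in transitions:
            succ = fire(marking, pre, post)
            if succ is not None:
                work.append((succ, idx))
    return seen

# One place, one transition that only produces tokens: the place is
# unbounded, and the tree collapses it to OMEGA after one acceleration.
print(karp_miller((0,), [((0,), (1,))]))
```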