
Upcoming Talks
Previous years: 2008, 2007, 2006, 2005


2009


In chronological order

Stochastic hybrid models for DNA replication in the fission yeast

Abstract: Motivated by the observation that several biological processes can be conveniently modeled through the interaction of continuous and discrete phenomena, there has been a growing interest recently in the application of hybrid systems techniques to biological modeling and analysis. It has also been recognized that many biological processes are intrinsically uncertain; stochastic phenomena have in fact been shown to be instrumental in improving the robustness of certain biological processes, or in inducing variability in an otherwise homogeneous population. In this talk we will describe the development of a stochastic hybrid model for DNA replication, one of the most fundamental processes behind the life of every cell. We will discuss how the model was instantiated for the fission yeast and present analysis results that suggest that the predictions of the model do not match conventional biological wisdom and experimental evidence. Interestingly, the problem appears to be not in the model, but in conventional biological wisdom. We will discuss how this observation has motivated follow-on experiments (in vitro and in silico) to test two competing biological hypotheses that could explain the discrepancy.


Bio: John Lygeros received a B.Eng. degree in Electrical Engineering and an M.Sc. degree in Automatic Control from Imperial College, London, U.K., in 1990 and 1991, respectively. He then received a Ph.D. degree from the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, in 1996. He held a series of postdoctoral research appointments with the National Automated Highway Systems Consortium, the Massachusetts Institute of Technology, and the University of California, Berkeley. In parallel, he was also a part-time Research Engineer with SRI International, Menlo Park, CA, and a Visiting Professor with the Department of Mathematics, Universite de Bretagne Occidentale, Brest, France. Between July 2000 and March 2003, he was a university lecturer with the Department of Engineering, University of Cambridge, Cambridge, U.K. and a fellow of Churchill College. Between March 2003 and July 2006, he was an assistant professor with the Department of Electrical and Computer Engineering, University of Patras, Patras, Greece. In July 2006, he joined the Automatic Control Laboratory, ETH Zurich, Switzerland as an associate professor. His research interests include modeling, analysis, and control of hierarchical, hybrid and stochastic systems with applications to biochemical networks and large-scale engineering systems such as highway and air traffic management.

The Progress of Transactional Memory (A Tutorial)



Caches in WCET Analysis

Embedded systems in application domains such as automotive, aeronautics, and industrial automation often have to satisfy hard real-time constraints, so safe and precise bounds on the worst-case execution time (WCET) of each task have to be derived.

In this talk I will discuss the influence of cache replacement policies on the precision and soundness of WCET analyses by (i) evaluating predictability metrics that capture the inherent uncertainty in any cache analysis, (ii) introducing the notion of relative competitiveness, which allows one to derive new cache analyses that are optimal w.r.t. the predictability metrics, and (iii) investigating the soundness of measurement-based WCET analysis in the presence of caches.
Finally, I will give a brief overview of my current research interests.
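As a concrete illustration of such a predictability metric (a sketch of my own, not from the talk): under LRU with associativity K, any uncertainty about a cache set's contents vanishes after K distinct accesses, a fast convergence that policies such as FIFO or PLRU do not share and that the metrics quantify.

    # Illustrative sketch (not from the talk): LRU state convergence.
    # For an LRU set of associativity K, any two unknown initial states
    # agree after K distinct accesses -- the "fill" predictability metric.
    K = 4

    def lru_access(state, block):
        """Return the LRU set state after accessing `block` (most recent first)."""
        state = [b for b in state if b != block]
        return ([block] + state)[:K]

    s1 = ['a', 'b', 'c', 'd']            # two arbitrary, different initial states
    s2 = ['w', 'x', 'y', 'z']
    for block in ['p', 'q', 'r', 's']:   # K distinct accesses
        s1 = lru_access(s1, block)
        s2 = lru_access(s2, block)
    assert s1 == s2 == ['s', 'r', 'q', 'p']  # the states have converged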





Formal Specification and Verification of Bluespec Programs

With the increase in the complexity of embedded systems, there has been research on designing more expressive languages at higher levels of abstraction. One of these successful languages is Bluespec, designed at MIT and the company Bluespec, which models concurrency essentially by rewriting systems. In this research we address the problem of formal specification and verification of Bluespec designs. The suggested formal framework is Maude, a tool-supported language based on rewriting logic. The mapping turns out to be smooth and natural, and lets us verify a set of case studies. The metatheory of Maude is also helpful for modeling the underlying scheduler of the rules in Bluespec.

Model Checking Structured Infinite Markov Chains

In the past, probabilistic model checking has mostly been restricted to finite-state models. We explore the possibilities of model checking with continuous stochastic logic (CSL) on infinite-state Markov chains. We present an in-depth treatment of model checking algorithms for two special classes of infinite-state CTMCs: (i) Quasi-birth-death processes (QBDs) are a special class of infinite-state CTMCs that combines a large degree of modeling expressiveness with efficient solution methods. (ii) Jackson queueing networks (JQNs) are a very general class of queueing networks that find their application in a variety of settings. The state space of the CTMC that underlies a JQN is highly structured but infinite in as many dimensions as there are queues, whereas the state space underlying a QBD can be seen as infinite in one dimension.

Using a new property-driven independence concept, adapted to QBDs and JQNs respectively, we provide model checking algorithms for all the CSL operators. Special emphasis is given to the time-bounded until operator, for which we present a new and efficient computational procedure named uniformization with representatives. By the use of an application-driven dynamic termination criterion, the algorithms stop whenever the property to be checked can be certified (or falsified).
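For readers unfamiliar with uniformization, the following toy sketch (my own, on a tiny finite CTMC rather than an infinite QBD) shows the basic computation that "uniformization with representatives" refines: the transient distribution pi(t) is a Poisson-weighted sum of powers of the uniformized DTMC, pi(t) = sum_k e^(-L*t) (L*t)^k / k! * pi0 * P^k with P = I + Q/L.

    import math

    # Toy sketch of plain uniformization on a 2-state CTMC with generator Q.
    Q = [[-2.0, 2.0],
         [ 3.0, -3.0]]
    L = 3.0                                  # uniformization rate >= max exit rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / L for j in range(2)]
         for i in range(2)]

    def transient(pi0, t, terms=50):
        pi, acc = list(pi0), [0.0, 0.0]
        w = math.exp(-L * t)                 # Poisson weight for k = 0
        for k in range(terms):
            acc = [acc[i] + w * pi[i] for i in range(2)]
            pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
            w *= L * t / (k + 1)             # next Poisson weight
        return acc

    print(transient([1.0, 0.0], t=1.0))      # transient distribution at time 1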

These new techniques have proven useful for the analysis of bottlenecks in wireless two-hop ad hoc networks. The results of our analysis were compared with extensive simulations and show excellent agreement for throughput, mean number of active sources, and mean buffer occupancy at the bottleneck station.

Automatic reduction of ODE semantics for protein-protein interaction networks by abstract interpretation

Room: BC 229

Modelers of molecular signaling networks must cope with the combinatorial explosion of protein states generated by post-translational modifications and complex formation. Rule-based models provide a powerful alternative to approaches that require an explicit enumeration of all possible molecular species of a system. Such models consist of formal rules stipulating the (partial) contexts for specific protein-protein interactions to occur. These contexts specify molecular patterns that are usually less detailed than molecular species. Yet, the execution of rule-based dynamics requires stochastic simulation, which can be very costly. It thus appears desirable to convert a rule-based model into a reduced system of differential equations by exploiting the lower resolution at which rules specify interactions. We present a formal (and automated) abstract interpretation-based method for constructing a coarse-grained and self-consistent dynamical system aimed at molecular patterns that are distinguishable by the dynamics of the original system as posited by the rules. The method is formally sound and never requires the execution of the rule-based model. The coarse-grained variables do not depend on the values of the rate constants appearing in the rules, and typically form a system of greatly reduced dimension that can be amenable to numerical integration and further model reduction techniques.

Ptolemaic Abstraction for Parameterized Systems

Abstract: Concurrent systems are notoriously error-prone and hard to analyze, both for human engineers and verification tools. In this talk, we describe Ptolemaic environment abstraction, a method for automated verification of concurrent systems with an unbounded number of processes. In environment abstraction, the abstract state space describes properties of the concurrent system from the point of view of individual processes. We argue that for systems designed by human programmers this Ptolemaic viewpoint yields precise and feasible abstract models. We present examples of distributed algorithms and cache coherence protocols which were successfully verified by environment abstraction.

Cartesian Abstraction Refinement

Abstract: Thread-modular verification is a promising compositional approach for avoiding the state explosion during the verification of concurrent programs. The method is polynomial in the number of threads. It can be phrased as a program analysis with a fixed `Cartesian abstraction'. We propose a flexible refinement of this abstraction that allows one to adapt the precision to the given program. Our experimental results demonstrate the practical potential of the resulting method. Under weak assumptions the refinement scheme is applicable to other abstractions.
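As a minimal illustration (mine, not the speaker's): Cartesian abstraction replaces a set of global states, i.e. tuples of per-thread local states, by the product of its per-thread projections, which is exactly the information a thread-modular analysis keeps.

    # Minimal sketch of Cartesian abstraction for two threads: a set of
    # global states (pairs) is abstracted to the product of its projections;
    # correlations between the threads are lost.
    from itertools import product

    def cartesian_abstraction(states):
        proj1 = {s[0] for s in states}
        proj2 = {s[1] for s in states}
        return set(product(proj1, proj2))

    reachable = {('idle', 'idle'), ('crit', 'idle'), ('idle', 'crit')}
    abstract = cartesian_abstraction(reachable)
    # The abstraction adds the spurious state ('crit', 'crit'); refining the
    # Cartesian abstraction, as in the talk, can rule such states out.
    assert ('crit', 'crit') in abstract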

Bio: Since 2004, Alexander Malkis has been a PhD student under the supervision of Andreas Podelski, first at the Max Planck Institute in Saarbrücken, then at the University of Freiburg, Germany. His fields of interest include concurrent program analysis and verification.

Verification of Infinite-State Probabilistic Systems

Abstract: Probabilistic program models can be used to describe systems that exhibit uncertainty, such as communication protocols over unreliable channels, randomized algorithms in distributed systems, or fault-tolerant systems. Their semantics can be defined in terms of Markov chains, Markov decision processes or stochastic games. The usage of resources (time, power, memory, bandwidth, etc.) can be modeled by assigning a reward (or cost) to individual transitions, or, more generally, to whole computation paths. The resulting Markov reward model can be analyzed to verify safety, liveness and performance properties. For example: “What is the distribution/expectation/variance of the time/cost needed to reach a given target state?” More generally, such properties can be expressed in stochastic logics for probabilistic systems (e.g., PCTL and PRCTL) and verified by model checking techniques. We give an overview of new techniques for verifying quantitative properties of general infinite-state probabilistic systems (with unbounded counters, buffers, process creation, or recursion). These techniques use special structural properties of the underlying system models, such as sequential decomposition, finite attractors, or partial orders and monotonicity properties.
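To make one such query concrete: for a finite Markov reward chain, the expected cost to reach a target satisfies the linear fixpoint equations E(x) = r(x) + sum_y p(x,y) * E(y), with E(target) = 0. Below is a minimal sketch of mine (a finite toy with unit rewards, whereas the talk addresses infinite-state models) that solves these equations by value iteration.

    # Toy sketch: expected number of steps to reach 'goal' in a finite
    # Markov chain, by value iteration on E(x) = 1 + sum_y p(x,y) * E(y).
    P = {
        'a': {'a': 0.5, 'b': 0.5},
        'b': {'a': 0.25, 'goal': 0.75},
    }
    E = {'a': 0.0, 'b': 0.0, 'goal': 0.0}
    for _ in range(200):                     # iterate to (approximate) fixpoint
        new = {x: 1.0 + sum(p * E[y] for y, p in succ.items())
               for x, succ in P.items()}
        new['goal'] = 0.0
        E = new
    print(E)                                 # converges to E['b'] = 2.0, E['a'] = 4.0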

Bio of Richard Mayr: PhD in computer science, TU-Munich, 1998, with Javier Esparza. Postdocs in Edinburgh (UK) and Paris 7 (France). Habilitation in computer science, Univ. Freiburg, 2002. Assistant professor at North Carolina State University, USA, 2004-2007. Lecturer at the University of Edinburgh since 2008.

Scheduling Deterministic Parallel Programs

Parallelism abounds! To continue to improve performance, programmers must use parallel algorithms to take advantage of multi-core and other parallel architectures. Deterministic parallelism enables programmers to express these algorithms without concern for how parallel tasks are interleaved. While this simplifies reasoning about the correctness of parallel programs, the performance of these programs still depends on many aspects of the language implementation, including the scheduling policy. Scheduling affects performance both by interrupting the application and by increasing the resource requirements of the application itself. For example, choice of scheduling policy can asymptotically increase the amount of memory required to run an application.

In this talk, I'll give some background on scheduling and present a methodology for understanding the performance of parallel programs. At the core of this work is a cost semantics that enables programmers to reason formally about different scheduling policies and how they affect performance, including memory use.

This cost semantics is the basis for a suite of prototype profiling tools. These tools enable programmers to simulate and visualize program execution under different scheduling policies. My cost semantics also provides a specification for an implementation of the language. I have extended MLton, a compiler for Standard ML, with support for parallelism and implemented several different scheduling policies. Using my cost semantics and profiler, I found a memory leak caused by a bug in one of the existing optimizations in MLton.

Functional verification of industrial hardware designs using OneSpin 360 MV


As hardware designs grow more and more complex, traditional verification techniques relying on testbenches are no longer sufficient. Bugs need to be isolated more quickly in order to decrease debugging time, and coverage closure becomes more and more important. Assertion-based verification enables these productivity increases by allowing one to specify temporal behavior, pin-point bugs, and apply formal model checking techniques to automate stimulus generation. In the talk I will present how assertion-based verification fits into the general verification flow of hardware designs. There are different levels of functional verification (from automatically derived assertions to gap-free verification). I will talk about these different levels and how OneSpin 360 MV is applied at each of them. During the talk a demo of OneSpin 360 MV on a DMA controller is given.

Separation Logic Verification of C Programs with an SMT Solver


Abstract: Separation logic is a promising approach to deductive software verification tailored for specifying and verifying properties of heap allocated data. It supports local reasoning by allowing small specifications that speak only about memory footprints. In this talk, I will present a methodology for automated modular verification of C programs against specifications written in separation logic. The distinguishing features of the approach are the representation of the C memory model in separation logic by means of rewrite rules suitable for automation, and the careful integration of an SMT solver behind the separation logic prover to guide the proof search. The presented methodology has been implemented in a prototype tool that uses the Z3 SMT solver and a dedicated separation logic prover. This is joint work with Matthew Parkinson and Wolfram Schulte.
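To convey the general idea of putting an SMT solver behind a separation logic prover (a sketch of mine, not the tool from the talk, assuming the z3-solver Python package): ground heap facts can be modeled with an array for the heap and address disequalities for footprint disjointness, and then discharged by the solver.

    # Sketch only (not the prover from the talk): the heap is an array h,
    # and the footprint of x |-> a * y |-> b is modeled by x != y.
    from z3 import Array, Ints, IntSort, Solver, And, Not, unsat

    h = Array('h', IntSort(), IntSort())
    x, y, a, b = Ints('x y a b')

    pre = And(x != y, h[x] == a, h[y] == b)   # x |-> a * y |-> b
    post = h[x] == a                          # reading x is unaffected by y

    s = Solver()
    s.add(pre, Not(post))                     # entailment is valid iff unsat
    assert s.check() == unsat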

Model-Based Testing Using Test Abstractions


The standard approach to generating test suites, manual generation of the test inputs in the suite, is tedious and error-prone, especially for code that takes complex test inputs. This talk presents our approach that automates generation of test suites using test abstractions. Conceptually, test abstractions provide a high-level description of desired test suites; testers need not manually write large suites of individual tests but instead write test abstractions from which tools automatically generate individual tests. This approach has helped testers in both academia and industry to discover errors in several real applications.

This talk illustrates the approach with imperative test abstractions for testing refactoring engines. Refactorings are behavior-preserving program transformations that improve program design, and refactoring engines are tools that automate the application of refactorings. Refactoring engines are a key component of modern IDEs such as Eclipse or NetBeans. A bug in a refactoring engine can have severe consequences as it can erroneously change large bodies of source code. Test inputs for refactoring engines are programs, and the core of our testing of refactoring engines is a framework for generation of abstract syntax trees that represent Java programs. Using this framework, we discovered several dozen new bugs in Eclipse and NetBeans.
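To convey the flavor of this style of testing (a simplified sketch of mine, not the framework from the talk): enumerate all abstract syntax trees within small bounds, apply the transformation under test, and check behavior preservation with an evaluation oracle.

    # Simplified sketch of bounded-exhaustive test generation: enumerate
    # small expression trees, apply a toy simplifier, and check that it
    # preserves behavior by evaluating both trees.
    def exprs(depth, leaves=(0, 1, 'x')):
        if depth == 0:
            yield from leaves
            return
        for l in exprs(depth - 1, leaves):
            for r in exprs(depth - 1, leaves):
                yield ('+', l, r)
                yield ('*', l, r)

    def evaluate(e, x):
        if e == 'x':
            return x
        if isinstance(e, int):
            return e
        op, l, r = e
        lv, rv = evaluate(l, x), evaluate(r, x)
        return lv + rv if op == '+' else lv * rv

    def simplify(e):                  # transformation under test: e * 1 -> e
        if isinstance(e, tuple):
            op, l, r = e
            l, r = simplify(l), simplify(r)
            return l if op == '*' and r == 1 else (op, l, r)
        return e

    for e in exprs(2):                # bounded-exhaustive inputs
        for x in (-1, 0, 5):          # evaluation oracle
            assert evaluate(simplify(e), x) == evaluate(e, x)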

Short Bio: Darko Marinov is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He obtained his Ph.D. from MIT in 2005. His main research interests are in Software Engineering, with focus on improving software reliability using software testing and model checking. His work is supported by NSF and Microsoft. Home page: http://mir.cs.uiuc.edu/~marinov

Software Model Checking for Confidentiality


Abstract: Protecting confidentiality of data manipulated by programs is a growing concern in various application domains. In particular, for extensible software platforms that allow users to install third party plugins, there is a need for an automated method that can verify that programs do not leak confidential information. Software model checking has emerged as an effective technique for checking programs with respect to correctness requirements. However, the existing methods and tools are not applicable for specifying and verifying confidentiality properties. We developed a specification framework for confidentiality, novel decision procedures for finite state systems as well as for classes of programs, and an abstraction-based program analysis technique. We evaluated the latter technique by analyzing bytecode of a set of methods of J2ME midlets for mobile devices and demonstrated that it can be successfully used for certification of these programs.

Deductive Verification of Concurrent Java Programs with the KeY System


We present an approach aiming at full functional deductive verification of concurrent Java programs, based on symbolic execution. We define a Dynamic Logic and a deductive verification calculus for a restricted fragment of multi-threaded Java (atomic loops only). The calculus is implemented in the KeY System and allows verifying unbounded systems.

Talk Slides

Quantitative Model Checking of Continuous-Time Markov Chains Against Timed Automata Specifications

Abstract: We study the following problem: given a continuous-time Markov chain C, and a linear real-time property provided as a deterministic timed automaton A, what is the probability of the set of paths of C that are accepted by A? It is shown that this set of paths is measurable and computing its probability can be reduced to computing the reachability probability in a piecewise deterministic Markov process. The reachability probability is characterized as the least solution of a system of integral equations and is shown to be approximated by solving a system of partial differential equations. For the special case of single-clock deterministic timed automata, it turns out that the system of integral equations can be transformed into a system of linear equations where the coefficients are solutions of ordinary differential equations.

Decision procedures for software verification

Dipartimento di Informatica, Università degli Studi di Verona

Applications in software verification often require determining the satisfiability of first-order formulae, including quantifiers, with respect to some background theories. Superposition-based inference systems are strong at reasoning with equality, universally quantified variables, and Horn clauses. Satisfiability modulo theories (SMT) solvers are strong at reasoning with propositional logic, including non-Horn clauses, ground equalities and integrated theories such as linear arithmetic. This talk gives a high-level survey of two approaches to combine these complementary strengths: (1) using the superposition-based inference system as a pre-processor for the SMT-solver, and (2) integrating the superposition-based inference system in the SMT-solver. Since during software development conjectures are usually false, it is desirable that the theorem prover terminates on satisfiable instances. In the integrated approach termination can be enforced by introducing additional axioms in such a way that the system detects and recovers from any ensuing unsoundness.

(1) is joint work with Mnacho Echenim; (2) is joint work with Leonardo de Moura and Chris Lynch.

Context-bounded analysis for concurrent programs with dynamic creation of threads

Abstract: Context-bounded analysis has been shown to be both efficient and effective at finding bugs in concurrent programs. According to its original definition, context-bounded analysis explores all behaviors of a concurrent program up to some fixed number of context switches between threads. This definition is inadequate for programs that create threads dynamically because bounding the number of context switches in a computation also bounds the number of threads involved in the computation. In this paper, we propose a more general definition of context-bounded analysis useful for programs with dynamic thread creation. The idea is to bound the number of context switches for each thread instead of bounding the number of switches of all threads. We consider several variants based on this new definition, and we establish decidability and complexity results for the analysis induced by them.

This is joint work with Ahmed Bouajjani and Shaz Qadeer.
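To illustrate the definition (a toy enumerator of my own, not the paper's decision procedure): explore all interleavings of straight-line threads over a shared variable, charging a context switch to the thread that is preempted and pruning any run in which a single thread exceeds the budget.

    # Toy sketch of per-thread context bounding: enumerate interleavings,
    # charging a switch to the preempted thread and pruning runs where any
    # single thread exceeds the budget.
    def explore(threads, bound):
        results, n = set(), len(threads)

        def step(pcs, current, switches, shared):
            if all(pcs[i] >= len(threads[i]) for i in range(n)):
                results.add(shared)
                return
            for i in range(n):
                if pcs[i] >= len(threads[i]):
                    continue                      # thread i already finished
                sw = list(switches)
                if current is not None and current != i:
                    sw[current] += 1              # thread `current` is preempted
                    if sw[current] > bound:
                        continue                  # per-thread budget exhausted
                pcs2 = list(pcs)
                pcs2[i] += 1
                step(tuple(pcs2), i, tuple(sw), threads[i][pcs[i]](shared))

        step((0,) * n, None, (0,) * n, 0)
        return results

    inc = lambda s: s + 1          # thread operations on the shared state
    dbl = lambda s: s * 2
    print(explore([[inc, inc], [dbl]], bound=1))   # -> {2, 3, 4}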

TPTP, TSTP, CASC, etc. - Automated Reasoning in Practice

(Watch the video)

Abstract: This talk gives an overview of activities and products that stem from the Thousands of Problems for Theorem Provers (TPTP) problem library for Automated Theorem Proving (ATP) systems. These include the TPTP itself, the Thousands of Solutions from Theorem Provers (TSTP) solution library, the TPTP language, the CADE ATP System Competition (CASC), tools such as my semantic Derivation Verifier (GDV) and the Interactive Derivation Viewer (IDV), meta-ATP systems such as the Smart Selective Competition Parallelism (SSCPA) system and the Semantic Relevance Axiom Selection System (SRASS), online access to automated reasoning systems and tools through the SystemOnTPTP web service, and applications in various domains. Current work extending the TPTP to higher-order logic will be introduced.

A novel bicriteria scheduling heuristic providing a guaranteed global system failure rate

Abstract: We propose a new framework for the (length, reliability) bicriteria static multiprocessor scheduling problem. Our first criterion remains the schedule's length, which is crucial to assess the system's real-time property. For our second criterion, we consider the global system failure rate, seen as if the whole system were a single task scheduled onto a single processor, instead of the usual reliability, because it does not depend on the schedule length as the reliability does (due to its computation in the classical exponential distribution model). Therefore, we can better control the replication factor of each individual task of the dependency task graph given as a specification, with respect to the desired failure rate.
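As a small numerical aside (my own illustration, not from the abstract): in the exponential model the reliability of a schedule of length L on a processor with constant failure rate lambda is R = exp(-lambda * L), so reliability degrades as the schedule grows while the failure rate lambda itself does not, which is why the failure rate is the more robust second criterion.

    import math

    # Numerical sketch: reliability depends on the schedule length L,
    # while the failure rate lam does not.
    lam = 1e-5                           # failures per time unit (illustrative)
    for L in (100.0, 200.0):
        print(L, math.exp(-lam * L))     # R = exp(-lam * L) shrinks with L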

To solve this bicriteria optimization problem, we take the failure rate as a constraint, and we minimize the schedule length. We are thus able to produce, for a given dependency task graph and multiprocessor architecture, a Pareto curve of non-dominated solutions, among which the user can choose the compromise that fits his requirements best.

Compared to the other bicriteria (length, reliability) scheduling algorithms found in the literature, the algorithm we present here is the first able to improve the reliability significantly, by several orders of magnitude, making it suitable for safety-critical systems.


Qualitative Modeling and Simulation of Bacterial Regulatory Networks

The adaptation of microorganisms to their environment is controlled at the molecular level by large and complex networks of biochemical reactions involving genes, RNAs, proteins, metabolites, and small signalling molecules. In theory, it is possible to write down mathematical models of these networks, and study these by means of classical analysis and simulation tools. In practice, this is not easy to achieve though, as quantitative data on kinetic parameters are usually absent for most systems of biological interest. Moreover, the models consist of a large number of variables, are strongly nonlinear and include different time-scales, which make them difficult to handle both mathematically and computationally.

We have developed methods for the reduction and approximation of kinetic models of bacterial regulatory networks to simplified, so-called piecewise-linear differential equation models. The qualitative dynamics of the piecewise-linear models can be studied using discrete abstractions from hybrid systems theory. This enables the application of model-checking tools to the formal verification of dynamic properties of the regulatory networks. The above approach has been implemented in the publicly-available computer tool Genetic Network Analyzer (GNA) and has been used to analyze a variety of bacterial regulatory networks.
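To illustrate the kind of model involved (a textbook-style sketch of mine, not GNA itself): in a piecewise-linear model each concentration follows dx/dt = kappa * s(y, theta) - gamma * x, where the step function s switches production on or off at a threshold theta; the qualitative dynamics is the succession of threshold-delimited regions the trajectory visits.

    # Textbook-style sketch (not GNA): two genes, each repressing the other;
    # Euler simulation of
    #   dx/dt = kx * step(y < ty) - gx * x
    #   dy/dt = ky * step(x < tx) - gy * y
    kx = ky = 2.0
    gx = gy = 1.0
    tx = ty = 1.0

    x, y, dt = 0.0, 1.5, 0.01
    for _ in range(2000):
        dx = kx * (1.0 if y < ty else 0.0) - gx * x
        dy = ky * (1.0 if x < tx else 0.0) - gy * y
        x, y = x + dt * dx, y + dt * dy
    print(round(x, 2), round(y, 2))   # settles near (0, 2): gene y "wins"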

I will illustrate the application of GNA by means of the network of global transcription regulators controlling the adaptation of the bacterium Escherichia coli to environmental stress conditions. Even though E. coli is one of the best studied model organisms, it is currently little understood how a stress signal is sensed and propagated through the network of global regulators, and leads the cell to respond in an adequate way. Qualitative modeling and simulation of the network of global regulators has allowed us to identify essential features of the transition between exponential and stationary phase of the bacteria and to make new predictions on the dynamic behavior following a carbon upshift. I will show some results of the experimental validation of these predictions.





Linguistics as Physics

In this talk I will present my viewpoint that linguistics should be studied as we study physics. The main point is that, just as physical entities are governed by the laws of physics, which we cannot change at will, I wish to suggest that both natural and artificial languages are governed by the laws of linguistics, which hold intrinsically by the very nature of languages.

More precisely, I would like to point out fundamental semantical and syntactical principles of languages which we should be able to find in every language be it natural or artificial. The talk is not technical and I wish to express my philosophy based on which I do my research on developing a computer environment for verifying computer programs and mathematical proofs.

Symbolic and Algebraic Methods in Computational Origami

In this talk I will show the importance of symbolic and algebraic methods in computational origami (the art of paper folding). I discuss Huzita's axiomatization of origami and the algebraic graph rewriting of abstract origami. The former is used in relation to the algorithmic treatment of origami foldability and origami geometrical theorem proving. I will explain origami geometric theorem proving by some examples. The latter is used to model origami folds. Here, I would like to emphasize the importance of symbolic computation on discrete geometrical objects, i.e. hypergraphs of origami, and of the separation of the domain of concern between symbolic and numeric computations. This leads to a clearer and more abstract formulation of origami construction.

ABC: Analyzing Bound and Complexity

This talk is a short summary of my semester project. During the talk I describe ABC, a software tool I developed for analyzing iteration bounds and time complexity of for-loops. The presentation explains the method used to derive the symbolic formula expressing a bound on the number of iterations of a for-loop, and details some of the implementation issues.
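A tiny illustration of the kind of symbolic bound involved (my own example, not ABC's actual output): for a loop "for (i = a; i < b; i += s)" the iteration count is max(0, ceil((b - a) / s)), which can be checked against the concrete loop.

    # Tiny illustration (not ABC's output): symbolic iteration bound of
    # `for (i = a; i < b; i += s)` is max(0, ceil((b - a) / s)).
    def bound(a, b, s):
        return max(0, -((a - b) // s))    # integer ceil((b - a) / s) for s > 0

    def actual(a, b, s):
        count, i = 0, a
        while i < b:
            count, i = count + 1, i + s
        return count

    for a, b, s in [(0, 10, 3), (5, 5, 1), (7, 2, 4), (0, 9, 3)]:
        assert bound(a, b, s) == actual(a, b, s)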

Faster Symmetry Discovery using Sparsity of Symmetries

Many computational tools have recently begun to benefit from the use of the symmetry inherent in the tasks they solve, and use general-purpose graph symmetry tools to uncover this symmetry. However, existing tools suffer quadratic runtime in the number of symmetries explicitly returned and are of limited use on very large, sparse, symmetric graphs. This paper introduces a new symmetry-discovery algorithm which exploits the sparsity present not only in the input but also the output, i.e., the symmetries themselves. By avoiding quadratic runtime on large graphs, it improves state-of-the-art runtimes from several days to less than a second.

Sliding Window Abstraction for Infinite Markov Chains

We present an on-the-fly abstraction technique for infinite-state continuous-time Markov chains. We consider Markov chains that are specified by a finite set of transition classes. Such models naturally represent biochemical reactions and therefore play an important role in the stochastic modeling of biological systems. We approximate the transient probability distributions at various time instances by solving a sequence of dynamically constructed abstract models, each depending on the previous one. Each abstract model is a finite Markov chain that represents the behavior of the original, infinite chain during a specific time interval. Our approach provides complete information about probability distributions, not just about individual parameters like the mean. The error of each abstraction can be computed, and the precision of the abstraction refined when desired. We implemented the algorithm and demonstrate its usefulness and efficiency on several case studies from systems biology.

Invariants for Arrays and Matrices (Thibaud Hottelier)

We present a loop property generation method for loops iterating over multi-dimensional arrays. When used on matrices, our method is able to infer their shapes (also called types), such as upper-triangular, diagonal, etc. To generate loop properties, we first transform a nested loop iterating over a multi-dimensional array into an equivalent sequence of unnested loops. Then, we infer quantified loop invariants for each unnested loop using a generalization of a recurrence-based invariant generation technique. These loop invariants give us conditions on matrices from which we can derive matrix types using a first-order theorem prover. Invariant generation is implemented in the software package Aligator and the type derivation is performed by the theorem prover Vampire. When run on the Java matrix package JAMA, our tool based on this method was able to automatically infer all matrix types describing the matrix shapes guaranteed by JAMA's API.
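For intuition (a toy example of mine, not Aligator's output): consider a nested loop that zeroes everything below the diagonal; the quantified invariant after processing row r is "for all i < r, for all j < i: A[i][j] = 0", from which an upper-triangular type follows at loop exit.

    # Toy example: a loop that zeroes entries below the diagonal, with its
    # quantified invariant checked after each outer iteration.
    n = 4
    A = [[1] * n for _ in range(n)]

    def invariant(A, r):
        # forall i < r, forall j < i: A[i][j] == 0
        return all(A[i][j] == 0 for i in range(r) for j in range(i))

    for r in range(n):
        assert invariant(A, r)
        for j in range(r):
            A[r][j] = 0
    assert invariant(A, n)       # at exit: A is upper-triangular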

We have also applied our method to a large number of loops over one-dimensional arrays, extracted from C programs. Non-trivial invariants can be inferred in many cases. The experimental evaluation provides practical justification for the scalability and time-efficiency of our approach.

Verification of Concurrent Asynchronous Message-passing Programs (Damien Zufferey)

Writing concurrent programs is a very delicate task. The nondeterminism introduced by the scheduler makes bugs in concurrent programs hard to reproduce. Formal verification can help the programmer to find such bugs. In this thesis we focus on actor programs. Actors provide a model for concurrent programs based on asynchronous message passing. The contribution of this report is twofold. First we detail our investigation of static systems of actors, i.e. actor programs where the number of created actors is finite and known at compile time. We develop a technique for proving deadlock freedom of such systems. We achieve a good level of scalability through a heuristic that detects potential deadlocks. Finally we explore the possibility of extending the static model with dynamic actor creation. Due to the undecidability of all verification-related problems for the general class of systems with actor creation, we restrict ourselves to certain systems which we call star topologies. In star topologies, only a finite set of actors can create other actors and the topology is restricted to a star shape, where dynamically created actors cannot communicate with each other. In practice, programs following a client-server kind of communication fall into this category of star topologies.




Program Verification and Synthesis over Predicate Abstraction

Static determination of program correctness (verification) and automatic program construction (synthesis) are two important challenges in programming languages research. Due to the undecidable nature of both problems, no easy solutions are feasible. In this talk we present our work on program verification and synthesis, presenting techniques for both using Satisfiability Modulo Theories (SMT) solvers. Very efficient SMT solvers are now available and we use them to construct powerful verification techniques. Verification can be accomplished by discovering appropriate invariants: facts that hold in every execution of the program.

We present three novel algorithms that discover sophisticated invariants using SMT solvers. Two of these algorithms use an iterative approach, one computing the weakest invariants and the other the strongest invariants, while the third employs a constraint-based approach to encode the invariant as a SAT instance.
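To sketch the constraint-based flavor with a minimal example (my own, using Z3's Python bindings, not the VS3 tool): for the loop "x := 0; while (x < 10) x := x + 1", the template x <= c is an inductive invariant proving the postcondition x <= 10 exactly when the quantified constraint below is satisfiable, and the solver returns the witness c = 10.

    # Minimal constraint-based invariant inference (not VS3): find c such
    # that x <= c is inductive and proves x <= 10 for
    #   x := 0; while (x < 10) x := x + 1
    from z3 import Int, Solver, ForAll, Implies, And, sat

    x, c = Int('x'), Int('c')
    inv = x <= c
    s = Solver()
    s.add(ForAll(x, And(
        Implies(x == 0, inv),                          # initiation
        Implies(And(inv, x < 10), (x + 1) <= c),       # consecution
        Implies(And(inv, x >= 10), x <= 10),           # safety at exit
    )))
    assert s.check() == sat
    print(s.model()[c])                                # c = 10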

We have implemented the techniques developed under the umbrella project, Verification and Synthesis using SMT Solvers (VS3), in a tool by the same name. Preliminary experiments using VS3 show promising results. In particular, we discuss its power in verifying full correctness for all major sorting algorithms, which are considered some of the most difficult benchmarks for verification. Verifying full correctness for sorting algorithms requires inferring \forall\exists invariants for permutation properties and \forall invariants for sortedness properties.

Time permitting, we will briefly discuss a new principled approach to program synthesis that builds on program verification. We demonstrate encouraging preliminary results for program synthesis.

Project Webpage: http://www.cs.umd.edu/~saurabhs/pacs



Analyzing Physical Properties of Embedded Systems via Game-Theoretic Learning

Abstract: Predicting physical properties of software, such as execution time, is central to the design of reliable embedded systems. However, the analysis of such properties is made difficult by their heavy dependence on the environment of the program, such as the processor it runs on. Modeling the environment by hand can be tedious, error-prone, and time consuming. In this talk, I will present a new, game-theoretic approach to estimating physical properties of software that is based on performing systematic measurements to automatically learn a model of the environment. We model the estimation problem as a game between our algorithm (player) and the environment of the program (adversary), where the player seeks to accurately predict program properties while the adversary sets environment parameters to thwart the player. I will present both theoretical and experimental evidence for the utility of our game-theoretic approach. On the theoretical side, we show that given an execution path, we can predict the physical property of the program for that path with probability greater than 1-\delta by making a number of measurements that is polynomial in ln (1/\delta) and the program size. Experimental results for execution time analysis demonstrate that our approach can predict worst-case execution time of a program as well as traditional approaches and with far greater portability.

Brief bio: Sanjit A. Seshia is an assistant professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received an M.S. and a Ph.D. in Computer Science from Carnegie Mellon University, and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Bombay. His research interests are in dependable computing and computational logic, with a current focus on applying automated formal methods to problems in embedded systems, computer security, and electronic design automation. He has received a Presidential Early Career Award for Scientists and Engineers (PECASE), an Alfred P. Sloan Research Fellowship, and the School of Computer Science Distinguished Dissertation Award at Carnegie Mellon University.


Of Scripts and Programs: Tall tales, Urban Legends, and Future Prospects

Scripting languages are playing an increasing role in today's software landscape. While many applications are web-centric, scripting languages are also used in non-traditional domains such as space exploration and the administration of the pension benefits of entire countries. Considering their importance to the practice of computing, it is surprising how little attention scripting languages have attracted in the academic community. This talk will present ongoing work carried out in a collaboration between Purdue, IBM Research and INRIA. I will start with some interesting success stories of scripting languages used in unusual places. Then I will try to dispel some misconceptions about the nature of scripting languages with results from a large corpus analysis of programs written in one of those languages. Finally, I will talk about the design of a new language intended to bridge scripting and programming.

Modeling the Heap: A Practical Approach

The ability to accurately model the evolution of the program heap during the execution of a program is critical for many types of program optimization, refactoring and error detection applications. Past work on heap analysis has identified a number of important heap properties that are useful for many client applications, such as connectivity, shape, heap-based dependence information (frames), and heap regions. However, the high computational cost of obtaining this information has severely limited its utility in practice.

In order to analyze real world programs in a computationally efficient manner while extracting useful heap information we treat the problem holistically, solving problems via the construction of the abstract domain, the introduction of widening heuristics, or modifications to the interprocedural dataflow analysis as needed. This talk provides an overview of the abstract domain used in the analysis. This domain is based on the classic storage shape graph model, which naturally captures many of the connectivity properties we are interested in, and we then augment this structure with a number of instrumentation predicates to more precisely model structural information (and to capture intrinsic properties such as heap based use/mod information).

Based on this approach we have implemented a heap analysis that is capable of analyzing most of the Java 1.5 language, including major language features (arrays, virtual calls, interfaces, exceptions) as well as the most commonly used standard Java libraries. No restrictions are placed on the heap structures that are constructed or how these structures are connected/shared. The analysis is able to provide the information needed for a wide range of client applications, including aliasing, connectivity, shape, region identification, and frame information. Our experimental results show that this information can be computed with high accuracy (over 90% of the properties extracted by the static analysis are maximally precise w.r.t. the observed states of the program during execution) and efficiently (~15 KLoC programs in 112 seconds and 122 MB of memory).

Hierarchic Decision Procedures for Verification

Hierarchical decision procedures decide satisfiability problems in complex theories by reducing them to equisatisfiable problems in simpler theories. One approach to do this is reasoning in local theory extensions, where a given decidable theory is extended with new function symbols, which are defined by a set of universal axioms. While in general SMT problems containing universally quantified formulas are undecidable, locality of a set of axioms allows us to efficiently reduce satisfiability problems from the extended theory to the base theory by finite instantiation of the axioms.
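A small sketch of this reduction (my illustration, with monotonicity as the local axiom and Z3 standing in for the base-theory solver, assuming the z3-solver Python package): instead of handing the solver the universally quantified axiom, instantiate it only at the ground terms occurring in the goal and check the resulting quantifier-free formula.

    # Sketch of reasoning in a local extension: linear arithmetic extended
    # with an uninterpreted f axiomatized as monotone; the axiom is
    # instantiated at the ground terms of the goal only.
    from z3 import Int, Function, IntSort, Solver, Implies, Not, unsat

    f = Function('f', IntSort(), IntSort())
    a, b = Int('a'), Int('b')

    ground_terms = [a, b]
    instances = [Implies(s <= t, f(s) <= f(t))        # Mon(f), finitely
                 for s in ground_terms for t in ground_terms]   # instantiated

    goal = Implies(a <= b, f(a) <= f(b) + 1)
    s = Solver()
    s.add(instances)
    s.add(Not(goal))
    assert s.check() == unsat      # the goal holds in the extended theory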

In this talk I will introduce local theory extensions, give some new extensions that satisfy locality properties, and show how instantiation of axioms can be made more efficient by an incremental, semantically guided procedure. Finally, I will talk about some applications of local reasoning in the verification of parameterized systems.

Fail-aware untrusted storage (FAUST)

We consider a set of clients collaborating through an online service provider that is subject to attacks, and hence not fully trusted by the clients. We introduce the abstraction of a fail-aware untrusted service, with meaningful semantics even when the provider is faulty. In the common case, when the provider is correct, such a service guarantees consistency (linearizability) and liveness (wait-freedom) of all operations. In addition, the service always provides accurate and complete consistency and failure detection.

We illustrate our new abstraction by presenting a Fail-Aware Untrusted STorage service (FAUST). Existing storage protocols in this model guarantee so-called forking semantics. We observe, however, that none of the previously suggested protocols suffice for implementing fail-aware untrusted storage with the desired liveness and consistency properties (at least wait-freedom and linearizability when the server is correct). We present a new storage protocol, which does not suffer from this limitation, and implements a new consistency notion, called weak fork linearizability. We show how to extend this protocol to provide eventual consistency and failure awareness in FAUST.

Joint work with Alexander Shraer and Idit Keidar.

The languages of Isabelle: Isar, ML, and Scala

(Watch the video)

More than 20 years ago, Isabelle was introduced by Larry Paulson as a “logical framework”, to facilitate experimentation with various calculi according to general ideas of higher-order Natural Deduction. Over time the system has widened its scope, to become a general platform for building applications based on formal logic, with fully formal proof checking by a trusted kernel. We give an overview of the Isabelle platform of 2009 from the perspective of the main languages involved: Isar (Intelligible semi-automated reasoning), Standard ML, and Scala/JVM.

Isabelle/Isar is presented to end-users as a language for human-readable proofs (and specifications). It is firmly rooted in the Pure logical framework, and imposes certain policies on core inferences by means of rich extra-logical infrastructure. Thus Isar enhances the original framework of primitive proof rules towards one of structured proof texts. The concrete proof language can be seen as an application of more general principles provided internally: the greater part of the Isar language is implemented as derived elements. Further “domain specific proof languages” can be implemented as well.

Isabelle/ML is both the implementation language and extension language of the framework. Isabelle follows the original “LCF-approach” (Robin Milner, 1979). This means that the key notions are modeled as abstract datatypes in ML, and users may implement extensions on top without affecting soundness. Isabelle/ML is embedded into the Isar source language such that the proof context is available at compile time. Antiquotations within ML source allow robust references to formal entities. Using Poly/ML, the baseline performance for typical symbolic computations is very good; recent moves towards multicore support improve upon this further. In Isabelle2009, both theories and proofs are scheduled in parallel by the system, with reasonable scalability for current home machines (4-8 cores).

Isabelle/Scala has been recently added as a wrapper around the Isabelle/Isar/ML process. The idea is to reach out into the “real world”, as represented by the JVM platform with its existing facilities for user interfaces, web services etc. Thus we overcome the traditional TTY-based interaction with the raw ML process, replacing it by an editor-oriented proof document model with rich markup provided by the prover in the background. The existing concepts of parallel proof checking are eventually generalized towards an asynchronous interaction model that is expected to increase end-user productivity significantly. This general API for “Isabelle system programming” in Scala is used in our ongoing implementation of Isabelle/jEdit.



Finding Data Efficiently in Mobile Ad Hoc Networks

A collection of devices, each equipped with wireless communication capabilities, can form an ad-hoc network. That is, a single-hop ad-hoc network is obtained by having each device communicate with the other devices in its transmission range. Moreover, if some devices occasionally volunteer to forward messages for other devices, then a multi-hop ad-hoc network is created. The claimed benefits of ad-hoc networks are their infrastructure independence, zero air-time costs, high bandwidth, and low latency.

Given that ad-hoc networks do not rely on any infrastructure, they must create their own services, such as routing and data location services. These services must be formed in a distributed fashion and must cope with mobility, which causes constant changes to the network topology. Also, the protocols implementing the services should send messages parsimoniously, in order to reduce power consumption and avoid network congestion.

In this talk I will describe 3DLS, a density driven data location service. Specifically, 3DLS maintains a virtual topography that is derived from nodes' local densities. Both publications of data items and lookups are propagated by performing greedy biased walks over the virtual topography. The result is a highly efficient local protocol for data location in ad-hoc networks, as we show through extensive simulations.

This is joint work with Noam Mori.
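A bare-bones sketch of a density-biased walk (my own, far simpler than 3DLS, with node degree standing in for local density): each forwarding step prefers denser neighbors, so publications and lookups drift toward the same dense regions and meet with high probability.

    import random

    # Bare-bones sketch of a density-biased walk: each step picks a neighbor
    # with probability proportional to its degree, so walks drift toward
    # dense regions of the network.
    adj = {
        'a': ['b'], 'b': ['a', 'c', 'd'], 'c': ['b', 'd', 'e'],
        'd': ['b', 'c', 'e'], 'e': ['c', 'd'],
    }

    def biased_walk(start, steps):
        node, path = start, [start]
        for _ in range(steps):
            nbrs = adj[node]
            weights = [len(adj[n]) for n in nbrs]   # "density" = degree here
            node = random.choices(nbrs, weights=weights)[0]
            path.append(node)
        return path

    print(biased_walk('a', 5))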

Automated Termination Analysis of Programs using Dependency Pairs

(Watch the video)

Abstract: The question whether a given program terminates for all its inputs is one of the fundamental problems in program verification. Thus it has been researched quite thoroughly in the past and many techniques and tools have been developed, many of them based on the dependency pair framework developed in the term rewriting community.

However, until very recently, hardly any of these techniques could be used for real programming languages. Instead of starting from scratch and developing completely new techniques for each programming paradigm and language, we want to take advantage of existing powerful tools for automated termination analysis based on the dependency pair framework.

In this talk, we describe recent results which permit the reduction of termination problems for declarative programming languages to problems in the dependency pair framework. We present such approaches for the functional language Haskell and the logic language Prolog. Our results have been implemented in the termination prover AProVE. Our resulting termination analyzers are currently the most powerful ones for both Haskell and Prolog.

(Joint work with Jürgen Giesl, René Thiemann, Alexander Serebrenik, Stephan Swiderski, and Thomas Ströder)
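For readers new to dependency pairs, a compact sketch (mine, not from the talk): the defined symbols of a rewrite system are the root symbols of its left-hand sides, and each rule l -> r contributes a dependency pair l# -> u# for every subterm u of r with a defined root; termination analysis then examines chains of these pairs.

    # Compact sketch of dependency pair extraction. Terms are tuples
    # ('symbol', arg1, ...); variables are plain strings.
    rules = [
        (('minus', ('s', 'x'), ('s', 'y')), ('minus', 'x', 'y')),
        (('minus', 'x', ('0',)), 'x'),
    ]
    defined = {lhs[0] for lhs, _ in rules}            # {'minus'}

    def subterms(t):
        if isinstance(t, tuple):
            yield t
            for arg in t[1:]:
                yield from subterms(arg)

    def dependency_pairs(rules):
        dps = []
        for lhs, rhs in rules:
            for u in subterms(rhs):
                if u[0] in defined:                   # marked (#) symbols are
                    dps.append(((lhs[0].upper(),) + lhs[1:],   # written upper-case
                                (u[0].upper(),) + u[1:]))
        return dps

    print(dependency_pairs(rules))
    # [(('MINUS', ('s', 'x'), ('s', 'y')), ('MINUS', 'x', 'y'))]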

Sequential Verification of Serializability

Abstract: Serializability is a commonly used correctness condition in concurrent programming. When a concurrent module is serializable, certain other properties of the module can be verified by considering only its sequential executions. In many cases, concurrent modules guarantee serializability by using standard locking protocols, such as tree locking or two-phase locking. Unfortunately, according to the existing literature, verifying that a concurrent module adheres to these protocols requires considering concurrent interleavings. In this paper, we show that adherence to a large class of locking protocols (including tree locking and two-phase locking) can be verified by considering only sequential executions. The main consequence of our results is that in many cases, the (manual or automatic) verification of serializability can itself be done using sequential reasoning.

Joint work with G. Ramalingam and Noam Rinetzky, to appear in POPL 2010
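The key observation, that protocol adherence can be checked on sequential executions, can be illustrated with a tiny two-phase locking checker (my sketch, not the paper's formalization): on a single sequential trace it suffices to check that no lock is acquired after the first release.

    # Tiny sketch: checking two-phase locking on a *sequential* trace.
    # A transaction is 2PL iff it never acquires a lock after its first release.
    def is_two_phase(trace):
        releasing = False
        held = set()
        for action, lock in trace:
            if action == 'acquire':
                if releasing or lock in held:
                    return False
                held.add(lock)
            elif action == 'release':
                releasing = True
                held.discard(lock)
        return True

    assert is_two_phase([('acquire', 'A'), ('acquire', 'B'),
                         ('release', 'A'), ('release', 'B')])
    assert not is_two_phase([('acquire', 'A'), ('release', 'A'),
                             ('acquire', 'B')])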

Automatic Inference of Synchronization

(Watch the video)

Abstract:

Current practices for developing correct and efficient concurrent programs are rather limited and error-prone. In practice, manual ad-hoc use of low-level synchronization often leads to programs that are inefficient or incorrect (or both). High-level constructs (e.g. transactional memory) are not clearly easier to use.

In this talk I will briefly discuss techniques for automatically inferring correct and efficient synchronization: atomic sections, scheduler control, guards in Hoare's CCRs, and memory fences in weak memory models. We have implemented these techniques and used them to create new practical linearizable concurrent data structures and new memory management algorithms, and to find and automatically correct errors in concurrent programs due to weak memory models.

I will then discuss abstraction-guided synthesis of synchronization (an upcoming POPL'10 paper). In this work we combine abstract interpretation with synchronization inference. We add a symmetrical choice to traditional abstraction refinement: at any step during abstract interpretation, we can refine the abstraction or change the program by adding more synchronization. This means, for example, that even if the original program is incorrect or unprovable under the abstraction, we may be able to add synchronization to make it provable.

Throughout the talk, I will also mention various insights obtained in this work that pertain to concurrency verification, synthesis and program representation in general.

(Joint work with Eran Yahav and Greta Yorsh)

Short biography: Dr. Martin Vechev is a researcher at the IBM Watson Center in New York. He obtained his PhD in 2007 from Cambridge University. His interests are in concurrency, synthesis, verification and dynamic program analysis.