Formal verification of parameterized protocols such as cache coherence protocols is a significant challenge. In this paper, we propose an automatic proving approach and its prototype paraVerifier to handle this challenge within a unified framework: (1) in order to prove the correctness of a parameterized protocol, our approach automatically discovers auxiliary invariants and the corresponding causal relations among the discovered invariants and protocol rules from a small instance of the verified protocol; (2) the discovered invariants and causal relation information are then automatically generalized into a parameterized form to construct a parameterized formal proof for a theorem prover (i.e., Isabelle). As a by-product, the final verification result of a protocol is delivered as a formal, human-readable proof. Our approach has been successfully applied to a number of benchmarks including snoopy-based and directory-based cache coherence protocols.

This paper presents a new method for obtaining small algebras to check the admissibility---equivalently, validity in free algebras---of quasi-identities in a finitely generated quasivariety. Unlike a previous algebraic approach of Metcalfe and Röthlisberger that is feasible only when the relevant free algebra is not too large, this method exploits natural dualities for quasivarieties to work with structures of smaller cardinality and surjective rather than injective morphisms. A number of case studies are described here that could not be solved using the algebraic approach, including (quasi)varieties of MS-algebras, double Stone algebras, and involutive Stone algebras.

For finite automata as coalgebras in a category C we study the languages they accept, and varieties of such languages. This generalizes Eilenberg's concept of a variety of languages, which corresponds to choosing as C the category of Boolean algebras. Eilenberg established a bijective correspondence between pseudovarieties of monoids and varieties of regular languages. In our generalization we work with a pair C/D of locally finite varieties of algebras that are predual, i.e., dualize on the level of finite algebras, and we prove that pseudovarieties of D-monoids bijectively correspond to varieties of regular languages in C. As one instance, Eilenberg's result is recovered by choosing D = sets and C = Boolean algebras. Another instance, Pin's result on pseudovarieties of ordered monoids, is covered by taking D = posets and C = distributive lattices. By choosing as C = D the self-predual category of join-semilattices we obtain Polák's result on pseudovarieties of idempotent semirings. Similarly, using the self-preduality of vector spaces over a finite field K, our result covers that of Reutenauer on pseudovarieties of K-algebras. Several new variants of Eilenberg's theorem arise by taking other predualities, e.g., between the categories of non-unital Boolean rings and of pointed sets. In each of these cases we also prove a local variant of the bijection, where a fixed alphabet is assumed and one considers local varieties of regular languages over that alphabet in the category C.
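To make the monoid side of such Eilenberg-style correspondences concrete, the following sketch (our own illustration, not code from the paper) computes the transition monoid of a small DFA by closing the letter-induced state transformations under composition; the DFA and all names are illustrative.

```python
def transition_monoid(n, delta, alphabet):
    """Close the letter-induced state transformations under composition.

    A transformation is a tuple t with t[q] = successor of state q.
    """
    gens = [tuple(delta[(q, c)] for q in range(n)) for c in alphabet]
    monoid = {tuple(range(n))}          # start from the identity map
    frontier = set(gens)
    while frontier - monoid:
        monoid |= frontier
        # compose every known element with every generator
        frontier = {tuple(g[t[q]] for q in range(n))
                    for t in monoid for g in gens}
    return monoid

# DFA over {a, b} accepting words with an even number of a's:
# states 0 (even) and 1 (odd); 'a' flips the state, 'b' keeps it.
delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
m = transition_monoid(2, delta, 'ab')
```

Here the closure stabilizes at the two-element monoid {identity, flip}, i.e., the group Z/2Z, reflecting that parity of a's is the only information the language tracks.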

Binary Decision Diagrams (BDDs), and in particular ROBDDs (Reduced Ordered BDDs), are a common data structure for manipulating Boolean expressions in integrated circuit design, type inferencers, model checkers, and many other applications. Although ROBDDs are lightweight to implement, their memory-wise behavior may not be obvious to the program architect. We explore, experimentally and theoretically, the typical and worst-case ROBDD sizes in terms of the number of nodes and the residual compression ratios as compared to un-reduced BDDs; in addition, we provide an algorithm to calculate the worst-case size. Finally, we present an algorithm for constructing a worst-case ROBDD of a given number of variables. Our approach may be useful to projects deciding whether the ROBDD is the appropriate data structure to use, and in building worst-case examples to test their code.
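A minimal ROBDD sketch (our own illustration, not the paper's implementation) shows where the compression relative to un-reduced BDDs comes from: hash-consing shares identical subgraphs, and redundant tests are skipped.

```python
class ROBDD:
    def __init__(self):
        self.unique = {}   # (var, low, high) -> node id (hash-consing)
        self.next_id = 2   # ids 0 and 1 are reserved for the terminals

    def mk(self, var, low, high):
        if low == high:                # reduction: skip a redundant test
            return low
        key = (var, low, high)
        if key not in self.unique:     # reduction: share identical subgraphs
            self.unique[key] = self.next_id
            self.next_id += 1
        return self.unique[key]

    def build(self, f, n, i=1, env=()):
        """ROBDD of f over variables x1..xn with the fixed order x1 < ... < xn."""
        if i > n:
            return 1 if f(env) else 0
        return self.mk(i,
                       self.build(f, n, i + 1, env + (False,)),
                       self.build(f, n, i + 1, env + (True,)))

# 3-variable XOR: the un-reduced ordered BDD has 2**4 - 1 = 15 nodes,
# while the ROBDD keeps only 5 internal nodes (7 counting both terminals).
bdd = ROBDD()
root = bdd.build(lambda env: env[0] ^ env[1] ^ env[2], 3)
```

XOR is a favorable case; the paper's point is precisely that the worst-case compression ratio is much less forgiving and can be computed and constructed explicitly.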

We consider three relatively strong families of subsystems of AC^0[2]-Frege for which exponential lower bounds on proof size are known. In order of increasing strength, these subsystems are: AC^0-Frege with parity axioms and the treelike and daglike versions of systems introduced by Krajíček which we call PK^{c}_{d}(\oplus). In a PK^{c}_{d}(\oplus)-proof, lines are cedents in which all formulas have depth at most d, parity connectives can only appear as the outermost connectives in formulas, and all but c formulas contain no parity connective at all. We give simple arguments proving that AC^0[2]-Frege is exponentially stronger than daglike PK^{O(1)}_{O(1)}(\oplus), which is exponentially stronger than treelike PK^{O(1)}_{O(1)}(\oplus). On the other hand, we point out that the best available technique for comparing the performance of such systems on De Morgan formulas, due to Impagliazzo and Segerlind, only leads to superpolynomial separations. In fact, we prove that treelike PK^{O(1)}_{O(1)}(\oplus) is quasipolynomially but not polynomially equivalent to AC^0-Frege with parity axioms. This leads us to ask the question whether any of our systems is quasipolynomially equivalent to AC^0[2]-Frege on De Morgan formulas; a positive answer would imply an exponential lower bound for AC^0[2]-Frege. We also study Itsykson and Sokolov's system Res-Lin. We prove that an extension of treelike Res-Lin is polynomially simulated by a system related to daglike PK^{O(1)}_{O(1)}(\oplus), and obtain an exponential lower bound for this system.

In this work we explore the connections between (linear) nested sequent calculi and ordinary sequent calculi for normal and non-normal modal logics. By proposing local versions of ordinary sequent rules we obtain linear nested sequent calculi for a number of logics, including, to our knowledge, the first nested sequent calculi for a large class of simply dependent multimodal logics, and for many standard non-normal modal logics. The resulting systems are modular and have separate left and right introduction rules for the modalities, which makes them amenable to specification as linear logic clauses. While this granulation of the sequent rules introduces more choices for proof search, we show how linear nested sequent calculi can be restricted to blocked derivations, which directly correspond to ordinary sequent derivations.

This paper presents an extension of temporal epistemic logic with operators that quantify over agent strategies. Unlike previous work on alternating temporal epistemic logic, the semantics works with systems whose states explicitly encode the strategy being used by each of the agents. This provides a natural way to express what agents would know were they to be aware of some of the strategies being used by other agents. A number of examples that rely upon the ability to express an agent's knowledge about the strategies being used by other agents are presented to motivate the framework, including reasoning about game-theoretic equilibria, knowledge-based programs, and information-theoretic computer security policies. Relationships to several variants of alternating temporal epistemic logic are discussed. The computational complexity of model checking the logic and several of its fragments is also characterized.

This paper presents, for the first time, a Geometry of Interaction (GoI) interpretation using Hughes-van Glabbeek (HvG) proof-nets for multiplicative additive linear logic (MALL). Our GoI dynamically captures HvG's geometric correctness criterion (the toggling cycle condition) in terms of algebraic operators. Our new ingredient is a scalar extension of the *-algebra in Girard's *-ring of partial isometries over a Boolean polynomial ring with literals of eigenweights as indeterminates. In order to capture feedback arising from cuts, we construct a finer-grained execution formula. The expansion of this execution formula is longer than that for collections of slices for multiplicative GoI, hence termination is harder to prove. Our GoI gives a dynamical, semantical account of Boolean valuations (in particular, pruning sub-proofs), conversion of weights (in particular, ±-conversion), and additive (co)contraction, peculiar to additive proof theory. Termination of our execution formula is shown to correspond to HvG's toggling criterion. The slice-wise restriction of our execution formula (by collapsing the Boolean structure) yields the well-known correspondence, explicit or implicit in previous works on multiplicative GoI, between the convergence of execution formulas and acyclicity of proof-nets. Feedback arising from the execution formula by restricting to the Boolean polynomial structure yields autonomous definability of eigenweights among cuts from the rest of the eigenweights.

We analyse how the standard reductions between constraint satisfaction problems affect their proof complexity. We show that, for the most studied propositional, algebraic, and semi-algebraic proof systems, the classical constructions of pp-interpretability, homomorphic equivalence, and addition of constants to a core preserve the proof complexity of the CSP. As a result, for those proof systems, the classes of constraint languages for which small unsatisfiability certificates exist can be characterised algebraically. We illustrate our results by a gap theorem saying that a constraint language either has resolution refutations of constant width, or does not have bounded-depth Frege refutations of subexponential size. The former holds exactly for the widely studied class of constraint languages of bounded width. This class is also known to coincide with the class of languages with refutations of sublinear degree in Sums-of-Squares and Polynomial Calculus over the real field, for which we provide alternative proofs. We then ask for the existence of a natural proof system with good behaviour with respect to reductions and simultaneously small size refutations beyond bounded width. We give an example of such a proof system by showing that bounded-degree Lovász-Schrijver satisfies both requirements. Finally, building on the known lower bounds, we demonstrate the applicability of the method of reducibilities and construct new explicit hard instances of the graph 3-coloring problem for all studied proof systems.

We present a novel static analysis to infer the parallel cost of distributed systems. Parallel cost differs from the standard notion of serial cost by exploiting the truly concurrent execution model of distributed processing to capture the cost of synchronized tasks executing in parallel. Analyzing parallel cost is challenging because one needs to soundly infer the parallelism between tasks while accounting for waiting and idle processor times at the different locations. Our analysis works in three phases: (1) it first performs a block-level analysis to estimate the serial costs of the blocks between synchronization points in the program; (2) it then constructs a distributed flow graph (DFG) to capture the parallelism and the waiting and idle times at the locations of the distributed system; (3) finally, the parallel cost is obtained as the maximal-cost path in the DFG. A prototype implementation demonstrates the accuracy and feasibility of the proposed analysis.
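The last phase reduces to a longest-path computation over an acyclic cost graph. The sketch below (our own toy encoding; node names, costs, and the function are illustrative, not the paper's) finds the maximal-cost path in a DAG whose nodes carry block-level serial costs.

```python
def max_cost_path(edges, costs, source, sink):
    """Maximal-cost source-to-sink path in a DAG.

    edges: dict mapping a node to its list of successors.
    costs: dict mapping a node to its (serial) cost.
    """
    # post-order DFS yields nodes with all successors listed first
    order, seen = [], set()
    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for v in edges.get(u, []):
            visit(v)
        order.append(u)
    visit(source)

    best = {u: float('-inf') for u in seen}   # -inf: no path to sink yet
    best[sink] = costs[sink]
    for u in order:                            # successors already solved
        for v in edges.get(u, []):
            if best[v] != float('-inf'):
                best[u] = max(best[u], costs[u] + best[v])
    return best[source]

# Fork-join shape: 'init' spawns two tasks that synchronize at 'join'.
edges = {'init': ['taskA', 'taskB'], 'taskA': ['join'], 'taskB': ['join']}
costs = {'init': 1, 'taskA': 5, 'taskB': 3, 'join': 2}
```

On this fork-join example the parallel cost is 1 + 5 + 2 = 8, following the more expensive branch, whereas the serial cost would also pay for taskB.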

The novel field of quantum computation and quantum information has gathered significant momentum in the last few years. It has the potential to radically impact the future of information technology and influence the development of modern society. The construction of practical, general-purpose quantum computers has been challenging, but quantum cryptographic and communication devices have been available in the commercial marketplace for several years. Quantum networks have been built in various cities around the world and a dedicated satellite has been launched by China to provide secure quantum communication. Such new technologies demand rigorous analysis and verification before they can be trusted in safety- and security-critical applications. Experience with classical hardware and software systems has shown the difficulty of achieving robust and reliable implementations. We present a concurrent language for describing quantum systems, and develop verification techniques for checking equivalence between processes described in the language. The language has well-defined operational and superoperator semantics for protocols that are functional, in the sense of computing a deterministic input-output relation for all interleavings arising from concurrency in the system. We have implemented a tool which takes the specification and implementation of quantum protocols, described in the input language, and automatically checks their equivalence. For efficiency purposes, we restrict ourselves to Clifford operators in the stabilizer formalism, but we are able to verify protocols over all input states. We have specified and verified a collection of interesting and practical quantum protocols ranging from quantum communication and quantum cryptography to quantum error correction.

In recent years, model checking with interval temporal logics has been emerging as a viable alternative to model checking with standard point-based temporal logics, such as LTL, CTL, and CTL*. The behavior of the system is modelled by means of (finite) Kripke structures, as usual. However, while temporal logics that are interpreted "point-wise" describe how the system evolves state by state, and predicate properties of system states, those that are interpreted "interval-wise" express properties of computation stretches spanning a sequence of states. A proposition letter is assumed to hold over a computation stretch (interval) if and only if it holds over each component state (homogeneity assumption). A natural question arises: is there any advantage in replacing points by intervals as the primary temporal entities, or is it just a matter of taste? In this paper, we study the expressiveness of Halpern and Shoham's interval temporal logic (HS) in model checking, in comparison with that of LTL, CTL, and CTL*. To this end, we consider three semantic variants of HS: the state-based one, introduced by Montanari et al., which allows time to branch both in the past and in the future; the computation-tree-based one, which allows time to branch in the future only; and the trace-based variant, which disallows time to branch. These variants are compared among themselves and with the aforementioned standard logics, obtaining a complete picture. In particular, we show that HS with trace-based semantics is equivalent to LTL (but at least exponentially more succinct), HS with computation-tree-based semantics is equivalent to finitary CTL*, and HS with state-based semantics is incomparable with all of them (LTL, CTL, and CTL*).
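The homogeneity assumption is simple enough to state as code. The toy encoding below (our own, not the paper's) checks whether a proposition letter holds over an interval of a trace, i.e., at every component state; the trace and labeling are illustrative.

```python
def holds_over(trace, labeling, p, i, j):
    """Homogeneity: p holds over interval [i, j] iff p holds at each state.

    trace: list of state names; labeling: state -> set of proposition letters.
    """
    return all(p in labeling[trace[k]] for k in range(i, j + 1))

trace = ['s0', 's1', 's2', 's3']
labeling = {'s0': {'p'}, 's1': {'p', 'q'}, 's2': {'p'}, 's3': {'q'}}
```

Under this semantics `p` holds over the interval [0, 2] but not over [0, 3], since `p` fails at the final state; point-based logics would instead evaluate `p` at single states only.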

MMT is a framework for designing and implementing formal systems in a way that systematically abstracts from theoretical and practical aspects of their type-theoretical and logical foundations. Thus, definitions, theorems, and algorithms can be stated independently of the foundation, and language designers can focus on the essentials of a particular foundation and inherit a large-scale implementation from MMT at low cost. Going beyond the similarly motivated approach of meta-logical frameworks, MMT does not even commit to a particular meta-logic---that makes MMT-level results harder to obtain but also more general. We present one such result: a foundation-independent type reconstruction algorithm. It realizes the foundation-independent aspects generically relative to a set of rules that supply the foundation-specific knowledge. Perhaps surprisingly, we see that the former covers most of the algorithm, including the most difficult details. Thus, we can easily instantiate our algorithm with rule sets for several important language features including, e.g., dependent function types. Moreover, our design is modular, so that we obtain a type reconstruction algorithm for any combination of these features.

The notions of strong/weak approximations have been studied extensively in recent years. These approximations are based on a structure of the form $(U,\{R_i\}_{i\in N})$, called a multiple-source approximation system (MSAS), where each $R_i$ is an equivalence relation on $U$, and $N$ is an initial segment of the set $\mathbb{N}$ of natural numbers. The aim of the current article is twofold. On the one hand, we propose and explore a simple modal language and semantics that can be used to reason about the strong/weak approximations of concepts. On the other hand, we take a coalgebraic approach with the aim of connecting two different areas of research, viz. coalgebra and rough set theory. Further, our study is not confined to collections of equivalence relations only; other types of relations are also considered. This is important in view of the notion of generalized approximation spaces, which are based on relations other than equivalence.
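For concreteness, the sketch below (our own encoding, not the paper's) computes strong and weak lower approximations over an MSAS in the usual rough-set style: an object is in the strong lower approximation of X when every source's equivalence class of it lies inside X, and in the weak one when some source's class does. The universe, partitions, and set X are illustrative.

```python
def eq_class(partition, x):
    """Equivalence class of x in a partition given as a list of sets."""
    return next(block for block in partition if x in block)

def lower(partition, X):
    """Classical lower approximation: x whose whole class sits inside X."""
    return {x for x in X if eq_class(partition, x) <= X}

def strong_lower(partitions, X):
    """Certainly in X according to *every* source R_i."""
    return set.intersection(*(lower(p, X) for p in partitions))

def weak_lower(partitions, X):
    """Certainly in X according to *at least one* source R_i."""
    return set.union(*(lower(p, X) for p in partitions))

# MSAS with U = {1, 2, 3, 4} and two sources given by their partitions.
R1 = [{1, 2}, {3, 4}]
R2 = [{1}, {2, 3}, {4}]
X = {1, 2, 3}
```

Here the strong lower approximation of X is {1, 2} (object 3 is ruled out by R1, whose class {3, 4} leaves X), while the weak lower approximation is all of {1, 2, 3}.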

The complexity of large-scale distributed systems, particularly when deployed in physical space, calls for new mechanisms to address the composability and reusability of collective adaptive behaviour. Computational fields have been proposed as an effective abstraction to fill the gap between the macro-level of such systems (specifying a system's collective behaviour) and the micro-level (the individual devices' actions of computation and interaction that implement that collective specification), thereby providing a basis to better facilitate the engineering of collective APIs and complex systems at higher levels of abstraction. This paper proposes a full formal foundation for field computations, in terms of a core (higher-order) calculus of computational fields containing a few key syntactic constructs, and equipped with typing, denotational, and operational semantics. Critically, this allows the formal establishment of a link between the micro- and macro-levels of collective adaptive systems, by a result of full abstraction and adequacy for the (aggregate) denotational semantics with respect to the (per-device) operational semantics.