CLIP Lab
The Computational logic, Languages, Implementation, and Parallelism Laboratory

DOVES

DOVES: Development Of Verifiable and Efficient Software

General Data

  • Type of project: Individual, Plan Nacional de I+D+i (Spanish National R&D&I Plan, 2008-2011)
  • Program: Investigación Fundamental (Fundamental Research)
  • Subprogram: Proyecto de Investigación Fundamental no orientada (non-oriented fundamental research project)
  • Project code: 2008-05624/TIN
  • Dates: January 1, 2009 - December 31, 2013

Summary

The relevance of software grows every day, partly because it is embedded in an ever-increasing number of devices, often controlling safety-critical applications. It is therefore crucial to have the means for developing, in a simple way, software that is both reliable and efficient. The overall aim of this project is to devise the methods and tools required for enabling the development of verifiable and efficient software.

Programming language technology has received considerable attention in the last few decades, with important advances in language design, verification, transformation, etc., as well as in emerging techniques such as the use of proof-carrying code. An essential enabling technology for these advances is static analysis, often formalized in terms of abstract interpretation: a semantics-based approach that infers a wide range of properties about program behavior, applicable to ensuring both software correctness and efficiency, in a way that is at once rigorous and highly practical. These properties include, for example, safe approximations of the use of different resources, such as time complexity or memory usage.

The level of maturity of abstract interpretation now allows it to be applied to realistic programs, as proposed in this project. Another important motivation behind this proposal, on the efficiency side, is parallelism. With the end of Moore's Law in sight, multi-core architectures are now mainstream, and the number of parallel processing units is expected to keep increasing in the coming years: chip manufacturers are turning to multi-core processor designs rather than scalar-oriented frequency increases in order to improve performance. Automatically (and reliably) exploiting these parallel capabilities in an efficient manner is an important and inevitable challenge today. Notably, the analysis techniques used by parallelizing compilers, and the properties they infer, are the same as, or closely related to, those mentioned above.

This project brings together researchers from UPM and UCM (as well as a number of key foreign researchers) with wide expertise in the areas of static analysis, program verification, and automatic parallelization.

The project focuses on developing fundamental technologies, making them widely applicable across programming languages and paradigms. More concretely, the researchers plan to work both on (i) their own development environment, Ciao, in order to maximize scientific progress (Ciao is used because the clean semantics of declarative programming allows concentrating on the truly important aspects of the problems at hand), and (ii) Java bytecode, in order to maximize the impact of the tools and techniques developed, since Java bytecode nowadays plays a crucial role in many applications, including mobile code.

  • Keywords: Static Analysis, Verification, Software Development, Optimization, Automatic Parallelization, Abstract Interpretation, Proof-Carrying Code
  • Description of the research programme

  • Static Analysis:

    Static analysis infers, at compile time, properties of the run-time behavior of programs. For performing static analysis we will use abstract interpretation, which produces results that are guaranteed to be safe approximations of the program behavior. In particular, we will study resource consumption (including time complexity, memory usage, and user-defined resources) and program termination.

    Taking our previous results in this area as a starting point, we plan to devise analyses that are practical and applicable to a range of programming languages. This will require improving the state of the art in static analysis in several ways: scalability of the techniques, full-language coverage, generality, tool support for computing upper and lower bounds, user interfaces, etc. As regards novelty, there is currently no system capable of producing useful static information about (possibly user-defined) resources for realistic programs in a fully automatic way, so our results will represent a significant advance over the state of the art.

    The relevance of resource analysis is clear: it has multiple applications in software engineering, program optimization and synthesis, resource consumption certification, task scheduling in parallel systems, etc.
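    To make the notion of "safe approximation" concrete, here is a minimal, self-contained sketch of abstract interpretation over a simple sign domain. It is our illustration in Python, not the project's tooling: the project's analyses target Ciao and Java bytecode and use far richer domains (resources, termination).

```python
# Sign-domain abstract interpretation sketch (illustrative only).
# Abstract values describe *sets* of concrete integers.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_mul(a, b):
    """Abstract multiplication, sound for every pair of concretizations."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    """Abstract addition; loses precision when signs conflict."""
    if a == b or b == ZERO:
        return a
    if a == ZERO:
        return b
    return TOP  # e.g. pos + neg could be any integer

# Analyze x*x + 1 where x is known only to be negative:
x = alpha(-3)                              # NEG stands for *all* negatives
result = abs_add(abs_mul(x, x), alpha(1))  # -> POS, without running anything
```

    The analysis concludes that x*x + 1 is positive for every negative x, at compile time; the imprecision of abs_add (pos plus neg yields top) is the price paid for soundness.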

  • Verification:

    Program verification aims at proving that programs satisfy certain properties of interest. Traditionally, such properties concern what the program computes, i.e., whether the results correspond to the user's expectations, often described by means of assertions. In this research line we will also consider properties related to how such results are computed, in terms of resource consumption. Concrete objectives include (i) the definition of assertion languages that allow users to describe the expected resource consumption, together with the definition of the resources of interest for a particular program, and (ii) the design and implementation of automatic tools for checking whether the analysis results guarantee the expected resource usage, as indicated by the assertions. Though there is some previous work on verification of resource usage, most of it considers rather simple programming languages or is restricted to certain complexity classes and notions of resources. The availability of development environments with support for verification of resource usage holds the promise of a tremendous impact on software development.

    In addition to its application in resource usage verification of safety-critical applications, this will allow choosing among different implementations based on their efficiency w.r.t. the resources of interest.
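    In the simplest case, the checking step of objective (ii) reduces to comparing an analysis-inferred cost function against the user's asserted bound. The sketch below is ours, assuming cost functions represented as polynomial coefficient lists; it is not the project's assertion language, in which such bounds would be written as program assertions.

```python
# Hedged sketch: verify an asserted resource bound against an inferred cost.
# Polynomials are coefficient lists [a0, a1, a2, ...] = a0 + a1*n + a2*n^2 ...

def dominates(bound, cost):
    """Sound sufficient check that bound(n) >= cost(n) for all n >= 0,
    assuming non-negative coefficients: compare coefficient-wise."""
    width = max(len(bound), len(cost))
    b = bound + [0] * (width - len(bound))
    c = cost + [0] * (width - len(cost))
    return all(bi >= ci for bi, ci in zip(b, c))

inferred = [1, 2]   # analysis result: the program takes 1 + 2n steps
asserted = [4, 2]   # user assertion: at most 4 + 2n steps allowed

ok = dominates(asserted, inferred)   # assertion verified
bad = dominates([0, 1], inferred)    # a bound of n steps would be rejected
```

    When the check fails, a verifier can report the assertion as violated (or as unknown, since the check is only sufficient), which is exactly the kind of feedback a resource-aware development environment would surface.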

  • Proof-Carrying Code:

    Traditionally, verification is performed at the developer's side; ideally, it should also be performed at the deployer's side. Before executing a program (or code), we can either trust its developers or check the code w.r.t. certain expected properties. Proof-Carrying Code (PCC) proposes that developers attach verifiable evidence to code, which deployers can later use in order to check the code automatically and efficiently. We plan to develop, implement, and evaluate an Abstraction-Carrying Code (ACC) framework, where abstract interpretation is used for generating and checking verifiable evidence about programs. This framework will allow checking Ciao and Java bytecode programs w.r.t. a variety of properties, including resource usage. The existing PCC frameworks that generate verifiable evidence automatically are based on relatively simple type systems; our proposal will bring the automation and genericity of abstract interpretation-based verification into a PCC architecture. Being able to reason about bytecode is particularly relevant in the context of mobile code, since in most such applications the associated source code is not available at the deployer's side.
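    The asymmetry that makes ACC attractive is that inference is iterative and potentially costly, while checking a supplied fixpoint takes a single step. The following toy sketch, over a parity domain of our own choosing (the project's ACC framework operates on real abstract domains and on Ciao and Java bytecode programs), shows both sides:

```python
# ACC in miniature: producer infers a fixpoint certificate, consumer
# re-checks it in one application of the abstract transfer function.
BOT, EVEN, ODD, TOP = "bot", "even", "odd", "top"  # parity lattice

def join(a, b):
    if a == b: return a
    if a == BOT: return b
    if b == BOT: return a
    return TOP

def leq(a, b):
    return a == b or a == BOT or b == TOP

def transfer(s):
    """Abstract effect of: x = 0; while ...: x = x + 2
    (adding 2 preserves parity, so the loop body maps s to s)."""
    return join(EVEN, s)

def produce_certificate():
    """Producer side: iterate to a fixpoint (in general, many steps)."""
    s = BOT
    while True:
        s2 = transfer(s)
        if leq(s2, s):
            return s
        s = s2

def check_certificate(cert, claimed):
    """Consumer side: one transfer application, no iteration."""
    return leq(transfer(cert), cert) and leq(cert, claimed)

cert = produce_certificate()        # -> EVEN: x is always even
```

    A tampered certificate is rejected: check_certificate(ODD, EVEN) fails because transfer(ODD) is TOP, which is not below ODD.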

  • Optimization:

    The aim of program optimization is, given a program, to obtain another program that produces the same results but has lower resource consumption. Unfortunately, reasoning a priori about the benefits of program transformations is a daunting task, especially when the performance of a program is evaluated w.r.t. the use of several resources. We plan to use our expertise in partial evaluation and resource analysis in order to perform resource-aware specialization and compilation, whereby we optimize programs not only w.r.t. the number of execution steps but also taking into account other factors such as program size, memory usage, etc. This is a novel approach: very few approaches to optimization are resource-aware, and the existing ones use profiling techniques (which we could also benefit from) rather than static analysis to guide the optimization process when choosing among different possible optimized programs.

  • Automatic Parallelization:

    Obtaining the maximum performance from multi-core architectures requires running several tasks in parallel. Developing parallel applications by hand is complex and often impractical, and automatic parallelization techniques are therefore once again receiving considerable attention. We plan to exploit the group's significant expertise in automatic parallelization (including granularity control), resource analysis, and implementation in order to develop novel parallelizing compilers that generate optimized parallel code, as well as efficient run-time support machinery for running such code on current multi-core architectures. An important novelty of the proposed approach is the use of the newly available resource analysis information during the parallelization process, which is expected to produce much higher-quality parallelizations, exploit multi-core machines more efficiently, and result in greatly increased speedups.
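    As a minimal sketch of granularity control, the code below spawns parallel tasks only for work whose estimated cost exceeds a spawning-overhead threshold. The threshold, names, and cost function are our illustrative assumptions; in the project's setting such cost bounds would come from static resource analysis rather than from len().

```python
# Granularity control sketch: parallelize only when the (assumed) cost
# estimate says a task is worth the overhead of spawning it.
from concurrent.futures import ThreadPoolExecutor

SPAWN_THRESHOLD = 10_000  # hypothetical cost units, tuned per platform

def cost_estimate(xs):
    """Stand-in for a statically inferred bound, here linear in input size."""
    return len(xs)

def seq_total(xs):
    """The sequential version of the task."""
    return sum(xs)

def par_total(xs, pool):
    if cost_estimate(xs) < SPAWN_THRESHOLD:
        return seq_total(xs)          # too small: sequential is cheaper
    chunk = SPAWN_THRESHOLD           # split into worthwhile-sized tasks
    futures = [pool.submit(seq_total, xs[i:i + chunk])
               for i in range(0, len(xs), chunk)]
    return sum(f.result() for f in futures)

with ThreadPoolExecutor() as pool:
    result = par_total(list(range(100_000)), pool)
```

    With an accurate cost model, this avoids the classic failure mode of naive parallelization, where spawning overhead swamps the useful work of small tasks.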

  • Participants:

    • Manuel Hermenegildo (IP)
    • Germán Puebla Sánchez
    • Francisco Bueno Carrillo
    • Manuel Carro Liñares
    • Elvira Albert Albiol
    • Purificación Arenas Sánchez
    • Pedro López García
    • James Lipton
    • Nikolaus G Swoboda
    • Samir Genaim
    • Jesús Correas Fernández
    • Miguel Gómez-Zamalloa Gil
    • Edison Fernando Mera Menéndez
    • José Francisco Morales Caballer
    • Diana Vanessa Ramírez Deantes
    • Pablo Palmier Campos
    • Pablo Chico de Guzmán Huerta
    • Jorge A. Navas Laserna
    • Amadeo Casas Cuadrado
    • Cesar Sánchez Sánchez
    • Remy Haemmerle
    • Mario Mendez Lojo
    • Roberto Giacobazzi
    • Michael Codish
    • Andy King
    • John Gallagher
    • Isabella Mastroeni
    • Damiano Zanardini