Programme

Schedule

11.00 - 11.30: Welcome and refreshments

11.30 - 12.00: Granule: A language for fine-grained program reasoning with graded modal types - Dominic Orchard and Vilem-Benjamin Liepelt, University of Kent

12.00 - 12.30: A DSL Approach to Reconcile Equivalent Divergent Program Executions - Luís Pina, Imperial College London

12.30 - 13.00: From Lenses to Proof-Relevant Bisimulation - Jeremy Gibbons, University of Oxford

13.00 - 14.00: Lunch

14.00 - 15.00: Naïve Type Theory (Invited talk) - Thorsten Altenkirch, University of Nottingham

15.00 - 15.30: Coffee

15.30 - 16.00: You could have invented profunctor optics - Guillaume Boisseau, University of Oxford

16.00 - 16.30: Accelerating Naperian functors - Nick Hu, University of Oxford

16.30 - 17.00: Coffee

17.00 - 17.30: Algebraic Effects for Calculating Compilers - Luke Geeson, University of Oxford

17.30 - 18.00: Idiomatic Programming Language Translation via Partial Evaluation - Christopher Elsby, University of Cambridge

Naïve Type Theory (Invited talk) - Thorsten Altenkirch, University of Nottingham

Many people view Type Theory as a mere formalism, holding that real mathematics needs to be done in classical Set Theory. I propose to use Homotopy Type Theory (HoTT) in informal reasoning. Indeed, informal HoTT is quite close to mathematical practice and, unlike Set Theory, it hides the details of definitions, supporting abstraction. Intensional Type Theory (ITT), which is the basis of many existing implementations (Coq, Agda, ...), can deal with data (inductive types) but fails to deal with codata (coinductive types, functions) and universes in an appropriate way. HoTT is nicely symmetric and can deal with codata as well as with data. The reason is that we view types as endowed with their equality types; semantically this corresponds to weak ω-groupoids. This is also reflected in univalent universes, which come with an extensional notion of equality of types. Since equality is part of the type structure, we can also include constructors for equalities when defining inductive types, leading to so-called higher inductive types. I will discuss how to define the integers as a higher inductive type and close with a problem concerning the free group construction in HoTT.
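
To give a flavour of the last point (a sketch in standard HoTT notation; this is one candidate definition, not necessarily the one presented in the talk), the integers might be introduced with point constructors succ and pred plus path constructors making them mutually inverse:

    \[
    \begin{aligned}
    \mathsf{zero} &: \mathbb{Z}\\
    \mathsf{succ} &: \mathbb{Z} \to \mathbb{Z}\\
    \mathsf{pred} &: \mathbb{Z} \to \mathbb{Z}\\
    \mathsf{sec}  &: \textstyle\prod_{n : \mathbb{Z}} \mathsf{pred}\,(\mathsf{succ}\,n) = n\\
    \mathsf{ret}  &: \textstyle\prod_{n : \mathbb{Z}} \mathsf{succ}\,(\mathsf{pred}\,n) = n
    \end{aligned}
    \]

Stated this naively, the type is not obviously a set: the two families of paths can generate unwanted higher structure unless extra coherence is imposed, which is part of what makes the "right" definition interesting.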

Granule: A language for fine-grained program reasoning with graded modal types - Dominic Orchard and Vilem-Benjamin Liepelt, University of Kent

Guarantees about resource usage (e.g., energy, bandwidth, time, memory) and data integrity (e.g., passwords, location data, photos, banking information) can be as important as the functional input-output behaviour of a computer program. Various type-based solutions have been developed for reasoning about and controlling the consumption of resources and the dissipation of information. A general class of program behaviours called coeffects has been proposed as a unified framework for capturing different kinds of resource analysis in a single type system. To gain experience with such type systems for real-world programming tasks, and as a vehicle for further research, we are developing Granule, a functional programming language based on the linear λ-calculus augmented with graded modal types. This talk will comprise some discussion of the type theory and a demo of the language.
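
To give a flavour of grading (a toy Haskell sketch, assuming type-level naturals; Granule itself is a standalone language whose graded modal types go well beyond this), here is a graded monad whose type index counts uses of a single effect:

    {-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}

    -- Toy sketch: type-level naturals used as grades.
    data Nat = Z | S Nat

    type family Plus (m :: Nat) (n :: Nat) :: Nat where
      Plus 'Z     n = n
      Plus ('S m) n = 'S (Plus m n)

    -- A graded monad whose index records exactly how many "ticks"
    -- a computation performs; the grade is part of the type.
    newtype Counter (n :: Nat) a = Counter a

    ret :: a -> Counter 'Z a
    ret = Counter

    bind :: Counter m a -> (a -> Counter n b) -> Counter (Plus m n) b
    bind (Counter x) k = let Counter y = k x in Counter y

    tick :: Counter ('S 'Z) ()
    tick = Counter ()

    -- A computation whose type proves it ticks exactly twice.
    twice :: Counter ('S ('S 'Z)) ()
    twice = tick `bind` \_ -> tick

Granule generalises this kind of single-purpose index to graded modalities over richer algebraic structures of grades.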

A DSL Approach to Reconcile Equivalent Divergent Program Executions - Luís Pina, Imperial College London

Multi-Version Execution (MVE) deploys multiple versions of the same program, typically synchronizing their execution at the level of system calls. By default, MVE requires all deployed versions to issue the same sequence of system calls, which limits the kinds of versions that can be deployed.

In this talk, I present our recent work on a Domain-Specific Language (DSL) to reconcile expected divergences between different program versions deployed through MVE. We evaluate the DSL by adding it to an existing MVE system (Varan) and testing it via three scenarios: (1) deploying the same program under different configurations, (2) deploying different releases of the same program, and (3) deploying dynamic analyses in parallel with the native execution. I also present an algorithm to automatically extract DSL rules from pairs of system call traces. Our results show that each scenario requires a small number of simple rules (at most 14 rules in each case) and that writing DSL rules can be partially automated.

This work was published at USENIX ATC'17: https://srg.doc.ic.ac.uk/publications/varan-dsl-atc-17.html
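
As a rough illustration of the idea (a hypothetical Haskell sketch; the rule forms and names are invented here and are not the paper's DSL), reconciliation rules can be read as a greedy matcher over pairs of system-call traces, ignoring backtracking and argument rewriting:

    -- Hypothetical sketch of trace-reconciliation rules (invented names,
    -- not the DSL from the paper).
    data Syscall = Syscall { callName :: String, callArgs :: [Int] }
      deriving (Eq, Show)

    data Rule
      = SkipLeader   String    -- the leader may issue this extra call
      | SkipFollower String    -- the follower may issue this extra call
      | Equate String String   -- treat these two calls as equivalent

    -- Two traces reconcile if the rules let us consume both to the end.
    reconcile :: [Rule] -> [Syscall] -> [Syscall] -> Bool
    reconcile _ [] [] = True
    reconcile rules (a:as) (b:bs)
      | a == b                  = reconcile rules as bs
      | any (equates a b) rules = reconcile rules as bs
      where equates x y (Equate m n) = callName x == m && callName y == n
            equates _ _ _            = False
    reconcile rules (a:as) bs
      | any (skips a) rules = reconcile rules as bs
      where skips x (SkipLeader n) = callName x == n
            skips _ _              = False
    reconcile rules as (b:bs)
      | any (skips b) rules = reconcile rules as bs
      where skips x (SkipFollower n) = callName x == n
            skips _ _                = False
    reconcile _ _ _ = False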

From Lenses to Proof-Relevant Bisimulation - Jeremy Gibbons, University of Oxford

You may be familiar with “asymmetric lenses”, which provide read and write access to a component in a larger data structure, such as a field of a record. I will tell a story about several consecutive generalizations of this idea: via “symmetric lenses” and “delta lenses” to “dependently typed bidirectional transformations”, which turn out to be precisely the so-called “proof-relevant bisimulations”.
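
For reference, the standard presentation of an asymmetric lens in Haskell (textbook material rather than anything specific to the talk):

    -- A classic asymmetric lens: view and update a focus a within a source s.
    data Lens s a = Lens
      { view   :: s -> a        -- read the focused component
      , update :: s -> a -> s   -- write it back
      }

    -- Expected round-trip laws (not enforced by the type):
    --   view (update s a) == a    ("put-get")
    --   update s (view s) == s    ("get-put")

    data Person = Person { name :: String, age :: Int } deriving Show

    -- Lens onto the age field of a Person record.
    ageL :: Lens Person Int
    ageL = Lens age (\p a -> p { age = a })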

You could have invented profunctor optics - Guillaume Boisseau, University of Oxford

Profunctor optics are a very elegant Haskell tool for manipulating nested datatypes in a composable and intuitive way. However, like monads and many other neat Haskell constructs, this intuitive usage depends on complex higher-order types that can be tricky to grasp. This talk aims to make profunctor optics easier to understand by showing how they arise naturally from generalizing simple optics to make them more composable.
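
As background (the standard profunctor-optics encoding, textbook definitions rather than material from the talk): an optic is a transformation of profunctors, so optics compose by ordinary function composition:

    {-# LANGUAGE RankNTypes #-}

    class Profunctor p where
      dimap :: (a' -> a) -> (b -> b') -> p a b -> p a' b'

    -- Profunctors that can carry a value alongside: enough to encode lenses.
    class Profunctor p => Strong p where
      first' :: p a b -> p (a, c) (b, c)

    -- An optic transforms profunctors; optics compose with plain (.).
    type Optic p s t a b = p a b -> p s t
    type LensP s t a b = forall p. Strong p => Optic p s t a b

    -- Any get/set pair yields a profunctor lens.
    lens :: (s -> a) -> (s -> b -> t) -> LensP s t a b
    lens get set pab =
      dimap (\s -> (get s, s)) (\(b, s) -> set s b) (first' pab)

The composability the abstract mentions is visible in the types: composing two such optics is just function composition.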

Accelerating Naperian functors - Nick Hu, University of Oxford

Naperian functors provide a high-level abstraction for array programming in Haskell, reminiscent of APL. Unfortunately, Haskell is not sufficiently performant for many of the intended use cases. In this talk, I present an introduction to Naperian functors and hypercuboids, and an embedding into Accelerate's array structure, a DSL for GPU programming.
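
For background (a sketch from memory of the interface in Gibbons' "APLicative Programming with Naperian Functors"; names here may differ from the paper): a Naperian functor is one isomorphic to a function from a fixed type of positions, its "logarithm":

    {-# LANGUAGE TypeFamilies, DeriveFunctor #-}

    class Functor f => Naperian f where
      type Log f                         -- positions of f: its "logarithm"
      lookupN  :: f a -> (Log f -> a)    -- a container is a function from positions
      tabulate :: (Log f -> a) -> f a    -- ...and back again

    data Pair a = Pair a a deriving (Show, Functor)

    -- A Pair holds a value at each Bool: Pair a is isomorphic to Bool -> a.
    instance Naperian Pair where
      type Log Pair = Bool
      lookupN (Pair x y) b = if b then y else x
      tabulate h           = Pair (h False) (h True)

    -- Transposition comes for free for any two Naperian functors.
    transposeN :: (Naperian f, Naperian g) => f (g a) -> g (f a)
    transposeN fga = tabulate (\j -> fmap (\ga -> lookupN ga j) fga)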

Algebraic Effects for Calculating Compilers - Luke Geeson, University of Oxford

Bahr and Hutton proposed a method for deriving compiler and virtual machine definitions that are correct by construction: given a source language, an evaluation semantics and a correctness specification, the method yields compiler and virtual machine definitions that satisfy the specification. The method is powerful, but one difficult task remains: defining the correctness specification itself. Writing such a specification can require advanced knowledge of the source language, the target domain and formal verification techniques. The specification arises from the source language and its evaluation semantics, and its complexity comes from the computational effects present in the source language and the machine configuration. Solving this specification problem would further simplify Bahr and Hutton's method, making it possible to fix the correctness specification for languages with effects and hence simplifying a component needed in the calculation process.

This work applies algebraic effect handlers to calculating compilers in order to fix the correctness specification and thereby solve the specification problem. We capture the computational effects in the source language and machine configuration by implementing a series of distinct handlers, one for each effect. Shifting the handling of effects from the correctness specification to explicit handlers enables the user to fix the correctness specification once and for all. Compiler correctness is then captured by a type class (with rules). With this approach, the user needs only a source language, its semantics and a handler for each of the computational effects present.

This work presents a compelling narrative for compiler designers who may wish to use effect handlers in the implementation of compilers: interpreting evaluation semantics as free-monad abstract syntax trees corresponds directly to compiler IR ASTs, and interpreting algebraic handlers as folds over syntax trees corresponds to evaluating abstract computations on machine-dependent configurations, separating the concerns of syntax (the IR representation) from semantics (machine-dependent hardware configurations, etc.). Finally, the adoption of data types à la carte simplifies the calculation process, contributing to the automation of Bahr and Hutton's method, and the use of algebraic effects forms a good example of structured/origami programming.
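
To make the first correspondence concrete (standard definitions of the free monad and its fold, not code from the talk):

    {-# LANGUAGE DeriveFunctor #-}

    -- The free monad over a signature functor: effectful syntax trees.
    data Free f a = Pure a | Op (f (Free f a)) deriving Functor

    -- A handler is a fold over the syntax tree: 'gen' interprets pure
    -- results, 'alg' interprets one layer of operations.
    foldFree :: Functor f => (a -> b) -> (f b -> b) -> Free f a -> b
    foldFree gen _   (Pure x) = gen x
    foldFree gen alg (Op t)   = alg (fmap (foldFree gen alg) t)

    -- Example signature: a single stack-machine effect.
    data StackOp k = Push Int k deriving Functor

    -- Handling Push as a stack transformer keeps the syntax (the tree)
    -- separate from the machine-dependent semantics (the stack).
    runStack :: Free StackOp a -> [Int] -> (a, [Int])
    runStack = foldFree (\x s -> (x, s))
                        (\(Push n k) s -> k (n : s))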

Idiomatic Programming Language Translation via Partial Evaluation - Christopher Elsby, University of Cambridge

Different programming languages have different strengths and weaknesses. Therefore programmers may want to convert (or transpile) a program from one language to another. However, traditional transpilation often produces unidiomatic output code that is difficult to work with.

Partial evaluation is a technique that specialises a program when part of its input is fixed. This often improves performance, and when applied to an interpreter it also achieves transpilation (this is the first Futamura projection).

We present a new partial evaluator for a mutation-free subset of Java, using specialisation of classes to represent partially-static data. We combine this with an interpreter for a subset of ML. This achieves transpilation from the ML subset into Java.
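
To illustrate the technique itself (a toy Haskell sketch of partial evaluation over a tiny expression language; the paper's partial evaluator works on a mutation-free subset of Java and is far more involved):

    -- Toy partial evaluator: specialise an expression once some
    -- variables are known, folding constants and leaving the rest
    -- as residual code.
    data Expr
      = Lit Int
      | Var String
      | Add Expr Expr
      deriving Show

    -- The static part of the input: variables whose values are known.
    type Static = [(String, Int)]

    pe :: Static -> Expr -> Expr
    pe _   (Lit n) = Lit n
    pe env (Var x) = maybe (Var x) Lit (lookup x env)
    pe env (Add a b) =
      case (pe env a, pe env b) of
        (Lit m, Lit n) -> Lit (m + n)   -- both halves static: evaluate now
        (a', b')       -> Add a' b'     -- otherwise: emit residual code

    -- ghci> pe [("x", 2)] (Add (Var "x") (Add (Lit 3) (Var "y")))
    -- Add (Lit 2) (Add (Lit 3) (Var "y"))

Specialising an interpreter in this way to a fixed source program is what turns the interpreter into a translator.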

We find that this transpilation naturally converts sum types with matching into a class hierarchy with dynamic dispatch, crossing an apparent idiomatic gap between the languages. We relate this transformation to the expression problem, a long-standing question about extensibility of programs.