School of Computer Science, University of St Andrews
Lecture Room 2, Gateway Building
Monday 15th June 2015, 11:45 AM - 5:00 PM
Please fill in the doodle poll if you intend to attend: http://doodle.com/5mrwt9239zemm29i.
Parametricity is one of the foundational principles which underpin our understanding of modern programming languages. Roughly speaking, parametricity expresses the hidden invariants that programs satisfy, by formalising the intuition that programs map related inputs to related outputs. Traditionally, parametricity is formulated with proof-irrelevant relations, but programming in Type Theory requires an extension to proof-relevant relations. But then one might ask … can our proofs be parametric?
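To give a concrete flavour of the intuition (the talk itself concerns Type Theory, not Python; this is only an illustrative sketch): parametricity yields "free theorems", such as the fact that any function that operates uniformly on lists, for example reversal, commutes with mapping.

```python
# Illustrative sketch of a "free theorem": for a parametrically
# polymorphic function r :: [a] -> [a], parametricity guarantees
#   map f . r  ==  r . map f   for every f.
# Here we check this for list reversal on a sample input.

def rev(xs):
    return xs[::-1]

f = lambda n: n * n
xs = [1, 2, 3, 4]

# Both orders of applying f and rev agree, as parametricity predicts.
assert [f(x) for x in rev(xs)] == rev([f(x) for x in xs])
```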
"Program testing can be used to show the presence of bugs, but never to show their absence!" says Dijkstra. It is true that extensional equality of functions cannot be decided unless their domains are compact, so testing a function against a reference implementation seems a rather hopeless endeavour. But appearances can be deceptive. With just a little bit of information about the implementation of some functions, we can demonstrate the completeness of finitary testing for them. I'll prove a simple result about folds over binary trees and speculate about the prospects for a more general theory in which bounding "peculiarity" enables complete testing.
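To make the flavour of such a result concrete, here is a hypothetical sketch (not the talk's actual proof, and in Python rather than a typed functional language): if a function is known to be a fold over binary trees, it is determined by its leaf and node algebras, so agreement with a reference implementation on finitely many small trees suffices.

```python
# Hypothetical sketch: a function known to be a fold over binary
# trees is fully determined by its leaf and node cases, so testing
# on a finite set of small trees can establish agreement with a
# reference implementation.

class Leaf:
    def __init__(self, v): self.v = v

class Node:
    def __init__(self, l, r): self.l, self.r = l, r

def fold(leaf, node, t):
    if isinstance(t, Leaf):
        return leaf(t.v)
    return node(fold(leaf, node, t.l), fold(leaf, node, t.r))

# Reference implementation: sum of leaves, written directly.
def ref_sum(t):
    return t.v if isinstance(t, Leaf) else ref_sum(t.l) + ref_sum(t.r)

# Candidate under test: the same function expressed as a fold.
def cand_sum(t):
    return fold(lambda v: v, lambda a, b: a + b, t)

# A finite test set exercising the leaf case and the node case.
tests = [Leaf(3), Node(Leaf(1), Leaf(2)), Node(Node(Leaf(1), Leaf(2)), Leaf(3))]
assert all(ref_sum(t) == cand_sum(t) for t in tests)
```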
Concurrent programming is notoriously difficult, because it requires reasoning not only about the sequential progress of each algorithm, but also about how information moves between concurrent agents. What if programmers were able to reason about their concurrent programs and statically verify both sequential and concurrent guarantees about those programs' behaviour? That would likely reduce the number of bugs and defects in concurrent systems.
Erlang's existing type system cannot produce particularly strong guarantees, especially not with regard to concurrent systems. In this talk I will describe work to integrate the dependently-typed language Idris with Erlang, allowing dependently-typed code to interoperate with Erlang programs. This work included both a new Idris code generator and libraries for reasoning about concurrent Erlang programs, including some OTP behaviours.
Dependencies are an unavoidable part of programming and a likely hurdle when introducing parallelism. Program Slicing is a technique that calculates all dependencies within a program, and can be used to discover those dependencies obstructive to the parallelisation process. We extend the Erlang slicing tool Slicerl, and introduce refactorings to automate the breaking of obstructive dependencies found through slicing.
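As a hypothetical illustration of the kind of dependency that slicing uncovers (the talk concerns Erlang and the Slicerl tool; Python is used here purely for brevity), a backward slice over straight-line code collects exactly the statements a given variable transitively depends on:

```python
# Hypothetical sketch of backward slicing over straight-line code:
# each statement records the variable it defines and the variables
# it uses; the slice for a variable is the set of statements it
# (transitively) depends on.

stmts = [
    ("a", set()),        # a = 1
    ("b", {"a"}),        # b = a + 1
    ("c", set()),        # c = 5
    ("d", {"b", "c"}),   # d = b * c
]

def backward_slice(target):
    needed, slice_ids = {target}, []
    for i in reversed(range(len(stmts))):
        var, uses = stmts[i]
        if var in needed:
            slice_ids.append(i)
            needed |= uses
    return sorted(slice_ids)

# The slice for d includes every statement; the slice for b
# excludes the independent definition of c, so the two could
# potentially be computed in parallel.
assert backward_slice("d") == [0, 1, 2, 3]
assert backward_slice("b") == [0, 1]
```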
Computing systems have become increasingly complex with the emergence of heterogeneous hardware combining multicore CPUs and GPUs. These parallel systems exhibit tremendous computational power at the cost of increased programming effort resulting in a tension between performance and code portability. Typically, code is either tuned in a low-level imperative language using hardware-specific optimizations to achieve maximum performance or is written in a high-level, possibly functional, language to achieve portability at the expense of performance.
We propose a novel approach aiming to combine high-level programming, code portability, and high performance. Starting from a high-level functional expression, we apply a simple set of rewrite rules to transform it into a low-level functional representation close to the OpenCL programming model, from which OpenCL code is generated. Our rewrite rules define a space of possible implementations which we automatically explore to generate hardware-specific OpenCL implementations. We formalise the system with a core dependently-typed lambda-calculus along with a denotational semantics which we use to prove the correctness of the rewrite rules.
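A minimal sketch of the style of rewrite rule involved (the actual system targets OpenCL and is formalised in a dependently-typed calculus; this only illustrates one classic, semantics-preserving rule, map fusion, in Python):

```python
# Minimal sketch of one classic rewrite rule on a tiny expression
# language: map fusion,  map f (map g xs)  ->  map (f . g) xs,
# which eliminates an intermediate array.
# Expressions are tuples: ("arr", xs) or ("map", f, subexpr).

def fuse(e):
    if e[0] == "map" and e[2][0] == "map":
        f, (_, g, inner) = e[1], e[2]
        return fuse(("map", lambda x: f(g(x)), inner))
    return e

def eval_expr(e):
    if e[0] == "arr":
        return e[1]
    _, f, inner = e
    return [f(x) for x in eval_expr(inner)]

inc = lambda x: x + 1
dbl = lambda x: x * 2
e = ("map", inc, ("map", dbl, ("arr", [1, 2, 3])))

fused = fuse(e)
assert fused[2][0] == "arr"                    # a single map remains
assert eval_expr(e) == eval_expr(fused) == [3, 5, 7]  # semantics preserved
```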
We test our design in practice by implementing a compiler which generates high performance imperative OpenCL code. Our experiments show that we can automatically derive hardware-specific implementations from simple functional high-level algorithmic expressions offering performance on par with highly tuned implementations for multicore CPUs and GPUs written by experts.
Tracing JIT compilation generates units of compilation that are easy to analyse and are known to execute frequently. The AJITPar project aims to investigate whether the information in JIT traces can be used to make better scheduling decisions or perform code transformations to adapt the code for a specific parallel architecture. To achieve this goal, a cost model must be developed to estimate the execution time of an individual trace.
This talk presents the design and implementation of a system for extracting JIT trace information from the Pycket JIT compiler. We define four increasingly parametric cost models for Pycket traces. We test the accuracy of these cost models for predicting the cost of individual traces on a set of loop-based micro-benchmarks. We also compare the accuracy of the cost models for predicting whole program execution time over the Pycket benchmark suite. Our preliminary results show the two simplest models provide only limited accuracy. The accuracy of the more complex models depends on the choice of parameters; in ongoing work we are systematically exploring the parameter space.
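As a hypothetical illustration of the simplest kind of trace cost model (the actual models and the Pycket trace format are described in the talk): a linear model assigns each operation class a weight and predicts a trace's cost as the weighted sum of its operation counts.

```python
# Hypothetical sketch of a linear trace cost model: each operation
# class in a JIT trace is given a weight, and the predicted cost of
# a trace is the weighted sum of its operation counts. The operation
# names and weights here are invented for illustration.

from collections import Counter

WEIGHTS = {"guard": 3, "int_add": 1, "load": 2, "store": 2, "call": 10}

def trace_cost(trace, weights=WEIGHTS):
    counts = Counter(trace)
    return sum(weights.get(op, 1) * n for op, n in counts.items())

trace = ["load", "int_add", "guard", "store", "int_add"]
# 2 + 1 + 3 + 2 + 1 = 9
assert trace_cost(trace) == 9
```

Fitting the weights against measured execution times (e.g. by least squares over a benchmark suite) is what distinguishes the more parametric models from the simple ones.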
Information on travelling to the School of Computer Science of the University of St Andrews can be found at http://www.st-andrews.ac.uk/visiting/gettingtostandrews/ .
The Gateway is located near the School of Computer Science (map)