PhD
I’m interested in keeping an eye on AI. By a stroke of luck, my PhD advisor is Laurence Aitchison. I’m at the Interactive Artificial Intelligence Centre for Doctoral Training at the University of Bristol.
Covid modelling
I started my PhD just before Covid. In a strange turn, a bunch of computer scientists invited me to do a little bit of writing on their big Bayesian model of which policies worked against the bug. I had no epidemiology background. Twelve months later, we’d produced a series of seven leading papers on important questions which weren’t being treated with the proper uncertainty.
Yes, this was the least neglected research topic in the world. Yes, it is strange that noobs could do this.
Probabilistic programming
My original project was Tensorised Probabilistic Programming.
Exact inference is intractable in many realistic latent variable models. Of the available approximations, variational inference is fast but underestimates the posterior variance, while Markov chain Monte Carlo estimates the variance well but is far too slow in large models (Bishop, 2006; Betancourt, 2020). For policy applications, where the variance must be accurate before committing to large, irreversible decisions, we thus need new methods. Extending Aitchison’s 2019 work on speeding up variational autoencoders, we seek to generalise the use of tensor products for approximate inference.
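To make that concrete, here’s a minimal sketch in the spirit of that work (mine, purely illustrative, not code from the papers) for a toy two-latent chain: rather than averaging importance weights over K joint samples, we average over all K² combinations of per-variable samples, computed as one broadcasted tensor operation. The model and proposals are assumptions chosen for simplicity.

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)
K = 1000                       # samples per latent variable
x = torch.tensor(1.5)          # one toy observation

# Toy chain: z1 ~ N(0,1), z2 ~ N(z1,1), x ~ N(z2,1).
# Deliberately crude proposals q1, q2, for illustration only.
q1, q2 = Normal(0.0, 1.0), Normal(0.0, 2.0)
z1, z2 = q1.sample((K,)), q2.sample((K,))

lw1 = Normal(0.0, 1.0).log_prob(z1) - q1.log_prob(z1)   # log p(z1)/q1(z1), shape (K,)
lp21 = Normal(z1[:, None], 1.0).log_prob(z2[None, :])   # log p(z2_j | z1_i), shape (K, K)
lw2 = Normal(z2, 1.0).log_prob(x) - q2.log_prob(z2)     # log p(x | z2)/q2(z2), shape (K,)

# Average the importance weights over all K*K sample combinations at once:
log_joint = lw1[:, None] + lp21 + lw2[None, :]           # shape (K, K)
log_px = torch.logsumexp(log_joint.flatten(), 0) - 2 * torch.log(torch.tensor(float(K)))
print(log_px.item())   # true log p(x) = log N(1.5; 0, var=3) ≈ -1.84
```

The point of the tensor structure is that 2K samples yield K² weighted combinations, for the price of one K × K contraction rather than K² separate evaluations.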
The end goal is multi-sample inference for any such scheme, and we aim to implement this in a probabilistic programming language (PPL) to maximise usability and impact. There are already ‘tensorised’ PPLs in the weak sense of using tensor operations for arbitrary probabilistic programs under a single inference scheme (e.g. Bingham et al., 2019, which uses stochastic variational inference for all runs). We seek a further abstraction that works for any inference scheme. In our project, ‘tensorised’ instead denotes the tensor products used to achieve the speedup.
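For comparison, here’s roughly what a probabilistic program looks like in Pyro (Bingham et al., 2019), tensorised only in that weak sense. The toy model and parameter names are my own assumptions; whatever program you write, every run is fitted with the same scheme, stochastic variational inference.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

def model(x):
    # Generative program: z ~ N(0,1), x ~ N(z,1).
    z = pyro.sample("z", dist.Normal(0., 1.))
    pyro.sample("x", dist.Normal(z, 1.), obs=x)

def guide(x):
    # Variational posterior q(z) = N(loc, scale), learned by SVI.
    loc = pyro.param("loc", torch.tensor(0.))
    scale = pyro.param("scale", torch.tensor(1.),
                       constraint=dist.constraints.positive)
    pyro.sample("z", dist.Normal(loc, scale))

svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(1000):
    svi.step(torch.tensor(1.5))   # every run: the same inference scheme, SVI
```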
The original plan has passed to a colleague, but I’ll be back.
AI safety
Here’s my sceptic’s guide to AI risk (for relative sceptics). I also contributed a couple of thousand words to the main wiki page. I currently work with the Alignment of Complex Systems Group at Charles University.
At the first AI Safety Camp, I worked with a team on inverse reinforcement learning, designing environments to help us probe the limits of such reward learning. Our work was reused by a team at DeepMind and in an AIES paper.
Before starting on probabilistic programming, I played with an odd alternative ML paradigm called inductive logic programming. This led to my first paper, a negative result.
I also helped with a wee paper offering a sort of counsel of despair about algorithmic fairness.
I’ve also written about the likely overlap between work on current systems and future systems.
Metascience
Over Christmas, instead of studying for quals, I started listing all the failed replications in psychology I’d heard of. This ballooned into a list of hundreds and was taken up by the volunteer org FORRT for permanent maintenance.