I’m a final-year PhD student in the Human Information Processing Lab at the University of Oxford, where I investigate the neural code that allows us to learn multiple tasks without interference. My work combines machine learning, neuroimaging and computational modelling of human behaviour. More specifically, I apply deep learning theory as a mathematical toolkit to derive predictions about how neuronal populations represent information, and test these predictions in recordings of human behaviour and brain activity.
PhD in Experimental Psychology (Computational Neuroscience)
University of Oxford
PGDip in Computational Statistics & Machine Learning, 2018
UCL
BSc in Cognitive Science, 2015
Universität Osnabrück
Neural networks are often used as black-box architectures. They do the job surprisingly well, but their inner workings remain a mystery. This is potentially dangerous when applying them as models of information processing in Neuroscience. I seek to understand how seemingly arbitrary choices, such as the variance of the weights at initialisation, affect how and what a neural network learns.
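As a minimal sketch of the kind of effect I mean (a hypothetical toy example, not taken from my published work): in a two-layer linear network trained by gradient descent, the scale of the weights at initialisation influences how far the weights travel during learning, relative to where they started. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: regress y = x_0 + x_1 from 10-dim Gaussian inputs.
X = rng.standard_normal((200, 10))
y = X[:, 0] + X[:, 1]

def relative_weight_change(init_scale, hidden=50, lr=1e-3, steps=1000):
    """Train a two-layer linear net on MSE by gradient descent and return
    how far the first-layer weights move, relative to their initial norm."""
    W1 = rng.standard_normal((hidden, 10)) * init_scale
    w2 = rng.standard_normal(hidden) * init_scale
    W1_init = W1.copy()
    n = len(y)
    for _ in range(steps):
        h = X @ W1.T                          # hidden activations, (n, hidden)
        err = h @ w2 - y                      # residuals, (n,)
        grad_w2 = h.T @ err / n
        grad_W1 = np.outer(w2, X.T @ err / n)
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    return np.linalg.norm(W1 - W1_init) / np.linalg.norm(W1_init)

rich = relative_weight_change(init_scale=0.05)  # small-variance initialisation
lazy = relative_weight_change(init_scale=0.5)   # large-variance initialisation
# With small initial variance the weights typically move much further,
# relative to their starting norm, than with large initial variance.
print(f"small init: {rich:.3f}, large init: {lazy:.3f}")
```

The same training procedure on the same data thus produces qualitatively different learning dynamics depending only on the initialisation scale.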
There’s a long-standing debate in Neuroscience and Machine Learning concerning the geometry of representations. Some argue that populations of neurons encode information in a high-dimensional, task-agnostic manner, whereas others propose that they learn task-specific, low-dimensional representations. I conduct theoretical work to understand the costs and benefits of these two schemes, and compare my findings to human fMRI data and electrophysiological recordings from macaque brains.
We learn new tasks throughout our lifetime, whereas vanilla neural networks overwrite past knowledge when trained on new tasks, a problem known as catastrophic forgetting. In this strand of research, I use insights from Neuroscience to develop biologically inspired learning algorithms that overcome this limitation.