Artificial neural networks from scratch


There is a famous quote by Richard Feynman: “What I cannot create, I do not understand.”

Deep learning can often feel like a black box. I have studied the theory behind neural networks for many years, but in my everyday life as a researcher I rely on toolboxes such as TensorFlow and PyTorch to quickly implement various neural network architectures. The aim of this project was to write my very own neural network toolbox, dubbed ‘TimoFlow’, from scratch, using only basic packages for numerical computing such as NumPy and SciPy.

The toolbox is modular and is designed to adhere to good software engineering practices in Python.
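To illustrate what “modular” means here, below is a minimal sketch of what such a layer module could look like in plain NumPy. The class name and implementation details are hypothetical (this is not TimoFlow’s actual code): the idea is that every module exposes an fprop and a bprop method, so layers can be stacked freely.

```python
import numpy as np

class Linear:
    """Hypothetical fully connected layer module: y = x @ W + b."""
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # He-style initialisation keeps activations well-scaled
        self.W = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
        self.b = np.zeros(n_out)

    def fprop(self, x):
        self.x = x                      # cache the input for the backward pass
        return x @ self.W + self.b

    def bprop(self, grad_out, lr=0.01):
        grad_in = grad_out @ self.W.T   # gradient w.r.t. this layer's input
        self.W -= lr * (self.x.T @ grad_out)
        self.b -= lr * grad_out.sum(axis=0)
        return grad_in
```

Because every module shares the same fprop/bprop interface, a network is essentially just a list of such modules.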


Here’s a simple example. The neural network is defined as a class, and the network architecture is passed as a Python dictionary:

import timoflow as tif

myModel = tif.nnet.myNet({'layers': [tif.nnet.module_linear(256, 128),
                                     # ... remaining layer modules elided ...
                                    ]})

Once the network is initialised, we can propagate a matrix of inputs through the network to get predictions:

y_hat = myModel.fprop(x_in)                
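Conceptually, the forward pass just pushes the batch through each layer in turn. Here is a standalone NumPy sketch of a two-layer forward pass (illustrative shapes and names, not TimoFlow internals):

```python
import numpy as np

def fprop_sketch(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer: linear map + ReLU
    return h @ W2 + b2                 # output layer: raw predictions

rng = np.random.default_rng(0)
x_in = rng.standard_normal((4, 256))                # batch of 4 input vectors
W1, b1 = rng.standard_normal((256, 128)) * 0.1, np.zeros(128)
W2, b2 = rng.standard_normal((128, 10)) * 0.1, np.zeros(10)
y_hat = fprop_sketch(x_in, W1, b1, W2, b2)
print(y_hat.shape)  # one 10-dimensional prediction per input row
```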

To compute the loss (e.g. for plotting), call the fprop function of the loss module:

loss = myModel.loss.fprop(y_hat,y_true)
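As a concrete example of a loss module with this fprop interface, here is a sketch of mean squared error in plain NumPy (my own illustration, not the toolbox code); the bprop method returns the gradient that would be fed back through the network:

```python
import numpy as np

class MSELoss:
    """Illustrative MSE loss module, averaged over all entries."""
    def fprop(self, y_hat, y_true):
        self.diff = y_hat - y_true               # cached for the backward pass
        return float(np.mean(self.diff ** 2))

    def bprop(self):
        return 2.0 * self.diff / self.diff.size  # dL/dy_hat

loss = MSELoss().fprop(np.array([[1.0, 2.0]]), np.array([[0.0, 2.0]]))
print(loss)  # 0.5: squared errors (1, 0) averaged over two entries
```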

Finally, to update the weights, perform backpropagation (which computes the loss internally).
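Conceptually, one backpropagation update chains the forward pass, the loss gradient, and a gradient-descent step on the weights. Here is a self-contained NumPy sketch for a single linear layer (an illustration of the general technique, not the toolbox’s actual call):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 3))        # toy inputs
y_true = rng.standard_normal((8, 2))   # toy regression targets
W, b = rng.standard_normal((3, 2)), np.zeros(2)

def train_step(W, b, lr=0.1):
    y_hat = x @ W + b                              # forward pass
    loss = np.mean((y_hat - y_true) ** 2)          # loss, computed internally
    grad_y = 2.0 * (y_hat - y_true) / y_hat.size   # dL/dy_hat
    W = W - lr * (x.T @ grad_y)                    # dL/dW = x^T dL/dy_hat
    b = b - lr * grad_y.sum(axis=0)                # dL/db sums over the batch
    return W, b, loss

W, b, loss_first = train_step(W, b)
for _ in range(50):
    W, b, loss = train_step(W, b)
print(loss < loss_first)  # True: the loss shrinks as the weights are updated
```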


The toolbox supports fully connected feedforward neural networks with a variety of activation functions (ReLU, sigmoid, tanh) and loss functions (cross-entropy, MSE). More complex architectures such as CNNs or even LSTMs were beyond the scope of this project. I hope you find it useful. And as always, if you have any suggestions, feel free to open an issue on GitHub!

Cheers, Timo

Timo Flesch
PhD Candidate