Sunday, March 13, 2016

Simple Autoencoder on MNIST dataset

So, I had fun with Theano and trained an Autoencoder on the MNIST dataset.

An autoencoder is a simple neural network (with one hidden layer) that reproduces the input passed to it. By controlling the number of hidden neurons, we can learn interesting features from the input, and the data can be compressed as well (much like PCA). Autoencoders can be used for unsupervised feature learning, and data transformed by an autoencoder can be used for supervised classification.
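To make the idea concrete, here is a minimal NumPy sketch of a one-hidden-layer autoencoder with tied weights (the post uses Theano; the sizes and random data here are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 784, 10          # 784 = 28x28 MNIST pixels
W = rng.normal(0, 0.01, (n_in, n_hidden))
b_h = np.zeros(n_hidden)
b_o = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # compressed hidden representation (like a nonlinear PCA)
    return sigmoid(x @ W + b_h)

def decode(h):
    # tied weights: reconstruct the input from the hidden code
    return sigmoid(h @ W.T + b_o)

x = rng.random(n_in)              # stand-in for one MNIST image
recon = decode(encode(x))
print(recon.shape)                # (784,)
```

The hidden layer is the bottleneck: with only 10 units, the network is forced to find a compressed representation of the 784 input pixels.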

More about autoencoders is available here. More variants of autoencoders (sparse, contractive, etc.) exist, with different constraints on the hidden-layer representation.

I trained a vanilla autoencoder for 100 epochs with a mini-batch size of 16 and a learning rate of 0.01.
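A hedged sketch of such a training loop, using the hyperparameters above (100 epochs, mini-batch size 16, learning rate 0.01) with plain stochastic gradient descent on the squared reconstruction error. The low-rank synthetic data stands in for the real MNIST images, and the untied two-matrix parameterization is one common choice, not necessarily the one used in the post:

```python
import numpy as np

rng = np.random.default_rng(0)
# low-rank synthetic "images" in [0, 1], standing in for MNIST
Z = rng.random((64, 10))
M = rng.normal(0, 1, (10, 784))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = sigmoid(Z @ M)

n_hidden, lr, batch, epochs = 10, 0.01, 16, 100
W1 = rng.normal(0, 0.01, (784, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.01, (n_hidden, 784)); b2 = np.zeros(784)

losses = []
for epoch in range(epochs):
    epoch_loss = 0.0
    for i in range(0, len(X), batch):
        xb = X[i:i + batch]
        h = sigmoid(xb @ W1 + b1)            # encode
        y = sigmoid(h @ W2 + b2)             # decode / reconstruct
        epoch_loss += 0.5 * np.mean((y - xb) ** 2)
        # manual backprop of the squared reconstruction error
        d_out = (y - xb) * y * (1 - y) / len(xb)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * xb.T @ d_hid; b1 -= lr * d_hid.sum(0)
    losses.append(epoch_loss)

print(losses[-1] < losses[0])     # reconstruction error should fall
```

In Theano the gradients would come from `T.grad` on the symbolic cost rather than being written out by hand as above.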

Here are the figures for digit 7 with hidden sizes of 10 and 20 (the original data was the MNIST training dataset, with 784-unit feature vectors). Each of the digits (whose value was 8) was passed to the autoencoder, and each of the hidden units was visualized (after computing the mean).
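One way to read "visualized after computing the mean" is to average the hidden code over all images of one class, then project each hidden unit back through the decoder. This sketch follows that interpretation; the weights and data are random stand-ins for a trained model and real MNIST digits:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 784, 10
W = rng.normal(0, 0.1, (n_in, n_hidden))   # stand-in for trained weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

digits = rng.random((100, n_in))           # all images of one digit class
codes = sigmoid(digits @ W)                # hidden representations
mean_code = codes.mean(axis=0)             # mean activation per hidden unit

# one 28x28 image per hidden unit: its decoder column scaled by the
# mean activation (tied weights assumed for illustration)
unit_images = [sigmoid(mean_code[j] * W[:, j]).reshape(28, 28)
               for j in range(n_hidden)]
print(len(unit_images), unit_images[0].shape)   # 10 (28, 28)
```

Each resulting image shows what a single hidden unit contributes, on average, to reconstructions of that digit class.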

Wednesday, March 9, 2016

Simple Convolutional Neural Net based object recognition

I built a simple object recognition module over the last weekend: a Theano-based convolutional neural network, plus a simple OpenCV-based image segmentation program.

The output from the image segmentation program is passed to the Theano-based convolutional neural network.
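The pipeline can be sketched end to end in plain NumPy. The toy segmentation step (thresholding plus connected components) stands in for the OpenCV program, and the classifier stub stands in for the Theano CNN; all names and sizes here are illustrative assumptions:

```python
import numpy as np
from collections import deque

def segment(img, thresh=0.5):
    """Return bounding boxes (r0, c0, r1, c1) of connected foreground blobs."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and not seen[r, c]:
                # BFS flood fill to collect one blob
                q = deque([(r, c)])
                seen[r, c] = True
                rs, cs = [r], [c]
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            rs.append(ny); cs.append(nx)
                            q.append((ny, nx))
                boxes.append((min(rs), min(cs), max(rs) + 1, max(cs) + 1))
    return boxes

def resize_nn(crop, size=28):
    """Nearest-neighbour resize of a crop to size x size."""
    ri = np.arange(size) * crop.shape[0] // size
    ci = np.arange(size) * crop.shape[1] // size
    return crop[np.ix_(ri, ci)]

def classify(patch):
    """Classifier stub: a trained CNN would run here instead."""
    return int(patch.mean() > 0.5)

img = np.zeros((64, 64))
img[5:15, 5:15] = 1.0            # one synthetic "object"
preds = [classify(resize_nn(img[r0:r1, c0:c1]))
         for (r0, c0, r1, c1) in segment(img)]
print(len(preds))                # 1: one detection for the one blob
```

In the real module, OpenCV produces the candidate regions and the CNN scores each cropped patch; the structure of the hand-off is the same.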

I used 10,000 images for training and 16,000 images for testing.

The black circles in the video indicate the regions where the classifier is looking, and the red circles indicate true positives found by the algorithm.