Principle: unsupervised learning

We iterate over a population of vectors and, at each step, identify the closest "winner" among a set of representative vectors; the winner is then nudged toward the current sample, which is how the representative vectors are trained.
In the following chart we sample a 2D Gaussian mixture with 2 components, then "forget" the simulated mixture labels and apply the WTA algorithm to estimate the centers of the 2 components.
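A minimal sketch of this loop on simulated data is shown below. The component means, noise scale, learning rate, and iteration count are illustrative assumptions, not values taken from the chart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a 2D Gaussian mixture with 2 components; the labels are
# "forgotten", i.e. never shown to the algorithm.
means = np.array([[-2.0, 0.0], [2.0, 0.0]])
labels = rng.integers(0, 2, size=1000)
data = means[labels] + rng.normal(scale=0.7, size=(1000, 2))

# Two representative vectors, initialised from random data points.
reps = data[rng.choice(len(data), size=2, replace=False)].copy()

lr = 0.05  # learning rate (assumed)
for _ in range(5000):
    x = data[rng.integers(len(data))]                      # random sample
    winner = np.argmin(np.linalg.norm(reps - x, axis=1))   # closest representative
    reps[winner] += lr * (x - reps[winner])                # move winner toward x

print(reps)  # should land near the two component centers
```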



Example: Winner Take All training on MNIST images

A training sample of 3000 MNIST images (28x28) is used to train a WTA with 20 representative vectors.
The training is done over 25000 iterations in a single loop.
Once training is complete, we cycle over a testing sample, extract 10 images at random at each iteration, and compare them with their closest match among the WTA vectors.
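The sketch below follows the setup described above (3000 training images, 20 representative vectors, 25000 iterations, 10 random test images). The data loader (`fetch_openml`), the learning rate, and the train/test split indices are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import fetch_openml

# Load MNIST as flattened 28x28 vectors scaled to [0, 1] (assumed loader).
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X = mnist.data.astype(np.float64) / 255.0

rng = np.random.default_rng(0)
train = X[:3000]       # 3000 training images, as in the text
test = X[3000:4000]    # held-out testing sample (assumed split)

# 20 representative vectors, initialised from random training images.
reps = train[rng.choice(len(train), size=20, replace=False)].copy()

# Single training loop of 25000 iterations.
lr = 0.05  # learning rate (assumed)
for _ in range(25000):
    x = train[rng.integers(len(train))]
    winner = np.argmin(np.linalg.norm(reps - x, axis=1))
    reps[winner] += lr * (x - reps[winner])

# Testing: draw 10 random test images and pair each with its closest
# representative vector; x.reshape(28, 28) and reps[winner].reshape(28, 28)
# can be plotted side by side to reproduce the comparison shown below.
for x in test[rng.choice(len(test), size=10, replace=False)]:
    winner = np.argmin(np.linalg.norm(reps - x, axis=1))
    print("closest representative vector:", winner)
```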



[Figure: real digit image vectors shown alongside their closest matching WTA representative vectors, in three columns of pairs.]