Statistical learning theory

Statistical learning theory was developed between the 1960s and the 1990s, mainly by Vladimir Vapnik and Alexey Chervonenkis. The theory describes the learning process from a statistical point of view.

This foundational theory unifies such disparate methods as neural networks, principal components analysis, and maximum likelihood estimation.

The theory covers four parts (following "The Nature of Statistical Learning Theory"):

- the theory of consistency of learning processes;
- the nonasymptotic theory of the rate of convergence of learning processes;
- the theory of controlling the generalization ability of learning processes;
- the theory of constructing learning algorithms.
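
Throughout these parts the central objects are the risk functional and its empirical counterpart. As a brief sketch in the notation of Vapnik's book (a loss L, a parametrized function class f(x, α) with α ∈ Λ, and ℓ training examples drawn from an unknown distribution P):

    R(\alpha) = \int L(y, f(x, \alpha)) \, dP(x, y),
    \qquad
    R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} L(y_i, f(x_i, \alpha))

Learning amounts to choosing α so that R(α) is small while only R_emp(α) is observable; the four parts above analyze when and how fast this succeeds.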

The last part of the theory introduced a well-known learning algorithm: the support vector machine.
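
As an illustration, here is a minimal sketch of training a soft-margin support vector machine, assuming the scikit-learn library is available; the dataset and parameter values are arbitrary demonstration choices, not part of the theory:

    # Minimal SVM sketch using scikit-learn (assumed available).
    # Dataset and parameters are illustrative only.
    from sklearn import datasets
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Load a small benchmark dataset and hold out a test set.
    X, y = datasets.load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Soft-margin SVM with an RBF kernel; C trades margin width
    # against training errors.
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))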

Statistical learning theory contains important concepts such as the VC dimension and structural risk minimization, and it forms the foundation of a rigorous understanding of machine learning.
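
To make structural risk minimization concrete, the following sketch applies Vapnik's VC-type bound for the 0-1 loss, R ≤ R_emp + sqrt((h(ln(2n/h) + 1) + ln(4/δ))/n), to a nested sequence of model classes; the sample size, VC dimensions, and empirical risks below are invented numbers for illustration:

    import math

    def vc_confidence(h, n, delta=0.05):
        """Confidence term of the VC bound for VC dimension h,
        sample size n, and confidence level 1 - delta."""
        return math.sqrt((h * (math.log(2 * n / h) + 1)
                          + math.log(4 / delta)) / n)

    n = 1000  # sample size (illustrative)
    # (VC dimension, empirical risk) for a nested sequence of models;
    # richer classes fit the training data better but pay a larger
    # confidence term.
    structure = [(5, 0.20), (20, 0.10), (100, 0.08), (500, 0.05)]

    for h, r_emp in structure:
        print(f"h={h:4d}  bound={r_emp + vc_confidence(h, n):.3f}")

    # SRM selects the class minimizing the guaranteed bound,
    # not the class with the smallest empirical risk.
    best_h, _ = min(structure, key=lambda s: s[1] + vc_confidence(s[0], n))
    print("SRM selects VC dimension:", best_h)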

This theory is related to mathematical subjects such as probability theory, mathematical statistics, and functional analysis.

References

Vapnik, Vladimir N. (1995). The Nature of Statistical Learning Theory. New York: Springer-Verlag.