From Shannon's *Mathematical Theory of Communication* (1949) we know that, in an optimal code, the message length (in bits) of an event E with probability P(E) is MsgLen(E) = −log_{2}(P(E)).
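
The formula is easy to check numerically. A minimal Python sketch (`msg_len` is an illustrative helper name, not part of any library):

```python
import math

def msg_len(p):
    """Optimal code length in bits for an event of probability p."""
    return -math.log2(p)

# A fair-coin outcome (p = 0.5) costs 1 bit;
# a 1-in-1024 event costs 10 bits.
print(msg_len(0.5))       # 1.0
print(msg_len(1 / 1024))  # 10.0
```

Rare events get long codewords, common events get short ones; that is the whole of the optimality condition.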

From Bayes' theorem we know that the probability of a hypothesis (H) given evidence (E) is proportional to P(E|H) P(H), which is just P(H & E). We want the model with the highest such probability.

Therefore, we want the model which generates the shortest description of the data! Since MsgLen(H & E) = −log_{2}(P(H & E)), the most probable model will have the shortest such message. The message breaks into two parts: −log_{2}(P(H & E)) = −log_{2}(P(H)) − log_{2}(P(E|H)). The first part is the length of the model, and the second is the length of the data given the model.
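
A quick numerical check of the two-part decomposition, using made-up probabilities purely for illustration:

```python
import math

# Hypothetical values for illustration only.
p_H = 0.25         # prior probability of the hypothesis H
p_E_given_H = 0.5  # likelihood of the evidence E under H

len_model = -math.log2(p_H)          # first part: length of the model
len_data  = -math.log2(p_E_given_H)  # second part: length of the data given the model
len_joint = -math.log2(p_H * p_E_given_H)

# The joint message length is exactly the sum of the two parts.
print(len_model, len_data, len_joint)  # 2.0 1.0 3.0
```

Minimising the total length is therefore the same thing as maximising P(H & E).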

So what? MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (longer first part) but probably fits the data better (shorter second part). So an MML metric won't choose a complicated model unless that model pays for itself.
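
The trade-off can be sketched numerically. The Python below compares a fair-coin hypothesis (no parameter to state) against a biased-coin hypothesis whose bias must be stated first; the 7-bit parameter cost and the helper `two_part_len` are illustrative assumptions, not a real MML coding scheme:

```python
import math

def two_part_len(heads, n, theta, param_bits):
    """Total message length in bits: cost of stating the model's
    parameter, plus cost of the data encoded under that parameter."""
    data_bits = -(heads * math.log2(theta) + (n - heads) * math.log2(1 - theta))
    return param_bits + data_bits

# Nearly balanced data: the 7 bits needed to state a bias don't pay off.
len_fair   = two_part_len(52, 100, 0.5, param_bits=0.0)      # fair coin: nothing to state
len_biased = two_part_len(52, 100, 52 / 100, param_bits=7.0)
print(len_fair < len_biased)    # True: MML keeps the simple model

# Strongly skewed data: the better fit more than repays the 7 bits.
len_fair2   = two_part_len(80, 100, 0.5, param_bits=0.0)
len_biased2 = two_part_len(80, 100, 80 / 100, param_bits=7.0)
print(len_biased2 < len_fair2)  # True: the complicated model pays for itself
```

The complicated model wins only when its improved fit to the data saves more bits than its longer statement costs.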

Key points about MML:

- MML is a method of Bayesian model comparison. It gives every model a score.
- MML is invariant under one-to-one transformations of the data and parameters. Unlike many Bayesian selection methods, MML doesn't care if you change from measuring length to volume.
- MML accounts for the precision of measurement. It uses the Fisher information to optimally discretize continuous parameters. Therefore the posterior is always a probability, not a probability density.
- MML has been in use since 1968. MML coding schemes have been developed for several distributions and many kinds of machine learners, including: unsupervised classification, decision trees and graphs, DNA sequences, Bayesian networks, neural networks (one layer only so far), image compression, and image and function segmentation.
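
The Fisher-information point above can be sketched for the binomial case: stating a continuous parameter θ to a finite precision δ turns a prior density f(θ) into a genuine probability, roughly f(θ)·δ, and δ shrinks like 1/√I(θ). The sketch below is illustrative only and drops the constant factors of the full Wallace-Freeman approximation:

```python
import math

def binomial_fisher_info(n, theta):
    """Fisher information for the success probability of n Bernoulli trials."""
    return n / (theta * (1 - theta))

def param_prob(theta, n, prior_density=1.0):
    """Probability (not density) of stating theta to precision
    delta ~ 1/sqrt(I(theta)); constant factors omitted (illustrative)."""
    delta = 1.0 / math.sqrt(binomial_fisher_info(n, theta))
    return prior_density * delta

# More data -> higher Fisher information -> finer optimal precision
# -> smaller probability, i.e. a longer model part of the message.
print(param_prob(0.5, n=10))    # coarse precision, larger probability
print(param_prob(0.5, n=1000))  # fine precision, smaller probability
```

This is why the posterior in MML is always a probability rather than a density: the parameter is only ever stated to finite precision.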

See also:

- Minimum description length -- a non-Bayesian alternative
- Kolmogorov complexity -- absolute complexity; MML is a computable approximation
- Algorithmic information theory

Links:

- Minimum Message Length (MML)
- Minimum Message Length and Kolmogorov Complexity, by C.S. Wallace and D.L. Dowe, *Computer Journal*, Vol. 42, No. 4, 1999.
- Message Length as an Effective Ockham's Razor in Decision Tree Induction, by S. Needham and D. Dowe, Proc. 8th International Workshop on AI and Statistics (2001), pp. 253-260. (Shows how Ockham's razor works fine when interpreted as MML.)
- Monash MML Wiki -- a small collection of MML information (Lloyd's page above remains more comprehensive).
- MML researchers and links
