Correlation function (astronomy)
Astronomers describe the distribution of galaxies in the universe by means of a correlation function. By default, "correlation function" refers to the two-point autocorrelation function, a function of one variable (distance) that describes the excess probability, relative to an unclustered random distribution, that two galaxies are separated by that distance. It can be thought of as a lumpiness factor - the higher its value at some distance scale, the more lumpy the universe is at that scale.
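In a more quantitative formulation of this idea (a sketch along the lines used by Peebles 1980; the symbols ξ(r) for the correlation function and n̄ for the mean galaxy number density are introduced here for illustration), the probability of finding a galaxy in a small volume dV at a distance r from a randomly chosen galaxy is written

    dP = \bar{n} \, [1 + \xi(r)] \, dV

so that ξ(r) = 0 corresponds to an unclustered (Poisson) distribution, while ξ(r) > 0 indicates an excess of galaxy pairs at separation r.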
The following definition (from Peebles 1980) is often cited:
- Given a random galaxy at some location, the correlation function describes the probability that another galaxy will be found within a given distance of it.
However, this can only be correct in a statistical sense: it holds when averaged over a large number of galaxies chosen as the first, "random" galaxy. If only a single galaxy is chosen, the definition breaks down, firstly because it is meaningless to speak of just one "random" galaxy, and secondly because the resulting function would vary wildly depending on which galaxy is chosen, in contradiction with its definition as a function.
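In practice this averaging is built into how the function is estimated from survey data: one counts pairs of galaxies at each separation over the whole catalogue and compares the counts with those expected for an unclustered catalogue. The following is a minimal sketch of such an estimator (hypothetical code, not tied to any particular survey pipeline; it uses the simple "natural" estimator DD/RR - 1 rather than the Landy-Szalay estimator commonly used in real analyses):

```python
import numpy as np

def pair_counts(points_a, points_b, bins, auto=False):
    """Histogram of pairwise separations (brute force, O(N^2);
    adequate only for small illustrative catalogues)."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    if auto:
        # Count each pair once and skip self-pairs.
        d = d[np.triu_indices_from(d, k=1)]
    counts, _ = np.histogram(d, bins=bins)
    return counts

def two_point_correlation(data, randoms, bins):
    """'Natural' estimator xi = DD/RR - 1, with the pair counts in each
    separation bin normalised by the total number of pairs available."""
    dd = pair_counts(data, data, bins, auto=True)
    rr = pair_counts(randoms, randoms, bins, auto=True)
    n_d, n_r = len(data), len(randoms)
    dd_norm = dd / (n_d * (n_d - 1) / 2)
    rr_norm = rr / (n_r * (n_r - 1) / 2)
    return dd_norm / rr_norm - 1.0

# Toy example: uniformly random points in a unit box, so xi(r) should be
# consistent with zero on all scales (no clustering).
rng = np.random.default_rng(0)
galaxies = rng.random((500, 3))   # stand-in "observed" galaxy positions
randoms = rng.random((1000, 3))   # unclustered comparison catalogue
bins = np.linspace(0.02, 0.3, 11)
xi = two_point_correlation(galaxies, randoms, bins)
print(xi)
```

Because the pair counts run over every galaxy in the catalogue, the result is precisely the kind of average over many "first" galaxies that makes the definition meaningful.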
The n-point autocorrelation functions for n greater than 2, and the cross-correlation functions between particular object types, are defined similarly to the two-point autocorrelation function.
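As a sketch of how such definitions extend (again following the style of formulation above; ξ₁₂ and ζ are illustrative symbols), the cross-correlation function between two object types with mean densities n̄₁ and n̄₂ can be defined through the joint probability of finding one object of each type in volume elements dV₁ and dV₂ separated by a distance r,

    dP_{12} = \bar{n}_1 \, \bar{n}_2 \, [1 + \xi_{12}(r)] \, dV_1 \, dV_2

while the three-point function adds a connected term ζ on top of the three two-point contributions,

    dP_{123} = \bar{n}^3 \, [1 + \xi(r_{12}) + \xi(r_{23}) + \xi(r_{31}) + \zeta(r_{12}, r_{23}, r_{31})] \, dV_1 \, dV_2 \, dV_3.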
The correlation function is important for theoretical cosmology because it provides a means of testing models that make different assumptions about the contents of the universe. Computer simulations of galaxy formation, when compared with the observed clustering of galaxies, currently favor models based on cold dark matter.