The ideas here flourished in the period 1955-1965 (which was roughly the time at which the requirements of algebraic topology were met but those of algebraic geometry were not). From the point of view of category theory, Beck's work on comonads was a summation of those ideas. The difficulties of algebraic geometry with passage to the quotient are acute: it is like doing the non-commutative geometry of Connes, to mention the currently fashionable theory in the area of 'bad quotients', but with polynomials to separate points, rather than general continuous functions. The urgency (to put it that way) of the problem for the geometers accounts for the title of the 1959 Grothendieck seminar *TDTE* on *theorems of descent and techniques of existence*, connecting the descent question with the representable functor question in algebraic geometry in general, and the moduli problem in particular. As with a number of the more abstract flights of the Grothendieck school, later work relied on some of this and bypassed other parts (to the extent that the papers, published only in mimeographed form, may have already become hard to find). The work a few years later of David Mumford in his geometric invariant theory spectacularly mixed scheme-theoretic and categorical techniques with more concrete geometry, to construct moduli spaces for curves and abelian varieties (for the first time, in the required technical sense of 'moduli').

The case of the construction of vector bundles from data on a disjoint union of topological spaces is a straightforward place to start. Suppose given a space X, an open cover {X_{i}} of X, and let Y be the disjoint union of the X_{i}, so that a natural mapping p: Y -> X is given too. We think of Y as 'above' X, with the X_{i} projecting 'down' onto X. In this language, the descent data consist of a vector bundle on Y (that is, a bundle V_{i} given on each X_{i}), and our concern is to 'glue' those bundles V_{i} into a single bundle V on X. What we mean is that V should, when restricted to X_{i}, give back V_{i}, up to a bundle isomorphism.
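This setup can be sketched concretely in a finite toy model (the space, cover, and point labels below are hypothetical illustration data, not anything from the text): the 'space' X is a finite set, the cover is a list of subsets X_i, and Y is their disjoint union with its projection down to X.

```python
# A finite toy model of the setup: X is a finite set, the cover is a list of
# subsets X_i, and Y is their disjoint union (hypothetical example data).
X = {0, 1, 2, 3, 4}
cover = [{0, 1, 2}, {1, 2, 3}, {3, 4}]          # the open sets X_i
assert set().union(*cover) == X                  # they really do cover X

# Y = disjoint union of the X_i: tag each point with the index of its piece.
Y = {(i, x) for i, X_i in enumerate(cover) for x in X_i}

# The natural projection p : Y -> X simply forgets the tag.
def p(point):
    i, x = point
    return x

# p is surjective precisely because the X_i cover X.
assert {p(y) for y in Y} == X
```

A point of X lying in several X_i has several preimages in Y, one per tag; the descent problem is about identifying the data carried by those preimages.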

The data needed are then these: on each overlap X_{ij}, the intersection of X_{i} and X_{j}, we require mappings f_{ij} to use to identify V_{i} and V_{j} there, fiber by fiber. Further, the f_{ij} must satisfy conditions modelled on the reflexive, symmetric and transitive properties of an equivalence relation (the gluing conditions). For example, transitivity reads f_{ij} ∘ f_{jk} = f_{ik} (choosing apt notation). The f_{ii} should be identity maps, and hence symmetry becomes the invertibility of f_{ij} (so that it is fiberwise an isomorphism).
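The gluing conditions can be checked numerically in a small sketch (the rank, chart count, and frame matrices below are hypothetical choices): transition maps of the form f_{ij} = g_i g_j^{-1}, for arbitrary invertible 'frame' matrices g_i, automatically satisfy all three conditions.

```python
import numpy as np

# Toy transition data for a rank-2 bundle over a three-chart cover, treating
# each overlap as a single point (hypothetical example data).  Defining
# f_ij = g_i @ inv(g_j) for invertible g_i guarantees the gluing conditions;
# here we verify them numerically.
rng = np.random.default_rng(0)
g = [rng.standard_normal((2, 2)) + 2 * np.eye(2) for _ in range(3)]

def f(i, j):
    """Transition map identifying V_j with V_i on the overlap X_ij."""
    return g[i] @ np.linalg.inv(g[j])

for i in range(3):
    # Reflexivity: f_ii is the identity.
    assert np.allclose(f(i, i), np.eye(2))
    for j in range(3):
        # Symmetry: f_ji is the fiberwise inverse of f_ij.
        assert np.allclose(f(j, i), np.linalg.inv(f(i, j)))
        for k in range(3):
            # Transitivity (the cocycle condition): f_ij ∘ f_jk = f_ik.
            assert np.allclose(f(i, j) @ f(j, k), f(i, k))
```

In honest geometric examples the f_{ij} vary from point to point of the overlap; the constant matrices here stand in for their value at one fiber.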

These are indeed standard conditions in fiber bundle theory. One important application to note is *change of fiber*: if the f_{ij} are all you need to make a bundle, then there are many ways to make an associated bundle. That is, one can take essentially the same f_{ij}, acting on different fibers.
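Change of fiber can be sketched in the same toy setting (again with hypothetical rank-2 transition matrices): any assignment f_{ij} -> ρ(f_{ij}) with ρ(AB) = ρ(A)ρ(B) turns a cocycle for one fiber into a cocycle for another. Two standard choices are the determinant (fiber R, the determinant line bundle) and the inverse transpose (fiber R^2 again, the dual bundle).

```python
import numpy as np

# Same hypothetical transition data as before: f_ij = g_i @ inv(g_j).
rng = np.random.default_rng(1)
g = [rng.standard_normal((2, 2)) + 2 * np.eye(2) for _ in range(3)]
f = {(i, j): g[i] @ np.linalg.inv(g[j]) for i in range(3) for j in range(3)}

# Two associated bundles built from "essentially the same" f_ij:
det = {ij: np.linalg.det(A) for ij, A in f.items()}      # determinant line bundle
dual = {ij: np.linalg.inv(A).T for ij, A in f.items()}   # dual bundle

for i in range(3):
    for j in range(3):
        for k in range(3):
            # Both associated cocycles inherit transitivity from f,
            # because det and inverse-transpose respect composition.
            assert np.isclose(det[i, j] * det[j, k], det[i, k])
            assert np.allclose(dual[i, j] @ dual[j, k], dual[i, k])
```

The design point is that only the multiplicativity of ρ is used, which is why the same f_{ij} can act on as many different fibers as there are such ρ.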

Another major point is the relation with the chain rule: the discussion there of the way of constructing tensor fields can be summed up as 'once you learn to descend the tangent bundle, for which transitivity is the Jacobian chain rule, the rest is just naturality of tensor constructions'.
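The slogan can be made concrete in a small numerical sketch (the coordinate changes below are hypothetical): for the tangent bundle, the transition functions are the Jacobians of the coordinate changes, and the cocycle condition f_{12} ∘ f_{23} = f_{13} is exactly the chain rule J(t_12 ∘ t_23)(x) = J(t_12)(t_23(x)) · J(t_23)(x).

```python
import numpy as np

def t_23(x):                     # coordinate change from chart 3 to chart 2
    return np.array([x[0] + x[1] ** 2, np.sin(x[1])])

def t_12(x):                     # coordinate change from chart 2 to chart 1
    return np.array([np.exp(x[0]), x[0] * x[1]])

def t_13(x):                     # composite change from chart 3 to chart 1
    return t_12(t_23(x))

def jacobian(func, x, h=1e-6):
    """Numerical Jacobian of func at x by central differences."""
    n = len(x)
    cols = []
    for k in range(n):
        e = np.zeros(n); e[k] = h
        cols.append((func(x + e) - func(x - e)) / (2 * h))
    return np.column_stack(cols)

x = np.array([0.3, -0.7])
lhs = jacobian(t_13, x)                              # f_13 at x
rhs = jacobian(t_12, t_23(x)) @ jacobian(t_23, x)    # f_12 ∘ f_23 at x
assert np.allclose(lhs, rhs, atol=1e-5)              # the chain rule = transitivity
```

Once this cocycle descends the tangent bundle, the tensor bundles come along for free by applying tensor constructions to it, as in the change-of-fiber sketch above.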

To move towards the abstract theory we need to interpret the disjoint union of the X_{ij} now as Yx_{X}Y, the fiber product of p with itself (here expressible as an equalizer). The bundles on the X_{ij} that we must control are V' and V", the pullbacks of V to the fiber product via the two projection maps to Y.
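In the finite toy model of a cover (hypothetical data as before), this identification can be checked directly: the fiber product Y x_X Y, the set of pairs of points of Y lying over the same point of X, is in bijection with the disjoint union of the pairwise intersections X_{ij}.

```python
# Toy model: X a finite set, Y the tagged disjoint union of a cover of X
# (hypothetical example data).
X = {0, 1, 2, 3, 4}
cover = [{0, 1, 2}, {1, 2, 3}, {3, 4}]
Y = {(i, x) for i, X_i in enumerate(cover) for x in X_i}

# Fiber product Y x_X Y: pairs of points of Y lying over the same point of X.
fiber_product = {(a, b) for a in Y for b in Y if a[1] == b[1]}

# Disjoint union of the overlaps X_ij, tagged by the pair of indices (i, j).
overlaps = {(i, j, x)
            for i, X_i in enumerate(cover)
            for j, X_j in enumerate(cover)
            for x in X_i & X_j}

# The bijection ((i, x), (j, x)) <-> (i, j, x).
assert {(a[0], b[0], a[1]) for a, b in fiber_product} == overlaps
assert len(fiber_product) == len(overlaps)
```

Note that the diagonal pieces X_{ii} appear too, which is where the reflexivity condition f_{ii} = id lives.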

Therefore, by going to a more abstract level, one can eliminate the combinatorial side (that is, leave out the indices) and obtain something that makes sense for p not of the special form with which we began. This then allows a category-theoretic approach: what remains to do is to re-express the gluing conditions.