The sub-problems are solved independently, and their solutions are combined to give a solution to the original problem. This technique has been applied to many well-known problems, such as sorting (e.g. quicksort and merge sort) and fast Fourier transforms. The divide-and-conquer approach provides (at least) three potential benefits: simpler algorithm design, reduced computational complexity, and improved cache locality.
First, it can provide a simple approach to solving conceptually difficult problems, such as the classic Tower of Hanoi puzzle, by reducing the problem size recursively until a trivial base case is reached.
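The recursive reduction can be illustrated with a minimal Tower of Hanoi sketch; the peg names and the `moves` list are illustrative choices, not part of any standard API:

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source to target via spare, recording each move."""
    if n == 0:
        return  # trivial base case: nothing to move
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks out of the way
    moves.append((source, target))              # move the largest disk directly
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
# solving for n disks takes 2**n - 1 moves, so 7 moves here
```

Each call reduces the problem from n disks to two sub-problems of n − 1 disks, until the base case of zero disks is reached.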
Second, even if a solution to the full problem is already known (e.g. for sorting, a brute-force comparison of all elements with one another), divide and conquer can substantially reduce the computational cost (complexity). For example, if the original problem requires O(n²) work for a problem of size n, and the combination (conquer) step of two halves only requires O(n) work, then a divide-and-conquer algorithm reduces the complexity to O(n log n).
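Merge sort is the standard example of this cost reduction: the combine step is a linear-time merge of two sorted halves, giving O(n log n) overall. A minimal sketch:

```python
def merge_sort(a):
    """Sort a list by recursively sorting halves and merging them."""
    if len(a) <= 1:
        return a  # base case: a list of 0 or 1 elements is already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # divide: sort each half independently
    right = merge_sort(a[mid:])
    # combine: merge the two sorted halves in O(n) time
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out
```

With the O(n) merge at each of the O(log n) levels of recursion, the total work is O(n log n), compared with O(n²) for brute-force comparison of all pairs.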
Third, explicitly recursive subdivision can have cache benefits, since eventually the problem reaches a size that fits in cache, whatever the size of the cache(s) might be. Using divide-and-conquer, one can even design an algorithm to use all levels of the cache hierarchy in an asymptotically optimal fashion, without knowing the size of the cache or including it as an explicit parameter. Such an algorithm is called (optimal) cache-oblivious, and this approach has been successfully applied to sorting, FFTs, matrix multiplication, and other problems. (The standard alternative approach, blocking the problem into chunks the size of the cache, requires that a program be altered whenever it is run on a new machine with a different set of cache sizes.)
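The cache-oblivious style can be sketched with a recursive matrix transpose that repeatedly halves the larger dimension; the `threshold` cutoff here is only to limit Python recursion overhead in this illustration, not a tuned cache parameter:

```python
def transpose(src, dst, r0, c0, rows, cols, threshold=16):
    """Cache-obliviously set dst[c][r] = src[r][c] over the given block.

    Recursively splits the larger dimension in half; small blocks are
    transposed directly. No cache size appears anywhere: the recursion
    eventually produces blocks that fit in every level of the hierarchy.
    """
    if rows <= threshold and cols <= threshold:
        for r in range(r0, r0 + rows):          # direct transpose of a small block
            for c in range(c0, c0 + cols):
                dst[c][r] = src[r][c]
    elif rows >= cols:
        half = rows // 2                        # split the row range
        transpose(src, dst, r0, c0, half, cols, threshold)
        transpose(src, dst, r0 + half, c0, rows - half, cols, threshold)
    else:
        half = cols // 2                        # split the column range
        transpose(src, dst, r0, c0, rows, half, threshold)
        transpose(src, dst, r0, c0 + half, rows, cols - half, threshold)
```

Unlike an explicitly blocked transpose, nothing in this code needs to change when the program moves to a machine with different cache sizes.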
See also: Akra–Bazzi method