
Dynamic programming

The term dynamic programming originally applied only to solving certain kinds of operations research problems, outside the area of computer science, much as the term linear programming did. In that context it has no particular connection to computer programming; the shared name is a coincidence.

In the 1940s the American mathematician Richard Bellman used the term dynamic programming to describe the process of solving problems in which one needs to find the best decisions one after another.


In the context of computer programming, dynamic programming is an important technique from the theory of optimization.

Dynamic programming is efficient at finding optimal solutions in cases with many overlapping subproblems. It solves a problem by combining the solutions to its subproblems, even when those subproblems themselves share sub-subproblems.

To avoid solving these sub-subproblems several times, their results are computed once and stored, starting from the simplest problems, until the overall problem itself is solved. Dynamic programming is thus essentially memoisation of the results of a recurrence, so that no time is wasted solving the same subproblem (or problem) repeatedly.

Dynamic programming can only be applied when the problem at hand has optimal substructure: the optimal solutions of local subproblems can be combined into the optimal solution of the global problem. In simple terms, this means the problem can be solved by breaking it down into simpler problems and solving those.

A simple example of a possible application of dynamic programming is calculating Fibonacci numbers. The nth Fibonacci number is given by the recurrence F(n) = F(n-1) + F(n-2), with F(1) = F(2) = 1. Without dynamic programming, we would probably write the following recursive pseudo-code:

function fibonacci(n)
    if n is less than or equal to 2, return 1
    otherwise, return fibonacci(n-1) + fibonacci(n-2)
end function
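
For concreteness, here is a minimal sketch of the same naive recursion in Python (the function name fib is our own choice):

def fib(n):
    # Naive recursion: the same subproblems are recomputed many times.
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # prints 55

Each call spawns two further calls, so the running time of this version grows exponentially with n.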

From this function the recurrence relation (the relation of the local subproblems to the global problem) is evident, so dynamic programming is applicable here. It is probably obvious that the function above wastes much time recalculating the same Fibonacci numbers over and over again. Dynamic programming eliminates this waste through memoisation. We can therefore write the following pseudo-code instead:

global array fibonacci_numbers
fibonacci_numbers[1] = 1
fibonacci_numbers[2] = 1

for i = 3 to requested number do
    fibonacci_numbers[i] = fibonacci_numbers[i-1] + fibonacci_numbers[i-2]
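
The corresponding bottom-up version in Python might look as follows (the names fib_bottom_up and table are our own choices):

def fib_bottom_up(n):
    # Bottom-up dynamic programming: each Fibonacci number is computed exactly once.
    if n <= 2:
        return 1
    table = [0] * (n + 1)   # table[i] will hold F(i); index 0 is unused
    table[1] = table[2] = 1
    for i in range(3, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))  # prints 55

This runs in time linear in n; since only the two most recent values are ever read, the table could even be replaced by two variables.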

After applying dynamic programming, the code is much faster, because it stores the results of previous subproblems and reuses them to solve the global problem.

In summary, dynamic programming makes use of overlapping subproblems, optimal substructure, and memoisation.


Approaches

Dynamic programming usually comes in two approaches: a top-down approach, in which the problem is solved recursively and the result of each subproblem is stored (memoised) the first time it is computed, and a bottom-up approach, in which the simplest subproblems are solved first and their results are combined into solutions of ever larger problems, as in the Fibonacci table above. A sketch of the top-down approach is given below.
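
As an illustration, a minimal top-down (memoised) version of the Fibonacci example in Python might be (the function name fib_top_down and the cache dictionary are our own choices):

def fib_top_down(n, cache=None):
    # Top-down dynamic programming: recurse, but store each result in a cache.
    if cache is None:
        cache = {}
    if n <= 2:
        return 1
    if n not in cache:
        cache[n] = fib_top_down(n - 1, cache) + fib_top_down(n - 2, cache)
    return cache[n]

print(fib_top_down(10))  # prints 55

Like the bottom-up version, this computes each Fibonacci number only once, but it visits only the subproblems that are actually needed.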

Algorithms that use dynamic programming

External links