
Buddy memory allocation

Buddy memory allocation is a memory allocation technique that divides memory into partitions to satisfy a memory request as suitably as possible. The system works by repeatedly splitting memory into equal halves to try to give a best fit.

Advantages

Compared to the memory allocation techniques (such as paging) that modern operating systems such as Microsoft Windows and Linux use, buddy memory allocation is relatively easy to implement, and does not have the hardware requirement of a memory management unit (MMU). Thus, it can be implemented on older computers, such as those based on the Intel 286 and earlier processors.

In addition, in comparison to other simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation, and allows compaction of memory with little overhead.

Disadvantages

Because of the way the buddy memory allocation technique works, there may be a moderate amount of internal fragmentation - memory wasted because the memory requested is a little larger than a small block, but a lot smaller than the next larger block. For instance, a program that requests 65K of memory would be allocated a 128K block, which results in a waste of 63K of memory.
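
To make the arithmetic concrete, here is a small C sketch (the function name is illustrative, not part of any allocator API) that rounds a request up to the next power-of-two block size and reports the wasted space; for the 65K request above it reports a 128K block and 63K of waste.

    #include <stdio.h>

    /* Hypothetical helper: round a request (in K) up to the next
       power-of-two block size (in K), as the buddy system does. */
    static unsigned long block_size_for(unsigned long request_k)
    {
        unsigned long size = 1;
        while (size < request_k)
            size *= 2;
        return size;
    }

    int main(void)
    {
        unsigned long request = 65;               /* the 65K request from the text */
        unsigned long block = block_size_for(request);
        printf("request %luK -> block %luK, waste %luK\n",
               request, block, block - request);  /* 65K -> 128K, waste 63K */
        return 0;
    }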

How it works

The buddy memory allocation technique allocates memory in powers of 2, i.e. blocks of 2^x, where x is an integer. The programmer has to decide on, or to write code to obtain, the upper limit of x. For instance, if the system had 2000K of physical memory, the upper limit on x would be 10, since 2^10 (1024K) is the largest power of 2 that is less than 2000K. Unfortunately, this results in a waste of about 976K of memory. (There are ways of countering this, of course: the programmer could choose to just divide the memory into halves, ignoring the power-of-2 rule, which would result in fuller usage of the memory.)
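
As a sketch of how the upper limit might be obtained in code (the function name is an assumption for illustration), the following finds the largest x such that a 2^x K block still fits in physical memory; for 2000K it yields x = 10, i.e. a 1024K block, leaving about 976K unused.

    #include <stdio.h>

    /* Hypothetical helper: largest x such that a 2^x K block still fits
       within mem_k K of physical memory. */
    static unsigned upper_limit(unsigned long mem_k)
    {
        unsigned x = 0;
        while ((1UL << (x + 1)) <= mem_k)
            x++;
        return x;
    }

    int main(void)
    {
        unsigned long mem = 2000;                  /* 2000K of physical memory */
        unsigned u = upper_limit(mem);
        printf("u = %u, largest block = %luK, unused = %luK\n",
               u, 1UL << u, mem - (1UL << u));     /* u = 10, 1024K, 976K unused */
        return 0;
    }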

After deciding on the upper limit (let's call the upper limit u), the programmer has to decide on the lower limit, i.e. the smallest memory block that can be allocated. This lower limit is necessary so that the overhead of storing used and free memory locations is minimised. If this lower limit did not exist, and many programs requested small blocks of memory like 1K or 2K, the system would waste a lot of space simply remembering which blocks are allocated and unallocated. Typically this number would be a moderate one (like 2, so that memory is allocated in 2^2 = 4K blocks), large enough to keep the bookkeeping overhead low, but small enough to keep wastage low. (Let's call this lower limit l.)

Now that we have our limits, let us see what happens when a program requests memory. Let's say in this system, l = 6, which results in blocks 2^6 = 64K in size, and u = 10, which results in a largest possible allocatable block of 2^10 = 1024K. The following shows a possible state of the system after various memory requests.

(total memory: 16 × 64K = 1024K)

Start:         | 1024K |
After step 1:  | A-64K | 64K | 128K | 256K | 512K |
After step 2:  | A-64K | 64K | B-128K | 256K | 512K |
After step 3:  | A-64K | C-64K | B-128K | 256K | 512K |
After step 4:  | A-64K | C-64K | B-128K | D-128K | 128K | 512K |
After step 5:  | A-64K | 64K | B-128K | D-128K | 128K | 512K |
After step 6:  | 128K | B-128K | D-128K | 128K | 512K |
After step 7:  | 256K | D-128K | 128K | 512K |
After step 8:  | 1024K |

This allocation could have occurred in the following manner:

  1. Program A requests memory 33K..64K in size
  2. Program B requests memory 65K..128K in size
  3. Program C requests memory 33K..64K in size
  4. Program D requests memory 65K..128K in size
  5. Program C releases its memory
  6. Program A releases its memory
  7. Program B releases its memory
  8. Program D releases its memory

As you can see, what happens when a memory request is made is as follows (a code sketch of this splitting step appears after the list).

Look for a memory slot of a suitable size (the smallest 2^x block that is large enough for the request):
  1. If such a slot is found, it is allocated to the program
  2. If not, the system tries to make a suitable memory slot by doing the following:
    1. Split a free memory slot larger than the requested memory size into 2 equal halves
    2. If the lower limit is reached, allocate that amount of memory
    3. Go back to step 1 (look for a memory slot of a suitable size)
    4. Repeat this process until a suitable memory slot is found
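
The following C sketch illustrates the splitting step just described. The offsets in K and the stack-per-order bookkeeping are simplifying assumptions for illustration; a real allocator would typically thread its free lists through the blocks themselves. Starting from one free 1024K block, allocating a 64K block for program A reproduces the row "A-64K | 64K | 128K | 256K | 512K" from the table above.

    #include <stdio.h>

    /* Minimal sketch of the splitting step, with l = 6 (64K) and u = 10 (1024K)
       as in the example.  Blocks are identified by their offset in K; a simple
       stack of offsets per order stands in for the allocator's free lists. */
    enum { MIN_ORDER = 6, MAX_ORDER = 10 };

    static unsigned long free_off[MAX_ORDER + 1][16];  /* free block offsets, per order */
    static int          free_cnt[MAX_ORDER + 1];       /* how many are on each list    */

    static void push_free(int order, unsigned long off)
    {
        free_off[order][free_cnt[order]++] = off;
    }

    /* Allocate a block of the given order, splitting larger blocks as needed.
       Returns the block's offset in K, or -1 if nothing large enough is free. */
    static long alloc_order(int order)
    {
        int j = order;
        while (j <= MAX_ORDER && free_cnt[j] == 0)      /* 1. look for a suitable slot */
            j++;
        if (j > MAX_ORDER)
            return -1;

        unsigned long off = free_off[j][--free_cnt[j]];
        while (j > order) {                             /* 2. split until it fits      */
            j--;
            push_free(j, off + (1UL << j));             /* the upper half stays free   */
        }
        return (long)off;
    }

    int main(void)
    {
        push_free(MAX_ORDER, 0);                        /* start: one free 1024K block */
        long a = alloc_order(MIN_ORDER);                /* program A asks for 64K      */
        printf("A gets a 64K block at offset %ldK\n", a);
        for (int o = MIN_ORDER; o <= MAX_ORDER; o++)
            for (int i = 0; i < free_cnt[o]; i++)
                printf("free: %luK block at offset %luK\n",
                       1UL << o, free_off[o][i]);
        return 0;
    }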

When memory is freed, the reverse happens (a code sketch of this merging step appears after the list).

Free the block of memory, then:
  • Look at the neighbouring block (its buddy) - is it free too?
  • If it is, combine the two into a single larger block, and repeat this check at the next level up, until either the upper limit is reached (all memory is freed) or a non-free neighbouring block is encountered
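
The following sketch, under the same illustrative conventions as the one above (offsets in K, one stack of free offsets per order), shows the merging step. The buddy of a 2^j K block at offset off lies at off ^ (1 << j); this XOR rule is a standard way to locate the buddy, though the helper names here are assumptions. The driver starts from the state after step 4 of the example and replays steps 5 to 8, ending with the whole 1024K block free again.

    #include <stdio.h>

    /* Minimal sketch of the freeing/merging step, with l = 6 (64K) and u = 10
       (1024K) as in the example.  Blocks are identified by their offset in K,
       and the buddy of a 2^j K block at offset off sits at off ^ (1 << j). */
    enum { MIN_ORDER = 6, MAX_ORDER = 10 };

    static unsigned long free_off[MAX_ORDER + 1][16];   /* free block offsets, per order */
    static int          free_cnt[MAX_ORDER + 1];

    static void push_free(int order, unsigned long off)
    {
        free_off[order][free_cnt[order]++] = off;
    }

    /* Remove off from the free list of this order; return 1 if it was there. */
    static int take_free(int order, unsigned long off)
    {
        for (int i = 0; i < free_cnt[order]; i++) {
            if (free_off[order][i] == off) {
                free_off[order][i] = free_off[order][--free_cnt[order]];
                return 1;
            }
        }
        return 0;
    }

    /* Free a block, merging it with its buddy as long as the buddy is free. */
    static void free_order(unsigned long off, int order)
    {
        while (order < MAX_ORDER) {
            unsigned long buddy = off ^ (1UL << order);
            if (!take_free(order, buddy))               /* buddy still in use: stop    */
                break;
            if (buddy < off)                            /* merged block starts at the  */
                off = buddy;                            /* lower of the two offsets    */
            order++;
        }
        push_free(order, off);
    }

    int main(void)
    {
        /* State after step 4 of the example: A@0K and C@64K hold 64K blocks,
           B@128K and D@256K hold 128K blocks; a 128K and a 512K block are free. */
        push_free(7, 384);
        push_free(9, 512);

        free_order(64, 6);    /* step 5: C releases its memory */
        free_order(0, 6);     /* step 6: A releases its memory */
        free_order(128, 7);   /* step 7: B releases its memory */
        free_order(256, 7);   /* step 8: D releases its memory */

        for (int o = MIN_ORDER; o <= MAX_ORDER; o++)
            for (int i = 0; i < free_cnt[o]; i++)
                printf("free: %luK block at offset %luK\n",
                       1UL << o, free_off[o][i]);       /* one 1024K block at 0K */
        return 0;
    }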

  • This method of freeing memory is rather efficient, as compaction is done relatively quickly, with the maximal number of compactions required equal to 2^u / 2^l.

Typically the buddy memory allocation system is implemented with the use of a binary tree to represent used or unused split memory blocks.
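
As an illustration of such a representation (the array-style node numbering used here is one common convention, not necessarily what any particular implementation uses), the following sketch maps the nodes of an implicit binary tree over a 1024K area to the blocks they cover; storing a free/split/used flag per node in the same array is enough bookkeeping to record every split and allocation.

    #include <stdio.h>

    /* Sketch of an implicit binary tree over a 1024K area with 64K leaves.
       Node 1 covers the whole area; the children of node i are 2*i and 2*i + 1.
       A per-node state (free / split / used) kept in a parallel array records
       how every block has been split and which halves are allocated. */
    enum { TOTAL_K = 1024, LEAF_K = 64 };

    static void node_range(unsigned node, unsigned long *off_k, unsigned long *size_k)
    {
        unsigned depth = 0;
        for (unsigned n = node; n > 1; n /= 2)   /* depth of the node in the tree */
            depth++;
        *size_k = TOTAL_K >> depth;
        *off_k  = (node - (1U << depth)) * *size_k;
    }

    int main(void)
    {
        /* Nodes 16..31 are the 16 leaves, i.e. the 64K blocks of the example. */
        for (unsigned node = 1; node <= 31; node++) {
            unsigned long off, size;
            node_range(node, &off, &size);
            printf("node %2u -> %4luK block at offset %4luK\n", node, size, off);
        }
        return 0;
    }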