Blitting is similar to sprite drawing, in that both systems reproduce a pattern, typically a rectangular area, at different locations on the screen. Sprites have the advantage of being stored in separate memory, and therefore do not disturb the main display memory. This allows them to be moved about over the display, which represents the "background", with no effect on it. Blitting moves the same sorts of patterns about the screen, but does so by writing into the same memory as the rest of the display. This means that every time a pattern is placed on the screen, the display "under" it is overwritten, or "damaged". It is up to the software to repair this damage by blitting twice: once to restore the saved background over the pattern's old location, and again to draw the pattern at its new location.
As one might imagine, this makes blitting somewhat slower than sprite manipulation. However, blitting has one very big advantage: there is no physical limit to the number of patterns that can be blitted. Blitting can therefore be used to display anything on the screen, including simulated sprites (through the double-write technique noted above), whereas hardware sprites are limited in number.
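The double-write technique above can be sketched in software. This is a minimal illustration, not any particular system's API: the framebuffer is modeled as a 2D list of pixel values, and all names (`Framebuffer`, `read_rect`, `move_pattern`) are hypothetical.

```python
# Sketch of software blitting with background repair. The framebuffer is a
# plain 2D list; every name here is illustrative, not a real graphics API.

class Framebuffer:
    def __init__(self, width, height, fill=0):
        self.width, self.height = width, height
        self.pixels = [[fill] * width for _ in range(height)]

    def read_rect(self, x, y, w, h):
        # Copy out the region that is about to be overwritten ("damaged").
        return [row[x:x + w] for row in self.pixels[y:y + h]]

    def write_rect(self, x, y, rect):
        # Blit a rectangle of pixels into display memory.
        for dy, row in enumerate(rect):
            self.pixels[y + dy][x:x + len(row)] = row


def move_pattern(fb, pattern, old_pos, saved_bg, new_pos):
    # First blit: restore the background saved before the previous draw,
    # erasing the pattern from its old location.
    if saved_bg is not None:
        fb.write_rect(old_pos[0], old_pos[1], saved_bg)
    # Save what lies under the new location, then second blit: draw the
    # pattern there, "damaging" that region.
    x, y = new_pos
    new_bg = fb.read_rect(x, y, len(pattern[0]), len(pattern))
    fb.write_rect(x, y, pattern)
    return new_bg  # the caller keeps this for the next move


fb = Framebuffer(8, 4, fill=7)      # background filled with pixel value 7
pat = [[1, 1], [1, 1]]              # a 2x2 "sprite" pattern
bg = move_pattern(fb, pat, None, None, (0, 0))   # first draw
bg = move_pattern(fb, pat, (0, 0), bg, (3, 1))   # move: erase, then redraw
```

After the move, the old location reads back as untouched background, while the new location holds the pattern; a hardware sprite achieves the same effect with no reads or repair writes at all, which is the source of its speed advantage.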
When blitting was first introduced, a computer's CPU typically had difficulty moving bitmaps around in memory fast enough for blitting to serve as the primary method of display. For some time in the 1980s, many home computers therefore included either a co-processor or a special-purpose chip known as a blitter for this task. The CPU would send bit blit operations to the blitter, which would then carry them out much faster than the CPU could. The latter solution was used on the Atari ST and Amiga, for instance.
Modern graphics accelerators can be regarded as descendants of the early "blitters".