Huffman coding is one of the basic compression methods that has proven useful in image and video compression standards. When applying the Huffman encoding technique on an image, the source symbols can be either pixel intensities of the image or the output of an intensity mapping function.
The first step of the Huffman coding technique is to reduce the input image to an ordered histogram, where the probability of occurrence of a certain pixel intensity value is
prob_pixel = numpix/totalnum
where numpix is the number of occurrences of a pixel with a certain intensity value and totalnum is the total number of pixels in the input image.
Let us take an 8 x 8 image.
The pixel intensity values are :
This image contains 46 distinct pixel intensity values, hence we will have 46 unique Huffman code words.
It is evident that not all possible pixel intensity values need be present in an image; the absent values have zero probability of occurrence and contribute no code words.
From here on, the pixel intensity values in the input Image will be addressed as leaf nodes.
Now, there are 2 essential steps to build a Huffman Tree :
- Build a Huffman Tree :
(a) Combine the two lowest-probability leaf nodes into a new node.
(b) Replace the two leaf nodes with the new node and sort the nodes according to the new probability values.
(c) Continue steps (a) and (b) until we get a single node with probability value 1.0. We will call this node the root.
- Backtrack from the root, assigning ‘0’ or ‘1’ to each intermediate node, till we reach the leaf nodes
In this example, we will assign ‘0’ to the left child node and ‘1’ to the right one.
Now, let’s look into the implementation :
Step 1 :
Read the Image into a 2D array(image)
If the image is in .bmp format, it can be read into the 2D array using the code given in the link here.
Create a Histogram of the pixel intensity values present in the Image
Find the number of pixel intensity values having non-zero probability of occurrence
Since pixel intensity values range from 0 to 255, and not all of them may be present in the image (as is evident from the histogram and the image matrix), the absent values have zero probability of occurrence. This step also serves another purpose: the count of pixel intensity values with non-zero probability gives us the number of leaf nodes in the image.
Calculating the maximum length of Huffman code words
As shown by Y. S. Abu-Mostafa and R. J. McEliece in their paper “Maximal codeword lengths in Huffman codes”: if 1/F(K+3) <= p < 1/F(K+2), then in any efficient prefix code for a source whose least probability is p, the longest codeword length is at most K; if p < 1/F(K+2), there exists a source whose smallest probability is p, and which has a Huffman code whose longest word has length K; and if p < 1/F(K+3), there exists such a source for which every optimal code has a longest word of length K.
Here, F(K) denotes the K-th Fibonacci number.
Gallager noted that every Huffman tree is efficient, but in fact it is easy to see, more generally, that every optimal tree is efficient.
Fibonacci Series is : 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
In our example, the lowest probability p is 0.015625, i.e. 1/p = 64.
For K = 9: F(K+2) = F(11) = 55 and F(K+3) = F(12) = 89.
Since 1/F(K+3) < p < 1/F(K+2), the maximum codeword length is K = 9.
Define a struct which will contain the pixel intensity values (pix), their corresponding probabilities (freq), the pointers to the left (*left) and right (*right) child nodes, and also the string array for the Huffman code word (code).
This struct is defined inside main(), so that the maximum length of code (maxcodelen) can be used to declare the code array field of the struct pixfreq.
Define another struct which will contain the pixel intensity values (pix), their corresponding probabilities (freq), and an additional field which will be used for storing the position of newly generated nodes (arrloc).
Declaring an array of structs. Each element of the array corresponds to a node in the Huffman Tree.
Why use two struct arrays?
Initially, the struct array pix_freq, as well as the struct array huffcodes will only contain the information of all the leaf nodes in the Huffman Tree.
The struct array pix_freq will be used to store all the nodes of the Huffman Tree and the array huffcodes will be used as the updated (and sorted) tree.
Remember that only huffcodes will be sorted in each iteration, and not pix_freq.
The new node created by combining the two nodes of lowest frequency in each iteration will be appended to the end of the pix_freq array, and also to the huffcodes array.
But the array huffcodes will be sorted again according to the probability of occurrence, after the new node is added to it.
The position of the new node in the array pix_freq will be stored in the arrloc field of the struct huffcode.
The arrloc field will be used when assigning the pointer to the left and right child of a new node.
Step 4 continued…
Now, if there are N leaf nodes, the total number of nodes in the whole Huffman tree will be 2N - 1.
After two nodes are combined and replaced by their new parent node, the number of nodes under consideration decreases by one at each iteration. Hence it is sufficient for the array huffcodes, which holds the updated and sorted working set of Huffman nodes, to have a length of nodes (the number of leaf nodes).
Initialize the two arrays pix_freq and huffcodes with information of the leaf nodes.
Sorting the huffcodes array according to the probability of occurrence of the pixel intensity values
Note that it is necessary to sort the huffcodes array, but not the pix_freq array, since we are already storing the location of each pixel value in the arrloc field of the huffcodes array.
Building the Huffman Tree
We start by combining the two nodes with the lowest probabilities of occurrence and then replacing them with the new node. This process continues until we are left with the root node. The first parent node formed will be stored at index nodes in the array pix_freq, and subsequent parent nodes will be stored at higher indices.
How does this code work?
Let’s see that by an example:
After the First Iteration
As you can see, after the first iteration, the new node has been appended to the pix_freq array, and its index is 46. In the huffcodes array the new node has been added at its new position after sorting, and its arrloc points to the index of the new node in the pix_freq array. Also notice that all array elements after the new node (at index 11) in the huffcodes array have been shifted by 1, and the array element with pixel value 188 gets excluded in the updated array.
Now, in the next (2nd) iteration, 170 and 174 will be combined, since 175 and 188 have already been combined.
The indices of the two lowest nodes, in terms of the variables nodes and n, are nodes - n - 2 and nodes - n - 1.
In the 2nd iteration, the value of n is 1 (since n starts from 0).
For the node having value 170: index = nodes - n - 2 = 46 - 1 - 2 = 43.
For the node having value 174: index = nodes - n - 1 = 46 - 1 - 1 = 44.
Hence, even though 175 remains the last element of the updated array, it gets excluded.
Another thing to notice in this code: if, in any subsequent iteration, the new node formed in the first iteration is the child of another new node, then the pointer to it can be obtained via the arrloc stored in the huffcodes array, as is done in this line of code.
Backtrack from the root to the leaf nodes to assign code words
Starting from the root, we assign ‘0’ to the left child node and ‘1’ to the right child node.
Now, since we were appending the newly formed nodes to the array pix_freq, it is expected that the root will be the last element of the array, at index totalnodes - 1.
Hence, we start from the last index and iterate over the array, assigning code words to the left and right child nodes, till we reach the first parent node formed at index nodes. We don’t iterate over the leaf nodes, since those nodes have NULL pointers as their left and right children.
Encode the Image
Another important point to note
Average number of bits required to represent each pixel.
The function codelen calculates the length of each code word, i.e., the number of bits required to represent the pixel.
For this specific example image
Average number of bits = 5.343750
The printed results for the example image
Pixel value -> Code
72 -> 011001     75 -> 010100     79 -> 110111     83 -> 011010
84 -> 00100      87 -> 011100     89 -> 010000     93 -> 010111
94 -> 00011      96 -> 101010     98 -> 101110     100 -> 000101
102 -> 0001000   103 -> 0001001   105 -> 110110    106 -> 00110
110 -> 110100    114 -> 110101    115 -> 1100      118 -> 011011
119 -> 011000    122 -> 1110      124 -> 011110    125 -> 011111
127 -> 0000      128 -> 011101    130 -> 010010    131 -> 010011
136 -> 00111     138 -> 010001    139 -> 010110    140 -> 1111
142 -> 00101     143 -> 010101    146 -> 10010     148 -> 101011
149 -> 101000    153 -> 101001    155 -> 10011     163 -> 101111
167 -> 101100    169 -> 101101    170 -> 100010    174 -> 100011
175 -> 100000    188 -> 100001
Encoded Image :
0111010101000110011101101010001011010000000101111 00010001101000100100100100010010101011001101110111001 00000001100111101010010101100001111000110110111110010 10110001000000010110000001100001100001110011011110000 10011001101111111000100101111100010100011110000111000 01101001110101111100000111101100001110010010110101000 0111101001100101101001010111
This encoded image is 342 bits in length, whereas the total number of bits in the original image is 512 (64 pixels of 8 bits each).
Image Compression Code
Code Compilation and Execution :
First, save the file as “huffman.c”.
For compiling the C file, Open terminal (Ctrl + Alt + T) and enter the following line of code :
gcc -o huffman huffman.c
For executing the code, enter ./huffman
Image Compression Code Output :
Huffman Tree :