Write a MATLAB function that takes an 8-bit image as input and returns the following five values:

1. The entropy of the input image.
2. The entropy obtained by coding horizontally adjacent (non-overlapping) pairs of pixels, divided by 2 since we have half as many double-size “pixels”.
3. The entropy obtained by coding vertically adjacent (non-overlapping) pairs of pixels, divided by 2 since we have half as many double-size “pixels”.
4. The entropy of the image obtained by taking the horizontal differences between adjacent pixels (wrapping from the end of one row to the beginning of the next).
5. The entropy of the image obtained by taking the vertical differences between adjacent pixels (wrapping from the end of one column to the beginning of the next).
While there’s a built-in MATLAB function called entropy, you can’t use it directly, since (for example) the difference images will no longer contain only nonnegative integers. I suggest you write an entropy function from scratch that operates on generic integer (e.g., int32) input. For the pair-coding computations (the second and third values), you can make new “images” with a larger dynamic range using something like 256*(left pixel) + (right pixel). The syntax im(:) may be useful here; it rearranges a matrix, column by column, into a single long column vector.
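The hints above can be sketched as follows. This is only one possible approach: the function names (image_entropies, my_entropy) are placeholders, and the sketch assumes the image has even dimensions so the non-overlapping pairs tile it exactly.

```matlab
function [H1, H2, H3, H4, H5] = image_entropies(im)
    % Promote to a wider type: pair codes and differences fall
    % outside the uint8 range.
    im = int32(im);

    % 1) Entropy of the raw image.
    H1 = my_entropy(im);

    % 2) Horizontally adjacent non-overlapping pairs, each coded as one
    %    double-size symbol 256*left + right; divide by 2 since there
    %    are half as many symbols.
    left  = im(:, 1:2:end);
    right = im(:, 2:2:end);
    H2 = my_entropy(256*left + right) / 2;

    % 3) Same idea for vertically adjacent pairs.
    top    = im(1:2:end, :);
    bottom = im(2:2:end, :);
    H3 = my_entropy(256*top + bottom) / 2;

    % 4) Horizontal differences: transposing first makes the (:)
    %    reshape walk the image in row order, so diff wraps from the
    %    end of each row to the start of the next.
    rowvec = reshape(im.', [], 1);
    H4 = my_entropy(diff(rowvec));

    % 5) Vertical differences: im(:) walks column by column, so diff
    %    wraps from the end of each column to the start of the next.
    H5 = my_entropy(diff(im(:)));
end

function H = my_entropy(x)
    % Entropy in bits/symbol of any integer-valued array (negative
    % values included), estimated from the empirical distribution.
    x = double(x(:));                    % one long column vector
    [~, ~, idx] = unique(x);             % map each value to a symbol index
    p = accumarray(idx, 1) / numel(x);   % empirical probabilities
    H = -sum(p .* log2(p));
end
```

Note that diff produces N-1 differences from N samples; for entropy estimation that off-by-one is negligible, but you could also prepend the first pixel if you want the difference image to have the same number of entries as the original.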
Apply your function above to the three input images noise.png, stripes.png, and unpacking.png. Provide the five entropy values in each case, and (most importantly) interpret the results. Note that each of these images has only 16 gray levels, so without any coding we would use 4 bits per pixel. Each entropy corresponds to the best-case number of bits per pixel we could hope to obtain using the corresponding image/scheme. To interpret the entropies, you will clearly need to look at the source images and think about the corresponding probability distributions.
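A driver for this step might look like the following; image_entropies is a placeholder name for the five-value function the assignment asks you to write, and the filenames are those given above.

```matlab
% Run the five entropy computations on each of the three test images.
files = {'noise.png', 'stripes.png', 'unpacking.png'};
for k = 1:numel(files)
    im = imread(files{k});   % 8-bit grayscale image with 16 gray levels
    [H1, H2, H3, H4, H5] = image_entropies(im);
    fprintf('%s: %.3f  %.3f  %.3f  %.3f  %.3f\n', ...
            files{k}, H1, H2, H3, H4, H5);
end
```

Comparing each value against the 4 bits/pixel baseline shows how much each coding scheme could gain on that particular image.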