What is Image Compression?

In the field of image processing, image compression is an important step before processing or transmitting large images and videos. Compression is carried out by an encoder, which generates a compressed form of an image; mathematical transformations play a vital role in the process. A flow chart of the image compression process can be represented as:

In this article, we give an overview of the concepts involved in image compression techniques. An image is generally represented on a computer as a vector of pixels, each pixel represented by a fixed number of bits. These bits determine the intensity of the color (a single grayscale value for a black-and-white image, and three RGB channel values for a color image).



Why do we need image compression?

Consider a black-and-white image with a resolution of 1000 × 1000 where each pixel uses 8 bits to represent its intensity. The total number of bits required is 1000 × 1000 × 8 = 8,000,000 bits per image. Now consider a video made of such images at 30 frames per second: the total for a 3-second video is 3 × 30 × 8,000,000 = 720,000,000 bits.
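The storage arithmetic above can be checked in a few lines of Python (a sketch; the numbers mirror the example in the text):

```python
# Storage needed for an uncompressed 1000 x 1000, 8-bit grayscale image.
width, height, bits_per_pixel = 1000, 1000, 8
bits_per_image = width * height * bits_per_pixel   # 8,000,000 bits

# A 3-second video made of such frames at 30 frames per second.
fps, seconds = 30, 3
bits_per_video = seconds * fps * bits_per_image    # 720,000,000 bits

print(bits_per_image)   # 8000000
print(bits_per_video)   # 720000000
```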

As we can see, storing even a 3-second video takes a very large number of bits. We therefore need a representation that stores the information in the image in a minimum number of bits without losing the character of the image. This is where image compression plays an important role.

Basic steps in image compression:

  • Applying an image transformation.
  • Quantization of the levels.
  • Symbol encoding.

Transforming the image

What is a transformation (mathematically):

It is a function that maps from one domain (vector space) to another domain (another vector space). If T is a transform and f(t): X → X′ is a function, then T(f(t)) is called the transform of the function.

In a simple sense, we can say that T changes the form (representation) of the function by mapping it from one vector space to another, without changing the underlying function f(t), that is, the relationship between the domain and the codomain.

We generally transform a function from one vector space to another because, in the newly projected vector space, we can infer more information about the function.

A real life example of a transformation:

Here the prism acts as a transformation function: it splits white light (f(t)) into its components, that is, a different representation of the white light.

And we note that we can infer more information about the light from its component representation than from the white light itself. This is how a transform helps us understand functions more effectively.

Transformations in image processing

An image is also a function, of pixel location: I(x, y), where (x, y) are the coordinates of a pixel in the image. We generally transform an image from the spatial domain to the frequency domain.
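As a concrete illustration, a small image can be moved to the frequency domain and back. This sketch uses NumPy's 2-D DFT as one common choice of transform; the 4 × 4 image is a made-up example:

```python
import numpy as np

# A tiny 4 x 4 grayscale "image" I(x, y): entries are pixel intensities.
image = np.array([[10, 10, 20, 20],
                  [10, 10, 20, 20],
                  [30, 30, 40, 40],
                  [30, 30, 40, 40]], dtype=float)

# Transform from the spatial domain to the frequency domain (2-D DFT).
F = np.fft.fft2(image)

# The (0, 0) coefficient is the DC term: the sum of all pixel intensities.
print(F[0, 0].real)   # 400.0
print(image.sum())    # 400.0

# The inverse transform recovers the original image exactly.
recovered = np.fft.ifft2(F).real
print(np.allclose(recovered, image))   # True
```

The transform itself loses nothing; compression comes later, when small frequency coefficients are quantized away.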

Why image transformation matters:

  • It becomes easy to identify the main components that make up the image, which helps in building a compressed representation.
  • It facilitates calculations.

Example: computing the convolution of two functions in the time domain versus the frequency domain:

So we can see that the computational cost drops as we shift to the frequency domain: in the time domain the convolution was an integration operation, but in the frequency domain it becomes a simple product of terms. In this way the computational cost is reduced.
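A minimal NumPy sketch of this equivalence, using circular convolution of a small 1-D signal (the 2-D image case works the same way): convolving directly in the time domain gives the same result as a pointwise product of DFTs.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # a small example signal
h = np.array([0.5, 0.5, 0.0, 0.0])   # a simple averaging kernel

# Circular convolution computed directly in the time (spatial) domain.
n = len(x)
direct = np.array([sum(x[k] * h[(i - k) % n] for k in range(n))
                   for i in range(n)])

# The same result via the frequency domain: transform, multiply, invert.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.allclose(direct, via_fft))   # True
```

The direct loop costs O(n²) operations, while the FFT route costs O(n log n), which is where the savings come from for large signals and images.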

In this way, when we transform the image from one domain to another, spatial filtering operations become easier to perform.

Quantization

Quantization is a vital step in which the various intensity levels are grouped into a smaller set of levels based on a mathematical function applied to the pixels. Typically, the new level is determined by choosing a fixed step size m, dividing each pixel value by m, rounding down to the nearest integer, and multiplying by m again.

Basic quantization function: ⌊pixel value / m⌋ × m

Hence, nearby pixel values collapse to a single level, and the number of distinct levels in the image becomes smaller. This reduces redundancy at the intensity level: quantization reduces the number of levels.

For example: (m = 9)

Therefore, we see in the example above that both intensity values are rounded to 18, so we reduce the number of distinct levels (characters involved) in the image specification.
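The quantization function above can be sketched directly. Since the original example figure is not shown, the two input values here (19 and 25) are hypothetical; both fall in the same m = 9 bucket and map to 18:

```python
# Quantization sketch: map each pixel to a multiple of the step size m,
# using the function floor(pixel / m) * m from the text.
def quantize(pixel, m=9):
    return (pixel // m) * m

# Two nearby intensities collapse to the same level, 18, reducing the
# number of distinct levels in the image.
print(quantize(19))   # 18
print(quantize(25))   # 18
```

Note that this step is lossy: 19 and 25 can no longer be told apart after quantization.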

Symbol encoding

In the symbol-encoding stage, the different characters (intensity levels) that appear in the image are coded so that the number of bits used to represent each character is optimal with respect to how frequently that character appears. In simple terms, code words are generated for the different characters present. The goal is to reduce the number of bits needed to represent the intensity levels, representing them in an optimal number of bits.


There are many encoding algorithms. Some of the most popular are:


  • Variable-length Huffman encoding.
  • Run-length encoding.
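Run-length encoding is the simpler of the two: each run of identical pixels is stored as a (value, count) pair instead of repeating the value. A minimal sketch, where the sample row is a hypothetical quantized scan line:

```python
from itertools import groupby

# Encode a sequence of pixels as (value, run-length) pairs.
def rle_encode(pixels):
    return [(value, len(list(run))) for value, run in groupby(pixels)]

# Decode by expanding each pair back into a run of identical pixels.
def rle_decode(pairs):
    return [value for value, count in pairs for _ in range(count)]

row = [18, 18, 18, 18, 45, 45, 18, 18]
encoded = rle_encode(row)
print(encoded)                      # [(18, 4), (45, 2), (18, 2)]
print(rle_decode(encoded) == row)   # True
```

Run-length encoding pairs well with quantization, since quantized images contain long runs of identical levels.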

In Huffman's coding scheme, we find codes such that no code is a prefix of another. The length of each code is determined by the probability of the character's appearance: for an optimal solution, the most probable character gets the shortest code.


Example:

We see the actual 8-bit representation as well as the new shorter codes. The code generation mechanism is:

So we see how the number of bits required for storage is reduced:

Initial representation: average code length: 8 bits per intensity level.

After encoding: average code length = (0.6 × 1) + (0.3 × 2) + (0.06 × 3) + (0.02 × 4) + (0.01 × 5) + (0.01 × 5) = 1.56 bits per intensity level.
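These code lengths, and the 1.56-bit average, can be reproduced with a small Huffman sketch. The symbols a–f are hypothetical stand-ins for the six intensity levels; the probabilities are the ones from the example:

```python
import heapq
from itertools import count

# Probabilities of the six intensity levels from the example above.
probs = {'a': 0.6, 'b': 0.3, 'c': 0.06, 'd': 0.02, 'e': 0.01, 'f': 0.01}

# Each heap entry is (probability, tiebreaker, {symbol: code-so-far}).
tie = count()
heap = [(p, next(tie), {s: ''}) for s, p in probs.items()]
heapq.heapify(heap)

while len(heap) > 1:
    # Merge the two least probable subtrees, prefixing their codes with 0/1.
    p1, _, codes1 = heapq.heappop(heap)
    p2, _, codes2 = heapq.heappop(heap)
    merged = {s: '0' + c for s, c in codes1.items()}
    merged.update({s: '1' + c for s, c in codes2.items()})
    heapq.heappush(heap, (p1 + p2, next(tie), merged))

codes = heap[0][2]
avg_len = sum(probs[s] * len(codes[s]) for s in probs)
print(sorted(len(codes[s]) for s in probs))   # [1, 2, 3, 4, 5, 5]
print(round(avg_len, 2))                      # 1.56
```

The resulting code lengths (1, 2, 3, 4, 5, 5) match the terms in the average-length calculation above, and no code is a prefix of another by construction.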

Therefore, the number of bits required to represent pixel intensity is drastically reduced.

Thus, these mechanisms (transformation, quantization, and encoding) together achieve compression. Once images are compressed, they are easy to store on a device or to transfer. And depending on the transform, the quantization, and the encoding scheme used, decoders are designed with the inverse logic of the compressor so that the original image can be reconstructed from the compressed data.

