Introduction to Cameras and Image Processing

Cameras are a biological copy of your eye extended to see what you cannot

– Waiss Kharni SM

When we talk about a camera these days, the first things that come to mind are the different camera modes available on smartphones, i.e., wide, ultrawide, macro, bokeh, front, etc. Some might even think of professional DSLR cameras and CCTV cameras used for surveillance. But in general, when we consider cameras and image processing, the scope and types of cameras are many, differentiated by use case and end customer. Based on profession and use case, cameras can be broadly classified into the fields of:

  • Photography & Digital Arts
  • Law Enforcement
  • Industrial Applications
  • Medical Image Processing
  • Space and Astronomy

We understand concepts better when we are able to relate them to something we already know and comprehend. To understand how a camera works, we will compare it every now and then with the human eye. There is a whole field of scientific research and development based on studying and understanding our surroundings and nature. This field is known as 'biomimicry', and it deals with inventing groundbreaking technology based on concepts we learn from nature. The camera is one such device, modeled on the human eye and its operation.

Background

Before introducing what a camera does and how image processing plays a major role, let's first touch on a few concepts from physics. When we talk about our vision, we already know that we cannot see when there is no light; thus light is the fundamental component required for vision. Visible light, in technical terms, denotes the visible portion of the electromagnetic spectrum.

Ref : Electromagnetic-Waves.jpg (1024×537) (scienceabc.com) 

The electromagnetic spectrum ranges from Extremely Low Frequency (ELF) waves up to extremely high frequency waves known as gamma rays. The term 'electromagnetic' refers to the two components of a light wave: an electric field and a perpendicular magnetic field oscillating together. (Light also exhibits wave-particle duality, behaving as both a wave and a particle, but that is a separate property.) As humans, our view is restricted to the visible spectrum, ranging from a wavelength of about 380 nanometers to 780 nanometers, i.e., everything we see and visualize is electromagnetic radiation that our eyes receive in the visible region. Interestingly, this means we are not seeing the whole picture: the visible band is only a tiny sliver of the full electromagnetic spectrum.

But electronics and embedded systems have made it possible for us to artificially create devices and sensors that can detect, convert, and create images in other spectrum regions as well. Good examples are X-ray imaging in medical practice and radio-wave imaging in space studies, where electromagnetic radiation in the X-ray (high-frequency) and radio (low-frequency) regions is received by a sensor and digitally converted into an image. Ever wondered what images of space look like when you can see them through different frequencies? Here's a glimpse:

Ref : Multiwavelength-Astronomy

Biological Comparison

To understand how cameras work, let’s first understand how the eyes function.

Ref: Cross-Section-of-Eye

Light enters our eyes through the cornea, a dome-shaped transparent layer that bends light to help with focus and also acts as a protective barrier for the eye. The amount of light that enters the eye is then controlled by the iris (the colored part of the eye, for example blue, brown, or black). The iris shrinks and expands the pupil, which is the opening of the eye. The light then passes through the lens, which ensures that it falls on the retina, a photosensitive layer containing millions of photoreceptors. The light falling on the eye contains all wavelengths of color, and blindly converting it to electrical signals would not let us distinguish different colors. To solve this, the photoreceptors are divided into two types: cone cells, which are sensitive to different colors, and rod cells, which are sensitive to the intensity of light. Together these photoreceptors convert visible light into electrical signals that are transferred to the brain via the optic nerve.

Phew… too much biology, I get it. To wrap things up, let's just keep the following parts of the eye in mind, because we will soon compare them with how cameras work:

  • Cornea – Bends light
  • Iris – Controls the opening of the eye
  • Pupil – The opening of the eye
  • Retina – Contains photoreceptors that convert light into electrical signals
  • Optic Nerve – Transfers signals to the brain

Camera Cross Section

Ref: DSLR-Illustration-PattarawitChompipat

Light entering the camera first passes through a convex lens, which bends the light. The rays then travel through a series of complex lens elements that focus them. At the other end of the lens system sits a diaphragm, which controls how big or small the opening of the lens (the aperture) should be. The light then falls onto the image sensor, which contains an array of photodiodes that convert light into electrical signals. A photodiode cannot distinguish color; it only produces a signal proportional to the amount of light falling on it. Thus, if we captured an image straight from a bare sensor and processed it, we would get an image without colors, a 'black and white' image, or in technical terms, a grayscale image. To recover color, every photodiode in the image sensor is covered by a color filter, usually one of three colors: red, green, and blue. The reason is that combinations of these three colors of light can produce the other colors. This is a bit like mixing paints as kids, except that light mixes additively: red and green light together, for instance, produce yellow. The filters on the photodiodes sit under micro lenses, and the entire network of color filters on top of the sensor is known as the 'color filter array'. The electrical signals are then passed through an analog-to-digital converter (ADC) to the image signal processor (ISP). The digital data generated and processed there can be stored and viewed by any application.
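To make the color filter array idea concrete, here is a minimal sketch in Python. It assumes an RGGB Bayer layout (a common arrangement, though the exact pattern varies by sensor) and a deliberately crude demosaic that rebuilds one RGB pixel per 2×2 block; real ISPs use far more sophisticated interpolation:

```python
import numpy as np

def mosaic_rggb(rgb):
    """Simulate sensor readout: each photodiode keeps only its filtered channel."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters
    return raw  # single-channel "grayscale" data, one value per photodiode

def demosaic_block(raw):
    """Crude demosaic: one RGB value per 2x2 block, averaging the two greens."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

# A uniform yellow patch: full red + full green, no blue (additive mixing).
yellow = np.zeros((4, 4, 3))
yellow[..., 0] = 1.0
yellow[..., 1] = 1.0

raw = mosaic_rggb(yellow)         # what the bare sensor actually records
print(demosaic_block(raw)[0, 0])  # → [1. 1. 0.]  (yellow recovered from the mosaic)
```

Notice that the raw array really is colorless: the color only reappears because we know which filter sat above each photodiode, which is exactly the job of the color filter array.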

Now that we have understood how cameras and the eyes work, let’s compare them:

| Operation | Eye | Camera |
| --- | --- | --- |
| Bends light inside | Cornea | Convex lens |
| Controls how much light enters | Iris | Diaphragm |
| Opening of eye/lens | Pupil | Aperture |
| Converts light to electrical signals | Retina containing photoreceptors (cone cells – color, rod cells – brightness/intensity) | Image sensor containing photodiodes (color filter array – color) |

Now, that's a wrap… In the next blog, let's dive a little deeper into the different terminologies and controls used in cameras, along with how they function.

Do drop a comment or like if you found this useful 🙂

