DSLR (digital single-lens reflex) cameras are becoming more and more popular as the market receives cheaper, less-equipped models suited for beginners, allowing users with lower budgets to buy DSLRs. Not long ago, DSLRs were considered bleeding-edge technology at the pinnacle of personal imaging, with prices to match. Yet even though they are now common among entry-level photographers, the inner workings of these cameras, their components and how those components relate to each other, remain a mystery to many people. This leads to confusion, false interpretations, and ultimately to using the devices far below their potential. Let us start from the very basics and break down how the different components work together.
General Properties of Light
When photons, the particles that "carry" light, arrive from the Sun and strike objects on Earth, part of their energy is absorbed into the material of the object and the rest is scattered away. Two important properties of light are its phase velocity and its frequency; each frequency corresponds to a wavelength. The set of colors a light contains is called its spectrum: a spectrum is composed of multiple wavelengths, i.e. lights of different frequencies, so it contains a certain mix of colors. White light is a combination of all visible colors. The phase velocity describes how fast light travels through a material; the ratio of the speed of light in vacuum to its speed in the material is the refractive index, or index of refraction (IOR). Different light sources have different spectra, and thus contain only some colors. A green leaf, for example, is green because the leaf's material causes the green wavelengths to be scattered back toward the viewer while the other wavelengths are absorbed. Scattering also determines the glossiness of a surface: a mirror scatters light a minimal amount, so a ray bounces off at the same angle at which it arrived, while objects with a matte or flat finish scatter light much more, sending the bounces in all directions.
When light rays impact an object or surface, the light is partly reflected and partly absorbed. The perceived color of the object comes from which parts of the light are absorbed into the object and which are reflected away. As a simplified example, suppose light whose spectrum contains green and violet wavelengths strikes a leaf: the violet wavelengths are absorbed in the leaf, and the green wavelengths are scattered away. All colors are therefore a combination of absorption and reflection, dependent on the spectrum of the light that illuminates the object, and not an absolute property of the material.
Scattering is closely related to reflection and defines the glossiness of a surface, which essentially means how widely light is spread on reflection. When light scatters little from an object or surface, the object appears glossy or specular, or in everyday terms mirror-like; such a surface is said to be reflective. When light is spread into many directions, the scattering is high and the surface appears diffuse, with a matte or flat finish.
Absorption is the effect in which light impacts an object or a surface and its energy is transferred into the matter, usually to the electrons of the material, causing the light to lose intensity; this loss of intensity is called the attenuation of light.
Another important property of light is refraction: the change of direction of light as it passes from one medium into another. A familiar everyday example is light passing from air into water, or from water into air. The phase velocity of the light changes while its frequency remains constant, which means the direction of the light changes upon entering the medium, but its color does not. Refraction is why objects appear distorted, or at a false distance, when viewed from the air into the water. This property of light is essentially what makes optical imaging possible.
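The bending described above follows Snell's law. As a rough sketch in Python (the refractive indices are typical textbook values, not taken from this text):

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refracted angle in degrees."""
    sin_t2 = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(sin_t2) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(sin_t2))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
# bends toward the normal, to about 32 degrees:
angle = refraction_angle(45.0, 1.00, 1.33)
```

The ray bends toward the surface normal when entering the denser medium, which is exactly the effect that makes a submerged object appear displaced.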
Diffraction (in some contexts grouped together with "interference") is a phenomenon of light "bending" around small obstacles or spreading as it passes through small openings. Diffraction causes image degradation in some camera optics and ultimately defines a hard limit on optical resolution.
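That resolution limit can be estimated from the size of the Airy disk, the diffraction pattern a point of light forms. A minimal sketch, assuming green light and a common mid-range aperture (values are illustrative, not from the text):

```python
def airy_disk_diameter_um(wavelength_nm, f_number):
    """Diameter of the Airy disk (to the first minimum) in micrometres:
    d ~ 2.44 * lambda * N, with lambda converted from nm to um."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# Green light (550 nm) at f/8 gives an Airy disk of roughly 10.7 um,
# larger than the pixel pitch of many modern sensors:
d = airy_disk_diameter_um(550, 8)
```

Once the Airy disk grows larger than a pixel, stopping the lens down further softens the image no matter how many megapixels the sensor has.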
Components of a DSLR
The main components of a digital camera are the lens, the sensor, the shutter, and the image processor. By combining the phenomena of light described above, the workings of a camera can be understood. When reflected light, for example from the wall of a building, enters the camera lens, the optical elements (glass) change the phase velocity of the light entering the lens system, and thereby its direction, guiding it onto the sensor at the camera's core.
A digital camera sensor works on essentially the same principle as the human eye. The eye refracts light onto the light-sensitive cells at the back of the eye; the sensor is likewise composed of millions of light-sensitive points, called pixels. If a camera is said to have 18 megapixels, it has 18 million individual points sensing the incoming light, and these pixels are what define the image "resolution". And just as the optic nerve transports the light signal from the eye to the brain to be interpreted as a visual image, the light energy at each pixel is converted into an electrical signal and transported to the image processor, where it is reconstructed as a visual image.
Whenever the shutter button is pressed, an element called the shutter opens. In a DSLR the shutter sits directly in front of the sensor, and it opens for a brief moment to let the light gathered by the lens system reach the sensor. The length of time the shutter remains open is called the shutter speed, and it is usually denoted as a fraction of a second: a shutter speed of 1/100 is one hundredth of a second, 1/60 is one sixtieth of a second, 1/2 is half a second, and so on; values of one and above denote shutter speeds of a second or more. The longer the shutter stays open, the more light is let in. The time the shutter remains open is also called the exposure time, the time the sensor is exposed to the incoming light.
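The fractional notation maps straightforwardly to seconds. A small sketch (the halving relationship between 1/100 and 1/50 is the general rule for any doubling of exposure time):

```python
from fractions import Fraction

def exposure_seconds(shutter_speed):
    """Convert a shutter-speed string such as '1/100' or '2' to seconds."""
    return float(Fraction(shutter_speed))

# At the same aperture and ISO, 1/100 admits half the light of 1/50:
t_fast = exposure_seconds("1/100")   # 0.01 s
t_slow = exposure_seconds("1/50")    # 0.02 s
```

Each doubling of the exposure time doubles the light reaching the sensor, which is why shutter speeds are spaced in powers of two (1/500, 1/250, 1/125, ...).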
A property that is fixed in digital cameras with a built-in lens but variable in DSLRs with interchangeable lenses is the aperture. The aperture is an adjustable diaphragm inside the lens that determines how wide an opening the light passes through on its way to the shutter and sensor; its size is denoted by an f-number or f-stop. The smaller the f-number, the larger the opening and the more light is let through. For example, f/1.4 lenses are among the "fastest" currently available, as the saying goes, while maximum apertures of f/3.5 to f/5.6 are typical of basic lenses and fine for most purposes. Shooting is always a balancing act between shutter speed and aperture, and in the end image sharpness comes down to how fast the exposure can be made. If the shutter is open for four seconds and the subject moves a hand across the frame during the exposure, the hand appears blurry and poorly drawn, because light continuously reflects off the moving hand into the sensor, which captures light from all the different phases of the hand's movement, eventually creating usually undesirable blur. This is why shutter speed is important. While a faster shutter speed "stops" the moment better and produces sharper images, it also means light does not have long to enter through the lens, so the faster the shutter speed, the dimmer the picture. And this is where the aperture comes in: one usually desires lenses with a large maximum aperture to compensate for the shutter speed. While the shutter is open for 1/100 of a second, an f/2.8 lens lets in a lot more light than an f/4.5 lens. If a lens has a maximum aperture of only f/4.5, one way to compensate is to raise the light sensitivity of the camera sensor.
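The light-gathering difference between two f-numbers follows from the area of the opening, which scales as 1/N². A minimal sketch comparing the f/2.8 and f/4.5 lenses mentioned above:

```python
def light_ratio(f_low, f_high):
    """How many times more light an aperture of f_low gathers than f_high.
    Light gathered scales with the opening area, i.e. with 1/N^2."""
    return (f_high / f_low) ** 2

# f/2.8 gathers roughly 2.6x the light of f/4.5 at the same shutter speed:
ratio = light_ratio(2.8, 4.5)
```

This is why standard f-stops (1.4, 2, 2.8, 4, 5.6, ...) are spaced by a factor of √2: each full stop doubles or halves the light.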
Light sensitivity is denoted in ISO values, which come straight from the film-SLR era. The ISO speed, as it is called, referred to the "film speed": the higher the ISO rating of the film, the faster it could expose an image. The analogy matches larger-aperture lenses being called "faster". But there is a trade-off in this method of signal amplification: when the signal gain is raised, the amplifier cannot distinguish the image component of the signal from the noise component, so both are amplified by the same amount. Noise is the iconic red-green-blue speckled "mess" present in all digital imaging systems.
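In a simplified model (ignoring read noise added after amplification, which real sensors have), raising the gain leaves the signal-to-noise ratio unchanged, which is exactly the point made above. The numbers here are illustrative, not measurements:

```python
def amplify(signal, noise, gain):
    """Model ISO as a plain gain stage: it multiplies both the image
    signal and the noise, so it cannot separate one from the other."""
    return signal * gain, noise * gain

# Hypothetical pixel values: going from ISO 100 to ISO 400 (gain 4x)
# scales signal and noise alike, so the SNR stays at 20:
s, n = amplify(100.0, 5.0, 4.0)
snr_before = 100.0 / 5.0
snr_after = s / n
```

The brighter high-ISO image therefore carries proportionally just as much noise, which is why it looks grainier once displayed at normal brightness.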
Difference Between Sensor Sizes
The problem of noise can be counteracted by acquiring a DSLR with a larger sensor. The largest common sensors are 35mm, so-called "full-frame" sensors (the term comes from the size of the film frame used in old SLRs). A full-frame sensor reduces noise by a significant amount, because the individual pixels are larger and distributed over a larger area, minimizing the pixels' interference with each other. A larger sensor also always receives a greater photon flux at given settings than a smaller sensor, increasing the signal-to-noise ratio.
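The photon-flux advantage can be quantified with shot noise, which follows Poisson statistics: the SNR grows as the square root of the photons collected. A sketch with hypothetical photon counts (the 1.6 factor anticipates the APS-C crop factor discussed next):

```python
import math

def shot_noise_snr(photons):
    """Photon (shot) noise is Poisson: SNR = sqrt(N) for N photons."""
    return math.sqrt(photons)

# A full-frame sensor has ~1.6^2 = 2.56x the area of an APS-C sensor,
# so it collects ~2.56x the photons at the same exposure settings:
snr_apsc = shot_noise_snr(10_000)
snr_ff = shot_noise_snr(10_000 * 1.6**2)
```

Collecting 2.56 times the photons improves the shot-noise SNR by a factor of 1.6, roughly two-thirds of a stop of cleaner signal.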
Smaller sensors, in addition to being noisier, have a smaller field of view. For example, many semi-pro cameras (like the Canon EOS 60D) have an APS-C sensor with a crop factor of 1.6. This means the field of view of any given lens matches that of a lens 1.6 times longer on a full-frame body: a 50mm lens behaves like an 80mm lens, an 18mm lens like a 28.8mm lens, and so forth.
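The equivalence is a single multiplication. A minimal sketch using the crop factor and focal lengths mentioned above:

```python
def equivalent_focal_length(focal_mm, crop_factor=1.6):
    """Full-frame-equivalent focal length of a lens mounted on a
    cropped sensor: multiply by the crop factor."""
    return focal_mm * crop_factor

eq_50 = equivalent_focal_length(50)   # behaves like an 80mm lens
eq_18 = equivalent_focal_length(18)   # behaves like a 28.8mm lens
```

This is worth remembering when buying wide-angle lenses for a crop body: an 18mm lens is no longer particularly wide once the crop is applied.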
Depth of Field
Larger sensors produce a shallower depth of field at equivalent framing. To fill the frame with the same subject, a full-frame body needs a longer focal length (or a shorter shooting distance) than a 1.6-crop body, and at the same f-number the longer focal length yields a thinner zone of sharp focus, which is often desirable for isolating a subject from its background.
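One way to see this is through the hyperfocal distance, which shrinks as depth of field deepens. A rough sketch comparing equivalent framing on the two formats; the circle-of-confusion values are conventional assumptions for each format, not from the text:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance H = f^2 / (N * c) + f, where c is the
    circle of confusion conventionally used for the sensor format."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# Equivalent framing at f/8: 80mm on full-frame (CoC ~0.030mm assumed)
# vs 50mm on a 1.6-crop APS-C body (CoC ~0.019mm assumed).
h_ff = hyperfocal_mm(80, 8, 0.030)    # ~26.7 m
h_apsc = hyperfocal_mm(50, 8, 0.019)  # ~16.5 m
```

The full-frame combination has the larger hyperfocal distance, meaning less of the scene falls within acceptable focus: the shallower depth of field described above.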