In 1960 Novotny and Alvis performed the first successful fluorescein angiogram on a human being using black-and-white film. That event marked the advent of modern ophthalmic photography. Subsequent development of sophisticated instrumentation has made it possible to consistently produce precise documentation of subtle changes within the eye. Intravenous fluorescence angiography has contributed greatly to a better understanding of the posterior segment of the eye and continues to be of particular importance in the diagnosis and treatment of many of the diverse disease processes that affect it. Since 1976, endothelial specular photomicrography has made possible the documentation of the cell density of the posterior layer of the cornea of the living eye. Moreover, photography plays an indispensable role in many areas of research and teaching. With the techniques of optical coherence tomography (OCT), external photography, photo slit-lamp biomicrography, fundus photography, fluorescein and indocyanine green angiography, video recording, and endothelial specular photomicrography, ophthalmic photography today is a vital, well-established adjunct to ophthalmology.
Digital imaging has improved dramatically in quality and has replaced film. Very few clinics continue to use film, so the discussion here is geared toward ophthalmic photography using digital imaging.
Although this chapter cannot cover the full scope of ophthalmic photography, it provides an introduction to its most practical applications and the essential concepts for understanding digital imaging. OCT has become a major diagnostic imaging tool in ophthalmology, but its use is not covered in this chapter (see Ch. 41 ).
Even with the changeover to digital media, much of the basic film terminology still applies to ophthalmic photography. This chapter focuses on digital imaging and makes comparisons to film when helpful and appropriate.
Standard, handheld digital cameras have either fixed or interchangeable lenses. They can be broken down into three basic types, based on their viewfinder configuration: single-lens reflex (SLR) cameras, those that use a liquid-crystal display (LCD) screen as a viewfinder, and mirrorless cameras that use an electronic viewfinder. Numerous lenses are available, and their important characteristics include focal length, lens speed, depth of field, and resolution. Exposure is determined with an exposure meter and regulated by means of an adjustable diaphragm, or f-stops, on the lens and shutter speeds in the camera body.
The imaging sensor and film
The digital imaging sensor is the light-gathering component in digital photography. It comes in different sizes and types, but whereas the choice of film and developing combinations was vast and had to be established before the photograph was made, once a digital camera with a particular sensor is purchased, the choices for photography are made using settings available on the camera. Color balance, black-and-white imaging, and light sensitivity can all be selected at the camera. This is markedly different from the days of film, which required stocking different film types to accommodate different lighting situations. Once a roll of film was loaded, every picture on that roll had to be taken within the characteristics of that film type. With digital, each image can be taken at entirely different settings, frame by frame if desired.
The focal length of a lens is the distance between the lens and the imaging plane in the camera (measured from the principal plane of the lens when focused at infinity). The focal length, expressed in millimeters, is engraved on most lens systems, along with the serial number and trade name. The normal, or standard, camera lens is that which will produce an image of a scene in the same perspective as that seen by the unaided eye. The normal focal length for a camera is roughly equivalent to the diagonal measurement of the image sensor. Different-sized cameras often have different-sized digital sensors, and consequently, the lenses used have different magnification settings.
Digital cameras still refer to their focal length in comparison to 35-mm film. For a 35-mm film camera (with a film image area measuring 24 × 36 mm), the standard lens is 50 mm in focal length and has approximately a 45- to 55-degree angle of view. A wide-angle lens, such as a 24 mm, has a short focal length and an angle of view of about 74 degrees. A telephoto lens, such as a 135 mm, has a long focal length and is restricted to an angle of 18 degrees. Digital cameras often state their effective image magnification in relation to the 35-mm standard. For example, a typical digital SLR camera has an image sensor that is two-thirds the linear size of a 35-mm frame of film; consequently, a 40-mm lens on such a digital SLR is equivalent to a 60-mm lens on a film camera.
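The 35-mm equivalence arithmetic works out as follows. This is a minimal sketch; the function name and "crop factor" framing are illustrative, not terms the chapter defines:

```python
def equivalent_focal_length(actual_mm, crop_factor):
    """35-mm-equivalent focal length: actual focal length times the crop factor.

    The crop factor is the ratio of the 35-mm frame's diagonal to the
    smaller sensor's diagonal.
    """
    return actual_mm * crop_factor

# A sensor two-thirds the linear size of a 35-mm frame has a crop factor of
# 1 / (2/3) = 1.5, so a 40-mm lens frames the scene like a 60-mm lens on film.
print(equivalent_focal_length(40, 1.5))  # 60.0
```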
Lens “speed,” or a lens’s widest aperture, refers to the maximum light-gathering power of the lens. It is expressed in the form of an “f” number, which represents the ratio of the focal length to the diameter of the lens aperture. Engraved on the lens system, it may appear as f2.5 or f1:2.5 (or merely 1:2.5) ( Fig. 39.1 ). The diameter of an f1 lens is equal to its focal length, and it is termed a fast lens because it permits a great amount of light to reach the sensor. Such a lens is well suited for photography in available light. Most lenses have adjustable f-stops, operated by a diaphragm between lens elements.
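The f-number relationship can be checked with a one-line calculation. A minimal sketch; the function name is illustrative:

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """f-number = focal length divided by the effective aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

# An f1 lens has an aperture diameter equal to its focal length.
print(f_number(50, 50))   # 1.0
# A 100-mm lens with a 40-mm effective aperture is an f2.5 lens.
print(f_number(100, 40))  # 2.5
```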
Depth of field
The distance between the nearest and farthest objects from a camera that are acceptably sharp is called the depth of field. Depth of field is a function of focal length and lens aperture (or f-stop). It increases with the use of larger f-stops (smaller apertures) and decreases with smaller f-stops (larger apertures) ( Fig. 39.2 ). Depth of field is also greater for a wide-angle lens (short focal length) and less for a telephoto lens (long focal length) when both are set at the same f-stop. Depth of field is further affected by the camera-to-subject distance. The effective distance covered by the depth of field when a lens is focused at or near infinity is far greater than that of the same lens focused on a very near object. The closer a lens is focused, the more “shallow” becomes the depth of field.
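These relationships can be sketched with the standard thin-lens depth-of-field approximation. The function name and the assumed 0.03-mm circle of confusion (a value conventionally used for the 35-mm format) are illustrative assumptions, not figures from this chapter:

```python
def depth_of_field(focal_mm, f_stop, subject_mm, coc_mm=0.03):
    """Approximate depth of field (mm) from the thin-lens formulas.

    coc_mm is an assumed circle of confusion, typical for a 35-mm frame.
    """
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    if subject_mm >= h:                 # focused at or beyond the hyperfocal distance
        return float("inf")
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

# A 100-mm lens focused at 1 m: stopping down from f4 to f16 deepens the zone
# of acceptable sharpness; moving the subject closer makes it shallower.
print(depth_of_field(100, 4, 1000))   # ~22 mm
print(depth_of_field(100, 16, 1000))  # ~87 mm
```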
Because most photography in ophthalmology is done at high magnifications with a short distance between the camera and the subject, natural limitations in depth of field must be compensated for by using the highest possible f number (referred to as stopping down) and by using a light source that is sufficiently bright to provide the necessary exposure. The fundus camera, by design, has a single maximum aperture. Consequently, retinal photographs have a very shallow depth of field, and good focus can be difficult to achieve.
Resolution, or resolving power, is the ability of a lens, a digital sensor, or the eye to distinguish fine detail. This resolving ability is measured by the highest number of line pairs per millimeter that are definable without blurring together.
Shutter speed is measured by fractions of a second, indicated by numbers, such as 30, 60, and 125, which stand for 1/30, 1/60, and 1/125 of a second. These numbers represent the length of time the shutter is open when the shutter release is activated: the interval during which the light passing through the lens strikes the image sensor. A slow shutter speed allows more light to pass to the image sensor than a faster shutter speed.
A specific area should be set aside for ophthalmic photography if possible. The photographic environment, including background and room lighting, can thus be better controlled, providing more consistent results. A degree of privacy, an important courtesy to the patient, will also be ensured.
The sensitivity of an imaging sensor refers to its sensitivity to light, expressed as ISO numbers established by the International Organization for Standardization. A higher number indicates that the sensor is more sensitive, that is, requires less light to create an image. The sensitivity of a digital sensor is established during its manufacture and usually has a base value of 100 to 200 ISO. This number can be adjusted higher to optimize results in low-light situations. Set to a low ISO number, an image sensor produces less image noise and higher resolution than when set to a higher ISO number. The sharp detail offered by low ISO settings makes them valuable for scientific or diagnostic study, particularly when large prints are to be made. Color fundus photographs are normally taken with the camera set to 100 ISO. Because the exciter and barrier filters used in fluorescein angiography absorb a great deal of light, the fundus camera software is set to a higher ISO setting (400–800 ISO) to compensate.
Digital imaging chips
The charge-coupled device (CCD), complementary metal oxide semiconductor (CMOS), and Foveon chips are the three currently competing technologies within digital imaging. The CCD used in still photography was adopted from video technology and, with the benefit of considerable fine-tuning since the mid-1980s, has evolved into a high-quality imaging standard. CMOS is a more recent challenger. Although far less expensive to produce than a CCD, it initially suffered from poor contrast and image noise. These problems have been resolved, and the CMOS chip is now used in most cameras. The Foveon chip promises even more resolution and detail, but has been slow to enter the market.
Whichever chip is used, it is located in the camera at the same focal plane as film, known as the “film plane.” Working in conjunction with the chip’s ISO sensitivity, the shutter speed and lens aperture are adjusted according to the prevailing light levels.
Digital sensor chips come in many sizes. Because the chips are expensive to produce, manufacturers have tried to reduce cost by keeping the chip size small. Most digital SLR cameras are currently equipped with chips that are two-thirds the size of 35-mm film. Consequently, images produced on these digital cameras appear magnified compared with the same image on film. Because most fundus cameras are designed for 35-mm film, either a reducing lens is placed between the fundus camera and the digital camera back or a more expensive, larger chip is used.
Unlike film, which needs to be matched to the correct color temperature of the light source (daylight or tungsten), digital cameras correct for proper color balance by adjusting what is known as “white balance.” Although the white balance can be set to correct itself automatically, it is sometimes preferable to set it to match the specific type of lighting environment.
All lighting sources have a color temperature, expressed in kelvins (K). Normal daylight, around midday, has a color temperature of approximately 5400 K; the light produced by an electronic flash has the same color temperature. Standard household lighting is often much warmer in color, and tungsten bulbs are approximately 3000 K. Fluorescent and halogen lamps all have their own color temperatures, some cooler, some warmer. Whereas the human eye and brain constantly adapt to correct for these color imbalances, the camera must be told to adjust. Otherwise, a daylight color balance setting used under indoor tungsten lights will produce an orange cast, and a tungsten color balance setting used outdoors will produce a blue cast. Fortunately, most digital cameras can automatically adjust for color balance changes by constantly monitoring lighting conditions and self-adjusting with the “auto white balance” (AWB) setting. However, it is sometimes better to make this setting manually, especially in complex lighting conditions that AWB cannot accommodate.
Digital imaging software
The image on a 35-mm slide may be viewed simply by placing it in front of a light source, such as a light box. Viewing a digital image is not as direct: without a computer running compatible software, the image bits are meaningless. Imaging software is just as important as the hardware used to capture the image. Basic software suffices for looking at a few images, but managing thousands of images from hundreds of patients calls for a more sophisticated program, which not only allows more efficient review of these images but also provides a system of image management that far exceeds the capabilities of film. A number of professional ophthalmic image management programs are available, and they all do a good job of taking, storing, and retrieving photographs.
Among many other factors, the quality of a digital image is dependent on pixel resolution. The term “pixels per inch” (ppi) refers to the density of pixels in an image. “Dots per inch” (dpi) refers to the number of dots used by a printer to make a print based on the ppi of the image. In general, the higher the dpi, the finer is the detail in a print. However, how the dots are managed is just as important. A dye sublimation printer may have a low dpi rating, such as 300, but it uses heat to increase the tonal range of a print. Inkjet and laser printers use various algorithms to enhance performance beyond their dpi rating.
Image resolution is affected by many factors, including lens quality and exposure setting. Whereas different films have different resolving powers, digital imaging resolution is largely determined by the density of pixels. High resolution begins with a large number of pixels on an imaging chip. However, the amount of resolution actually needed is relative to how the image is used. Smaller images require fewer pixels than larger ones. Images displayed on a computer monitor require fewer pixels per inch than those displayed as a print.
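The link between pixel count, pixel density, and usable output size can be made concrete. A minimal sketch; the function name and the example pixel counts are illustrative:

```python
def print_size_inches(width_px, height_px, ppi):
    """Largest print, in inches, that a given pixel count supports at a chosen density."""
    return width_px / ppi, height_px / ppi

# A 3000 x 2000-pixel image printed at 300 ppi fills roughly 10 x 6.7 inches;
# at 100 ppi, adequate for on-screen display, the same pixels cover a far larger area.
print(print_size_inches(3000, 2000, 300))
print(print_size_inches(3000, 2000, 100))  # (30.0, 20.0)
```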
Numerous imaging file formats accommodate different uses of images. The two main categories are uncompressed and compressed. An uncompressed file stores every pixel exactly as captured; a commonly used uncompressed format is the tagged image file format (TIFF). Image compression can be either lossless or lossy. Lossless compression reduces the file size by generalizing areas of common data. For instance, a large area of 100% black is stored with a more efficient description than repeating every pixel as black. Lossy compression takes more liberties in describing the data, but also enables more efficient storage and transport of that data. Joint Photographic Experts Group (JPEG) is a common lossy format that can be adjusted to various levels of compression. It is also widely recognized by most imaging programs and across Macintosh and Windows platforms, making it a format of choice when maximum versatility is required. Because there is some quality loss in JPEG, although slight to negligible at its “best” setting, it is best practice to work with an image in a lossless format until all corrections are made; as the final step, the image can be saved as a JPEG file. Readjusting and resaving images already in JPEG form may introduce undesirable image artifacts.
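The lossless idea described above, storing a large run of identical pixels as a short description rather than repeating every pixel, can be sketched with simple run-length encoding. This toy encoder is an illustration of the principle, not the scheme any particular file format actually uses:

```python
from itertools import groupby

def rle_encode(pixels):
    """Replace each run of identical pixel values with a (value, count) pair."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    return [value for value, count in runs for _ in range(count)]

row = [0] * 6 + [255] * 3 + [0]       # six black pixels, three white, one black
runs = rle_encode(row)                # [(0, 6), (255, 3), (0, 1)]
assert rle_decode(runs) == row        # lossless: the original row is recovered exactly
```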
Many other file format types are available and are used by different camera manufacturers. It is important to know these formats in order to use the images outside that manufacturer’s software.
Exposure is the total volume of light that strikes the image sensor. It is the product of light intensity and duration of exposure. Correct exposure is achieved through the balanced interaction of sensor sensitivity, brightness of illumination, f-stop, and shutter speed.
With the use of available light, the shutter speed is used to control the duration of the exposure and the f-stop is used to regulate the intensity of the light striking the film. Each full f-stop setting (f8, f11, f16, and so on) and each shutter speed setting (1/30, 1/60, and 1/125 and so forth) affects the total exposure by a factor of two. For example, if the camera were set at a correct exposure of 1/60 of a second at f11 and then the lens were opened to f8, twice as much light would reach the sensor and the image would be overexposed. Conversely, if the lens aperture were closed down to f16, only half the needed light would reach the sensor, and the image would be underexposed. Likewise, if the f-stop remained constant and the shutter speed varied, the same alteration in total exposure would result. By decreasing the shutter speed from 1/60 to 1/30 of a second, the exposure is doubled; by increasing the shutter speed from 1/60 of a second to 1/125 of a second, the exposure is halved. Correct settings therefore are vital to correct exposure. If either f-stop or shutter speed is off by just one setting, a serious overexposure or underexposure may result on the sensor.
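The doubling-and-halving arithmetic above can be expressed compactly. A minimal sketch; the helper name is illustrative:

```python
def exposure_factor(stops):
    """Each full stop of f-stop or shutter-speed change alters total exposure by a factor of 2."""
    return 2 ** stops

# Opening from f11 to f8 (+1 stop) doubles the light reaching the sensor;
# closing from f11 to f16 (-1 stop) halves it. Aperture and shutter changes
# combine: f11 -> f8 together with 1/60 s -> 1/125 s is +1 - 1 = 0 stops,
# leaving the total exposure unchanged.
print(exposure_factor(1))      # 2
print(exposure_factor(-1))     # 0.5
print(exposure_factor(1 - 1))  # 1
```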
Whenever a light source other than a flash is used, the correct exposure can be determined with the light meter, which is built into most consumer digital cameras. For the best exposures, it is good to be familiar with how your camera’s light meter reads the light in the viewfinder. If the light meter is allowed to respond to an area that is either much brighter or much darker than average, a proportionate underexposure or overexposure will result.
With the use of electronic flash, the duration of the exposure is usually determined by the duration of the flash. Because most modern electronic flash units have a duration of approximately 1/1000 of a second, the length of the exposure is 1/1000 of a second. This very short, motion-stopping duration makes the electronic flash ideally suited for eye photography.
Because the duration of the exposure is now essentially beyond our control, correct exposure is achieved by regulating the intensity of the light, either at its source or when it passes through the lens, or both. In most cases, the intensity of the flash source itself need not be altered. In fact, it is desirable to have ample light to guarantee a good exposure at very high f-stops, thereby ensuring the greatest possible depth of field at close working distances.
Electronic flash sources have different amounts of light output, and exposure is greatly affected by the flash’s distance from the subject. Flash illumination falls off with the square of that distance, so the farther the flash is from the subject, the darker the result; either the camera’s lens aperture must be opened or its ISO set higher to compensate.
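Because flash illumination follows the inverse-square law, the falloff with distance can be quantified. A minimal sketch; the function name is illustrative:

```python
def relative_illumination(reference_distance, new_distance):
    """Fraction of the reference illumination reaching a subject at a new distance
    (inverse-square law)."""
    return (reference_distance / new_distance) ** 2

# Doubling the flash-to-subject distance leaves only one-quarter of the light,
# a two-stop loss that must be recovered via aperture or ISO.
print(relative_illumination(1.0, 2.0))  # 0.25
```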
The camera’s shutter speed also must be set at the speed prescribed by the manufacturer to provide proper synchronization (maximum light output coincident with a fully open shutter). Shutter speeds that are too slow may allow extraneous ambient light to affect the exposure. Shutter speeds that are set faster than the shutter’s synchronization result in partial or no pictures ( Fig. 39.3 ).
A digital single-lens reflex (DSLR) camera with interchangeable lenses is recommended for external eye photography. The DSLR feature permits viewing and photography through the same lens system. Composition and sharp focus are thus made much simpler because the image being photographed is seen in the viewfinder exactly as it will appear on the sensor.
To achieve the necessary magnification, a macro lens (specially designed to permit focusing on very near objects) is recommended. Macro lenses for DSLR cameras are available from approximately 50 to 200 mm in focal length. The advantage of a longer (≥100 mm) focal length macro lens is that it permits a greater working distance between camera and subject and produces less perspective distortion. A good, practical magnification-to-working distance ratio can be achieved with the use of a 100-mm lens. Photography of an intraoperative procedure may dictate the use of a longer focal length lens to provide a greater working distance-to-magnification ratio.
Using a DSLR with interchangeable lenses, a good choice is a 100-mm macro lens. The longer focal length provides the close focusing capability and comfortable working distance away from the patient that is needed in ophthalmology.
A fixed (noninterchangeable) lens camera may provide adequate working distance and magnification, but not all cameras have this capability. Before purchasing a new digital camera with a fixed lens, determine that it can focus closely on a single eye and that the flash functions at close distances.
For illuminating external photographs, an electronic flash device is recommended. Problems created by rapid eye movements or blinking are eliminated by the extremely short duration of the flash. Many inexpensive flash units with automatic exposure control are available, eliminating the need to change f-stops while providing the ability to alter the distance between camera and subject over a given range. If such a flash is being contemplated, its ability to function in the automatic mode at such close distances must be ensured. Not all have that capability.
A flash source should always be positioned to provide even, diffuse illumination over the area being photographed. The bridge of the nose and prominent brows should not be allowed to cast a shadow onto the area of interest. When taking photographs of a single eye area, the light should be positioned on the patient’s temporal side. For taking a two-eye view or full-face photograph, the light should be positioned directly above the lens. In some types of portrait photography, a ring flash is used to provide soft and even illumination. However, for close-ups of corneal pathology, the large ring of light reflex created by these flash units often obscures what is meant to be documented. The Nikon SB-29s ( Fig. 39.4 ) is a convenient lens-mounted flash that uses two smaller flash tubes rather than a single large ring of light. The unit is mounted directly to the front of the lens and can be rotated and variously controlled for satisfactory illumination. The resulting photograph will show uniform illumination over the subject area, with shadows falling directly behind and below the patient. The use of a lens of a 100-mm focal length considerably lessens the problems arising with the use of sharply oblique illumination, which results from working at too close a range to the subject.
A suitable background should be provided. A simple solution is to obtain several large (30 × 40 inches [0.75 × 1 m]) matte boards available in a variety of colors. In general, a medium blue works well for most subjects.
Photo slit-lamp biomicrography
Many conditions affecting the anterior segment of the eye—especially the transparent cornea, anterior chamber, and lens—are of such a subtle nature that they defy detection by any means other than slit-lamp biomicroscopy. Because the adverse conditions occurring in these transparent or translucent structures are themselves commonly transparent, conventional, diffuse illumination is unsuitable for their visualization. Only the specialized capabilities of the slit-beam illuminator and high magnification of the slit-lamp biomicroscope provide an adequate view of subtle changes of interest to the ophthalmologist.
Fundamental to producing consistently useful photo slit-lamp documentation is a thorough knowledge of the structures of the eye, the location and general appearance of the diverse conditions affecting the eye, and the basic forms of illumination and their application to these conditions. Basic illumination techniques include direct focal illumination; tangential illumination; direct and indirect retroillumination from the iris; retroillumination from the fundus; transillumination; sclerotic scatter; proximal illumination; and Tyndall’s phenomenon for aqueous cells and flare.
A slit-lamp biomicroscopic examination is a dynamic process. With the use of a narrow slit beam to provide optic sectioning, transparent structures, such as those in the cornea, can be examined in minute detail a small section at a time. The result is, in essence, a composite, mental image of the entire cornea. In slit-lamp photography, however, each photograph is restricted to a single moment of that examination. To overcome this limitation, some slit lamps are equipped with an additional diffuse illuminator. When used in conjunction with the slit illuminator, the result can be a pleasingly illuminated image of the overall eye with a superimposed narrow slit beam to provide specific information about that section of the structure that it isolates ( Fig. 39.5 ).