Seeing is believing? A beginners' guide to practical pitfalls in image acquisition
Journal of Cell Biology, 2006, Issue 1
Abstract
Imaging can be thought of as the most direct of experiments. You see something; you report what you see. If only things were truly this simple. Modern imaging technology has brought about a revolution in the kinds of questions we can approach, but this comes at the price of increasingly complex equipment. Moreover, in an attempt to market competing systems, the microscopes have often been inappropriately described as easy to use and suitable for near-beginners. Insufficient understanding of the experimental manipulations and equipment set-up leads to the introduction of errors during image acquisition. In this feature, I review some of the most common practical pitfalls faced by researchers during image acquisition, and how they can affect the interpretation of the experimental data.
This article is targeted neither to the microscopy gurus who push forward the frontiers of imaging technology nor to my imaging specialist colleagues who may wince at the overly simplistic comments and lack of detail. Instead, this is for beginners who gulp with alarm when they hear the term "confocal pinhole" or sigh as they watch their cells fade and die in front of their very eyes time and time again at the microscope. Take heart, beginners: if microscopes were actually so simple, then many people (including myself) would suddenly be out of a job!
All data are subject to interpretation
Deliberate scientific fraud exists, but in modern microscopy a far greater number of errors are introduced in complete innocence. As an example of a common problem, take colocalization. Upstairs in the lab, a researcher collects a predominantly yellow merged image on a basic microscope, naturally interpreted as colocalization of green and red signals. But on the confocal microscope, there is no yellow in the merged images.
How can this be? Many factors contribute. Here, I take the reader through the imaging process, from sample preparation to selection of the imaging and image-processing methods. Throughout, we will be on the look-out for problems that can produce misleading results, using colocalization as the most common example. Because one short article cannot be an exhaustive "how to" guide, I have also assembled a bibliography of a few highly recommended textbooks and microscopy web sites, which readers should consult for more extensive treatments of the critical issues introduced here.
Sample preparation
"Garbage in = garbage out" is the universal motto of all microscopists. A worrying tendency today is to assume that deconvolution software or confocal microscopes can somehow override the structural damage or suboptimal immunolabeling induced by poor sample preparation. The importance of appropriate fixation, permeabilization, and labeling methods for preserving cellular morphology or protein localization is well known to electron microscopists (Hayat, 2000), but often underestimated in optical microscopy (Fig. 1).
Many labs use one standardized protocol for labeling with all antibodies, irrespective of whether the targets are membrane- or cytoskeleton-associated, nuclear or cytosolic. However, inappropriate fixation can cause antigen redistribution and/or a reduction in antigenicity. It is therefore important to test each antibody on samples fixed in a variety of ways, ranging from solvents such as methanol to chemical cross-linking agents such as paraformaldehyde and glutaraldehyde (for protocols see Bacallao et al., 1995; Allan, 1999), although glutaraldehyde fixation often reduces antigenicity and increases background autofluorescence. Consult textbooks for notorious pitfalls: phalloidin labeling is incompatible with methanol fixation, while microtubules are inadequately fixed by formaldehyde. Moreover, certain cell types, such as yeast cells, require specialized fixation protocols (Hagan and Ayscough, 1999).
Permeabilization is also critical in achieving a good compromise between antigen accessibility and ultrastructural integrity. Specific detergents will produce different effects (for example, saponin treatment produces smaller holes in membranes than Triton exposure), and it is also important to test the effects of pre-, simultaneous, or post-fixation permeabilization. Be aware that tissue processing, and particularly "air drying" steps, may introduce tissue distortions that will affect dimensions and measurements. Many sample preparation problems are of course avoided by imaging living cells, though live cell work introduces a whole range of new potential artifacts (see Important considerations for live cell imaging).
What type of mountant should you use?
Of the many types of homemade and commercial mounting media, no one product is ideal for all applications. Mounting media that harden (often containing polyvinyl alcohol) are useful for long-term sample storage and are preferred for imaging using a wide-field (compound) microscope because the sample flattens as the mountant hardens. For that very reason, however, those that remain liquid (typically glycerol-based) are preferable when three-dimensional (3D) information is desired. These require a sealant around the coverslip for stability and to prevent desiccation.
Anti-fade agents are used to suppress photobleaching, but an anti-fade that is incompatible with specific fluorochromes can quench their signal significantly and/or increase background fluorescence. Consult the mountant's manufacturer for compatibility information because the anti-fade's identity may not be revealed in the datasheet. For GFP and its derivatives it is advisable to avoid anti-fades altogether, unless the sample is also labeled with a fluorochrome prone to photobleaching. Reports differ as to whether nail varnish, when used as a coverslip sealant, reduces GFP fluorescence, but users should be aware of the potential problem. A nondetrimental alternative sealant is VALAP, a 1:1:1 mixture of Vaseline, lanolin, and paraffin.
Optical properties of the microscope that you need to know about
Few students and post-doctoral researchers will have the opportunity to choose the microscope they will use, or to influence the selection of specific components for purchase. However, there are certain factors that users can control, and they should consider these choices when configuring the microscope for their own experiments.
All objectives are not equal...
The objective lens is the most critical component of a microscope and yet few researchers grasp the differences between specific objective classes.
For example, most scientists can tell you the magnification of an objective lens, but few will know its numerical aperture (light-gathering ability). Yet it is the numerical aperture (NA) that determines the resolving power of the lens (Fig. 2); magnification merely enlarges the resolved features until they can be perceived by the human eye. Thus, a 40x 1.3 NA objective lens will be able to resolve far finer details than a 40x 0.75 NA lens, despite their identical magnification. The intensity of the signal also increases steeply with increasing NA (Fig. 3). Therefore, the objective's NA, as well as its magnification, should always be provided in the Materials and methods section of publications.
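To make this concrete, the short sketch below applies the Rayleigh criterion, d = 0.61λ/NA, to the two hypothetical 40x objectives mentioned above. It is an illustration only, not taken from the original article; the exact prefactor depends on how resolution is defined, so treat the numbers as approximate.

```python
# A minimal sketch, assuming the Rayleigh criterion d = 0.61 * wavelength / NA,
# of how NA rather than magnification sets the smallest resolvable separation.

def rayleigh_resolution_nm(wavelength_nm, numerical_aperture):
    """Approximate smallest resolvable lateral distance for a given emission wavelength and NA."""
    return 0.61 * wavelength_nm / numerical_aperture

# Two 40x objectives imaging green emission (~520 nm):
print(round(rayleigh_resolution_nm(520, 1.30)))  # ~244 nm for the 1.3 NA oil lens
print(round(rayleigh_resolution_nm(520, 0.75)))  # ~423 nm for the 0.75 NA dry lens
```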
Why would anybody then choose an objective of lower NA? The answer is that other features of the objective may prove more critical for a particular sample or application. For example, NA is proportional to the refractive index of the immersion medium, thus oil immersion objectives can have a higher NA than water immersion objectives, and dry objectives have the lowest NA. But for certain applications water immersion objectives have distinct advantages over oil (see section "The problem of spherical aberration") and high NA also comes at the expense of reduced working distance (how far the objective lens can focus into your sample), which may be problematic for thicker specimens. Other important factors to consider include design for use with or without coverslips, corrections for flatness of field and for chromatic aberrations, and transmission of specific wavelengths (particularly UV or IR light) (for detailed explanations see Keller, 1995; Murphy, 2001).
It is important to consider how resolution will affect colocalization analysis. We consider two fluorochromes to be "colocalized" when their emitted light is collected in the same voxels (3D pixels). If the distance separating two labeled objects is below the resolution limit of the imaging system, they will appear to be colocalized. Thus, users may "see" colocalization using a low resolution imaging system where a higher resolution system might achieve a visible separation of labels that are in close proximity but are not actually colocalized (Fig. 4). The NA of the objective lens, good refractive index match, and appropriate sampling intervals (small pixel sizes) will all affect resolution, and consequently, colocalization analysis. Note also that colocalization never indicates that two proteins are actually interacting, but only that they are located within close proximity.
Know your fluorochromes and filter sets
Colocalization can only be claimed in the certain absence of "cross-talk" (or "bleed-through") between selected fluorochromes. Choosing fluorochromes with well-separated excitation and emission spectra is therefore critical for multiple labeling. Consider the use of any two fluorochromes together. If their excitation peaks overlap, the wavelength of exciting light selected for the first may also excite the second, and vice versa. If their emission spectra also overlap, the fluorescence emitted by each may pass through both the emission filter selected for the first channel and that selected for the second. Thus one fluorochrome may also be detected in the other's detection channel, a phenomenon known as cross-talk or bleed-through. Be particularly suspicious of cross-talk if your two fluorochromes appear to be 100% colocalized.
Certain fluorochromes, such as Cyanine 3, are excellent for single labeling but can be problematic for multiple labeling because of spectral overlap with green emitters like fluorescein or Alexa Fluor 488. Conversely, Alexa Fluor 594 is well separated from standard green emitters, but is shifted too close to the far-red region to be useful for most green/red/far-red triple imaging (Rhodamine Red-X is better suited to this). It pays to stock a range of secondary antibody conjugates or dyes in order to tailor the combination toward specific protocols. Moreover, the brighter and more stable fluorochromes that are continually being developed may prove vastly superior to the reagents your lab has used for the past 20 years!
It is equally important to consider which filter sets are available on your microscope before selecting your fluorochromes. Long-pass filter sets, collecting all emissions past a certain wavelength, are generally less useful for multiple labeling than band-pass filters, which collect emissions in a specific range (Fig. 5), and the narrower the range of the band-pass filter, the better it can separate fluorochromes with close emission spectra.
Single-labeled controls should always be used to assess bleed-through. On confocal microscopes an additional test involves collecting images with each laser line deactivated in turn (you should now see no emission in that laser line's corresponding detection channel, unless there is cross-talk). Some cross-talk problems can be overcome on confocal microscopes by the use of sequential scanning (also known as multitracks or wavelength-switching line scans). In this mode, rather than exciting the sample with multiple laser lines at once and collecting the emissions simultaneously, first one laser line is activated and its corresponding emission collected, followed by the second laser line and its corresponding emission. However, this will not solve the problem if there is significant overlap between both excitation and emission spectra.
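If single-labeled controls reveal a modest, reproducible level of bleed-through, a simple linear correction can be estimated and applied before any colocalization analysis. The sketch below is a hedged illustration of that idea only; the array names are hypothetical, it assumes a purely linear detector response with negligible offset, and it is no substitute for better-separated fluorochromes or sequential scanning.

```python
# A hedged sketch (illustrative names, linear-response assumption) of estimating
# green-into-red bleed-through from a green-only control and subtracting it.
import numpy as np

def bleedthrough_fraction(control_green, control_red):
    """Fraction of green signal appearing in the red channel of a green-only control."""
    mask = control_green > 0
    return float(control_red[mask].sum() / control_green[mask].sum())

def corrected_red(green, red, fraction):
    """Subtract the estimated bleed-through contribution from the red channel."""
    return np.clip(red.astype(float) - fraction * green.astype(float), 0, None)
```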
Equally problematic is the overlap of specific fluorescence with background autofluorescence, particularly in plant tissues, in animal tissues rich in highly autofluorescent proteins such as lipofuscin and collagen, and in cultures containing large numbers of dead or dying cells. Unlabeled samples are necessary to establish the levels and locations of autofluorescence, and narrow band-pass filters maximize the collection of specific signal compared with autofluorescence. Modern spectral imaging systems can be invaluable for separating specific fluorescent signal from autofluorescence, as well as for separating fluorochromes with extensive spectral overlap (Fig. 5 C).
Three-dimensional (3D) microscopy
Standard compound or "wide-field" microscopes are often preferable for imaging thin cell or tissue specimens as more signal is presented to the detector. However, wide-field images can provide clear lateral (x- and y-axis) information but only limited axial (z-axis) information. For specimens thicker than a few micrometers, or when precise axial information is required, an instrument that removes out-of-focus blur and permits you to distinguish between the signals in thin "optical" slices may therefore prove superior. Current technologies to achieve optical sectioning include confocal microscopy, which uses one or more pinhole apertures to prevent out-of-focus light from reaching the detector; multi-photon microscopy, in which excitation only occurs in the plane of focus; and deconvolution algorithms, which are used to "restore" images from any type of microscope to a closer approximation of the original object. Each technology has distinct advantages for specific applications, which are best understood from detailed comparisons (Shaw, 1995; Murray, 2005) or by consulting local experts for advice. This article will concentrate largely on confocal microscopy, which is the most common approach. The following sections will consider how to establish the correct optical conditions to acquire meaningful 3D microscopy data.
The importance of pinhole size in confocal microscopy
The size of the confocal pinhole aperture determines the thickness of the optical section; that is, the thickness of sample slice from which emitted light is collected by the detectors. In most laser scanning confocal systems the pinholes have an adjustable diameter. Small pinhole diameters give thinner optical sections and therefore better z-axis resolution, which is important for colocalization analysis. However, the signal intensity is decreased, so when z-axis information is not required, or photobleaching is a problem, a larger pinhole diameter may be preferred.
Stating either the pinhole diameter or the optical section thickness in publications facilitates a more informed discussion of 3D localization (including colocalization). Confocal images are generally collected using a pinhole aperture setting around 1 Airy Unit, a diameter that achieves a good balance between rejection of out-of-focus light and signal collection. For multicolor imaging it is critical to achieve the same optical section thickness in all channels, which is accomplished by adjusting the pinhole size for the different wavelengths. Be aware that regular maintenance to ensure alignment of the pinholes is critical, as a poorly aligned pinhole can result in lateral shift and a "double" image where the same pattern is visualized in consecutive z-sections.
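For orientation, the 1 Airy Unit setting mentioned above corresponds to the diameter of the Airy disk projected onto the pinhole plane, which depends on the emission wavelength, the NA, and the total magnification between specimen and pinhole. The sketch below is an approximation under those assumptions; your confocal software normally performs this calculation for you.

```python
# A minimal sketch, assuming 1 Airy Unit = 1.22 * wavelength * M / NA, where M is
# the total magnification between the specimen and the pinhole plane.

def one_airy_unit_um(wavelength_nm, numerical_aperture, magnification_to_pinhole):
    """Approximate physical pinhole diameter (micrometers) corresponding to 1 Airy Unit."""
    return 1.22 * (wavelength_nm / 1000.0) * magnification_to_pinhole / numerical_aperture

# e.g., 520 nm emission, 1.4 NA objective, 60x magnification to the pinhole:
print(round(one_airy_unit_um(520, 1.4, 60), 1))  # ~27.2 um
```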
The problem of chromatic aberration
The property of different wavelengths of light being focused to different positions within your sample is known as chromatic aberration. This can lead to an apparent lack of colocalization in the image stack of fluorochromes that are colocalized in the actual sample. Thus all microscopists need to be aware of this phenomenon.
Lateral (xy-axis) chromatic aberrations are generally corrected within the microscope, but note that full compensation is only achieved with proper matching of optical components. Some manufacturers use only the tube lens to impart corrections, whereas others use the objectives; thus, combining objectives and microscopes from different manufacturers can introduce aberrations. Lateral chromatic shifts can also be caused by mechanical shifts between different filter cubes or dichroic mirrors.
Corrections for axial (z-axis) chromatic aberrations are more difficult. Objective lenses are corrected for chromatic aberrations across a certain wavelength range, the extent of which depends on the type and age of the objective (improved lenses are developed every year). Most users are unaware that the majority of objective lenses currently in use are fully corrected across only the (approximately) green-to-red range of emission wavelengths. Thus, two fluorochromes outside these ranges (such as DAPI and Cy5) could be focused to z-positions several hundreds of nanometers apart, even if their targets are colocalized in the actual specimen. When compounded by further aberrations such as spherical aberration, they could appear well over a micrometer apart in the z-axis, and thus in different z-slices in your image stack (Fig. 6).
How do we check for chromatic aberration? One option is to image the tiny (e.g., 0.1-μm diameter) multicolor "TetraSpeck" beads available from Molecular Probes and see whether the different colors of each bead show up in the same z-position or not. Another method is to use two secondary antibodies, both directed against the same primary antibody but conjugated to different fluorochromes (those used in your double-labeling experiment), and see whether the signals are superimposed in the z-axis or whether one always appears below the other. Ruling out severe chromatic aberrations in your microscope set-up by these methods permits you to be more confident of your data interpretation. When aberrations are found, try using fluorochromes closer together in emission wavelength. Alternatively, certain software programs will permit you to "shift" the image in one channel relative to the other (applying the exact shift calculated from multicolor bead images).
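If a reproducible axial offset is measured from such bead images, some analysis packages let you apply the corresponding shift to one channel of the stack. A hedged sketch of that post-hoc correction is shown below; the offset value is hypothetical and must be measured on your own system, and the interpolation inevitably smooths the data slightly.

```python
# A hedged sketch (hypothetical offset; measure it from multicolor bead images) of
# shifting one channel along z to compensate for axial chromatic aberration.
from scipy.ndimage import shift

def correct_axial_offset(stack, offset_um, z_step_um):
    """Shift a (z, y, x) stack along z by a measured chromatic offset, with linear interpolation."""
    return shift(stack, (offset_um / z_step_um, 0.0, 0.0), order=1, mode="nearest")

# Example: the far-red channel was measured 0.3 um below the blue channel, z-step 0.2 um:
# farred_corrected = correct_axial_offset(farred_stack, offset_um=0.3, z_step_um=0.2)
```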
The problem of spherical aberration
Spherical aberration describes the phenomenon whereby light rays passing through the lens at different distances from its center are focused to different positions in the z-axis. It is the major cause of the loss in signal intensity and resolution with increasing focus depth through thick specimens.
Spherical aberrations occur as light rays pass through regions of different refractive index (for example, from the coverslip to the tissue, or between regions of different refractive index within the sample itself). The effects include a reduction in intensity and signal-to-noise in the plane of interest and distortions in the 3D image, with fine features appearing smeared out along the z-axis (Fig. 7). The aberrations become worse as you focus deeper into the sample. Corrections for spherical aberrations within the objective lens itself are only effective when certain preconditions are met. Thus, aberrations are increased by factors such as the use of the wrong coverslip thickness or type of immersion oil, too thick a layer of mounting medium, the presence of air bubbles in the immersion or mounting medium, or simply a temperature change. Note that most objective lenses for high resolution fluorescence work are calibrated for use with 0.17-mm thick glass, to which no. 1 1/2 coverslips correspond most closely. The specimen should be mounted on or as close to the coverslip as possible (avoid multiwell slides with a nonremovable gasket that places the coverslip many micrometers from the cells). The coverslip must also be mounted flat, as an angled coverslip will result in distorted optical properties (remove excess mounting medium by briefly placing small pieces of torn filter paper against the edge of the coverslip after mounting).
In selecting an objective lens for imaging thicker samples, you need to consider the balance between the effects of spherical aberrations and NA on your signal intensity. A high NA oil immersion lens may be optimal for use with thin specimens, because the glass coverslip, whose refractive index is matched to that of the immersion oil, becomes the predominant sample component. However, a water immersion lens, despite its lower NA, may achieve better images from a thick, largely aqueous specimen due to the better match of refractive index between immersion medium and sample. Methods of minimizing spherical aberrations range from the development of objectives with adjustable correction collars to the use of immersion oils with differing refractive indices (Fig. 7; and for a detailed and highly readable explanation of optical aberrations and their practical correction see Davis, 1999).
Establishing your acquisition settings
Appropriate acquisition settings are critical for obtaining meaningful and quantifiable data as well as "pretty" images. You must distinguish between acquiring all information in the raw data, and later presenting the data in a way that conveys the result more clearly.
All settings should be established using the real sample (or a positive control), and then the negative control is imaged using identical settings. The "autoexpose" or "find" functions should never be used for negative controls, as the camera or detectors will attempt to compensate for the low signal.
First, keep the acquisition settings constant between specimens to be compared quantitatively and particularly between sample and control.
Second, it is important to distinguish between an image that is useful for visualization alone, and an image from which meaningful quantitative data can be extracted. For quantitative microscopy the exposure time and/or gain (brightness) and offset (by which pixels below a certain threshold are defined as being black) should be adjusted to use the entire dynamic range of the detectors. Too high a gain results in saturated pixels, which cannot be quantified because the dataset is clipped at the maximum end of the dynamic range. Conversely, an inappropriately large offset, often used to hide "background" cellular fluorescence, clips the data at the minimum end of the dynamic range and again prohibits quantitative measurements. More significantly, how do you distinguish nonspecific background signal from a low, ubiquitous level of your protein with real biological significance?
Most researchers lean toward a bright, high contrast image, and thus will invariably saturate their images. To avoid this, use acquisition software features such as an "autohistogram" display, or a "range indicator" or "glow scale" look-up table, to establish the settings more objectively. Once the correctly acquired data is saved (and always stored in the raw format for future reference!), brightness and contrast or scaling adjustments can then be applied for a more visually pleasing presentation.
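When in doubt, the raw data can also be checked directly for clipping before any quantification is attempted. The short sketch below is one illustrative way to do this; it assumes you know the detector's bit depth, and what counts as an acceptable clipped fraction depends on the experiment.

```python
# A minimal sketch, assuming a known detector bit depth, of flagging pixels clipped
# at either end of the dynamic range before quantitative analysis.
import numpy as np

def clipping_report(image, bit_depth=12):
    """Fractions of pixels saturated at the maximum value or sitting at zero."""
    max_value = 2 ** bit_depth - 1
    return {
        "saturated_fraction": float(np.mean(image >= max_value)),
        "zero_fraction": float(np.mean(image <= 0)),
    }
```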
How do you avoid saturation with samples containing both particularly bright and very weak regions? The collection of more grayscale levels (12-bit instead of 8-bit data) will help. You can then present two pictures showing different scalings applied to the same image, adjusted for the bright features in one and the finer, less intense details in the second. In more severe cases, image each area using two different acquisition settings, then present the two images side by side.
Great care must be taken to ensure adequate sampling (pixel/voxel dimensions) of your data, in all axes. According to the Nyquist sampling theorem, your spatial sampling intervals must be more than two times smaller than the smallest resolvable feature in your specimen. If this sampling requirement is not met, you will have gaps in your data and also spurious features can be introduced into your image by a process called "aliasing" (for explanation see Webb and Dorey, 1995). Thus, most confocal microscope software packages suggest the use of a z-interval around half the optical slice thickness (which is usually calculated for you). This is sufficient for detecting all resolvable features, although the use of even smaller z-intervals is advantageous for deconvolution or for creating smoother volume renderings.
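As a rough illustration of the Nyquist rule described above, the lateral pixel size can be compared with the resolution limit of the objective; a factor of about 2.3 is often used in practice. The sketch below makes those assumptions explicit and is intended only as a sanity check, not a replacement for your acquisition software's own sampling calculator.

```python
# A hedged sketch (Rayleigh resolution, sampling factor ~2.3 assumed) of the largest
# lateral pixel size that still satisfies Nyquist sampling.

def nyquist_pixel_size_nm(wavelength_nm, numerical_aperture, factor=2.3):
    """Maximum pixel size (nm) that adequately samples the smallest resolvable feature."""
    resolution_nm = 0.61 * wavelength_nm / numerical_aperture
    return resolution_nm / factor

print(round(nyquist_pixel_size_nm(520, 1.4)))  # ~99 nm pixels for a 1.4 NA lens at 520 nm
```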
In the xy-plane you should aim to relate the pixel size to the resolution of the system by adjusting the optical zoom setting (if available) or by using binning (a process by which the signals from neighboring pixels are combined into one value). The use of smaller pixel spacing than this, known as "oversampling," results in longer acquisition times that can cause greater photodamage to your samples. The image frame resolution (e.g., 1024 x 1024) should be set high enough to submit images for publication at a suitable size while maintaining 300 dpi resolution (Rossner and Yamada, 2004). Note that the use of an optical (real) zoom during image acquisition, to magnify specific features, will avoid the "pixelated" appearance of low magnification images to which digital zoom has subsequently been applied. Beyond a certain optical zoom, however, the user will enter "empty magnification," where that objective's maximum resolution has been reached and so no additional information is being obtained.
Choosing the "right" cell to image and publish
One of the greatest microscopy challenges is the choice of which cell(s) to present as a "typical" image. You may have preconceived ideas concerning your protein's localization, and subconsciously scan the sample to find the cell most closely fitting your expectations. In some cases this is a valid approach—for example to search for microinjected cells, for the sole expressers of a gene or for cells at a particular stage of the cell cycle. But there is a strong risk of focusing in on one cell and ignoring 10,000 strikingly different ones around it.
The more passionate we are about our experiment, the more we must doubt our ability to be truly objective. So ask an unbiased colleague to blind label the samples or to help collect or evaluate the data. The use of a motorized stage to image multiple, random positions can also help avoid bias. Samples containing cells with varying expression patterns or morphology should be presented as a low magnification overview beside high magnification views of representative, contrasting regions. Most importantly, a statistical analysis of cell numbers exhibiting particular characteristics will strengthen your data interpretation.
Transient transfections are particularly problematic for localization studies. A common mistake is to seek out transfected cells displaying the strongest expression levels, but here the high concentration of expressed protein may interfere with the balance of other proteins or cellular processes. Weak expressers are generally a better choice, in particular those showing limited signal localization. When available, antibodies to the endogenous protein can be used to assay for a normal distribution pattern. Aberrant localization may also be indicated by the abnormal distribution of a partner protein that should colocalize with the tagged component.
Presenting and interpreting your images
You must decide how to present your data in the most appropriate form. With 3D or 4D data this typically involves a choice between a single z-slice or a projection of multiple slices. A single slice must be presented when colocalization and/or z-resolution are in question, but a projection may better illustrate the continuity of a 3D network.
A merged image is often inadequate for demonstrating colocalization. A green-emitting fluorochrome and a red-emitting fluorochrome could be completely colocalized, but if one is brighter than the other the merged image may not appear yellow. Colocalization is better demonstrated using the "line profile" function included in many software packages, where an intensity plot for each channel is created along a line drawn across the image. Algorithms are available for calculating the degree of colocalization, but take care when establishing parameters such as threshold levels (for practical tips and caveats for colocalization studies see Smallcombe, 2001; for detailed methods of quantifying colocalization see Manders et al., 1993; Costes et al., 2004).
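For readers who want to go beyond a visual line profile, the commonly cited coefficients can be computed directly from the two channels. The sketch below is a simplified illustration of Pearson's coefficient and one of the Manders coefficients; the threshold is set naively here, whereas Costes et al. (2004) describe an automated way of choosing it.

```python
# A minimal sketch (naive thresholding; see Manders et al., 1993 and Costes et al., 2004)
# of two common colocalization measures computed from a pair of single-channel images.
import numpy as np

def pearson_coefficient(green, red):
    """Pearson correlation of pixel intensities between the two channels."""
    return float(np.corrcoef(green.ravel().astype(float), red.ravel().astype(float))[0, 1])

def manders_m1(green, red, red_threshold=0):
    """Fraction of total green intensity found in pixels where red exceeds the threshold."""
    g = green.astype(float)
    return float(g[red > red_threshold].sum() / g.sum())
```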
All images should be presented with scale bars. Many software packages include automatic scale bar calculation and pasting onto exported images. You can also image a stage micrometer to calculate the total magnification of a given system, which will be the product of the magnification of the objective and of other components such as the tube lens and relay optics to the camera.
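The calibration itself is simple arithmetic once a stage micrometer has been imaged, as in the illustrative sketch below (the numbers are hypothetical); the resulting micrometers-per-pixel value is what a correctly drawn scale bar encodes.

```python
# A hedged sketch (illustrative numbers) of deriving the pixel size from a stage
# micrometer image, which is all a scale bar ultimately represents.

def pixel_size_um(known_distance_um, measured_pixels):
    """Micrometers per pixel, from a graduation of known length on the stage micrometer."""
    return known_distance_um / measured_pixels

# e.g., a 100 um graduation spanning 485 pixels in the image:
print(round(pixel_size_um(100.0, 485), 3))  # ~0.206 um per pixel
```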
Quantification of images—why is it useful and when is it appropriate?
Quantifying image data is necessary for the transition from anecdotal observation to an actual measurement. Quantitation is also an important means of avoiding subjective bias and presenting the overall pattern of the data. It is rarely straightforward in practice, requiring stringent acquisition conditions.
Images to be quantified should be acquired and exported in 12-bit or higher grayscale format, rather than the standard 8-bit (or 24-bit color) format suitable for most image presentation. Image processing can then be used, before quantification, to correct aberrations that have been introduced into the image stack during acquisition. You need to be aware of which image processing manipulations are consistent with quantification, and which are not. Constrained iterative 3D deconvolution algorithms, for instance, maintain the total signal intensity within an image stack, whereas nearest neighbor algorithms are subtractive and therefore do not.
Relative quantitation, such as comparing the signal intensity between one region of interest and another, or between the sample and a control (assuming constant acquisition settings), is simpler than absolute quantitation, but even this assumes a number of prerequisites such as even illumination across the entire field. Calibration slides (made from colored plastic and available from companies such as Applied Precision, Chroma Technology Corp., and Molecular Probes) can be imaged to determine irregularities in illumination and apply corrections. When calculating changes in signal intensity over time you must compensate for general photobleaching as well as for temporal fluctuations in laser power or lamp illumination. Laser power can be particularly volatile immediately after switching on the system, so a warm-up period of 30–60 minutes is recommended. A monitor diode or photosensor (if available on the system) and/or standardized samples are useful for normalizing experiments for fluctuations in excitation intensity.
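One common form of the photobleaching correction mentioned above is to normalize each time point to a reference intensity, such as a control region imaged under identical settings. The sketch below illustrates that idea with hypothetical variable names; it assumes the reference bleaches at the same rate as the region of interest.

```python
# A minimal sketch (hypothetical inputs; assumes the reference bleaches like the ROI)
# of correcting a time course for general photobleaching and illumination drift.
import numpy as np

def bleach_corrected(series, reference_series):
    """Rescale a time series by the reference intensity, normalized to its first time point."""
    series = np.asarray(series, dtype=float)
    reference = np.asarray(reference_series, dtype=float)
    return series * (reference[0] / reference)
```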
Absolute quantification presents a greater challenge, requiring the researcher to have a good understanding of both the spectral and physical properties of the specific fluorochrome/fluorescent molecule and the appropriate choice of microscope optics and settings (Pawley, 2000). Important properties of the fluorochrome that must be taken into consideration include the extinction coefficient, the quantum yield, the photobleaching rate and properties, the chromophore folding kinetics, and the pH sensitivity (this can substantially affect measurements of proteins moving into and out of subcellular compartments).
The most critical components of the fluorescence microscope to consider for quantitative imaging are the objective lens (including its NA and its spectral transmission properties), the emission filter, and the detector (Piston et al., 1999). An emission filter that is well matched to the spectrum of your fluorescent probe will result in a better signal-to-noise ratio. A narrow band-pass filter is usually preferable to maximize collection of specific signal while minimizing the contribution of autofluorescence. Linear detectors (including the majority of cooled CCD cameras and photomultiplier tubes) will facilitate quantitation better than nonlinear ones (such as intensified CCD cameras). Standardized samples of known fluorochrome concentration can be used to establish appropriate gain and offset settings for the detectors. Saturation of the fluorophore, which occurs particularly when using laser excitation, also introduces nonlinearity into the measurements, making calibration of the system very difficult. Thus, it is recommended to use the lowest laser power that gives a sufficient signal-to-noise ratio.
Wide-field microscopy is often inappropriate for quantitation because you collect emitted light from the whole sample depth without knowing the thickness of each cell or structure. The application of 3D deconvolution algorithms to an image stack can overcome this problem for thin samples, but not for thick or highly fluorescent samples. Confocal microscopy is generally more quantification-friendly for samples over 15–20 μm in depth because of the defined optical section thickness. However, deeper focal planes will show reduced signal intensity due to absorption and scatter, necessitating further, more complex corrections.
Four-letter methods: FRAP and FRET
So far this article has concentrated on basic image acquisition. This next section will highlight a few danger areas associated with some more complex techniques used to monitor the kinetics of protein trafficking or protein–protein interactions in living cells.
The most common technique for monitoring protein kinetics is fluorescence recovery after photobleaching (FRAP). Many confocal and deconvolution microscope systems have incorporated remarkably user-friendly FRAP routines into their acquisition software. Unfortunately, interpreting the data is not always as simple. It is essential to "normalize" for general photobleaching by monitoring control cells that were not targeted. Furthermore, be aware that excitation light bright enough to bleach fluorescent molecules in a short time period can severely disrupt cellular ultrastructure. For quantitative FRAP, you must decide in advance which of the numerous available models will be used for analyzing the recovery curves, as this choice may affect the experimental design (Rabut and Ellenberg, 2005).
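A widely used form of this normalization combines the pre-bleach intensity of the bleached region with the whole-cell (or control-cell) intensity at each time point. The sketch below illustrates one such "double normalization" with hypothetical inputs; consult Rabut and Ellenberg (2005) for full protocols and for choosing a recovery model.

```python
# A hedged sketch (hypothetical inputs) of double-normalizing a FRAP recovery curve:
# correct for acquisition photobleaching, then scale to the pre-bleach intensity.
import numpy as np

def frap_normalize(roi_series, whole_cell_series, n_prebleach):
    """Recovery curve normalized to pre-bleach ROI intensity and corrected for
    overall photobleaching measured over the whole cell."""
    roi = np.asarray(roi_series, dtype=float)
    cell = np.asarray(whole_cell_series, dtype=float)
    return (roi / roi[:n_prebleach].mean()) * (cell[:n_prebleach].mean() / cell)
```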
Fluorescence (or Förster) resonance energy transfer (FRET) describes the nonradiative transfer of photon energy from a donor fluorophore to an acceptor fluorophore when they are less than 10 nm apart. FRET thus reveals the relative proximity of fluorophores far beyond the normal resolution limit of a light microscope. However, since FRET also relies on additional prerequisites, such as a certain relative orientation of donor and acceptor (Herman et al., 2001), an absence of FRET cannot always be interpreted as the fluorophores being more than 10 nm apart. Positive signals can also be misleading, as FRET may occur where any two noninteracting proteins are highly concentrated in localized areas. Two common methods for measuring FRET are sensitized emission and acceptor photobleaching, and the most appropriate for your experiment will depend on factors such as the need for time-lapse imaging, the ability to bleach regions of interest, and laser line availability. In sensitized emission studies, rigorous positive and negative controls are required to correct for factors such as cross-talk and direct excitation of the acceptor by the donor's excitation wavelength.
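Of the two approaches, acceptor photobleaching yields the simplest quantitative readout: the apparent FRET efficiency follows from the increase in donor fluorescence after the acceptor is destroyed. The sketch below is an illustrative calculation only; background subtraction, controls for donor bleaching during acquisition, and registration of the pre- and post-bleach images are all omitted.

```python
# A minimal sketch (background subtraction and controls omitted) of apparent FRET
# efficiency from acceptor photobleaching: E = 1 - (donor before) / (donor after).
import numpy as np

def fret_efficiency_acceptor_bleach(donor_pre, donor_post):
    """Mean apparent FRET efficiency within the bleached region."""
    pre = np.asarray(donor_pre, dtype=float)
    post = np.asarray(donor_post, dtype=float)
    return float(np.mean(1.0 - pre / np.maximum(post, 1e-9)))
```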
Important considerations for live cell imaging
When working with live cells, the rules for optimal imaging of fixed cells drop in priority. Phototoxicity and photobleaching become your biggest enemies and efforts focus on keeping the cells alive and behaving in a "normal" physiological manner (see the invaluable "Live Cell Imaging—A Laboratory Manual", R.D. Goldman and D.L. Spector [eds.], for detailed coverage). This requires appropriate environmental conditions (temperature, media, CO2, and possibly perfusion) and also optical considerations (such as the reduced phototoxic effects of longer excitation wavelengths). A major issue during time courses is the focal drift caused by thermal fluctuations in the room. Environmental chambers that enclose the entire microscope are generally more thermo-stable than smaller imaging chambers, but the latter can be more convenient for perfusion or for experiments requiring rapid temperature shifts.
Another serious challenge lies in acquiring images fast enough to capture rapid biological events and to accurately portray dynamic structures. This is particularly tricky when using multiple probes, as the labeled target may move in the time required to switch between filter positions. Solutions to this include the use of a simultaneous scan mode on a confocal microscope (in the absence of cross-talk), or positioning an emission splitter in front of a CCD camera to collect both signals simultaneously on the camera chip.
With fixed cells you typically maximize your signal-to-noise ratio via longer exposure times in wide-field systems, or in confocal microscopes by using higher laser power or by increasing the time the laser dwells on each pixel (using averaging or slower scan speeds). Optical zoom and Nyquist sampling are applied for optimal image quality. Approaches to minimize photobleaching in live cell imaging include a reduction in exposure time or laser power and pixel dwell time (you can compensate by using higher gain), increased pinhole diameters, the use of lower magnifications, and sub-optimal spatial sampling in xy- and z-axes. Binning your signal and the use of faster camera readout rates will enable more rapid imaging. The consequence of these compromises may include poorer resolution and reduced signal-to-noise ratios.
As the imaging proceeds, how do you avoid misleading data by checking for normal physiological behavior of your cells? Here are a few clues: (1) Have they maintained their typical morphology, or are they shrunken, blebbing, rounded up, or coming off the coverslip? (2) Are cells and organelles still moving around at a customary rate? Acquiring simultaneous or sequential transmitted light and fluorescent images is an excellent way of assessing this. (3) If the sample is returned to the incubator after the experiment, will the cells carry on dividing or the embryos survive? Continuing cell division is perhaps the most critical indication of healthy cells.
In conclusion
Given the complexities discussed above, how can we all share the responsibility for ensuring that published imaging data is an accurate representation of the truth? Researchers need to learn enough about specimen preparation and the available imaging equipment to establish appropriate settings and collect optimal images. The wealth of modern information resources (see online supplemental material, available at http://www.jcb.org/cgi/content/full/jcb.200507103/DC1) enables all users to grasp at least basic microscopy concepts. Central imaging facilities can provide more advanced information required for specific applications and can help to ensure the use of appropriate imaging systems. The researcher's supervisor should rigorously critique both the raw and processed data, and needs to appreciate that high quality microscopy requires a significant time investment. Manufacturers must strive to design hardware that provides constant imaging conditions, and software that includes user-friendly tools for image analysis, and to ensure that researchers purchase the most appropriate equipment for their needs. Finally, scientific journals should set stringent guidelines, and manuscript reviewers must be critical of imaging data presentation, to guarantee that publications contain sufficient experimental detail to permit the readers to properly interpret the images and to repeat the experiments. Only by such a collective effort can we strive to present the true picture.
For a more extensive list of useful microscopy resources, including highly recommended textbooks, web sites, and practical courses, please see the online supplemental material, available at http://www.jcb.org/cgi/content/full/jcb.200507103/DC1.
Acknowledgments
Enormous thanks to Kirk Czymmek, Michael Davidson, Scott Henderson, and Jyoti Jaiswal for their extremely helpful and encouraging comments on this article, and also to Lily Copenagle and Zhenyu Yue (formerly of the Rockefeller University) for allowing me to use the images I collected using their samples, both good and bad! I am immensely grateful to all those over the years who have patiently mentored me in microscopy, whether during my research positions, at numerous microscopy courses, or simply in discussions with colleagues and vendors, and in particular to Vic Small, who has the most exacting eye for detail of anyone I know, yet who always encouraged me to view even my ugliest experimental samples as "not entirely uninteresting."
References
Allan, V.J. 1999. Basic immunofluorescence. In Protein Localization by Fluorescence Microscopy—A Practical Approach. V.J. Allan, editor. Oxford University Press, Oxford, UK. 1–26.
Bacallao, R., K. Kiai, and L. Jesaitis. 1995. Guiding principles of specimen preservation for confocal fluorescence microscopy. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 311–325.
Costes, S.V., D. Daelemans, E.H. Cho, Z. Dobbin, G. Pavlakis, and S. Lockett. 2004. Automatic and quantitative measurement of protein-protein colocalization in live cells. Biophys. J. 86:3993–4003.
Davis, I. 1999. Visualizing fluorescence in Drosophila—optimal detection in thick specimens. In Protein Localization by Fluorescence Microscopy—A Practical Approach. V.J. Allan, editor. Oxford University Press, Oxford, UK. 133–162.
Hagan, I.M., and K.R. Ayscough. 1999. Fluorescence microscopy in yeast. In Protein Localization by Fluorescence Microscopy—A Practical Approach. V.J. Allan, editor. Oxford University Press, Oxford, UK. 179–206.
Hayat, M.A. 2000. Principles and Techniques of Electron Microscopy: Biological Applications. 4th edition. Cambridge University Press, Cambridge, UK. 564 pp.
Herman, B., G. Gordon, N. Mahajan, and V. Centonze. 2001. Measurement of fluorescence resonance energy transfer in the optical microscope. In Methods in Cellular Imaging. A. Periasamy, editor. Oxford University Press, Oxford, UK. 257–272.
Keller, H.E. 1995. Objective lenses for confocal microscopy. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 111–126.
Manders, E.M.M., F.J. Verbeek, and J.A. Aten. 1993. Measurement of co-localization of objects in dual-color confocal images. J. Microsc. 169:375–382.
Murphy, D.B. 2001. Lenses and geometrical optics. In Fundamentals of Light Microscopy and Electronic Imaging. Wiley-Liss, Inc., New York. 43–60.
Murray, J.M. 2005. Confocal microscopy, deconvolution, and structured illumination methods. In Live Cell Imaging—A Laboratory Manual. R.D. Goldman and D.L. Spector, editors. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. 239–279.
Pawley, J. 2000. The 39 steps: a cautionary tale of quantitative 3-D fluorescence microscopy. Biotechniques. 28:884–887.
Piston, D., G.H. Patterson, and S.M. Knobel. 1999. Quantitative imaging of the green fluorescent protein (GFP). Methods Cell Biol. 58:31–48.
Rabut, G., and J. Ellenberg. 2005. Photobleaching techniques to study mobility and molecular dynamics of proteins in live cells: FRAP, iFRAP, and FLIP. In Live Cell Imaging—A Laboratory Manual. R.D. Goldman and D.L. Spector, editors. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. 101–126.
Rossner, M., and K.M. Yamada. 2004. What's in a picture? The temptation of image manipulation. J. Cell Biol. 166:11–15.
Shaw, P.J. 1995. Comparison of wide-field/deconvolution and confocal microscopy for 3D imaging. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 373–387.
Smallcombe, A. 2001. Multicolor imaging: the important question of co-localization. Biotechniques. 30:1240–1242, 1244–1246.
Webb, R.H., and C.K. Dorey. 1995. The pixelated image. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 55–67.
Alison J. North
This article is targeted neither to the microscopy gurus who push forward the frontiers of imaging technology nor to my imaging specialist colleagues who may wince at the overly simplistic comments and lack of detail. Instead, this is for beginners who gulp with alarm when they hear the word "confocal pinhole" or sigh as they watch their cells fade and die in front of their very eyes time and time again at the microscope. Take heart, beginners, if microscopes were actually so simple then many people (including myself) would suddenly be out of a job!
All data are subject to interpretationDeliberate scientific fraud exists, but in modern microscopy a far greater number of errors are introduced in complete innocence. As an example of a common problem, take colocalization. Upstairs in the lab, a researcher collects a predominantly yellow merged image on a basic microscope, naturally interpreted as colocalization of green and red signals. But on the confocal microscope, there is no yellow in the merged images.
How can this be Many factors contribute. Here, I take the reader through the imaging process, from sample preparation to selection of the imaging and image-processing methods. Throughout, we will be on the look-out for problems that can produce misleading results, using colocalization as the most common example. Because one short article cannot be an exhaustive "how to" guide, I have also assembled a bibliography of a few highly recommended textbooks and microscopy web sites, which readers should consult for more extensive treatments of the critical issues introduced here.
Sample preparation"Garbage in = garbage out" is the universal motto of all microscopists. A worrying tendency today is to assume that deconvolution software or confocal microscopes can somehow override the structural damage or suboptimal immunolabeling induced by poor sample preparation. The importance of appropriate fixation, permeabilization, and labeling methods for preserving cellular morphology or protein localization is well known to electron microscopists (Hayat, 2000), but often underestimated in optical microscopy (Fig. 1).
Many labs use one standardized protocol for labeling with all antibodies, irrespective of whether the targets are membrane- or cytoskeleton-associated, nuclear or cytosolic. However, inappropriate fixation can cause antigen redistribution and/or a reduction in antigenicity. It is therefore important to test each antibody on samples fixed in a variety of ways, ranging from solvents such as methanol to chemical cross-linking agents such as paraformaldehyde and glutaraldehyde (for protocols see Bacallao et al., 1995; Allan, 1999), although glutaraldehyde fixation often reduces antigenicity and increases background autofluorescence. Consult textbooks for notorious pitfalls: phalloidin labeling is incompatible with methanol fixation, while microtubules are inadequately fixed by formaldehyde. Moreover, certain cell types, such as yeast cells, require specialized fixation protocols (Hagan and Ayscough, 1999).
Permeabilization is also critical in achieving a good compromise between antigen accessibility and ultrastructural integrity. Specific detergents will produce different effects (for example, Saponin treatment produces smaller holes in membranes than Triton exposure), and it is also important to test the effects of pre-, simultaneous, or post-fixation permeabilization. Be aware that tissue processing, and particularly "air drying" steps, may introduce tissue distortions that will affect dimensions and measurements. Many sample preparation problems are of course avoided by imaging living cells, though live cell work introduces a whole range of new potential artifacts (see Important considerations for live cell imaging).
What type of mountant should you useOf the many types of homemade and commercial mounting media, no one product is ideal for all applications. Mounting media that harden (often containing polyvinyl alcohol) are useful for long-term sample storage and are preferred for imaging using a wide-field (compound) microscope because the sample flattens as the mountant hardens. For that very reason, however, those that remain liquid (typically glycerol-based) are preferable when three-dimensional (3D) information is desired. These require a sealant around the coverslip for stability and to prevent desiccation.
Anti-fade agents are used to suppress photobleaching, but an anti-fade that is incompatible with specific fluorochromes can quench their signal significantly and/or increase background fluorescence. Consult the mountant's manufacturer for compatibility information because the anti-fade's identity may not be revealed in the datasheet. For GFP and its derivatives it is advisable to avoid anti-fades altogether, unless the sample is also labeled with a fluorochrome prone to photobleaching. Reports differ as to whether nail varnish, when used as a coverslip sealant, reduces GFP fluorescence, but users should be aware of the potential problem. A nondetrimental alternative sealant is VALAP, a 1:1:1 mixture of Vaseline, lanolin, and paraffin.
Optical properties of the microscope that you need to know aboutFew students and post-doctoral researchers will have the opportunity to choose the microscope they will use, or to influence the selection of specific components for purchase. However, there are certain factors that users can control, and they should consider these choices when configuring the microscope for their own experiments.
All objectives are not equal...The objective lens is the most critical component of a microscope and yet few researchers grasp the differences between specific objective classes.
For example, most scientists can tell you the magnification of an objective lens, but few will know its numerical aperture (light-gathering ability). Yet it's the numerical aperture (NA) that determines the resolving power of the lens (Fig. 2), while magnification is only then useful to increase the apparent size of the resolved features until they can be perceived by the human eye. Thus, a 40x 1.3 NA objective lens will be able to resolve far finer details than a 40x 0.75 NA lens, despite their similar magnification. The intensity of the signal also increases steeply with increasing NA (Fig. 3). Therefore, the objective's NA, as well as its magnification, should always be provided in the Materials and methods section of publications.
Why would anybody then choose an objective of lower NA The answer is that other features of the objective may prove more critical for a particular sample or application. For example, NA is proportional to the refractive index of the immersion medium, thus oil immersion objectives can have a higher NA than water immersion objectives, and dry objectives have the lowest NA. But for certain applications water immersion objectives have distinct advantages over oil (see section "The problem of spherical aberration") and high NA also comes at the expense of reduced working distance (how far the objective lens can focus into your sample), which may be problematic for thicker specimens. Other important factors to consider include design for use with or without coverslips, corrections for flatness of field and for chromatic aberrations, and transmission of specific wavelengths (particularly UV or IR light) (for detailed explanations see Keller, 1995; Murphy, 2001).
It is important to consider how resolution will affect colocalization analysis. We consider two fluorochromes to be "colocalized" when their emitted light is collected in the same voxels (3D pixels). If the distance separating two labeled objects is below the resolution limit of the imaging system, they will appear to be colocalized. Thus, users may "see" colocalization using a low resolution imaging system where a higher resolution system might achieve a visible separation of labels that are in close proximity but are not actually colocalized (Fig. 4). The NA of the objective lens, good refractive index match, and appropriate sampling intervals (small pixel sizes) will all affect resolution, and consequently, colocalization analysis. Note also that colocalization never indicates that two proteins are actually interacting, but only that they are located within close proximity.
Know your fluorochromes and filter setsColocalization can only be claimed in the certain absence of "cross-talk" (or "bleed-through") between selected fluorochromes. Choosing fluorochromes with well-separated excitation and emission spectra is therefore critical for multiple labeling. Consider the use of any two fluorochromes together. If their excitation peaks overlap, the wavelength of exciting light selected for the first may also excite the second, and vice versa. If their emission spectra also overlap, the fluorescence emitted by each may pass through both the emission filter selected for the first channel and that selected for the second. Thus one fluorochrome may also be detected in the other's detection channel, a phenomenon known as cross-talk or bleed-through. Be particularly suspicious of cross-talk if your two fluorochromes appear to be 100% colocalized.
Certain fluorochromes, such as Cyanine 3, are excellent for single labeling but can be problematic for multiple labeling because of spectral overlap with green emitters like fluorescein or Alexa Fluor 488. Conversely, Alexa Fluor 594 is well separated from standard green emitters, but is shifted too close to the far-red region to be useful for most green/red/far-red triple imaging (Rhodamine Red-X is better suited to this). It pays to stock a range of secondary antibody conjugates or dyes in order to tailor the combination toward specific protocols. Moreover, the brighter and more stable fluorochromes that are continually being developed may prove vastly superior to the reagents your lab has used for the past 20 years!
It is equally important to consider which filter sets are available on your microscope before selecting your fluorochromes. Long-pass filter sets, collecting all emissions past a certain wavelength, are generally less useful for multiple labeling than band-pass filters, which collect emissions in a specific range (Fig. 5), and the narrower the range of the band-pass filter, the better it can separate fluorochromes with close emission spectra.
Single-labeled controls should always be used to assess bleed-through. On confocal microscopes an additional test involves collecting images with each laser line deactivated in turn (you should now see no emission in that laser line's corresponding detection channel, unless there is cross-talk). Some cross-talk problems can be overcome on confocal microscopes by the use of sequential scanning (also known as multitracks or wavelength-switching line scans). In this mode, rather than exciting the sample with multiple laser lines at once and collecting the emissions simultaneously, first one laser line is activated and its corresponding emission collected, followed by the second laser line and its corresponding emission. However, this will not solve the problem if there is significant overlap between both excitation and emission spectra.
Equally problematic is the overlap of specific fluorescence with background autofluorescence, particularly in plant tissues, in animal tissues rich in highly autofluorescent proteins such as lipofuscin and collagen, and in cultures containing large numbers of dead or dying cells. Unlabeled samples are necessary to establish the levels and locations of autofluorescence, and narrow band-pass filters maximize the collection of specific signal compared with autofluorescence. Modern spectral imaging systems can be invaluable for separating specific fluorescent signal from autofluorescence, as well as for separating fluorochromes with extensive spectral overlap (Fig. 5 C).
Three-dimensional (3D) microscopyStandard compound or "wide-field" microscopes are often preferable for imaging thin cell or tissue specimens as more signal is presented to the detector. However, wide-field images can provide clear lateral (x- and y-axis) information but only limited axial (z-axis) information. For specimens thicker than a few micrometers, or when precise axial information is required, an instrument that removes out-of-focus blur and permits you to distinguish between the signals in thin "optical" slices may therefore prove superior. Current technologies to achieve optical sectioning include confocal microscopy, which uses one or more pinhole apertures to prevent out-of-focus light from reaching the detector; multi-photon microscopy, in which excitation only occurs in the plane of focus; and deconvolution algorithms, which are used to "restore" images from any type of microscope to a closer approximation of the original object. Each technology has distinct advantages for specific applications, which are best understood from detailed comparisons (Shaw, 1995; Murray, 2005) or by consulting local experts for advice. This article will concentrate largely on confocal microscopy, which is the most common approach. The following sections will consider how to establish the correct optical conditions to acquire meaningful 3D microscopy data.
The importance of pinhole size in confocal microscopy
The size of the confocal pinhole aperture determines the thickness of the optical section; that is, the thickness of the sample slice from which emitted light is collected by the detectors. In most laser scanning confocal systems the pinholes have an adjustable diameter. Small pinhole diameters give thinner optical sections and therefore better z-axis resolution, which is important for colocalization analysis. However, the signal intensity is decreased, so when z-axis information is not required, or when photobleaching is a problem, a larger pinhole diameter may be preferred.
Stating either the pinhole diameter or the optical section thickness in publications facilitates a more informed discussion of 3D localization (including colocalization). Confocal images are generally collected using a pinhole aperture setting around 1 Airy Unit, a diameter that achieves a good balance between rejection of out-of-focus light and signal collection. For multicolor imaging it is critical to achieve the same optical section thickness in all channels, which is accomplished by adjusting the pinhole size for the different wavelengths. Be aware that regular maintenance to ensure alignment of the pinholes is critical, as a poorly aligned pinhole can result in lateral shift and a "double" image where the same pattern is visualized in consecutive z-sections.
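As a rough guide only, the pinhole diameter corresponding to 1 Airy Unit can be estimated from the emission wavelength, the objective NA, and the total magnification between the specimen and the pinhole plane; the exact optics differ between instruments, so treat the sketch below as a back-of-the-envelope check rather than a value to type into your acquisition software.

```python
def one_airy_unit_um(emission_nm, numerical_aperture, total_magnification):
    """Approximate pinhole diameter (micrometers) for 1 Airy Unit,
    assuming 1 AU = 1.22 * wavelength / NA projected to the pinhole plane."""
    airy_diameter_in_specimen_um = 1.22 * (emission_nm / 1000.0) / numerical_aperture
    return airy_diameter_in_specimen_um * total_magnification

# Hypothetical example: 63x/1.4 oil objective, 520 nm emission, no extra relay optics
print(round(one_airy_unit_um(520, 1.4, 63), 1), "micrometers at the pinhole plane")
```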
The problem of chromatic aberration
Chromatic aberration is the property of different wavelengths of light being focused to different positions within your sample. It can lead to an apparent lack of colocalization, within the image stack, of fluorochromes whose targets are in fact colocalized in the actual sample. All microscopists therefore need to be aware of this phenomenon.
Lateral (xy-axis) chromatic aberrations are generally corrected within the microscope, but note that full compensation is only achieved with proper matching of optical components. Some manufacturers use only the tube lens to impart corrections, whereas others use the objectives; thus, combining objectives and microscopes from different manufacturers can introduce aberrations. Lateral chromatic shifts can also be caused by mechanical shifts between different filter cubes or dichroic mirrors.
Corrections for axial (z-axis) chromatic aberrations are more difficult. Objective lenses are corrected for chromatic aberrations across a certain wavelength range, the extent of which depends on the type and age of the objective (improved lenses are developed every year). Most users are unaware that the majority of objective lenses currently in use are fully corrected only across the (approximately) green-to-red range of emission wavelengths. Thus, two fluorochromes outside this range (such as DAPI and Cy5) could be focused to z-positions several hundred nanometers apart, even if their targets are colocalized in the actual specimen. When compounded by further aberrations such as spherical aberration, they could appear well over a micrometer apart in the z-axis, and thus in different z-slices in your image stack (Fig. 6).
How do we check for chromatic aberration? One option is to image the tiny (e.g., 0.1-μm diameter) multicolor "TetraSpeck" beads available from Molecular Probes and see whether the different colors of each bead show up in the same z-position or not. Another method is to use two secondary antibodies, both directed against the same primary antibody but conjugated to different fluorochromes (those used in your double-labeling experiment), and see whether the signals are superimposed in the z-axis or whether one always appears below the other. Ruling out severe chromatic aberrations in your microscope set-up by these methods permits you to be more confident of your data interpretation. When aberrations are found, try using fluorochromes closer together in emission wavelength. Alternatively, certain software programs will permit you to "shift" the image in one channel relative to the other (applying the exact shift calculated from multicolor bead images).
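If you would rather quantify the shift than judge it by eye, the following sketch (assuming background-subtracted bead z-stacks loaded as NumPy arrays of shape n_z × height × width, each containing a single bead) compares the intensity-weighted z-centroid of the bead between two channels; the function names are hypothetical.

```python
import numpy as np

def z_centroid_um(bead_stack, z_step_um):
    """Intensity-weighted center of a bead along z (stack shape: n_z, h, w)."""
    profile = bead_stack.sum(axis=(1, 2)).astype(float)
    profile -= profile.min()                  # crude residual-background removal
    z = np.arange(profile.size) * z_step_um
    return float((profile * z).sum() / profile.sum())

def axial_chromatic_shift_um(stack_ch1, stack_ch2, z_step_um):
    """Positive values: channel 2 comes to focus deeper than channel 1."""
    return z_centroid_um(stack_ch2, z_step_um) - z_centroid_um(stack_ch1, z_step_um)
```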
The problem of spherical aberration
Spherical aberration describes the phenomenon whereby light rays passing through the lens at different distances from its center are focused to different positions in the z-axis. It is the major cause of the loss in signal intensity and resolution with increasing focus depth through thick specimens.
Spherical aberrations occur as light rays pass through regions of different refractive index (for example, from the coverslip to the tissue, or between regions of different refractive index within the sample itself). The effects include a reduction in intensity and signal-to-noise in the plane of interest and distortions in the 3D image, with fine features appearing smeared out along the z-axis (Fig. 7). The aberrations become worse as you focus deeper into the sample. Corrections for spherical aberrations within the objective lens itself are only effective when certain preconditions are met. Thus, aberrations are increased by factors such as the use of the wrong coverslip thickness or type of immersion oil, too thick a layer of mounting medium, the presence of air bubbles in the immersion or mounting medium, or simply a temperature change. Note that most objective lenses for high resolution fluorescence work are calibrated for use with 0.17-mm thick glass, to which no. 1 1/2 coverslips correspond most closely. The specimen should be mounted on or as close to the coverslip as possible (avoid multiwell slides with a nonremovable gasket that places the coverslip many micrometers from the cells). The coverslip must also be mounted flat, as an angled coverslip will result in distorted optical properties (remove excess mounting medium by briefly placing small pieces of torn filter paper against the edge of the coverslip after mounting).
In selecting an objective lens for imaging thicker samples, you need to consider the balance between the effects of spherical aberrations and NA on your signal intensity. A high NA oil immersion lens may be optimal for use with thin specimens, because the glass coverslip, whose refractive index is matched to that of the immersion oil, becomes the predominant sample component. However, a water immersion lens, despite its lower NA, may achieve better images from a thick, largely aqueous specimen due to the better match of refractive index between immersion medium and sample. Methods of minimizing spherical aberrations range from the development of objectives with adjustable correction collars to the use of immersion oils with differing refractive indices (Fig. 7; and for a detailed and highly readable explanation of optical aberrations and their practical correction see Davis, 1999).
Establishing your acquisition settings
Appropriate acquisition settings are critical for obtaining meaningful and quantifiable data as well as "pretty" images. You must distinguish between acquiring all information in the raw data, and later presenting the data in a way that conveys the result more clearly.
All settings should be established using the real sample (or a positive control), and then the negative control is imaged using identical settings. The "autoexpose" or "find" functions should never be used for negative controls, as the camera or detectors will attempt to compensate for the low signal.
First, keep the acquisition settings constant between specimens to be compared quantitatively and particularly between sample and control.
Second, it is important to distinguish between an image that is useful for visualization alone, and an image from which meaningful quantitative data can be extracted. For quantitative microscopy the exposure time and/or gain (brightness) and offset (by which pixels below a certain threshold are defined as being black) should be adjusted to use the entire dynamic range of the detectors. Too high a gain results in saturated pixels, which cannot be quantified because the dataset is clipped at the maximum end of the dynamic range. Conversely, an inappropriately large offset, often used to hide "background" cellular fluorescence, clips the data at the minimum end of the dynamic range and again prohibits quantitative measurements. More significantly, how do you distinguish nonspecific background signal from a low, ubiquitous level of your protein with real biological significance?
Most researchers lean toward a bright, high contrast image, and thus will invariably saturate their images. To avoid this, use acquisition software features such as an "autohistogram" display, or a "range indicator" or "glow scale" look-up table, to establish the settings more objectively. Once the correctly acquired data is saved (and always stored in the raw format for future reference!), brightness and contrast or scaling adjustments can then be applied for a more visually pleasing presentation.
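Where the acquisition software lacks a range indicator, a quick post hoc check for clipping can be run on the saved raw data, for example as sketched below (assuming the image is available as a NumPy array and that you know the bit depth at which it was acquired).

```python
import numpy as np

def clipping_report(img, bit_depth=12):
    """Report the fraction of pixels clipped at either end of the range."""
    max_val = 2 ** bit_depth - 1
    saturated = np.count_nonzero(img >= max_val) / img.size
    at_zero = np.count_nonzero(img <= 0) / img.size
    print(f"{saturated:.2%} of pixels saturated, {at_zero:.2%} at zero")
    return saturated, at_zero
```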
How do you avoid saturation with samples containing both very bright and very weak regions? The collection of more grayscale levels (12-bit instead of 8-bit data) will help. You can then present two pictures showing different scalings applied to the same image, adjusted for the bright features in one and for the finer, less intense details in the second. In more severe cases, image each area using two different acquisition settings, then present the two images side by side.
Great care must be taken to ensure adequate sampling (pixel/voxel dimensions) of your data in all axes. According to the Nyquist sampling theorem, your spatial sampling intervals must be more than two times smaller than the smallest resolvable feature in your specimen. If this sampling requirement is not met, you will have gaps in your data, and spurious features can also be introduced into your image by a process called "aliasing" (for an explanation see Webb and Dorey, 1995). Thus, most confocal microscope software packages suggest the use of a z-interval of around half the optical slice thickness (which is usually calculated for you). This is sufficient for detecting all resolvable features, although the use of even smaller z-intervals is advantageous for deconvolution or for creating smoother volume renderings.
In the xy-plane you should aim to match the pixel size to the resolution of the system by adjusting the optical zoom setting (if available) or by using binning (a process by which the signals from neighboring pixels are combined into one value). The use of pixel spacing smaller than the Nyquist criterion requires, known as "oversampling," results in longer acquisition times that can cause greater photodamage to your samples. The image frame resolution (e.g., 1024 x 1024) should be set high enough to submit images for publication at a suitable size while maintaining 300 dpi resolution (Rossner and Yamada, 2004). Note that using an optical (real) zoom during image acquisition to magnify specific features avoids the "pixelated" appearance of low magnification images to which digital zoom has subsequently been applied. Beyond a certain optical zoom, however, the user enters "empty magnification," where that objective's maximum resolution has already been reached and no additional information is gained.
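A simple way to sanity-check your pixel size against the Nyquist criterion is to compute the expected lateral resolution from the common approximation 0.61λ/NA and divide by a sampling factor of at least two, as in the sketch below. This is only a guide, since the true resolution also depends on the confocal pinhole setting and the condition of the optics.

```python
def nyquist_pixel_size_nm(emission_nm, numerical_aperture, sampling_factor=2.3):
    """Upper bound on pixel spacing from the 0.61*lambda/NA approximation."""
    lateral_resolution_nm = 0.61 * emission_nm / numerical_aperture
    return lateral_resolution_nm / sampling_factor

# Hypothetical example: 1.4 NA objective imaging 520 nm emission
print(f"aim for a pixel size of <= {nyquist_pixel_size_nm(520, 1.4):.0f} nm")
```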
Choosing the "right" cell to image and publishOne of the greatest microscopy challenges is the choice of which cell(s) to present as a "typical" image. You may have preconceived ideas concerning your protein's localization, and subconsciously scan the sample to find the cell most closely fitting your expectations. In some cases this is a valid approach—for example to search for microinjected cells, for the sole expressers of a gene or for cells at a particular stage of the cell cycle. But there is a strong risk of focusing in on one cell and ignoring 10,000 strikingly different ones around it.
The more passionate we are about our experiment, the more we must doubt our ability to be truly objective. So ask an unbiased colleague to blind label the samples or to help collect or evaluate the data. The use of a motorized stage to image multiple, random positions can also help avoid bias. Samples containing cells with varying expression patterns or morphology should be presented as a low magnification overview beside high magnification views of representative, contrasting regions. Most importantly, a statistical analysis of cell numbers exhibiting particular characteristics will strengthen your data interpretation.
Transient transfections are particularly problematic for localization studies. A common mistake is to seek out transfected cells displaying the strongest expression levels, but here the high concentration of expressed protein may interfere with the balance of other proteins or cellular processes. Weak expressers are generally a better choice, in particular those showing limited signal localization. When available, antibodies to the endogenous protein can be used to assay for a normal distribution pattern. Aberrant localization may also be indicated by the abnormal distribution of a partner protein that should colocalize with the tagged component.
Presenting and interpreting your images
You must decide how to present your data in the most appropriate form. With 3D or 4D data this typically involves a choice between a single z-slice and a projection of multiple slices. A single slice must be presented when colocalization and/or z-resolution are in question, but a projection may better illustrate the continuity of a 3D network.
A merged image is often inadequate for demonstrating colocalization. A green-emitting fluorochrome and a red-emitting fluorochrome could be completely colocalized, but if one is brighter than the other the merged image may not appear yellow. Colocalization is better demonstrated using the "line profile" function included in many software packages, where an intensity plot for each channel is created along a line drawn across the image. Algorithms are available for calculating the degree of colocalization, but take care when establishing parameters such as threshold levels (for practical tips and caveats for colocalization studies see Smallcombe, 2001; for detailed methods of quantifying colocalization see Manders et al., 1993; Costes et al., 2004).
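As an illustration of the quantitative alternatives to a merged image, the sketch below computes Pearson's correlation coefficient and Manders' overlap fractions (in the spirit of Manders et al., 1993) from two background-subtracted, unsaturated channels held as NumPy arrays. The thresholds are user-supplied and must be justified, for example via the automated approach of Costes et al. (2004).

```python
import numpy as np

def pearson_coefficient(red, green):
    """Pearson correlation between the two channels' pixel intensities."""
    return float(np.corrcoef(red.ravel().astype(float),
                             green.ravel().astype(float))[0, 1])

def manders_coefficients(red, green, red_threshold, green_threshold):
    """Manders M1 and M2 overlap fractions above user-chosen thresholds."""
    red = red.astype(float)
    green = green.astype(float)
    m1 = red[green > green_threshold].sum() / red.sum()    # red overlapping green
    m2 = green[red > red_threshold].sum() / green.sum()    # green overlapping red
    return float(m1), float(m2)
```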
All images should be presented with scale bars. Many software packages include automatic scale bar calculation and pasting onto exported images. You can also image a stage micrometer to calculate the total magnification of a given system, which will be the product of the magnification of the objective and of other components such as the tube lens and relay optics to the camera.
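The calibration itself is simple arithmetic: the pixel size is the known distance on the micrometer scale divided by the number of pixels it spans, as in the hypothetical example below (the numbers are illustrative only).

```python
def pixel_size_um(known_distance_um, pixels_spanned):
    """Pixel size from an imaged stage micrometer."""
    return known_distance_um / pixels_spanned

# Hypothetical example: 50 um of the micrometer scale spans 466 pixels
print(f"{pixel_size_um(50, 466) * 1000:.0f} nm per pixel")
```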
Quantification of images—why is it useful and when is it appropriate?
Quantifying image data is necessary for the transition from anecdotal observation to an actual measurement. Quantitation is also an important means of avoiding subjective bias and presenting the overall pattern of the data. It is rarely straightforward in practice, requiring stringent acquisition conditions.
Images to be quantified should be acquired and exported in 12-bit or higher grayscale format, rather than the standard 8-bit (or 24-bit color) format suitable for most image presentation. Image processing can then be used, before quantification, to correct aberrations that have been introduced into the image stack during acquisition. You need to be aware of which image processing manipulations are consistent with quantification, and which are not. Constrained iterative 3D deconvolution algorithms, for instance, maintain the total signal intensity within an image stack, whereas nearest neighbor algorithms are subtractive and therefore do not.
Relative quantitation, such as comparing the signal intensity between one region of interest and another, or between the sample and a control (assuming constant acquisition settings), is simpler than absolute quantitation, but even this assumes a number of prerequisites such as even illumination across the entire field. Calibration slides (made from colored plastic and available from companies such as Applied Precision, Chroma Technology Corp., and Molecular Probes) can be imaged to determine irregularities in illumination and apply corrections. When calculating changes in signal intensity over time you must compensate for general photobleaching as well as for temporal fluctuations in laser power or lamp illumination. Laser power can be particularly volatile immediately after switching on the system, so a warm-up period of 30–60 minutes is recommended. A monitor diode or photosensor (if available on the system) and/or standardized samples are useful for normalizing experiments for fluctuations in excitation intensity.
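Two of the corrections mentioned above can be sketched very simply, assuming you have an image of a uniform calibration slide and, for time series, the mean intensity of a reference (untreated or non-bleached) region recorded alongside your region of interest. The function names and inputs below are hypothetical.

```python
import numpy as np

def flat_field_correct(img, flat, dark=None):
    """Correct uneven illumination using a uniform calibration-slide image."""
    img = img.astype(float)
    flat = flat.astype(float)
    if dark is not None:                   # optional camera dark/offset frame
        img = img - dark
        flat = flat - dark
    return img * (flat.mean() / flat)

def normalize_time_series(roi_means, reference_means):
    """Scale an ROI intensity trace by a reference region to compensate for
    bleaching and excitation fluctuations (both inputs: 1D arrays over time)."""
    roi = np.asarray(roi_means, dtype=float)
    ref = np.asarray(reference_means, dtype=float)
    return roi * (ref[0] / ref)
```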
Absolute quantification presents a greater challenge, requiring the researcher to have a good understanding of both the spectral and physical properties of the specific fluorochrome/fluorescent molecule and the appropriate choice of microscope optics and settings (Pawley, 2000). Important properties of the fluorochrome that must be taken into consideration include the extinction coefficient, the quantum yield, the photobleaching rate and properties, the chromophore folding kinetics, and the pH sensitivity (this can substantially affect measurements of proteins moving into and out of subcellular compartments).
The most critical components of the fluorescence microscope to consider for quantitative imaging are the objective lens (including its NA and its spectral transmission properties), the emission filter, and the detector (Piston et al., 1999). An emission filter that is well matched to the spectrum of your fluorescent probe will result in a better signal-to-noise ratio. A narrow band-pass filter is usually preferable to maximize collection of specific signal while minimizing the contribution of autofluorescence. Linear detectors (including the majority of cooled CCD cameras and photomultiplier tubes) will facilitate quantitation better than nonlinear ones (such as intensified CCD cameras). Standardized samples of known fluorochrome concentration can be used to establish appropriate gain and offset settings for the detectors. Saturation of the fluorophore, which occurs particularly when using laser excitation, also introduces nonlinearity into the measurements, making calibration of the system very difficult. Thus, it is recommended to use the lowest laser power that gives a sufficient signal-to-noise ratio.
Wide-field microscopy is often inappropriate for quantitation because you collect emitted light from the whole sample depth without knowing the thickness of each cell or structure. The application of 3D deconvolution algorithms to an image stack can overcome this problem for thin samples, but not for thick or highly fluorescent samples. Confocal microscopy is generally more quantification-friendly for samples over 15–20 μm in depth because of the defined optical section thickness. However, deeper focal planes will show reduced signal intensity due to absorption and scatter, necessitating further, more complex corrections.
Four-letter methods: FRAP and FRET
So far this article has concentrated on basic image acquisition. This next section will highlight a few danger areas associated with some more complex techniques used to monitor the kinetics of protein trafficking or protein–protein interactions in living cells.
The most common technique for monitoring protein kinetics is fluorescence recovery after photobleaching (FRAP). Many confocal and deconvolution microscope systems have incorporated remarkably user-friendly FRAP routines into their acquisition software. Unfortunately, interpreting the data is not always as simple. It is essential to "normalize" for general photobleaching by monitoring control cells that were not targeted. Furthermore, be aware that excitation light bright enough to bleach fluorescent molecules in a short time period can severely disrupt cellular ultrastructure. For quantitative FRAP, you must decide in advance which of the numerous available models will be used for analyzing the recovery curves, as this choice may affect the experimental design (Rabut and Ellenberg, 2005).
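As one hedged example of such normalization, the sketch below applies a full-scale correction (using the whole-cell signal to compensate for acquisition bleaching, then scaling between the post-bleach minimum and the pre-bleach level) and fits a single exponential to obtain a recovery halftime. A single exponential is only one of the models discussed by Rabut and Ellenberg (2005), and the choice should be made before the experiment; all names below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def normalize_frap(roi_means, cell_means, n_prebleach):
    """Correct for acquisition bleaching using the whole-cell signal, then
    scale so the pre-bleach level is 1 and the first post-bleach point is 0."""
    roi = np.asarray(roi_means, dtype=float)
    cell = np.asarray(cell_means, dtype=float)
    corrected = roi * (cell[:n_prebleach].mean() / cell)
    pre = corrected[:n_prebleach].mean()
    post = corrected[n_prebleach]
    return (corrected - post) / (pre - post)

def fit_recovery_halftime(t_post, f_post):
    """Fit F(t) = A * (1 - exp(-t / tau)) to the post-bleach points."""
    t = np.asarray(t_post, dtype=float)
    f = np.asarray(f_post, dtype=float)
    model = lambda tt, a, tau: a * (1.0 - np.exp(-tt / tau))
    (a, tau), _ = curve_fit(model, t, f, p0=(1.0, max(t[-1] / 4.0, 1e-6)))
    return a, tau * np.log(2)              # mobile fraction, recovery halftime
```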
Fluorescence (or Förster) resonance energy transfer (FRET) describes the nonradiative transfer of photon energy from a donor fluorophore to an acceptor fluorophore when they are less than 10 nm apart. FRET thus reveals the relative proximity of fluorophores far beyond the normal resolution limit of a light microscope. However, because FRET also relies on additional prerequisites, such as a favorable relative orientation of donor and acceptor (Herman et al., 2001), an absence of FRET cannot always be interpreted as the fluorophores being more than 10 nm apart. Positive signals can also be misleading, as FRET may occur wherever two noninteracting proteins are highly concentrated in localized areas. Two common methods for measuring FRET are sensitized emission and acceptor photobleaching; the most appropriate for your experiment will depend on factors such as the need for time-lapse imaging, the ability to bleach regions of interest, and laser line availability. In sensitized emission studies, rigorous positive and negative controls are required to correct for factors such as cross-talk and direct excitation of the acceptor at the donor's excitation wavelength.
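For the acceptor photobleaching approach, the core calculation is simply the fractional increase in donor fluorescence after the acceptor is destroyed, as sketched below for registered, background-subtracted donor images; a real analysis additionally needs controls for donor bleaching during acquisition and for incomplete acceptor bleaching.

```python
import numpy as np

def acceptor_photobleaching_fret(donor_pre, donor_post, bleach_mask):
    """FRET efficiency E = 1 - (donor before / donor after), averaged over
    the bleached region; assumes registered, background-subtracted images."""
    pre = donor_pre.astype(float)[bleach_mask]
    post = donor_post.astype(float)[bleach_mask]
    e = 1.0 - pre / np.maximum(post, 1e-9)
    return float(np.clip(e, 0.0, 1.0).mean())
```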
Important considerations for live cell imaging
When working with live cells, the rules for optimal imaging of fixed cells drop in priority. Phototoxicity and photobleaching become your biggest enemies, and efforts focus on keeping the cells alive and behaving in a "normal" physiological manner (see the invaluable "Live Cell Imaging—A Laboratory Manual," R.D. Goldman and D.L. Spector [eds.], for detailed coverage). This requires appropriate environmental conditions (temperature, media, CO2, and possibly perfusion) as well as optical considerations (such as the reduced phototoxic effects of longer excitation wavelengths). A major issue during time courses is focal drift caused by thermal fluctuations in the room. Environmental chambers that enclose the entire microscope are generally more thermally stable than smaller imaging chambers, but the latter can be more convenient for perfusion or for experiments requiring rapid temperature shifts.
Another serious challenge lies in acquiring images fast enough to capture rapid biological events and to accurately portray dynamic structures. This is particularly tricky when using multiple probes, as the labeled target may move in the time required to switch between filter positions. Solutions to this include the use of a simultaneous scan mode on a confocal microscope (in the absence of cross-talk), or positioning an emission splitter in front of a CCD camera to collect both signals simultaneously on the camera chip.
With fixed cells you typically maximize your signal-to-noise ratio via longer exposure times in wide-field systems, or in confocal microscopes by using higher laser power or by increasing the time the laser dwells on each pixel (using averaging or slower scan speeds). Optical zoom and Nyquist sampling are applied for optimal image quality. Approaches to minimize photobleaching in live cell imaging include a reduction in exposure time or laser power and pixel dwell time (you can compensate by using higher gain), increased pinhole diameters, the use of lower magnifications, and sub-optimal spatial sampling in xy- and z-axes. Binning your signal and the use of faster camera readout rates will enable more rapid imaging. The consequence of these compromises may include poorer resolution and reduced signal-to-noise ratios.
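For readers unfamiliar with binning, the toy example below shows 2 × 2 software binning of a NumPy array; note that hardware binning on the camera also reduces the contribution of readout noise, which summing pixels in software after readout cannot do.

```python
import numpy as np

def bin_2x2(img):
    """Sum 2 x 2 blocks of pixels; crops a trailing row/column if needed."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```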
As the imaging proceeds, how do you check that your cells are still behaving in a physiologically normal manner, and so avoid collecting misleading data? Here are a few clues: (1) Have the cells maintained their typical morphology, or are they shrunken, blebbing, rounded up, or coming off the coverslip? (2) Are cells and organelles still moving around at a customary rate? Acquiring simultaneous or sequential transmitted light and fluorescence images is an excellent way of assessing this. (3) If the sample is returned to the incubator after the experiment, will the cells carry on dividing, or the embryos survive? Continuing cell division is perhaps the most critical indication of healthy cells.
In conclusion
Given the complexities discussed above, how can we all share the responsibility for ensuring that published imaging data is an accurate representation of the truth? Researchers need to learn enough about specimen preparation and the available imaging equipment to establish appropriate settings and collect optimal images. The wealth of modern information resources (see online supplemental material, available at http://www.jcb.org/cgi/content/full/jcb.200507103/DC1) enables all users to grasp at least basic microscopy concepts. Central imaging facilities can provide more advanced information required for specific applications and can help to ensure the use of appropriate imaging systems. The researcher's supervisor should rigorously critique both the raw and processed data, and needs to appreciate that high quality microscopy requires a significant time investment. Manufacturers must strive to design hardware that provides constant imaging conditions, and software that includes user-friendly tools for image analysis, and to ensure that researchers purchase the most appropriate equipment for their needs. Finally, scientific journals should set stringent guidelines, and manuscript reviewers must be critical of imaging data presentation, to guarantee that publications contain sufficient experimental detail to permit the readers to properly interpret the images and to repeat the experiments. Only by such a collective effort can we strive to present the true picture.
For a more extensive list of useful microscopy resources, including highly recommended textbooks, web sites, and practical courses, please see the online supplemental material, available at http://www.jcb.org/cgi/content/full/jcb.200507103/DC1.
Acknowledgments
Enormous thanks to Kirk Czymmek, Michael Davidson, Scott Henderson, and Jyoti Jaiswal for their extremely helpful and encouraging comments on this article, and also to Lily Copenagle and Zhenyu Yue (formerly of the Rockefeller University) for allowing me to use the images I collected using their samples, both good and bad! I am immensely grateful to all those over the years who have patiently mentored me in microscopy, whether during my research positions, at numerous microscopy courses, or simply in discussions with colleagues and vendors, and in particular to Vic Small, who has the most exacting eye for detail of anyone I know, yet who always encouraged me to view even my ugliest experimental samples as "not entirely uninteresting."
References
Allan, V.J. 1999. Basic immunofluorescence. In Protein Localization by Fluorescence Microscopy—A Practical Approach. V.J. Allan, editor. Oxford University Press, Oxford, UK. 1–26.
Bacallao, R., K. Kiai, and L. Jesaitis. 1995. Guiding principles of specimen preservation for confocal fluorescence microscopy. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 311–325.
Costes, S.V., D. Daelemans, E.H. Cho, Z. Dobbin, G. Pavlakis, and S. Lockett. 2004. Automatic and quantitative measurement of protein-protein colocalization in live cells. Biophys. J. 86:3993–4003.
Davis, I. 1999. Visualizing fluorescence in Drosophila—optimal detection in thick specimens. In Protein Localization by Fluorescence Microscopy—A Practical Approach. V.J. Allan, editor. Oxford University Press, Oxford, UK. 133–162.
Hagan, I.M., and K.R. Ayscough. 1999. Fluorescence microscopy in yeast. In Protein Localization by Fluorescence Microscopy—A Practical Approach. V.J. Allan, editor. Oxford University Press, Oxford, UK. 179–206.
Hayat, M.A. 2000. Principles and Techniques of Electron Microscopy: Biological Applications. 4th edition. Cambridge University Press, Cambridge, UK. 564 pp.
Herman, B., G. Gordon, N. Mahajan, and V. Centonze. 2001. Measurement of fluorescence resonance energy transfer in the optical microscope. In Methods in Cellular Imaging. A. Periasamy, editor. Oxford University Press, Oxford, UK. 257–272.
Keller, H.E. 1995. Objective lenses for confocal microscopy. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 111–126.
Manders, E.M.M., F.J. Verbeek, and J.A. Aten. 1993. Measurement of co-localization of objects in dual-color confocal images. J. Microsc. 169:375–382.
Murphy, D.B. 2001. Lenses and geometrical optics. In Fundamentals of Light Microscopy and Electronic Imaging. Wiley-Liss, Inc., New York. 43–60.
Murray, J.M. 2005. Confocal microscopy, deconvolution, and structured illumination methods. In Live Cell Imaging—A Laboratory Manual. R.D. Goldman and D.L. Spector, editors. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. 239–279.
Pawley, J. 2000. The 39 steps: a cautionary tale of quantitative 3-D fluorescence microscopy. Biotechniques. 28:884–887.
Piston, D., G.H. Patterson, and S.M. Knobel. 1999. Quantitative imaging of the green fluorescent protein (GFP). Methods Cell Biol. 58:31–48.
Rabut, G., and J. Ellenberg. 2005. Photobleaching techniques to study mobility and molecular dynamics of proteins in live cells: FRAP, iFRAP, and FLIP. In Live Cell Imaging—A Laboratory Manual. R.D. Goldman and D.L. Spector, editors. Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY. 101–126.
Rossner, M., and K.M. Yamada. 2004. What's in a picture? The temptation of image manipulation. J. Cell Biol. 166:11–15.
Shaw, P.J. 1995. Comparison of wide-field/deconvolution and confocal microscopy for 3D imaging. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 373–387.
Smallcombe, A. 2001. Multicolor imaging: the important question of co-localization. Biotechniques. 30:1240–1242, 1244–1246.
Webb, R.H., and C.K. Dorey. 1995. The pixelated image. In Handbook of Biological Confocal Microscopy. 2nd edition. J.B. Pawley, editor. Plenum Press, New York. 55–67.

Alison J. North