
# Infinity Microscope Basics

This page is still under construction; many sections need to be expanded.

This overview is meant to complement other resources available online such as https://www.microscopyu.com/microscopy-basics/infinity-optical-systems. At ASI we make modular microscopes in arbitrary configurations, which offers far more flexibility than standard “big 4” research microscopes, but doing so also sometimes requires a deeper understanding of the underlying principles (i.e. not treating the microscope as a “black box”).

## What is an Infinity Microscope

At the core, an infinity microscope consists of a pair of lenses that project a magnified image of the sample onto an image sensor. The magnification is given by the ratio of the effective focal lengths of the two lenses, which are designated the objective lens (near the sample, short focal length) and the tube lens (near the image sensor, longer focal length). Mathematically, the lateral magnification is

\begin{equation} M = F_{TL} / F_{obj} \end{equation}

(The axial magnification is $M^2 / RI$, where $RI$ is the refractive index of the sample medium, assuming the detector is in air.)
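As a quick numeric check of these relations, here is a short sketch; the focal lengths and medium below are illustrative example values, not a specific ASI configuration:

```python
# Worked example of the magnification equations above. All focal lengths in mm.
# Illustrative values: a 200 mm tube lens with a 10 mm EFL objective in water.

F_TL = 200.0   # tube lens effective focal length, mm
F_obj = 10.0   # objective effective focal length, mm
RI = 1.33      # refractive index of the sample medium (water)

M = F_TL / F_obj       # lateral magnification
M_axial = M**2 / RI    # axial magnification (detector in air)

print(M)        # 20.0
print(M_axial)  # ~300.8
```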

The “infinity” signifies that the distance between the two lenses is not important. Optically this is because both the objective and tube lens have one side focused at infinity, unlike older finite-conjugate objectives and tube lenses which needed to be mounted a certain distance from each other to yield the stated magnification.

The region between objective and tube lens is often called “infinity space,” but we prefer the term “collimated space” to signify that rays originating from a single point in the sample plane are collimated or parallel in this region. The distance occupied by collimated space doesn't affect the magnification. The exact length of collimated space usually does not matter, as long as it is short enough to avoid vignetting. Filters, polarizers, and other elements are usually placed in collimated space.

Internally, objective lenses contain many individual elements (often more than 10) in order to sufficiently correct aberrations over a relatively short focal length (i.e. the light rays need to bend quite a lot and in specific ways). Microscope bodies are usually built to accommodate different objective lenses. The tube lens is usually mounted in the microscope body and is optically a comparatively simple lens (usually just a few elements).

## Lenses

A lens turns positions into angles and angles into positions between its focal planes. Mathematically, the field at one focal plane is the Fourier transform of the field at the other.

### Objective Lenses

Objective lenses are universally labelled with a magnification, but optically the important thing is the effective focal length. Calculate the effective focal length by dividing the tube lens focal length by the magnification. Different manufacturers have different "standard" tube lenses, but realize that for modular microscopes you can substitute a different tube lens with certain caveats.
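This conversion can be sketched as follows; the standard tube lens focal lengths are those listed in the Tube Lenses section:

```python
# Effective focal length of an objective from its nameplate magnification,
# assuming the manufacturer's standard tube lens. All focal lengths in mm.

STANDARD_TUBE_LENS = {
    "Nikon": 200.0,
    "Leica": 200.0,
    "Olympus": 180.0,
    "Zeiss": 165.0,
}

def objective_efl(magnification, manufacturer):
    """EFL (mm) = standard tube lens focal length / nameplate magnification."""
    return STANDARD_TUBE_LENS[manufacturer] / magnification

print(objective_efl(40, "Nikon"))    # 5.0 (mm)
print(objective_efl(20, "Olympus"))  # 9.0 (mm)
```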

Dry, dipping, immersion…

Field number/FOV

There is a helpful reference on objective lenses and their use (overlapping much of the material on this page) at https://amsikking.github.io/microscope_objectives/.

### Tube Lenses

The nominal magnification of an objective assumes the manufacturer's standard tube lens is used. Tube lens focal lengths are 200 mm for Nikon and Leica, 180 mm for Olympus, and 165 mm for Zeiss.

ASI offers many different tube lenses including from major manufacturers, from 70 mm focal length to 400 mm focal length. See the full list here.
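When substituting a non-standard tube lens, the effective magnification scales proportionally with the tube lens focal length. A minimal sketch (the example combination below is illustrative):

```python
# Effective magnification with a substituted tube lens:
# M_eff = M_nameplate * F_TL_actual / F_TL_standard. Focal lengths in mm.

def effective_magnification(nameplate_M, F_TL_standard, F_TL_actual):
    return nameplate_M * F_TL_actual / F_TL_standard

# e.g. an Olympus 20x objective (180 mm standard) behind a 200 mm tube lens:
print(effective_magnification(20, 180.0, 200.0))  # ~22.2x
```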

### Flat-field Correction

### Spherical Correction

### Chromatic Correction

Chromatic correction is done independently in the objectives and tube lenses for Nikon and Olympus, meaning it is possible to mix and match objectives and tube lenses and even use a different focal length tube lens. However, Zeiss and Leica correct chromatic aberrations of their objectives using the tube lens, meaning that their particular tube lenses must be used (at the proper spacing) for optimum chromatic correction. ^{1)}

## Vignetting

Recall that light coming from a point in the sample will be turned into a bundle of parallel rays coming out of the objective lens, and the further the point is from the optical axis the more tilted that bundle will be. That bundle of rays is collected and refocused to a point by the tube lens, **if** the whole bundle makes it to the tube lens (the tube lens is only a certain size, plus there may be spots in the microscope's optical path where rays may hit a side wall or miss a mirror en route to the tube lens). The further from center of the sample the point is, the more pronounced the angle and the more likely some of those rays won't reach the tube lens. This leads to vignetting or darkening around the edge of the image. Also increased distance between the objective and the tube lens (more “collimated space”) leads to increased vignetting. The larger the back aperture of the objective the more this is a concern.

The formula for vignetting-free distance, assuming everything is perfectly aligned, is:

\begin{equation} L_{coll} = (⌀_{vig} - ⌀_{BFP}) \times F_{TL} / ⌀_{sensor} \end{equation}

Where:

$L_{coll}$ is the distance between the back focal plane and the limiting aperture at which vignetting just starts to happen

$⌀_{vig}$ is the diameter of the limiting aperture, which might be a tube lens, filter, or whatever other element will clip the rays. (E.g. 30-32 mm for most of ASI's tube lenses, 30 mm for the inside of ASI's C60-RING, and 23 mm for typical 25 mm emission filters.)

$⌀_{BFP}$ is the diameter of the objective back focal plane, given by $2 \times NA_{obj} \times F_{obj}$ where $F_{obj}$ is the effective focal length of the objective as described in the section about objective lenses.

$F_{TL}$ is the tube lens effective focal length

$⌀_{sensor}$ is the diameter where vignetting just starts to happen on the image side (E.g. 18.8 mm diagonal for standard sCMOS camera full-frame)

Rearranging this equation yields the following two expressions for the diameter of the vignette-free field of view at the sensor ($⌀_{sensor,max}$) and at the sample ($⌀_{sample,max}$):

\begin{equation} ⌀_{sensor,max} = (⌀_{vig} - ⌀_{BFP}) \times F_{TL} / L_{coll} \end{equation}

\begin{equation} ⌀_{sample,max} = (⌀_{vig} - ⌀_{BFP}) \times F_{obj} / L_{coll} \end{equation}

Note that for the same objective lens ($⌀_{BFP}$ and $F_{obj}$), changing the tube lens ($F_{TL}$) will change the magnification and hence the size of the image on the sensor and point on the sensor at which vignetting starts. However it will **not** change the point on the sample at which vignetting starts.
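The vignetting formulas above can be sketched numerically. The objective, aperture, and sensor values below are illustrative examples, not a specific ASI configuration:

```python
# Vignetting estimates from the formulas above. All lengths in mm.
# Illustrative example: an NA 0.7 objective with F_obj = 10 mm, a 200 mm
# tube lens, a 30 mm limiting aperture, and an 18.8 mm sensor diagonal.

NA_obj = 0.7
F_obj = 10.0
F_TL = 200.0
d_vig = 30.0      # diameter of the limiting aperture
d_sensor = 18.8   # sensor diagonal where vignetting starts

d_BFP = 2 * NA_obj * F_obj  # objective back focal plane diameter: 14 mm

# Maximum vignetting-free collimated-space length:
L_coll = (d_vig - d_BFP) * F_TL / d_sensor
print(L_coll)  # ~170 mm

# Conversely, for a given collimated-space length, the vignette-free
# field diameter at the sensor and at the sample:
L = 100.0
d_sensor_max = (d_vig - d_BFP) * F_TL / L
d_sample_max = (d_vig - d_BFP) * F_obj / L
print(d_sensor_max)  # 32.0 mm, i.e. the full sensor is clear
print(d_sample_max)  # 1.6 mm at the sample
```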

Collimated space starts at the back focal plane which usually falls inside the objective housing, generally deeper inside for higher magnification. Olympus now specifies the location of the back focal plane and other manufacturers may provide the location if asked nicely, or alternatively you can measure it. ^{2)}

## 4f Spacing

The name “4f” suggests that lenses are placed so their focal planes are coincident. When two successive lenses are so arranged, the outer focal planes not only preserve position information but also angles as well (subject to the overall magnification). This can be important sometimes, e.g. on the illumination path of a light sheet microscope the galvo tilt is converted into a pure translation by placing the galvo at the focal plane of the scan lens. To keep that pure translation at the sample plane, the next lens and the objective lens need to form a 4f relay; if not then at the sample plane the galvo tilt will result in both translation as well as rotation of the input beam.

4f spacing is not needed on the imaging path of most infinity microscopes because the camera is not sensitive to the angle at which the incoming rays arrive.

## Resolution

The wave nature of light imposes fundamental limitations on the resolution of an optical system. For a self-luminous body, as in fluorescence microscopy, the resolving ability is commonly defined using the Rayleigh criterion: a point source of light can barely be resolved from a neighboring point source when spaced by the Airy disk radius. It can be shown that this distance $d_{xy}$ is given by

\begin{equation} d_{xy}={0.61λ \over \mathrm{NA}_{obj}} \end{equation}

where $λ$ is the wavelength of the light and $\mathrm{NA}_{obj}$ is the numerical aperture of the imaging objective lens. The prefactor varies with the criterion used to define resolution: 0.61 corresponds to the Rayleigh criterion (the most common), 0.515 to the FWHM of a point source, 0.5 to the Abbe resolution, and 0.47 to the Sparrow limit. Lateral resolution is synonymous with $d_{xy}$.

For transmitted light microscopy, the resolving power is also affected by the numerical aperture of the illumination optics.^{3)} For transmitted light using a condenser with numerical aperture $NA_{cond}$ the lateral resolution is given by

\begin{equation} d_{xy,trans}={1.22λ \over (\mathrm{NA}_{obj}+\mathrm{NA}_{cond})} \end{equation}

In the z-direction, the objective lens' resolving power or axial resolution is equivalent to the depth of field^{4)}. The most common expression for the depth of field $d_z$ is

\begin{equation} d_z={2λn \over \mathrm{NA}_{obj}^2} \end{equation}

where $n$ is the index of refraction of the medium in which the object is embedded. Some versions of this equation include a term for the effects of lateral sampling, which we omit for the optics-limited case. The prefactor varies with the criterion used to define resolution: the factor of 2.0 (Abbe) is most often used, 1.772 corresponds to the FWHM of a point source, and 1.22 is occasionally used. Axial resolution is almost always worse than lateral resolution, and the asymmetry is especially pronounced at low $\mathrm{NA}_{obj}$.
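The lateral and axial resolution formulas above can be collected into a small sketch; the values are chosen to reproduce one row of Table 1 below:

```python
# Diffraction-limited resolution from the formulas above (Rayleigh lateral,
# Abbe axial). Wavelength and results in the same length unit (um here).

def d_xy(wavelength, NA_obj):
    return 0.61 * wavelength / NA_obj

def d_xy_trans(wavelength, NA_obj, NA_cond):
    return 1.22 * wavelength / (NA_obj + NA_cond)

def d_z(wavelength, NA_obj, n):
    return 2 * wavelength * n / NA_obj**2

# Reproduce the x10 air row of Table 1 (lambda = 0.52 um, NA_cond = 0.55):
print(round(d_xy(0.52, 0.4), 2))              # 0.79 um
print(round(d_z(0.52, 0.4, 1.0), 2))          # 6.5 um
print(round(d_xy_trans(0.52, 0.4, 0.55), 2))  # 0.67 um
```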

**Table 1** shows the resolving power and depth of field for some example microscope objectives. Note that neither lateral nor axial resolution depends on the magnification; both are set mainly by $\mathrm{NA}_{obj}$.

Table 1: Resolution limits for various microscopes when $λ$ = 520 nm and $\mathrm{NA}_{cond}$ = 0.55

| Magnification | Medium | $\mathrm{NA}_{obj}$ | $d_z$ | $d_{xy}$ | $d_{xy,trans}$ |
|---|---|---|---|---|---|
| x10 | Air (n=1.0) | 0.4 | 6.50 μm | 0.79 μm | 0.67 μm |
| x40 | Air (n=1.0) | 0.65 | 2.46 μm | 0.48 μm | 0.53 μm |
| x40 water | Water (n=1.33) | 0.8 | 2.16 μm | 0.40 μm | 0.47 μm |
| x40 oil | Oil (n=1.51) | 1.4 | 0.80 μm | 0.23 μm | 0.33 μm |
| x100 oil | Oil (n=1.51) | 1.4 | 0.80 μm | 0.23 μm | 0.33 μm |

These above expressions for diffraction-limited lateral ($d_{xy}$) and axial ($d_z$) resolution give us a good idea of the physical limits of the microscope system. Deviations in positions smaller than these resolution limits will be rendered undetectable by diffraction. The resolution obtained in practice can be worse than the diffraction limit due to optical aberrations or improper sampling.

“Super-resolution” microscopy techniques allow one to surpass the diffraction limit, but they incur significant trade-offs and further discussion is a separate topic. Very briefly, super-resolution methods fall into two categories: (1) localization techniques which determine the center point of isolated fluorophores (e.g. PALM, STORM) and (2) structured illumination techniques in which the illumination pattern has a fine structure that is moved, either a periodic grid pattern or a scanned excitation point (e.g. SIM, STED). Localization methods can achieve resolution in the tens of nanometers but require special efforts and long exposures to isolate fluorophores. Structured illumination, in contrast, can gain at most a factor of 2 in resolution (excepting nonlinear methods) but is otherwise more like traditional fluorescence microscopy.

## Spatial Sampling

When images are captured with a digital camera, the camera's dexel (detection pixel) size will also impact the ultimate resolution of the imaging system. Most scientific cameras have dexels a few microns across. The pixel size in the resulting image $p$ is given by

\begin{equation} p={d \over M} \end{equation}

where $d$ is the dexel size and $M$ is the total magnification. For infinity microscopes (the near-universal industry standard), the magnification $M$ is given by the ratio of the tube lens and objective lens focal lengths. ^{5)}

Table 2 shows the resulting pixel size for some example sensor and objective lens combinations, assuming nameplate magnification.

Table 2: Digital camera resolution

| Magnification | Dexel Size | Pixel Size | Nyquist-limited $d_{xy}$ |
|---|---|---|---|
| x40 | 16 μm | 0.40 μm | 0.80 μm |
| x100 | 16 μm | 0.16 μm | 0.32 μm |
| x40 | 10 μm | 0.25 μm | 0.50 μm |
| x100 | 10 μm | 0.10 μm | 0.20 μm |
| x40 | 6.5 μm | 0.163 μm | 0.33 μm |
| x100 | 6.5 μm | 0.065 μm | 0.13 μm |

The Nyquist criterion says that the smallest resolvable feature can be no smaller than twice the pixel size, so to be optics-limited the pixel size should be at most half of $d_{xy}$. However, excessive oversampling will increase the data size without adding additional true information. For example, using an objective with NA 1.0 and light with a wavelength of 520 nm, $d_{xy}$ is ~320 nm. Thus if the pixel size is larger than 160 nm in the final image then the sampling, rather than the optics, will limit the resolution. Suppose further the camera dexel is 6.5 μm (typical sCMOS): with 40x magnification (162.5 nm pixels) there will be slight undersampling, with 60x magnification (108 nm pixels) there is 50% oversampling, and with 100x magnification (65 nm pixels) there is huge oversampling. Similarly, when collecting 3D stacks the z-step should be less than half the depth of field ($d_z$); otherwise the attained axial resolution will be limited by sampling instead of the optics.
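The worked example above can be sketched in a few lines (same assumptions: NA 1.0 objective, 520 nm light, 6.5 μm dexels):

```python
# Pixel size at the sample versus the Nyquist limit, reproducing the
# worked example above: NA 1.0, 520 nm light, 6.5 um sCMOS dexels.

wavelength = 0.52   # um
NA = 1.0
dexel = 6.5         # um

d_xy = 0.61 * wavelength / NA   # Rayleigh resolution, ~0.32 um
nyquist_pixel = d_xy / 2        # maximum optics-limited pixel size, ~0.16 um

for M in (40, 60, 100):
    pixel = dexel / M   # pixel size in the image, um
    status = "undersampled" if pixel > nyquist_pixel else "oversampled"
    print(f"{M}x: {pixel * 1000:.0f} nm pixels ({status})")
```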

^{1)}

^{2)}

^{3)}

^{4)}

^{5)}