AutoQuant Feature Overview

Easily import to and export from AutoQuant using the latest AutoQuant CONNECT technology, supported on a platform with robust performance for multi-time-point, multi-channel image sets in a myriad of supported file formats. Import any folder of TIFF files using the sequence detector and set builder, then export for review as 5D image sets or playable movie renderings.

Image Correction 
Prepare multi-dimensional image sets for deconvolution and analysis with an array of intuitive correction tools. Stabilize your sample through advanced set alignment, reshape image sets to correct for inverted dimensions, adjust for photobleaching, and treat hot pixels before getting started.

Deconvolution Modalities
The most common optical microscopy modalities are available for each of the industry-leading image deconvolution algorithms. Each algorithm adjusts to handle the unique Point Spread Functions (PSFs) created by each imaging method for optimal deconvolution results.

Deconvolution Algorithms
AutoQuant delivers an extensive selection of algorithms to meet the needs of any microscopist. Retrieve better data from your images using a complete suite of 2D and 3D algorithms, including the industry’s best Blind Deconvolution. Use Adaptive Constrained Iterative methods for fully automated tasks, or fix your PSF throughout the sample to compare against similar image sets.

PSF Options
An accurate Point Spread Function is the key to successful quantitative deconvolution results. In addition to utilizing measured image stacks of sub-resolution beads for input, AutoQuant also generates stunning theoretical models using algorithms such as the newest Gibson & Lanni physics-based PSF generation.

Deconvolution Tools
Improve the PSF by auto-detecting and correcting any Spherical Aberration present, remove harmful noise prevalent in Confocal image sets, and ensure your sampling distance conforms to the Nyquist Criterion. Before and after deconvolution, tools are available to help restore vital image data quickly and efficiently.

Additional Algorithms
Deblurring methods are fast and effective tools for improving the contrast of an image when quantitative results are not required. Each method is available in the same intuitive interface AutoQuant is known for.

Visualization
Display stunning visual renderings of multi-dimensional image data in a variety of views and projections. Display multiple viewers simultaneously and even sync the views for visual comparison. Share results by generating 5D movies and exporting them in AVI format.

Data Analysis 
Compare raw and deconvolved data to perceive the restoration of image sets using Line Profile comparisons, set statistics and object measurements. Display and export results for presentations and further analysis.

Application Specific Analysis
Count and Track moving objects, measure the colocalization of fluorescent probes, discover the concentration of cellular samples using ratiometrics, and study protein-to-protein interactions using commonly accepted FRET calculations.

   AutoQuant Connect Workflow Accelerator

Seamlessly move your images between software applications

AutoQuant Connect is a sophisticated workflow accelerator that enables the automatic import and export of data to partnering software applications for additional analysis or visualization. Using AutoQuant Connect, you can import a "CONNECT Image Set" acquired in a partner software application (such as Image-Pro Plus, with more partners being added frequently), which includes the acquired image set, its meta-data, complete deconvolution settings, instructions, and completion commands. Upon receiving the "CONNECT Image Set", AutoQuant queues all subsequent sets in the batch processing viewer, ready to launch the next task. Each set is deconvolved using the defined settings with AutoQuant's industry-leading algorithms. Once the deconvolution task is complete, the image set, along with complete meta-data, is returned to the location of your choice: the connected partner application, a custom folder, or AutoQuant itself for review, visualization, or analysis.

Go to the AutoQuant Connect Partners page for information about becoming a partner and a list of current partners.

   All New High Resolution Interface

Software that looks good and performs even better.

An ALL NEW high-resolution look and feel accompanies the latest version of AutoQuant, along with intuitive new icons for every available tool. With the most relevant tools displayed right up front, you gain productivity and accessibility. Take the next step: customize your new interface, take advantage of workflow acceleration, and utilize the fastest, most extensive algorithms available today.

   Custom Gibson-Lanni Modeling

Using physics based mathematics for accurate optical modeling

This established Point Spread Function (PSF) modeling algorithm is now available for theoretical PSF determination as well as Spherical Aberration (SA) detection and correction, customized to AutoQuant’s exacting standards. The result is higher-quality PSF modeling.

   Auto Detect Unidentified Sample Dimensions

Accurate system constraints deliver accurate image results

Using known parameters, AutoQuant can aid the microscopist in entering unidentified optical and sample constraints, such as the Refractive Index (RI) of the sample being imaged and the distance of that sample from the cover slip. These two parameters are now exposed clearly in the Data Manager and applied in the core of the deconvolution algorithm. Improved accuracy in these categories leads to better results. When this information is not available, an automatic calculation derives both values from the accuracy of the known parameters.

   Sync Multiple Line Profiles and Measurements for comparison

Validate your results with a quantitative examination of reassigned intensities

Compare original image sets with deconvolved results by easily mapping a line profile to identical pixels on a second resultant image. Display the line distance in microns, measure the angle of the line to the horizontal, auto scale the intensities and even view in a log scale if values are drastically different. Export the comparison data in .csv format for viewing in Excel.

Most users employ this tool to achieve quantitative comparison data. Observe the difference between the intensity profile of the original image and the sharper peaks and reduced background of the deconvolved result.
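A comparison of this kind is straightforward to sketch. The following minimal Python example (hypothetical helper names, not AutoQuant code) extracts the same pixel row from a raw and a deconvolved image and writes the paired profiles as .csv text suitable for Excel:

```python
import csv
import io

import numpy as np

def line_profile(image, row):
    """Return the intensity profile along one pixel row."""
    return np.asarray(image, dtype=float)[row]

def export_profiles_csv(raw, deconvolved, row):
    """Write paired raw/deconvolved profiles as CSV text (e.g. for Excel)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["pixel", "raw", "deconvolved"])
    for i, (a, b) in enumerate(zip(line_profile(raw, row),
                                   line_profile(deconvolved, row))):
        writer.writerow([i, a, b])
    return buf.getvalue()
```

The same pixel indices are used for both images, which is what makes the peak-by-peak comparison quantitative.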

   ROI Deconvolution Preview

A great way to verify your settings quickly and effortlessly

For larger image sets it can be difficult to quickly test the settings of your deconvolution run. This new feature, available for both the 2D and 3D deconvolution algorithms, previews using all the active settings in the relevant algorithm panels. The user is prompted to draw a small ROI on the image set, and the accelerated preview then begins instantly. By using the current settings in the deconvolution panel, this result gives an immediate look at what to expect from the full-stack deconvolution.

The benefit is better deconvolution results every time because you can ensure the result will be the way you need it.

   Deconvolution and Optical settings files now saved in .cfg format

Recreate identical results on any X3 system using archived configuration files

From the 2D and 3D deconvolution dialogs, all the active parameters can now be saved to a file for use on any system running AutoQuant X3. This functionality is also available for typical optics parameters. Set up your own custom lenses, dyes, oils, deconvolution settings, and much more for the system constraints you use frequently, saving time and effort. Enter the system parameters once, then simply recall the settings whenever you need them or save them with the image sets.

   Faster Deconvolution Times

Less waiting, more working

AutoQuant X3 is the fastest AutoQuant ever. Optimized math libraries, reduced writing of temporary files, and algorithm improvements are just some of the system-level improvements that deliver results faster. With over 3X the speed of previous versions, AutoQuant X3 can deconvolve larger images, and more images, in less time. This saves time, money, and resources when you have large numbers of images.

   Faster loading of images into the Batch Processing queue

Makes adding image sets to your queue faster than ever

The AutoQuant batch processing viewer is a helpful tool that queues all your deconvolution jobs and alignment procedures to be run sequentially upon launch. This intelligent technology launches the next task instantly, aware of the size of each job and its distribution across the available cores and threads.
This latest version provides an improved loading process to queue the sets faster and error-check them more accurately before the process launches. These improvements reduce loading times for all image sets but demonstrate the largest benefit for users working with very large image sets.

   New additions to the Objective, Camera, and Dye Lists

Many new entries have been added to keep up-to-date with the latest hardware and dyes on the market. Browse the list by simply typing the first letter of the item you’re searching for and then navigate the list of similar names. If your item does not exist among the hundreds of options available simply add a custom item to be saved and used at a later time.

The AutoQuant Advantage

Top 10 Reasons to Use AutoQuant

1. One-Click Accelerated Workflow
AutoQuant is the only deconvolution software package with a workflow accelerator directly linking acquisition and analysis software partners to our standalone platform. This makes importing and exporting image sets with complete meta-data a one-click operation, saving time and effort.

2. Most Complete File Format Support
Over 24 file formats are supported for reading and over 17 for writing. Standard and proprietary formats that include multiple channels, multiple time points, large Z stacks, and large XY dimensions round out a complete selection, all compatible with deconvolution and visualization.

3. Most Extensive Algorithm Selection

  • AutoQuant offers the most complete suite of algorithms available. Both 2D and 3D adaptive algorithms use the most advanced methods of image restoration: constrained iterative methods with Maximum Likelihood Estimation (MLE), adaptive or fixed PSF modeling, and noise reduction.
  • All algorithms are available for Widefield, Brightfield, Spinning Disk & Laser Scanning Confocal, and 2 Photon microscopy modalities.
  • Additional algorithms are available including 3D Inverse Filter and deblurring methods such as No Neighbors and Nearest Neighbors.

4. Intuitive 4-step Process
No confusing multi-screen wizards are necessary; only 4 simple steps to review and run your image restoration. AutoQuant automatically checks for meta-data accuracy and settings conflicts so you know that the results will turn out as expected. For larger image sets, an ROI deconvolution preview is even available to ensure settings are tweaked just right.

5. Best Selection of Point Spread Function (PSF) options

  • AutoQuant provides options at each step to improve the key ingredient to a successful deconvolution, a good PSF.
  • In step one you can choose to use the PSF as an unaltered model throughout each iteration, known as a FIXED deconvolution, or use the superior adaptive method known as BLIND deconvolution, where the PSF improves over each iteration as the image improves, accounting for every pixel of the volume and not just the single region where the PSF was imaged.
  • In step two, AutoQuant performs these tasks with either a single MEASURED PSF cropped from an image stack, a MEASURED PSF generated and refined from a vast array of beads within an image stack, or a THEORETICALLY generated PSF based on the optical parameters of the microscope and sample.

6. Automatic Spherical Aberration Detection and Correction of PSF

  • AutoQuant doesn’t stop at measuring or generating a good theoretical Point Spread Function (PSF) model. The software will also automatically detect irregularities and asymmetries, known as Spherical Aberrations (SA), in the desired PSF using a customized version of the Gibson-Lanni modeling algorithm.
  • Once detected, the SA can be corrected for, and therefore eliminated from the image stack during deconvolution. Unlike other software, which always requires user input, AutoQuant removes the guesswork by automatically calculating SA based on a comparison between the theoretical model and the reality of the acquired image stack.

7. Automatic Noise Removal for Improved Confocal
Noise from the optical system and detector can be problematic for obtaining an accurate result, but AutoQuant uses advanced Maximum Likelihood Estimation (MLE) calculations and known PSF models to iteratively remove noise before it can be mistaken for signal originating from the sample. This problem is most common in confocal imaging due to the low-light conditions and sensitive recording equipment being used. Unlike some software that only smooths the signal, simply masking the noise, AutoQuant takes an intelligent approach, completely removing the noise from the equation and working only with true sample signal before completing restoration.

8. Easy-to-Use Spacing Calculator XY & Z
The best results will always originate from properly acquired samples, so AutoQuant provides helpful spacing calculators to ensure the spacing between Z planes holds true to the Nyquist Criterion. Once the image set is opened in AutoQuant, another calculator helps ensure the XY pixel size is accurately measured based on the magnification, zoom, and camera settings. Two simple tools to make life easier for everyone.
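The kind of calculation such a spacing calculator performs can be sketched with the common textbook Nyquist formulas for widefield imaging (this is an illustration of the idea, not AutoQuant's exact calculator):

```python
import math

def nyquist_spacing(wavelength_nm, na, ri):
    """Approximate Nyquist sampling intervals (nm) for widefield imaging.

    Uses the common textbook formulas: lateral = lambda / (4 * NA),
    axial = lambda / (2 * n * (1 - cos(alpha))), with sin(alpha) = NA / n.
    """
    alpha = math.asin(min(na / ri, 1.0))          # half-aperture angle
    lateral = wavelength_nm / (4.0 * na)
    axial = wavelength_nm / (2.0 * ri * (1.0 - math.cos(alpha)))
    return lateral, axial

# Example: GFP-like emission (~520 nm) with a 1.4 NA oil objective (n = 1.515)
lat, ax = nyquist_spacing(520, 1.4, 1.515)
```

Z steps larger than the axial value undersample the volume and degrade deconvolution accuracy.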

9. Well Published and Referenced
Over a decade of cutting-edge research, development, and implementation has kept AutoQuant at the forefront of advanced deconvolution technology. Deeply rooted in innovative imaging and restoration research, AutoQuant continues to improve upon the published results of accomplished investigators. A simple Google Scholar search for the term AutoQuant or deconvolution will deliver thousands of examples of successful users who have improved their fluorescent images with quantitative deconvolution technology.

10. Free Online Tools for Support and Education 
An extensive community of AutoQuant users is readily available through email mailing lists and online forums, where experts can answer questions and give helpful advice for free. Additionally, Media Cybernetics continues to improve the ever-growing list of free educational movies, webinars, and live training courses for the support and education of users at all levels.


AutoQuant X3 System Requirements

Minimum Requirements:

  • Processor: Intel® dual core processor
  • RAM: 3GB memory
  • Free Disk Space: 2GB on installation drive + free space for images (20GB+)
  • OS: Windows XP (32-bit) or Windows 7 (32- or 64-bit)
  • Graphics Card: NVIDIA graphics card (for 5D Viewer performance)

Recommended Requirements:

  • Processor: Intel quad-core 64-bit processor (current recommended model, Core i7 2600) or better
  • RAM: 8GB memory or higher
  • Free Disk Space: 2GB on installation drive + dedicated drive (500GB+)
  • OS: Windows® 7 (64 bit)
  • Screen Resolution: 1600x1200 screen resolution
  • Graphics Card: NVIDIA Quadro 600 or better

Super Number Cruncher:

  • Processor: Dual Intel quad-core 64-bit processors (current recommended model, 2X Xeon 5600 series or better)
  • RAM: 48GB memory or higher
  • Free Disk Space: 2GB on installation drive and one or more dedicated SATA 6Gb/s data drives (2TB+)
  • OS: Windows 7 (64-bit) 
  • Graphics Card: NVIDIA Quadro 4000 or better

Specifications are subject to change. Please contact Media Cybernetics or your local reseller for the latest features.

Six-label Imaging of Brain Tissue: Spectral & Multiphoton Imaging with Deconvolution

Karen L Smith 1, Natalie Dowell-Mesfin 1, and Richard Cole 1,2
1 NYS Department of Health, Biggs Laboratory - Wadsworth Center, Albany NY 12201
2 Dept. of Biomedical Sciences, School of Public Health State University of New York, Albany, NY 12201 

The goal of this project was to selectively image six biological structures within a 100 μm thick rat brain section using immunohistochemistry and multiphoton imaging. Channel separation and deconvolution algorithms were used to minimize spectral overlap, increase resolution, and maximize the signal-to-noise ratio (S/N). By precisely selecting fluorescent probes and adapting existing immunohistochemical methods it was possible to spectrally resolve six channels. These methods provide the capability of simultaneously identifying six different spatially separated biological structures.

Imaging Software: Media Cybernetics AutoQuant X3 & Image-Pro Plus 7 software were used for this study.

Methods and Results: 
An advanced six-day immunohistochemical protocol was necessary to achieve an acceptable result. The sequence of the primary and secondary antibodies (AB), in addition to the stains, was found to be critical for the successful labeling. The AB and stains used in this protocol were: laminin for vascular labeling (Primary 1:100 Dylight, Jackson ImmunoResearch-West Grove PA), Iba-1 for microglia labeling (primary 1:800 Wako Chemicals, Richmond VA), neurofilament for axon labeling (Primary 1:100 NFM Life Technologies Corp. Grand Island NY), GFAP for astrocyte labeling (Primary 1:1000), Hoechst for nuclear labeling (1:1000), and NeuroTrace which labels neuronal cell bodies (1:150 Life Technologies Corp).

In brief, on day 1 the neurofilament (mouse) AB was applied, followed on day 2 by an Alexa 546 anti-mouse secondary (Life Technologies Corp). On day 3, primaries for GFAP (rat) and Iba-1 (rabbit) were applied. On day 4, the secondaries Alexa 594 anti-rat (Zymed/Life Technologies Corp) and Alexa 514 anti-rabbit, along with both Hoechst 33342 (Sigma) and NeuroTrace 640/660, were applied. On day 5, the laminin Dylight (488) was applied. Day 6 consisted of washes and mounting sections in ProLong Gold (Life Technologies Corp).

Spectral images were collected using a Leica TCS SP5 scanning laser confocal microscope. The images were taken from a region located within the ventral striatum. This region was selected since it contains a dense granular cell layer and a highly vascular network. Images were collected using a 20X objective (0.7 N.A.). Individual Z slices were taken at a step size of 0.42 μm, and the stack represented 64.6 μm in total. The image was collected at 1024x1024 with a voxel resolution of 302.5 nm in the x and y dimensions and 419.6 nm in the z dimension. For two channels, Leica spectral unmixing software was used to fully isolate the two individual channels (Iba-1 microglia and the Dylight laminin channel). Color manipulations (pseudo-coloring, brightness & contrast adjustments) were performed using Image-Pro Plus 7 post deconvolution.

Deconvolution can improve both the detectability and the signal-to-noise ratio of even confocal images. This is especially true in the Z dimension, where the pinhole of the confocal is not nearly as effective. In order to achieve the highest-quality, aberration-free results, each channel was processed separately.

The spherical aberration and channel refractive index were determined from the sample using AutoQuant's "detect" function prior to starting deconvolution. A filtered original was used as the starting image, and 30 iterations (blind method) were performed.

Unprocessed, maximum intensity projections (XY & ZY) from the confocal Z series (below) demonstrate the reduced resolution in the Z dimension. The red box is a higher magnification view of the area in the white box. The grey line in the red box is the pixel row that was used to generate the RGB line scan. The vertical tick denotes the start of the ZY projection and corresponds to the green-boxed area on the graph.

  Before AutoQuant

The deconvolved image (below) clearly shows a substantial improvement in the signal-to-noise ratio, yielding clear cellular and process boundaries. This is especially noticeable in the ZY projection. The line scan of the ZY projection has more defined peaks, and thus more defined structures, when compared to the same area in the above graph.

  After AutoQuant

This study illustrates the power of using advanced immunohistochemistry, multi-channel microscopy, and image processing to separately label and delineate six distinct cellular features, providing the foundation for further functional biological quantification.

The authors acknowledge the use of the Wadsworth Center's Advanced Light Microscopy & Image Analysis Core Facility for this work.

The New York State Department of Health does not endorse any of the products described in this article.

Quantifying Microvascular Parameters Using AutoQuant and 3D Constructor®

Researchers: Kim Douma M.Sc. Ph.D., Wim Engels Ph.D., and Marc A.M.J. van Zandvoort Ph.D. - Maastricht University, Maastricht, the Netherlands

Researchers at Maastricht University are utilizing multi-photon laser scanning microscopy (MPLSM) to visualize and quantify the microvascular features of cardiovascular disease and cancer in various animal models. Their experimental protocol employs antibody- and protein-conjugated fluorophores to highlight the (angiogenic) microvasculature from the surrounding heart/tumor tissue.

The Dutch research group also uses MPLSM as a validation tool for whole-body magnetic resonance imaging (MRI) in assessing angiogenic activity and microvessel density in tumor-bearing mice and mice with myocardial infarction.1 The group’s research requires a high-quality, high-speed microscopy setup that offers sufficient penetration depth in vital tissue, both in vivo and ex vivo, as well as powerful data processing and analysis software.

Imaging Hardware

Two primary setups are used to perform vital tissue imaging. Each is operated in two-photon mode. The first setup comprises a Nikon Eclipse E600FN microscope embedded in a Bio-Rad Radiance 2100 MP system equipped with a Spectra-Physics Tsunami Ti:sapphire pulsed laser. The second comprises a Leica TCS SP5 microscope embedded in a Leica TP system equipped with a Coherent Chameleon Ti:sapphire pulsed laser.

Imaging Software Methods

Using MPLSM, 3D data stacks were acquired by optical sectioning, that is, the in-depth sequential imaging of excised and fluorescently labeled tumor and heart tissue. These 3D data sets were next deconvolved using AutoQuant X2 in 3D blind deconvolution mode. (Refer to Figure 1.) Deconvolution parameters were optimized using fluorescent microspheres of known diameter (4.43 µm). Deconvolution reduced the microsphere volume from 100.11 ± 3.46 µm3 to 42.41 ± 4.02 µm3 (mean ± standard deviation), which is close to the theoretical volume (45.52 µm3).
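The theoretical volume quoted above follows directly from the sphere-volume formula; as a quick check (our arithmetic, not part of the study's code):

```python
import math

def sphere_volume(diameter_um):
    """Volume (in um^3) of a sphere with the given diameter (in um)."""
    radius = diameter_um / 2.0
    return (4.0 / 3.0) * math.pi * radius ** 3

# 4.43 um calibration microspheres -> ~45.52 um^3, matching the text
volume = sphere_volume(4.43)
```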

Imaging Software Results

The non-deconvolved data sets of the microvasculature were then imported into Image-Pro Plus 6.3 for further analysis. In conjunction with embedded and self-written macro-files, 3D Constructor 5.0 was utilized to process the acquired 3D data sets and quantify microvessel diameter, density, and length, as well as the number of branching points. (See Figure 2.) These features are typically used to characterize neovascularization and may be used to follow pathological development and assess therapeutic efficacy.

  Figure 1. Effect of blind deconvolution on the microvessel density at the periphery (white lines) and core (black lines) of LS174T tumors grown subcutaneously in Swissnu/nu mice. Deconvolution reduced the values of the microvessel density to values that comply with literature.2,3

  Figure 2. Left: 3D data set of tumor microvasculature stained with FITC-labeled monoclonal antibody against CD31. Middle: Deconvolved microvasculature covered with an isosurface to determine its fractional volume (microvessel density). Right: Isosurface reduced to skeletonized microvasculature to determine total microvessel length and number of branching points.


Background of Deconvolution



Generally speaking, outside the field of light microscopy, deconvolution is an engineering discipline that refers to the retrospective improvement of fidelity in electronic signals such as voice, music, radar, and pictures. In light microscopy, deconvolution is used for 3D widefield fluorescence imaging, which uses an ordinary microscope and optical sectioning techniques [Holmes, 1992, 1995].

AutoDeblur is Media Cybernetics' deconvolution software product, which uses the AutoQuant deconvolution algorithms. It is used for more than 3D widefield fluorescence: applications include resolution improvement of 2D images, improvement of confocal micrographs (2D and 3D), transmitted-light brightfield (2D and 3D), spinning disk microscopes (2D and 3D) such as the Yokogawa and Perkin-Elmer systems, and multiphoton microscopy (2D and 3D). Deconvolution improves the clarity of images by improving resolving power, removing out-of-focus haze, and eliminating noise such as that caused by low light levels and electronic thermal noise from video cameras.

Deconvolution is based on the equation g(x) = f(x)*h(x) + n(x), where x represents the 3D spatial coordinate, * represents convolution, f(x) represents the ideal image stack of perfect fidelity, h(x) represents the optical point-spread function (PSF, i.e., the diffraction pattern) of the microscope [Holmes], n(x) represents noise due to electronics and quantum photons, and g(x) represents the acquired image stack. Deconvolution recovers an estimate of f(x) from g(x).
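This imaging model is easy to illustrate with a toy 1-D sketch (the 3-D case is analogous; the function and variable names are ours, not AutoDeblur's):

```python
import numpy as np

def forward_model(f, h, rng=None):
    """Simulate g = f*h + n: blur the ideal signal f with the PSF h,
    then optionally add photon (Poisson) noise."""
    g = np.convolve(f, h, mode="same")
    if rng is not None:
        g = rng.poisson(g).astype(float)
    return g

f = np.zeros(64)
f[32] = 100.0                     # a point source of 100 counts
h = np.array([0.25, 0.5, 0.25])   # a toy 3-tap PSF (sums to 1)
g = forward_model(f, h)           # noiseless blurred image
```

The blurred point source is exactly the PSF scaled by the source intensity, which is why imaging sub-resolution beads reveals the PSF.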


Types of Deconvolution and Deblurring


Iterative Constrained

These methods iteratively update the recovered image. This is done by reblurring the current estimate according to f(x)*h(x) and comparing this blurred picture against g(x). The difference is computed and then used to update the estimate of f(x), and the reblurring process is repeated. These methods are constrained in that the solution f(x) must be positive, since f(x) represents a light intensity and intensities are positive by definition. The earliest Iterative Constrained deconvolution was introduced to light microscopy by Agard [1989], who used the Jansson-van Cittert algorithm, originally developed for the deconvolution of spectra.
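The reblur-compare-correct-clip cycle can be sketched with a simplified 1-D Jansson-van Cittert iteration (a toy illustration with a fixed relaxation factor, not AutoQuant's implementation):

```python
import numpy as np

def jansson_van_cittert(g, h, iterations=50, alpha=1.0):
    """Toy 1-D Jansson-van Cittert iteration.

    Reblur the current estimate, compare against the measured image g,
    apply an additive correction, and clip to non-negative intensities
    (the positivity constraint)."""
    f = g.copy()
    for _ in range(iterations):
        reblurred = np.convolve(f, h, mode="same")
        f = f + alpha * (g - reblurred)   # correct by the residual
        f = np.clip(f, 0.0, None)         # intensities must be positive
    return f
```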


Maximum Likelihood Deconvolution

Maximum likelihood deconvolution is a more recent improved subset of Iterative Constrained algorithms. The iteration is designed based upon a probability model. The mathematical solution is the f(x) which has the highest probability of being correct. The mathematics of this algorithm is based upon the behavior of quantum photon emissions and diffraction. Among all known approaches, the Maximum Likelihood approach has proved to provide the best quality images [Verveer]. T. Holmes (Cofounder and CEO of AutoQuant) was the first to introduce Maximum Likelihood Deconvolution to optical imaging while at the U. of Missouri [Holmes, 1988]. AutoDeblur is the first known commercial product to use this method.
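The best-known member of this family is the Richardson-Lucy iteration; a toy 1-D version (our sketch of the multiplicative maximum-likelihood update, not AutoQuant's implementation) looks like this:

```python
import numpy as np

def richardson_lucy(g, h, iterations=50):
    """Toy 1-D Richardson-Lucy (maximum-likelihood) deconvolution.

    The multiplicative update seeks the f that maximizes the Poisson
    likelihood of the measured image g given a known PSF h."""
    h_flipped = h[::-1]
    f = np.full_like(g, g.mean())                 # flat starting guess
    for _ in range(iterations):
        reblurred = np.convolve(f, h, mode="same")
        ratio = g / np.maximum(reblurred, 1e-12)
        f = f * np.convolve(ratio, h_flipped, mode="same")
    return f
```

Because the update is multiplicative and starts positive, the estimate stays non-negative automatically, which is one reason the approach suits photon-counting data.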


Blind Deconvolution

Blind deconvolution is a subset of Iterative Constrained algorithms which produce an estimate of h(x) concurrently with f(x). It does not need the PSF h(x) to be measured. Other iterative constrained algorithms require h(x) to be measured by acquiring data from subresolution fluorescent beads.

Blind deconvolution was first introduced to the imaging community outside of light microscopy by Ayers and Dainty [Ayers], and was introduced to light microscopy by T. Holmes while at Rensselaer Polytechnic Institute [Holmes, 1992]. AutoDeblur uses blind deconvolution and maximum likelihood estimation (MLE) together, and the AutoQuant deconvolution algorithms provide the first and only blind deconvolution product. Vendors sometimes make inaccurate claims that they are using blind deconvolution when they are really using a conventional Iterative Constrained deconvolution with a theoretically calculated PSF rather than a measured one. It is a blind deconvolution only if the algorithm produces the PSF from information within the data set g(x). This is done by first assuming an h(x), then (1) estimating which f(x) could have caused g(x). This calculation is followed by (2) estimating which h(x) could have caused g(x) from the estimated f(x); steps (1) and (2) are then repeated again and again. It is believable that the PSF information is in the data because one often sees light spreading from fine point or line structures in the data, and this spreading makes up the PSF.
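The alternating steps (1) and (2) can be sketched with paired Richardson-Lucy updates in the style of Ayers and Dainty. This toy 1-D example (circular convolutions via FFT; entirely our illustration, since the real algorithm adds further constraints on the PSF) alternately refines the image and PSF estimates:

```python
import numpy as np

def rl_update(est, other, g):
    """One circular Richardson-Lucy multiplicative update of `est`,
    holding `other` fixed, against the measured image g."""
    reblurred = np.real(np.fft.ifft(np.fft.fft(est) * np.fft.fft(other)))
    ratio = g / np.maximum(reblurred, 1e-12)
    corr = np.real(np.fft.ifft(np.fft.fft(ratio) *
                               np.conj(np.fft.fft(other))))
    return np.maximum(est * corr / max(other.sum(), 1e-12), 0.0)

def blind_deconvolve(g, iterations=30):
    """Alternate step (2) (update h from f) and step (1) (update f from h)."""
    n = len(g)
    f = g.copy()
    h = np.zeros(n)                       # initial PSF guess: small blob
    h[0], h[1], h[-1] = 0.5, 0.25, 0.25   # centered at index 0 (circular)
    for _ in range(iterations):
        h = rl_update(h, f, g)            # (2) which h could have caused g?
        h /= h.sum()                      # keep the PSF normalized
        f = rl_update(f, h, g)            # (1) which f could have caused g?
    return f, h
```

The PSF normalization each cycle is one simple example of the constraints needed to avoid the trivial solution (f = g with a delta-function PSF).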


Linear methods


Inverse Filter

Linear methods run quickly on a computer because they are not iterative. The Inverse Filter uses an approximate direct linear inversion of the equation g(x) = f(x)*h(x). This inversion is carried out according to f̂(x) = h⁻¹(x)*g(x), where h⁻¹(x) is the inverse-filter impulse response, designed by a minimum mean-square-error criterion [Castleman, Ch. 14]. This criterion implies that among all possible Inverse Filters it provides the smallest difference between the perfect f(x) and the estimate f̂(x). It was first used for light microscopy by Agard [1989].
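A regularized form of this idea is easy to sketch in the Fourier domain; the small constant k below stands in for the minimum mean-square-error design (our illustration, using a circular 1-D model):

```python
import numpy as np

def inverse_filter(g, h_full, k=1e-3):
    """Approximate inverse filtering in the Fourier domain.

    F_hat = G * conj(H) / (|H|^2 + k); the constant k keeps the filter
    stable at frequencies where the PSF passes little or no signal."""
    G = np.fft.fft(g)
    H = np.fft.fft(h_full)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(F_hat))
```

Because the whole operation is one filtering pass, it is far faster than the iterative methods, at the cost of amplifying noise near the PSF's spectral zeros.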


Nearest Neighbors and No Neighbors Deblurring

The Nearest Neighbor algorithm falls under the category of a deblurring algorithm and by definition is not really a deconvolution, because it is not based upon an estimation of f(x). Instead, it is based upon improving the picture by sharpening the edges of structures. It is a specific type of sharpening filter called an unsharp mask [Russ]. It works by reblurring the image and then subtracting a fraction of the reblurred image from the blurry one. In AutoDeblur it works according to f_k(x) = g_k(x) - c [g_(k-1)(x) + g_(k+1)(x)] * h(x), where the subscript k indicates the optical slice number. One 2D slice is sharpened at a time. The two slices immediately above and below it are reblurred, and a fraction c of these reblurred slices is subtracted. With No Neighbors deblurring, the k-th slice is reblurred and subtracted from itself according to a similar formula.
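A toy version of this slice-by-slice scheme (our illustration; the slice blur h and the fraction c below are arbitrary choices) subtracts a fraction of the reblurred neighboring slices from each slice:

```python
import numpy as np

def nearest_neighbors_deblur(stack, h, c=0.45):
    """Sharpen each 1-D 'slice' of a stack by subtracting a fraction c
    of its reblurred neighbors (a nearest-neighbors unsharp mask)."""
    out = stack.astype(float).copy()
    for k in range(1, len(stack) - 1):
        haze = np.convolve(stack[k - 1] + stack[k + 1], h, mode="same")
        out[k] = np.clip(stack[k] - c * haze, 0.0, None)
    return out
```

Subtracting the blurred neighbors removes much of the out-of-focus haze, which is why the method improves contrast even though it does not estimate f(x).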


The early ideas of this approach were introduced to light microscopy by Castleman [Castleman, Ch. 17] and brought to fruition by Agard [1984].



- Agard, D., Optical Sectioning Microscopy: Cellular Architecture in Three Dimensions, Annual Review of Biophysics and Bioengineering, 13: 191-219, 1984.

- Agard, D., Hiraoka, Y., Shaw, P., Sedat, J., Fluorescence Microscopy in Three Dimensions, Methods in Cell Biology, 30: 353-377, 1989.

- Ayers, G.R., Dainty, J.C., Iterative Blind Deconvolution Method and Its Applications, Optics Lett., 13: 547-549, 1988.

- Castleman, K., Digital Image Processing, Prentice-Hall, 1979.

- Holmes, T.J., Maximum Likelihood Image Restoration Adapted for Noncoherent Optical Imaging, JOSA-A, 5(5): 666-673, 1988.

- Holmes, T., Bhattacharyya, S., Cooper, J., Hanzel, D., Krishnamurthi, V., Lin, W., Roysam, B., Szarowski, D., Turner, J., Light Microscopic Images Reconstructed by Maximum Likelihood Deconvolution, Ch. 24, Handbook of Biological Confocal Microscopy, J. Pawley, Plenum, 1995.

- Holmes, T., Liu, Y., Image Restoration for 2D and 3D Fluorescence Microscopy, Visualization in Biomedical Microscopies, A. Kriete, VCH, 1992.

- Holmes, T., Blind Deconvolution of Quantum-Limited Incoherent Imagery, JOSA-A, 9: 1052-1061, 1992.

- Russ, J.C., Image Processing Handbook, CRC, 1994.

- Verveer, P., Computational and Optical Methods for Improving Resolution and Signal Quality in Fluorescence Microscopy, PhD Thesis, Technische Universiteit Delft, 1998.


Z-Step Measurement

The following recommendation is for all AutoDeblur resellers and customers.

Media Cybernetics recommends that the z-step size of the microscope be measured upon installation and verified weekly thereafter. Devices that measure this z-step are available from Mitutoyo. The part numbers to order are 519-620A (High Resolution Mini-checker), which is the electronics box, and 519-899 (Lever Head), which is the transducer.

The microscope stage must be modified by a machine shop in order to install the lever head. Typically this modification involves drilling and tapping some holes for the mounting screws of the lever head, and for mounting a flange on the stage, where the lever end rests. A conceptual schematic diagram is shown below. The actual setup will depend upon the microscope model. The cost of this device must be quoted from the manufacturer. 

This z measurement is important because the image collection relies on satisfying the optical sectioning sampling criterion, where the z-steps ought to be at least as small as the depth of field. If the focusing mechanics of the microscope malfunction, the z-step will be wrong, and this device will let the customer detect the malfunction immediately. The device provides an independent verification of the acquisition software. Although commercial acquisition software is calibrated to provide the z-step size automatically, for good laboratory practice it is important to regularly verify this calibration.

To carry out the verification, set up a 3D volume collection where the thickness of the volume (along z) is 10 micrometers. The Mitutoyo device will verify this distance to within 0.1 micrometer (1%) precision.
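The arithmetic of this weekly check is simple; a hypothetical helper (names and tolerance are illustrative, based on the ~1% precision quoted above) might look like:

```python
def z_step_error_percent(commanded_um, measured_um):
    """Percent error between the commanded z travel and the travel measured
    with the external z gauge (e.g., the Mitutoyo mini-checker)."""
    return abs(measured_um - commanded_um) / commanded_um * 100.0

# A commanded 10 um volume read back as 10.07 um passes the ~1% check
err = z_step_error_percent(10.0, 10.07)
ok = err <= 1.0
```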

Description of Spherical Aberration


Spherical aberrations result from the fact that the focal points of light rays far from the optic axis of a spherical lens are different from the focal points of rays of the same wavelength passing near the center (Figure 1). Depending on the severity and direction of the aberrations, they can result in PSFs similar to those in Figure 2, and images similar to those in Figures 3 and 4. Notice that spherical aberration can cause a noticeably heavier tail on one side of the XZ projection of the image, depending on whether positive or negative spherical aberration is present. Notice also that Figure 2 shows how spherical aberration affects the convergence of rays, from converging at one point (Figure 1a; Figure 2c) to converging at multiple points (Figure 1b; Figure 2a,b; Figure 2e,f).

Figure 1: Optical system without (a) and with (b) spherical aberration. Spherical aberration is caused by rays far from the center of the lens having different focal points from rays closer to it (b).

Figure 2: Spherically aberrated PSFs: a+b) XY and XZ maximum intensity projections of a PSF with severe negative spherical aberration, c+d) XY and XZ maximum intensity projections of a PSF with no spherical aberration, and e+f) XY and XZ maximum intensity projections of a PSF with severe positive spherical aberration.



Figure 3: Example of a spherically aberrated image: a) XY and b) XZ maximum intensity projection of the original image, c) XY and d) XZ maximum intensity projection of the deconvolved image with spherical aberration compensation.




*Courtesy of Diane Kube, Ph.D., Co-Director CF Imaging Core, CWRU Department of Pediatrics.
*Courtesy of Robert Kolb, Ph.D., Post-Doctoral Fellow CWRU Department of Pediatrics.

Figure 4: The image in Figure 3, deconvolved, then convolved with spherically aberrated PSFs: a+b) XY and XZ maximum intensity projection of the image convolved with severe negative spherical aberrations, and c+d) XY and XZ maximum intensity projection of the image convolved with severe positive spherical aberrations.


What is the spherical aberration coefficient (SA)?
There are many ways to represent the SA coefficient. One of them is illustrated in Figure 5. Another description: SA is the number of waves/cycles by which the plane of focus is shifted/elongated due to the presence of spherical aberration.



Figure 5: The XZ projections of two PSFs, a) with SA=0, and b) with SA=10.




Potential Sources of Error and Variability in FRET Image Acquisition


When acquiring images for an experiment that uses the AutoVisualize FRET module, there are many potential sources of error, inconsistency and variability. When designing an experiment or an assay, it is helpful to be aware of these sources. If results, such as measured FRET efficiency, are inconsistent from one sample to the next, these error sources should be considered during troubleshooting and remedied wherever possible.


1. Exposure parameters
The exposure parameters need to be identical within each image set. There are 3 image sets per experiment, one each for (1) calibrating the donor bleed-through, (2) calibrating the acceptor bleed-through, and (3) collecting data from the FRET sample. In (1) a donor-only sample is used and in (2) an acceptor-only sample is used. Each of these 3 image sets, in turn, has 3 images, one each with the excitation and emission filters tuned, respectively, to (a) donor - donor, (b) acceptor - acceptor, and (c) donor - acceptor. For example, with the donor-only calibration sample of Item (1), use the same camera exposure time (i.e., shutter speed), neutral density filter and any gain settings on the camera or frame-grabber card. These parameters may be changed later when collecting the images of Item (2) with the acceptor-only calibration sample, but they need to remain the same for all 3 of the images collected with this acceptor-only calibration sample.


2. Dynamic range and avoidance of saturation
The exposure parameters (including the exposure time, neutral density filters and any gain settings) for each of the 3 samples should be set so that the brightest image of the 3 that are taken (see Items a, b and c, above) has its maximum gray level at roughly 3/4 of the maximum possible value. For a 16-bit camera with 65,536 gray levels, the maximum gray level should be approximately 3/4 × 65,536 = 49,152.

Avoid saturation of any pixels. Saturation occurs when the light intensity is higher than what the digital storage will allow. For example, with a 16-bit camera, any pixels so intense that they would have an intensity greater than 65,535 will appear in the stored digital image with a gray level of 65,535, because this is the largest value that can be stored in 16 bits. The purpose of setting the maximum gray level to about 49,152 is to provide a margin for error so that this situation is avoided altogether. Whether or not saturation has occurred is simple to check: use AutoVisualize's Image Statistics function. The dialog box will show the maximum intensity in the data set.
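The two rules above (a ~3/4 full-scale target and no pinned pixels) can be checked programmatically; this is a hypothetical helper, not an AutoVisualize function:

```python
import numpy as np

def exposure_check(image, bit_depth=16):
    """Return (peak gray level, ~3/4 full-scale target, saturated-pixel count).

    Saturated pixels are those pinned at the top of the digital range.
    """
    full_scale = 2 ** bit_depth - 1            # 65,535 for a 16-bit camera
    target = int(0.75 * 2 ** bit_depth)        # ~49,152
    peak = int(image.max())
    n_saturated = int(np.count_nonzero(image >= full_scale))
    return peak, target, n_saturated

img = np.array([[10000, 49000], [65535, 20000]], dtype=np.uint16)
peak, target, n_sat = exposure_check(img)      # one pixel is saturated here
```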


3. Avoiding automatic scaling or automatic gain controls.
Sometimes cameras and frame grabber cards are provided with an automatic gain control, and sometimes the acquisition software provides an automatic scaling of the image to maximize the contrast. While these types of automatic settings enhance the contrast of the image, they ruin the quantitation of the FRET signal. None of these automatic settings should be used with either the calibration image sets or the FRET image set.


4. Avoiding calibration samples with autofluorescence.
Part of the calibration process that is automatically carried out in the FRET module is a calculation of bleed-through of the donor and of the acceptor. In this calibration, the amount of bleed-through excitation of the probe (e.g., the acceptor probe) is measured, under the condition where the other probe (e.g., the donor) is being excited directly by having the excitation filter tuned to it. When autofluorescence is present, this measurement is not accurate because the autofluorescence is mistaken for bleed-through excitation. For example, if animal tissue sections are being used as an assay and that tissue contains autofluorescing lipofuscin, or some other autofluorescing compound, it is better to label a cell-culture analog, with little or no autofluorescence, with the probe and to use that sample for the calibration.

Be sure to inspect the images to confirm that they have no autofluorescence. If this cell-culture approach is not possible, then it is acceptable to use a drop of solution that contains the probe, and to place this drop directly on the slide with a coverslip. It is not essential that the calibration sample be a biological sample, because it is only the spectral characteristic of the optical filter that is being measured. Admittedly, a biological sample would be better if it has no autofluorescence, because then any influence that the molecular and tissue environment has on the spectrum of the probe will be measured as well. However, with autofluorescence being present, this advantage is lost anyway, so a simple drop of solution works better.
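A common way to estimate a bleed-through coefficient from a single-label calibration set is the ratio of FRET-channel signal to the probe's own-channel signal; this sketch assumes that ratio method and is not necessarily AutoQuant's internal calculation:

```python
import numpy as np

def bleed_through_coefficient(fret_ch, own_ch, threshold=0.0):
    """Ratio of FRET-channel signal to own-channel signal in a single-label
    calibration image pair; pixels at or below `threshold` (background floor)
    are ignored.  Illustrative estimator, not the module's exact formula."""
    mask = own_ch > threshold
    return float(fret_ch[mask].mean() / own_ch[mask].mean())

# Donor-only calibration sample: donor-donor and donor-acceptor images
i_dd = np.array([[1000.0, 2000.0], [1500.0, 500.0]])
i_da = np.array([[100.0, 200.0], [150.0, 50.0]])
d = bleed_through_coefficient(i_da, i_dd)   # donor bleed-through coefficient
```

Any autofluorescence in the calibration sample inflates `fret_ch` and therefore the coefficient, which is exactly the error mode described above.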


5. Checks and verifications.
With the variety of adversities that accompany any FRET experiment, including autofluorescence, sample movement, detector noise, optical noise, photobleaching and spectral bleed-through, the software will not perfectly compensate for all of them. Good laboratory practice dictates that these adversities be avoided or minimized wherever possible, and that independent checks and verifications of all findings be supplied.


There are several verifications of a positive FRET signal that can be carried out.


1. Check the FRET efficiency value.
Where the positive FRET signal is suspected, one should observe a FRET efficiency that is significantly different from regions in the image that are not undergoing FRET. It is difficult to pinpoint an exact number to look for, because this number is too dependent upon the probe uptake, distances between the donor and acceptor molecules, autofluorescence and other background noise. For example, if an efficiency of 25% is being read in the suspected FRET region and the surrounding areas show an efficiency of 3%, then the 25% numbers can be attributed to a positive FRET occurrence and the 3% numbers may be attributed to background noise.


2. Use the acceptor photobleaching method as a second verification. 
After all of the images from the FRET sample have been acquired, collect one more set of images. First, expose the sample where the excitation filter is set for the acceptor and the exposure time is long enough to completely bleach the sample. Verify that the acceptor has completely bleached. Collect an image, to be stored for the record, which proves that the acceptor is completely bleached. This should be a completely black image of the acceptor. Next collect an image of the donor, where the excitation and emission filters are both set for the donor. Subtract the donor-donor image (using Image Algebra in AutoVisualize) of the same FRET sample that was collected according to Paragraph 1 above from this donor-donor image of the bleached sample. The brightest areas in the subtraction image are the areas where FRET occurred, because they are no longer quenched.
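The subtraction and the apparent efficiency E = 1 − F(before)/F(after) can be sketched as follows (function names are illustrative; in AutoVisualize the subtraction itself is done with Image Algebra):

```python
import numpy as np

def photobleach_fret_map(dd_before, dd_after):
    """Post-bleach minus pre-bleach donor-donor image; bright areas are where
    the donor had been quenched by FRET.  Also returns an apparent efficiency
    map E = 1 - before/after (a standard acceptor-photobleaching estimate)."""
    before = dd_before.astype(float)
    after = dd_after.astype(float)
    diff = np.clip(after - before, 0.0, None)
    eff = np.where(after > 0, 1.0 - before / np.maximum(after, 1e-12), 0.0)
    return diff, eff

before = np.array([[800.0, 400.0], [790.0, 810.0]])   # donor quenched at (0, 1)
after = np.array([[805.0, 800.0], [795.0, 805.0]])    # dequenched after bleach
diff, eff = photobleach_fret_map(before, after)
```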


3. Use positive and negative control samples.
Prepare positive and negative control samples. These control FRET samples are prepared under conditions that duplicate the conditions of the sample under test. The same probe pair should be used. If possible, the same cell culture or tissue type should be used. The main idea behind these controls is to duplicate all of the same error sources, and to thereby prove that FRET can be properly discriminated under controlled conditions. So, for example, if there is autofluorescence in the sample under test, there should be autofluorescence of similar magnitude due to the same causes in the control samples. Identical magnification, exposure parameters and so forth should be set up. One of these samples is the positive control and one is the negative control, which respectively confirm that a FRET signal appears when it should and is absent when it should be. The positive control sample is one where it is certain, by prior confirmation, that a FRET signal will appear. The negative control sample is one where FRET cannot occur even though both the donor and acceptor probes have been applied and are seen in the donor-donor and acceptor-acceptor image pair. For example, the two probes may be applied to different cells in the field, if this is possible to accomplish, or one of them may be conjugated to a different molecule than is used in the sample under test.


4. Apparent sample movement.
Apparent sample movement is very common. Even if the sample is not physically moving there are optical causes that make the sample appear to move from one image to the next. Sometimes when the filter wheel is shifted from one set of wavelengths to the next, the light refracts differently through the filter and causes this apparent movement. Chromatic aberrations can also cause an apparent motion. 

Motion of the calibration samples causes no problem. Any motion, even if it is very slight, of the FRET sample will likely cause erroneous FRET signals to appear. Motion can be inspected by loading the 3 images of the FRET sample into AutoVisualize as a stack and using the optical slice viewer to scroll through them. Objects in the image will appear to shift left and right or up and down by a small amount. AutoVisualize’s FRET module will automatically compensate for this motion. If motion is seen in the image, be sure that the “Use Automatic Alignment” checkbox is checked.


5. The same calibration set for every FRET sample in an experiment batch.
When carrying out an experimental run, using a particular probe and sample type, always use just one calibration image set for all of the samples under test, and for all of the positive and negative control samples. There should be only 6 calibration images utilized for all of the samples under test. These calibration images are used to calculate the bleed-through coefficients. What matters most is that these calculated coefficients do not change from one sample to the next. Even though, in theory, a well prepared calibration sample ought to result in the same bleed-through calibration coefficients, because of all the adversities mentioned earlier there will be some variation.

Using the same calibration image set may not make the calculation more accurate, but it will eliminate some of the variation from one sample to the next. It is important to eliminate this variation because the decision of whether or not a positive FRET signal is present is based largely on comparisons. In cases where one sample is imaged at a higher magnification than another, there is a temptation to recollect the calibration images at the same magnification. Suppose, for example, Sample 1 is imaged with a 40X objective and Sample 2 (using the same probe pair) is imaged with a 100X objective. Also, suppose we want to compare the FRET signals between these two samples. Since these 2 samples are being compared against each other, it is important to eliminate the variation in the calibration samples. Therefore, the calibration images that were collected for Sample 1 should also be used for Sample 2, even though they were collected at a different magnification than Sample 2.


6. Autofluorescence in the sample under test
Autofluorescence should be avoided or minimized because it causes errors in the FRET calculations. Autofluorescence of the calibration sample is easier to avoid than autofluorescence of the sample under test. Depending on the experiment, autofluorescence of the sample under test may be impossible to eliminate. Living tissue samples, such as muscle sections, have autofluorescence due to lipofuscin. This situation will amplify errors in the FRET efficiency calculation. However, the problem may be mitigated by being especially vigilant about the checks and verifications described above in Item 5.

The fundamental question is: “What types of errors will this autofluorescence cause?” In principle, if every autofluorescent molecule possessed the exact same absorption and emission spectrum as the donor or the acceptor, then autofluorescence of the sample under test would cause no errors at all. These identical spectra would simply be processed by the FRET software to be a “non-FRET-ing” background portion of the acceptor or donor probe. However, this ideal scenario does not happen. Autofluorescent spectra differ greatly from the donor and acceptor spectra and they differ among themselves from pixel to pixel. Fundamentally the function of the FRET detection software is to detect these differences and it is these differences that produce the FRET regions in a processed image. So, the autofluorescent absorption and emission spectra can cause a falsely positive FRET detection. Hopefully, the erroneous efficiency numbers will be small compared to the true FRET efficiency numbers. This may be so, considering that the autofluorescence level ought to be lower than the donor and acceptor fluorescence levels. To verify if a FRET signal is true in the presence of autofluorescence, look for a significant difference in their FRET efficiencies. For example if the bright regions in the FRET image are showing an efficiency of 30% and those in the autofluorescing background region are showing 10%, then the 30% regions are probably due to true FRET. This decision will be verified by the acceptor photobleaching method and the positive and negative controls.


7. Flicker in exposure time
If a commercial acquisition system is being used, it is unlikely that there will be any flicker in exposure time. However, home-built acquisition systems can have this flicker. The mechanical shutters of the camera and excitation light vary from one exposure to the next, typically by anywhere from 10 to 30 msec. With exposure times of 100 msec or less, this variability causes significant errors in the FRET calculations. The way to avoid it is to time the camera reset/clear operation and the charge acquisition so that they have a good margin inside the opening and closing of the shutters. See the timing diagram below for an example:

Example timing diagram for a 100 msec exposure with a cooled CCD camera to eliminate variability due to shutter flicker.



Collecting an Empirical PSF


An empirical PSF may be used with the 3D Maximum Likelihood deconvolution. Methods for collecting a PSF are described in [Hiraoka, McNally, Wallace]. The main idea is to image a fluorescent bead that is smaller than the Rayleigh resolution limit. There are many challenges in collecting a PSF, such as noise, spherical aberration, low light and difficulties in bead sample preparation. There are many variants to the collection method, in order to counter these challenges. The method described here is a summary of the most straightforward method. For further details and variants, refer to [Hiraoka, McNally, Wallace] and references therein.


The sample is a fluorescently labeled sub-resolution microsphere. The size of the microsphere is determined by calculating the Rayleigh resolution according to the formula: resolution = 0.61 × wavelength / numerical aperture. Choose a microsphere whose diameter is less than this resolution. Ready-made samples are available from Molecular Probes.
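Using the standard Rayleigh criterion r = 0.61·λ/NA, a quick bead-size check might look like this (the wavelength, NA and bead diameter are illustrative values, not recommendations):

```python
def rayleigh_resolution_nm(wavelength_nm, numerical_aperture):
    """Lateral Rayleigh resolution limit: r = 0.61 * wavelength / NA."""
    return 0.61 * wavelength_nm / numerical_aperture

r = rayleigh_resolution_nm(520.0, 1.4)    # green emission, 1.4 NA oil objective
bead_is_sub_resolution = 100.0 < r        # a 100 nm bead qualifies here
```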


Align the microscope. This alignment includes the centering of all excitation and emission optics and focusing of field lenses for Köhler illumination. Follow the instructions provided by the microscope manufacturer. Collect a stack of images representing a through-focus series above, through and below the bead. Collect this stack under conditions that are identical to the biological sample data. If possible, do so at the same time, so you are assured that conditions have not changed. A minute rotation of the objective lens from removing and re-inserting it, a small vibration of the stage from placing an object on the workbench, or a temperature change of a few degrees will significantly change the PSF. Use the same lens, magnification settings, filters, z-step size (spacing between optical slices) and other conditions. Select a bead on the slide that is at the same depth below the cover slip as the biological sample in order to match the spherical aberrations. Adjust the stage so that the bead is in the center of the field in all three directions x, y and z. The bead should be in focus in the image slice that resides in the center of the stack. Collect the same number of images in the stack with the bead as with the biological sample.

Eliminate spherical aberrations by adjusting the immersion oil refractive index. Cargille Laboratories and other manufacturers provide oils that have various refractive indices and which are adjustable by mixing. Have an array of oils prepared with adjustments at the 3rd decimal place. Place a drop of oil on the cover slip. Examine the PSF by direct eye viewing while focusing up-and-down above and below the bead. 

Although it is not possible to have perfect symmetry, you want to make the bead images as symmetric as possible. In-plane symmetry implies that the optics are well aligned. Symmetry along z, which is apparent if the images above the bead appear similar to those below, implies that the spherical aberration is minimized by good matching of the oil. Try various refractive indices until the best symmetry is achieved. Remove the oil from the cover slip, place a drop of a different refractive index and repeat focusing up-and-down to judge the symmetry.

See the article by Wallace [Wallace] for other techniques to correct spherical aberration, such as usage of an objective lens that has a correction collar.

After collecting the bead images the stack may possess a flicker in intensity from slice to slice. This flicker may be due to shutter-speed variations, wandering of the arc in the excitation lamp and several other causes. It is corrected by using the “flicker correction” function of AutoDeblur (also called the “optical density correction” in prior software versions). 

The bead image may also show fading due to photobleaching, which is evident when the images are dimmer on one side of the bead than the other (e.g., in out-of-focus slices above the bead compared to below it). This error is corrected by the "attenuation correction" function in AutoDeblur.



- Wallace, W., Schaefer, L., Swedlow, J., A Workingperson's Guide to Deconvolution in Light Microscopy, BioTechniques, 32(5): 1076-1097, 2001.


- Hiraoka, Y., Sedat, J.W., Agard, D.A., Determination of Three-Dimensional Imaging Properties of a Light Microscope System, Biophysical Journal, 57: 325-333, 1990.

- McNally, J.G., Karpova, T., Cooper, J., Conchello, J., Three-Dimensional Imaging by Deconvolution Microscopy, Methods, 19: 373-385, 1999.

Ratiometrics Module

Precise Engineering for Intracellular Ion Imaging.


The Ratiometrics module is designed for researchers who study the effect of changing a sample's environment, comparing the same sample under differing calcium concentrations or changing pH.

Ratiometrics X employs the Grynkiewicz Equation for Ion Concentration and produces accurate results with visually-emphasized color mapping.
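The Grynkiewicz ratio equation, [Ca2+] = Kd·(Sf2/Sb2)·(R − Rmin)/(Rmax − R), can be sketched as follows; the calibration numbers are illustrative Fura-2 style values, not AutoQuant defaults:

```python
def grynkiewicz_ca(R, R_min, R_max, Kd, sf2_over_sb2):
    """Grynkiewicz ratio equation for ion concentration:
    [Ca2+] = Kd * (Sf2/Sb2) * (R - Rmin) / (Rmax - R).

    R is the measured fluorescence ratio; Rmin/Rmax are the ratios at zero
    and saturating calcium; Sf2/Sb2 is the free/bound scaling factor."""
    return Kd * sf2_over_sb2 * (R - R_min) / (R_max - R)

# Illustrative calibration values (assumed, for demonstration only)
ca = grynkiewicz_ca(R=1.0, R_min=0.3, R_max=5.0, Kd=224.0, sf2_over_sb2=3.0)
```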

Built-in pre- and postprocessing steps such as Automatic Alignment, Remove Spot Noise and Gaussian Smoothing make for a cleaner resultant image with fewer steps for the user.

Once the preprocessing has been done, Ratiometrics X delivers concise statistics that are easily exported to an .xls file. Select specific areas to analyze by creating user-defined regions of interest. Get the results you need from the areas you want.

FRET Module

User-friendly Tools to Study Protein-to-Protein Interactions.


Created for researchers focusing on protein-protein interactions, the FRET module incorporates the two most commonly accepted algorithms, Elangovan and Periasamy and Gordon and Herman, and adds our own proprietary algorithm as well. All three algorithms correct Cross Talk, but AutoQuant's proprietary algorithm goes one step further. Where other algorithms make assumptions about the Cross Talk, our Maximum Likelihood Estimation algorithm solves for the Cross Talk mathematically, allowing for a much more accurate analysis of the images.

FRET X also includes several helpful preprocessing tools to turn your images into precisely analyzed statistics. The Channel to Channel alignment tool corrects for shifts between channels, shifts that would otherwise corrupt your analyses. The Background Subtraction tool eliminates artifacts that can compromise the results of your analyses.

Once the datasets have been processed with the desired algorithm, easily create as many regions of interest as you desire, then run statistical analyses on those ROIs and export the results to a spreadsheet. For added ease of use, FRET X is multithreaded, meaning more than one process can run at a time.



Colocalization Module

Analyze Multichannel Data and Identify Overlapping Fluorescent Markers.


Multi-channel images often have sections where two or more of the dyes overlap. The Colocalization module can identify these areas, display them, and generate statistics on user-selected regions of interest.

An easy-to-use interface lets you load two channels of a multi-channel dataset, create an intensity mask in which to display the colocalized areas, and generate a new "colocalized" dataset displaying only the areas with colocalization.
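A minimal sketch of the thresholded intensity-mask idea (illustrative only, not the module's actual code; thresholds are assumed inputs):

```python
import numpy as np

def colocalization_mask(ch1, ch2, t1, t2):
    """Binary mask where both channels exceed their intensity thresholds,
    plus a 'colocalized' dataset keeping intensity only inside that mask."""
    mask = (ch1 >= t1) & (ch2 >= t2)
    coloc = np.where(mask, np.minimum(ch1, ch2), 0)
    return mask, coloc

ch1 = np.array([[5, 50], [60, 8]])
ch2 = np.array([[40, 45], [3, 70]])
mask, coloc = colocalization_mask(ch1, ch2, t1=20, t2=20)   # one overlap pixel
```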

Counting & Tracking Module

Count Objects and Track Them as They Move through Time.


The Object Counting and Tracking module has the ability to count a nearly infinite number of objects. It runs on a multi-dimensional compatible platform and can load and process 3D time-series datasets. Counting and Tracking X has powerful and intuitive preprocessing tools to give complete control over the objects to be counted and tracked.

Once the objects have been counted, the objects can then be tracked through the time series. Easy-to-follow tracking lines show where the object has moved through time, for a vivid graphic depiction of the objects' activities. Properties such as the size, circularity, volume, speed, acceleration, distance traveled between time-points and much more can be calculated and exported to a spreadsheet for later analysis.


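Conceptually, tracking links counted-object centroids from one time point to the next; a greedy nearest-neighbor pass is one simple way to do this (an illustrative sketch, not the module's algorithm; `max_dist` is an assumed gating parameter):

```python
import numpy as np

def link_tracks(centroids_t0, centroids_t1, max_dist=10.0):
    """Greedily link each object at time t to its nearest unclaimed object at
    time t+1, within max_dist; returns (index_t0, index_t1) pairs.  A greedy
    pass can miss links when objects compete for the same neighbor."""
    links, taken = [], set()
    for i, p in enumerate(centroids_t0):
        d = np.linalg.norm(centroids_t1 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in taken:
            links.append((i, j))
            taken.add(j)
    return links

t0 = np.array([[0.0, 0.0], [50.0, 50.0]])
t1 = np.array([[52.0, 49.0], [1.0, 2.0]])   # the two objects, listed in swapped order
links = link_tracks(t0, t1)
```

From linked positions and the time between frames, per-object speed and distance traveled follow directly.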



AutoQuant Bibliography

Use Google Scholar to Locate Research Articles that Reference Media Cybernetics Software



Tip: Type "Media Cybernetics" or "AutoQuant" (product name) 
and a related keyword.
Example: "Media Cybernetics", "Cell"


AutoQuant Articles

Agard, D. A. (1984). "Optical Sectioning Microscopy: Cellular Architecture in Three Dimensions." Annual Review of Biophysics and Bioengineering. 13: 191-219.

Aikens, R. S., D. A. Agard and J. W. Sedat. (1989). "Solid-State Imagers for Microscopy." Methods in Cell Biology. 29: 291-313.

Ayers, G. R. and J. C. Dainty. (1988). "Iterative Blind Deconvolution Method and Its Applications." Optics Letters. 13(7): 547-549.

Bertero, M., P. Boccacci, G. J. Brakenhoff, F. Malfanti and H. T. M. Van der Voort. (1990). "Three-Dimensional Image Restoration and Super-Resolution in Fluorescence Confocal Microscopy." Journal of Microscopy. 157(1): 3-20.

Carrington, W. A. (1990). "Image Restoration in 3D Microscopy with Limited Data." Bioimaging and Two-Dimensional Spectroscopy, Los Angeles, SPIE. 1205. 72-83. 

Cohen, A. R., B. Roysam and J. N. Turner. (1994). "Automated Tracing and Volume Measurements of Neurons from 3-D Confocal Fluorescence Microscopy Data." Journal of Microscopy. 173(2): 103-114.

Conchello, J. and E. Hansen. (1990). "Enhanced 3-D Reconstruction From Confocal Scanning Microscope Images. 1: Deterministic and Maximum Likelihood Reconstructions." Applied Optics. 29(26): 3795-3804. 

Cooper, J. A., S. Bhattacharyya, J. N. Turner and T. J. Holmes. (1993). "Three-Dimensional Transmitted Light Brightfield Imaging: Pragmatic Data Collection and Preprocessing Considerations." MSA Annual Meeting, Cincinnati, San Francisco Press. 51. 276-277.

Deitch, J. S., K. L. Smith, J. W. Swann and J. N. Turner. (1991). "Ultrastructural Investigation of Neurons Identified and Localized Using the Confocal Scanning Laser Microscope." Journal of Electron Microscopy Technique. 18 82-90. 

Dempster, A. P., N. M. Laird and D. B. Rubin. (1977). "Maximum Likelihood from Incomplete Data via the EM Algorithm." Journal of the Royal Statistical Society B. 39: 1-37.

Elangovan M, Wallrabe H, Chen Y, Day R, Barroso M, and Periasamy A. (2003). "Characterization of one- and two-photon excitation fluorescence resonance energy transfer microscopy." Methods 29. 58-73.

Erhardt, A., G. Zinser, D. Komitowski and J. Bille. (1985). "Reconstructing 3-D Light-Microscopic Images by Digital Image Processing." Applied Optics. 24(2): 194-200.

Media Cybernetics, Inc. Manual & Tutorials, January 2009

Fay, F. S., W. Carrington and K. E. Fogarty. (1989). "Three-Dimensional Molecular Distribution in Single Cells Analysed using the Digital Imaging Microscope." Journal of Microscopy. 153(2): 133-149. 

Gerchberg, R. W. and W. O. Saxton. (1974). "Super-Resolution Through Error Energy Reduction." Optica Acta. 21: 709-720.

Gibson, S. F. and F. Lanni. (1991). "Experimental Test of an Analytical Model of Aberration in an Oil-Immersion Objective Lens Used in Three-Dimensional Light Microscopy." Journal of the Optical Society of America A. 8(10): 1601-1613. 

Gordon, G., G. Berry, X. Liang, B. Levine and B. Herman. (1998). "Quantitative Fluorescence Resonance Energy Transfer Measurements Using Fluorescence Microscopy." Biophysical Journal. 74: 2702-2713.

Grynkiewicz, G., M. Poenie and R. Y. Tsien. (1985). "A New Generation of Ca2+ Indicators with Greatly Improved Fluorescent Properties." Journal of Biological Chemistry. 260: 3440-3450.

Hebert, T., R. Leahy and M. Singh. (1988). "Fast MLE for SPECT Using an Intermediate Polar Representation and a Stopping Criterion." IEEE Transactions on Nuclear Science. 35(1): 615-619. 

Hiraoka, Y., J. W. Sedat and D. A. Agard. (1987). "The Use of Charge-Coupled Device for Quantitative Optical Microscopy of Biological Structures." Science. 238: 36-41.

Hiraoka, Y., J. W. Sedat and D. A. Agard. (1990). "Determination of Three-Dimensional Imaging Properties of a Light Microscope System: Partial Confocal Behavior in Epifluorescence Microscopy." Biophysical Journal. 57: 325-333.

Holmes, T. J., S. Bhattacharyya, J. A. Cooper, D. Hanzel, V. Krishnamurthi, W. Lin, B. Roysam, D. H. Szarowski and J. N. Turner. (1995). "Light Microscopic Images Reconstructed by Maximum Likelihood Deconvolution." Chapter 24 in The Handbook of Biological Confocal Microscopy, 2nd Edition, James Pawley, Editor. Plenum Press, New York.

Holmes, T. J. (1989). "Expectation-Maximization Restoration of Band-Limited, Truncated Point-Process Intensities with Application in Microscopy." Journal of the Optical Society of America A. 6(7): 1006-1014. 

Holmes, T. J. (1992). "Blind Deconvolution of Quantum-Limited Incoherent Imagery." Journal of the Optical Society of America A. 9(7): 1052-1061. 

Holmes, T. J. and Y. H. Liu. (1989). "Richardson-Lucy/Maximum Likelihood Image Restoration for Fluorescence Microscopy: Further Testing." Applied Optics. 28(22): 4930-4938. 

Holmes, T. J. and Y. H. Liu. (1991). "Acceleration of Maximum-Likelihood Image-Restoration for Fluorescence Microscopy and Other Noncoherent Imagery." Journal of the Optical Society of America A. 8(6): 893-907.

Holmes, T. J., Y. H. Liu, D. Khosla and D. A. Agard. (1991). "Increased Depth-of-Field and Stereo Pairs of Fluorescence Micrographs Via Inverse Filtering and Maximum Likelihood Estimation." Journal of Microscopy. 164(3): 217-237.

Janesick, J. R., T. Elliott and S. Collins. (1987). "Scientific Charge-Coupled Devices." Optical Engineering. 26(8): 692-714. 

Joshi, S. and M. I. Miller. (1993). "Maximum a Posteriori Estimation with Good's Roughness for Three-Dimensional Optical-Sectioning Microscopy." Journal of the Optical Society of America A. 10(5): 1078-1085. 

Kasten, F. H. (1993). "Introduction to Fluorescent Probes: Properties, History and Applications." Fluorescent Probes for Biological Function of Living Cells: A Practical Guide. Academic Press, London. In press.

Kimura, S. and C. Munakata. (1990). "Dependence of 3-D Optical Transfer Functions on the Pinhole Radius in a Fluorescent Confocal Optical Microscope." Applied Optics. 29(20): 3007-3011.

Krishnamurthi, V., Y. Liu, T. J. Holmes, B. Roysam and J. N. Turner. (1992). "Blind Deconvolution of 2D and 3D Fluorescent Micrographs." Biomedical Image Processing III and Three-Dimensional Microscopy, San Jose, SPIE. 1660. 95-102.

Krishnamurthi, V., J. N. Turner, Y. Liu and T. J. Holmes. (1994). "Blind Deconvolution for Fluorescence Microscopy by Maximum Likelihood Estimation." Applied Optics. In review.

Lalush, D. S. and B. M. W. Tsui. (1992). "Simulation Evaluation of Gibbs Prior Distributions for Use in Maximum A Posteriori SPECT Reconstructions." IEEE Transactions on Medical Imaging. 11(2): 267-275.

Lange, K. (1990). "Convergence of EM Image Reconstruction Algorithms with Gibbs Smoothing." IEEE Transactions on Medical Imaging. 9(4): 439-446.

Llacer, J. and E. Veklerov. (1989). "Feasible Images and Practical Stopping Rules for Iterative Algorithms in Emission Tomography." IEEE Transactions on Medical Imaging. 8(2): 186-193. Errata: 9(1): 112 (1990).

Lucy, L. B. (1974). "An Iterative Technique for the Rectification of Observed Distributions." The Astronomical Journal. 79(6): 745-765. 

Macias-Garza, F., K. R. Diller, A. C. Bovik, S. J. Aggarwal and J. K. Aggarwal. (1989). "Improvement in the Resolution of Three-Dimensional Data Sets Collected Using Optical Serial Sectioning." Journal of Microscopy. 153(2): 205-221. 

Martin, L. C. and B. K. Johnson. (1931). Practical Microscopy. Blackie and Son, London.

Miller, M. I. and B. Roysam. (1991). "Bayesian Image Reconstruction for Emission Tomography Incorporating Good's Roughness Prior on Massively Parallel Processors." Proceedings of the National Academy of Sciences. 88(8): 3223-3227.

Miller, M. I. and D. L. Snyder. (1987). "The Role of Likelihood and Entropy in Incomplete-Data Problems: Applications to Estimating Point-Process Intensities and Toeplitz Constrained Covariances." Proceedings of the IEEE. 75: 892-907.

Oppenheim, A. V. and R. W. Schafer. (1975). Digital Signal Processing. Prentice-Hall, Englewood Cliffs, NJ.

Poenie, M. (1990). "Alteration of Intracellular Fura-2 Fluorescence by Viscosity: A Simple Correction." Cell Calcium. 11(2-3): 85-91.

Politte, D. G. and D. L. Snyder. (1991). "Corrections for Accidental Coincidences and Attenuation in Maximum-Likelihood Image Reconstruction for Positron-Emission Tomography." IEEE Transactions on Medical Imaging. 10(1): 82-89.

Richardson, W. H. (1972). "Bayesian-Based Iterative Method of Image Restoration." Journal of the Optical Society of America. 62(1): 55-59.

Roysam, B., H. Ancin, A. K. Bhattacharjya, A. Chisti, R. Seegal and J. N. Turner. (1994). "Algorithms for Automated Characterization of Cell Populations in Thick Specimens from 3-D Confocal Fluorescence Data." Journal of Microscopy. 173(2): 115-126.

Roysam, B., A. K. Bhattacharjya, C. Srinivas and J. N. Turner. (1992). "Unsupervised Noise Removal Algorithms for 3-D Confocal Fluorescence Microscopy." Micron and Microscopica Acta. 23(4): 447-461. 

Shaw, P. J. and D. J. Rawlins. (1991). "Three-Dimensional Fluorescence Microscopy." Progress in Biophysics and Molecular Biology. 56: 187-213.

Shepp, L. A. and Y. Vardi. (1982). "Maximum Likelihood Reconstruction for Emission Tomography." IEEE Transactions on Medical Imaging. 1(2): 113-121.

Sheppard, C. J. R. and M. Gu. (1994). "3D Imaging in Brightfield Reflection and Transmission Microscopes." 3D Image Processing in Microscopy, Munich, Society for 3D Imaging in Microscopy.

Snyder, D. L., A. M. Hammoud and R. L. White. (1993). "Image Recovery from Data Acquired with a Charge-Coupled-Device Camera." Journal of the Optical Society of America A. 10(5): 1014-1023. 

Snyder, D. L., M. I. Miller, L. J. Thomas and D. G. Politte. (1987). "Noise and Edge Artifacts in Maximum-Likelihood Reconstructions for Emission Tomography." IEEE Transactions on Medical Imaging. 6(3): 228-238.

Streibl, N. (1984). "Depth Transfer by an Imaging System." Optica Acta. 31: 1233-1241.

Turner, J. N., D. H. Szarowski, K. L. Smith, M. Marko, A. Leith and J. W. Swann. (1991). "Confocal Microscopy and Three-Dimensional Reconstruction of Electrophysiologically Identified Neurons in Thick Brain Slices." Journal of Electron Microscopy Technique. 18: 11-23.

Van Trees, H. L. (1968). Detection, Estimation, and Modulation Theory. Wiley, New York.

Veklerov, E. and J. Llacer. (1987). "Stopping Rule for the MLE Algorithm Based on Statistical Hypothesis Testing." IEEE Transactions on Medical Imaging. 6(4): 313-319.

Visser, T. D., J. L. Oud and G. J. Brakenhoff. (1992). "Refractive Index and Axial Distance Measurements in 3-D Microscopy." Optik. 90(1): 17-19.

Webb, R. H. and C. K. Dorey. (1995). "The Pixelated Image." Chapter 4 in The Handbook of Biological Confocal Microscopy, 2nd Edition, James Pawley, Editor. Plenum Press, New York.

Willis, B., J. N. Turner, D. N. Collins, B. Roysam and T. J. Holmes. (1993). "Developments in Three-Dimensional Stereo Brightfield Microscopy." Microscopy Research and Technique. 24: 437-451.