Posted by Seamus Holden-2
URL: http://confocal-microscopy-list.275.s1.nabble.com/Measuring-noise-characteristics-of-sCMOS-cameras-tp7585913p7585914.html
*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
Post images on http://www.imgur.com and include the link in your posting.
*****
Hi Kyle
Ref 17 in that paper you refer to is very interesting:
T. J. Lambert and J. C. Waters, "Assessing camera performance for quantitative microscopy," Quantitative Imaging in Cell Biology, Chap. 3, J. C. Waters and T. Wittmann, Eds., pp. 35–53, Elsevier, New York (2013).
https://www.researchgate.net/publication/263515014_Assessing_camera_performance_for_quantitative_microscopy
It is the most thorough description of how to characterise a camera that I have seen.
Read noise and Poisson noise can, I think, be treated on a per-pixel basis. After reading that article, though, my impression is that the contribution of fixed-pattern noise may be hard to account for without uniform illumination. I would have thought that contribution would be fairly small, however.
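As a quick sanity check of the per-pixel argument, here is a toy simulation (my own rough sketch with made-up numbers, not anything from the papers): it generates frames with a strong left-to-right illumination gradient and then recovers each pixel's gain purely from its temporal mean-variance relationship.

import numpy as np

rng = np.random.default_rng(0)
H, W, n_frames = 32, 32, 2000

# "True" per-pixel camera parameters (gain in ADU per photoelectron,
# offset in ADU, read noise in ADU)
gain = rng.normal(2.0, 0.1, size=(H, W))
offset = rng.normal(100.0, 5.0, size=(H, W))
read_sigma = rng.normal(1.5, 0.2, size=(H, W))

# Deliberately non-uniform illumination: a strong left-to-right gradient
flat_field = np.linspace(0.5, 1.5, W)[None, :] * np.ones((H, W))

levels = [50, 100, 200, 400]   # mean photons per pixel before the gradient
means, variances = [], []
for level in levels:
    photons = rng.poisson(level * flat_field, size=(n_frames, H, W))
    frames = (gain * photons + offset
              + rng.normal(0.0, read_sigma, size=(n_frames, H, W)))
    means.append(frames.mean(axis=0))
    variances.append(frames.var(axis=0))

means = np.stack(means)        # shape (n_levels, H, W)
variances = np.stack(variances)

# Per-pixel linear fit of temporal variance against temporal mean;
# the slope is the gain, regardless of how much light each pixel received.
x = means - means.mean(axis=0)
y = variances - variances.mean(axis=0)
gain_estimate = (x * y).sum(axis=0) / (x * x).sum(axis=0)

print("median relative gain error:",
      np.median(np.abs(gain_estimate - gain) / gain))

If the per-pixel argument holds, the recovered gains should agree with the true ones to within shot-noise statistics even though the illumination varies threefold across the chip.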
Best wishes
Seamus
Dr Seamus Holden
University Research Fellow
Centre for Bacterial Cell Biology
Baddiley-Clark Building
Newcastle University
Richardson Road
Newcastle upon Tyne
NE2 4AX, United Kingdom
Phone: +44 (0)191 208 3230
-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Kyle Douglass
Sent: 26 October 2016 08:55
To: [hidden email]
Subject: Measuring noise characteristics of sCMOS cameras
Hi everyone,
This is a rather long and technical post that boils down to a few questions, so I am providing a "too long; didn't read" summary first. I'm hoping that some of you will find this topic interesting and be able to reply.
tl;dr: How flat should the illumination be when measuring the photon response curve of an sCMOS camera? Why should the illumination pattern be so uniform when each sCMOS pixel can be thought of as an independent sensor?
I am returning to work on a minor problem that has interested me for some time. I work in localization microscopy (STORM/PALM/PAINT) and have been using sCMOS cameras for the past two years with good results. To localize the single-molecule emissions precisely, we take into account the pixel-dependent noise characteristics of our sensors, incorporating the measured characteristics into the maximum likelihood estimation of each fluorescent molecule's position. This estimation procedure was--as far as I know--first described in Huang et al., Nature Methods 10, 653 (2013), doi:10.1038/nmeth.2488.
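For concreteness, here is a rough sketch (my own paraphrase, with illustrative names; not code from that paper) of the pixel-wise negative log-likelihood this kind of estimator minimizes:

import numpy as np

def scmos_negative_log_likelihood(raw_adu, model_photons, offset,
                                  gain_adu_per_e, read_var):
    """Per-pixel NLL under the approximation that the offset/gain-corrected
    data, shifted by read_var / gain**2, is Poisson distributed around the
    PSF model shifted by the same amount."""
    gamma = read_var / gain_adu_per_e**2                 # read noise in photon units
    data = (raw_adu - offset) / gain_adu_per_e + gamma   # corrected and shifted data
    model = model_photons + gamma                        # expected photons, shifted identically
    data = np.clip(data, 1e-6, None)                     # guard against negative corrected pixels
    # Poisson NLL, dropping terms that do not depend on the model
    return np.sum(model - data * np.log(model))

Here model_photons would come from the PSF model evaluated at the candidate position, and the per-pixel offset, gain, and read-variance maps come from the calibration described below.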
Doing the characterization requires measuring three quantities for each pixel of our cameras:
1. the offset (the average ADU count under zero illumination)
2. the read noise (the variance of the ADU counts under zero illumination)
3. the gain (the number of photoelectrons per ADU when the camera is in the linear response regime)
The offset and read noise are trivial to measure. To measure the gain, however, we capture a few tens of thousands of frames with the chip under uniform illumination at several light intensities and follow the mathematical operations described in the supplement to the paper cited above.
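In code, the per-pixel computation I have in mind looks roughly like this (a minimal sketch, not the exact procedure from the supplement; it assumes the dark frames and illuminated stacks are already loaded as numpy arrays of shape (n_frames, height, width), and the names are illustrative):

import numpy as np

def dark_statistics(dark_stack):
    """Offset (mean ADU) and read-noise variance (ADU^2) per pixel, from dark frames."""
    return dark_stack.mean(axis=0), dark_stack.var(axis=0)

def gain_map(light_stacks, offset, read_var):
    """Per-pixel gain in photoelectrons per ADU, from stacks recorded at
    several illumination levels. The per-pixel slope of temporal variance
    against temporal mean gives ADU per photoelectron; the gain as defined
    above is its reciprocal."""
    x = np.stack([s.mean(axis=0) - offset for s in light_stacks])   # (n_levels, H, W)
    y = np.stack([s.var(axis=0) - read_var for s in light_stacks])
    slope = (x * y).sum(axis=0) / (x * x).sum(axis=0)               # least squares through the origin
    return 1.0 / slope

Note that nothing in gain_map compares one pixel to another; each pixel's fit uses only its own time series, which is exactly what prompts question 1 below.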
My questions are:
1. Why does the illumination need to be flat when we are measuring the gain by observing fluctuations in the pixels' ADU counts in time, not in space? I can understand why illumination non-uniformities would lead to errors when measuring the noise of a CCD chip: for CCDs, I believe one typically treats each pixel as an independent sample of the noise of the entire chip, so one inherently assumes that the photon shot noise is uniform across the sensor. However, each pixel is compared only to itself when measuring the gain of an sCMOS sensor in the manner described above, so why does it matter that each pixel receives the same light intensity?
2. How flat is "flat enough" for this calibration procedure? With a smartphone screen set at an optimum distance from the bare camera port and carefully rotated into position, I can get about 97% uniformity across the whole chip simply by displaying grayscale images. Most of the non-uniformity appears at the corners of the chip, where I think shadowing from the opening in the camera's housing slightly decreases the light intensity. With the calibrations I get from this method, I independently measure a localization precision of between 8 and 12 nm from sparsely distributed dye molecules, which is in line with published STORM results. When measuring tiny clusters of proteins, the scatter plots of the localizations match the overall shapes of their widefield images quite well.
However, a recent paper by Li et al., J. Innov. Opt. Health Sci. 09, 1630008 (2016), doi:10.1142/S1793545816300081, states that one needs better than 99% uniformity to avoid introducing significant bias into the noise measurements. Furthermore, the engineers at one of the big camera manufacturers once told me I shouldn't even bother trying to do the noise characterization myself, since I wouldn't be able to get the required level of uniformity for an accurate characterization. (In fairness, they sell the characterization process as a service.)
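For reference, one simple way to put a number on flatness from an averaged flat-field stack is something like the following sketch (the metric and the smoothing scale are arbitrary choices on my part; whether this matches the metric used in the Li et al. paper, I don't know):

import numpy as np
from scipy.ndimage import gaussian_filter

def uniformity(flat_stack, offset, smooth_sigma=25):
    """Ratio of the dimmest to the brightest region of the smoothed,
    offset-subtracted mean flat-field image; 1.0 means perfectly flat."""
    mean_img = flat_stack.mean(axis=0) - offset
    smooth = gaussian_filter(mean_img, smooth_sigma)  # suppress shot noise and pixel-to-pixel gain variation
    return smooth.min() / smooth.max()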
Unfortunately, I have been unable to find satisfactory answers to these questions. So far, my results seem to suggest that my calibration is good enough, but I wonder if someone else can offer their input.
Thanks!
Kyle
--
Kyle M. Douglass, PhD
Post-doctoral researcher
The Laboratory of Experimental Biophysics, EPFL, Lausanne, Switzerland
http://kmdouglass.github.io http://leb.epfl.ch