No, that's not correct. If the EM gain is high enough, read noise is negligible and a threshold (at, say, >5 s.d.) is reliable. The number of events above background induced by a low-light signal is therefore the photo-electron rate, in events/frame.

Cheers

Andreas Bruckbauer wrote:
> Hi Mark,
> sorry if anyone is bored by this topic now, but i think counting
> single photon events after thresholding is a bad idea because the
> result depends very much on the threshold settings, the signal for one
> photon and read out noise is not well enough separated, i would rather
> trust the established methods.
>
> best wishes
>
> Andreas
>
> -----Original Message-----
> From: Mark Cannell <[hidden email]>
> To: [hidden email]
> Sent: Fri, 23 Apr 2010 22:37
> Subject: Re: photons vs. photoelectrons?
>
> Hi Andreas
>
> As I said at the beginning, there are very few cases where actual
> photon numbers are needed, but it adds a veneer of precision/expertise
> to put out an image "calibrated" in photoelectrons. Now I don't mind
> that, but if it's to be done that way I would like it to be
> correct/honest. I hope you can see my point.
>
> As in other areas, the purpose of calibration is to allow reference to
> others. But in my experience it is hard to do a good calibration of
> most complex measurements, so it's better if a result can be expressed
> in terms of a change... The only cases I can think of where actual
> quantum numbers are needed are for some statistical tests or fitting
> to theory.
>
> The trouble with EMCCD is that the multiplicative noise reduces the
> S/N, so it's as if you actually got about half the number of photons.
> (So, if you are in a regime where your signal for the exposure time is
> much greater than the read noise, you should not use an EMCCD. While
> most EMCCDs also allow you not to use the EM register, the read out
> amplifier for the CCD shift register is also very noisy by good CCD
> standards.)
>
> But with an EMCCD, 'accurate' calibration is actually easier when you
> can detect a signal with mean signal per pixel <<1 photon. When you
> _count_ (by thresholding) events you have removed the problem of
> multiplicative noise, so when you take the average signal intensity
> (minus dark frames of course) you know how many photoelectron events
> are associated with it. As far as I know, no software/camera does this
> - but you can.
>
> Cheers
>
> Andreas Bruckbauer wrote:
> > I have a few questions regarding this:
> >
> > - What is the point in knowing how many photoelectrons have been
> > detected when photons get lost all the way through the microscope and
> > the number of photons depends on other parameters like illumination
> > intensity and environment of the dye?
> >
> > - Mark, you seem to be so confident about your way to calibrate the
> > camera, how do you do it?
> >
> > - The method with dark frames and flats is described by Ghosh and
> > Webb in Biophysical Journal, Volume 66, May 1994, 1301-1318; they
> > write: "this provides a lower boundary for the actual number of
> > photons detected, because other noise contributions with similar
> > square-root dependencies may exist."
> >
> > - Has anyone actually compared the results of these calibrations
> > with the result of an illumination with a known number of photons?
> >
> > best wishes
> >
> > Andreas
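The thresholded event counting described above can be sketched in a few lines. This is a minimal NumPy illustration, not any camera vendor's software: the frame sizes, noise levels and the single synthetic "event" per frame are invented for the demonstration, and the threshold is placed at 5 s.d. of the dark background as suggested in the post.

```python
import numpy as np

def count_photoelectron_events(frames, dark_frames, n_sigma=5.0):
    """Count above-threshold events per frame, as in thresholded
    photon counting on an EMCCD at high EM gain.

    frames      : (n, h, w) array of raw signal frames
    dark_frames : (m, h, w) array of dark frames (same settings, shutter closed)
    n_sigma     : threshold in standard deviations of the dark background
    """
    bias = dark_frames.mean(axis=0)   # per-pixel dark/bias level
    noise = dark_frames.std()         # global noise estimate (assumes fairly
                                      # uniform bias; per-pixel is better)
    threshold = n_sigma * noise
    events = (frames - bias) > threshold   # boolean event map
    return events.sum(axis=(1, 2))         # events per frame

# Synthetic demonstration: dark frames with read noise, and signal frames
# with one bright single-photon-like spike added per frame.
rng = np.random.default_rng(0)
dark = rng.normal(100.0, 2.0, size=(50, 32, 32))
sig = rng.normal(100.0, 2.0, size=(50, 32, 32))
sig[:, 10, 10] += 50.0   # one well-separated event per frame
per_frame = count_photoelectron_events(sig, dark)
print(per_frame.mean())  # ~1 event/frame for this synthetic data
```

Averaging `per_frame` over many frames then gives the photoelectron rate in events/frame, free of multiplicative noise, exactly as argued above.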
Karl Garsha-2
In reply to this post by Mark Cannell
Dear All,
To clarify, there is an important distinction between flat-field corrections and the type of bias-subtracted field illumination that will resolve pixel-to-pixel variation on the camera. It is important to understand this difference, lest a reviewer challenge our veneers of expertise. Keep in mind the microscope is a closed system, and we must consider many sources of error including the illumination and sample. The considerations for astronomical observations are not entirely the same.
'Flat field' corrections, usually used to compensate for uneven illumination, are a division operation multiplied by a scaling factor; this is an image-processing operation.
The other type of 'flat', which will resolve pixel response, is an averaged even field illumination with the bias level (the average pixel value of an image taken with zero exposure time) subtracted from the image.
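The two kinds of 'flat' can be put side by side in code. This is a hypothetical NumPy sketch with simulated frames: the bias level, the ~2% response spread and the illumination values are all made up, and a real measurement would of course use captured frames from an evenly illuminated camera.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated camera: a per-pixel bias and a per-pixel response (gain) map.
shape = (64, 64)
true_bias = rng.normal(100.0, 1.0, size=shape)
true_response = 1.0 + rng.normal(0.0, 0.02, size=shape)  # ~2% pixel variation

def expose(mean_signal, n_frames):
    """Even-field exposures (shot noise omitted here for simplicity)."""
    return (true_bias + mean_signal * true_response
            + rng.normal(0.0, 0.5, size=(n_frames,) + shape))

# (1) Bias frame: average pixel value of zero-exposure images.
bias = expose(0.0, 100).mean(axis=0)

# (2) Pixel-response 'flat': averaged even field illumination,
#     with the bias level subtracted.
flat = expose(1000.0, 100).mean(axis=0) - bias
response = flat / flat.mean()   # normalized per-pixel response

# (3) Classic flat-field correction of a raw image: a division operation
#     (here by the normalized response, i.e. division by the flat
#     multiplied by a scaling factor).
raw = expose(500.0, 1)[0]
corrected = (raw - bias) / response
```

The first operation characterizes the camera itself; the third is the image-processing step that uses it.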
As long as we are basking in good old fashioned hellfire and brimstone, let us turn to the gospel of isolation...
An even field illumination is not properly achieved with the camera mounted on the microscope. You will not measure camera 'flatness' this way: it convolves error from the optical train and illumination source with the camera chip, the error introduced will be significant, and the results have a very good chance of being imprecise at best, perhaps even sloppy. With a nice fluorescent sea there is potential refractive index mismatch, spherical aberration, possible issues with fluorochrome solubility, differences in lamp coupling, lens properties, wandering arcs... who knows what. A good deal of real error from the practical world. We can do better, even within the limits of a superficial understanding.
So in order to really get down to these sources of variation in our camera, and not simply measure noise in the optical train and light source, we need to isolate the camera. Different components of our closed system have tolerances, and tolerances stack. If we don't isolate, then we aren't being rigorous. For even field illumination to measure camera response and inter-pixel variation, pull the camera and use the proper equipment.
If we want to make statements about our pixel-to-pixel variation and the impact of such variability, we need to use the right method; this actually makes things easier. An integrating sphere and analytical light source are one proper way to do this, or, if your camera has a built-in field illumination that isolates it from the microscope, you can use that.
The integrating sphere with stabilized light source and camera mount is tried and true, so just proudly ask someone in the lab where yours is. This is an important part of our simple calibration; we should confirm and report our differential pixel response with zeal.
Thankfully, on a very good EMCCD the pixel variation should be within the manufacturer's designated specification, and this specification is hopefully below the variability one would expect from sources of noise that are ostensibly intractable, such as shot noise and multiplicative noise. You can check this and make sure your camera is within specification, if you know how to achieve proper even field illumination. Some more costly cameras hold precise pixel-response tolerances, but the manufacturer has to cover the cost of non-conforming chips; one benefit of the additional price premium is usually that inter-pixel variation is very well controlled.
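One way to check a camera against such a specification, given proper even-field frames, is to average many frames so that the temporal noise (shot and read noise) shrinks as 1/sqrt(n) and what remains across pixels is mostly fixed-pattern nonuniformity. A rough sketch with synthetic data follows; the 1% fixed pattern and the noise figures are invented for illustration.

```python
import numpy as np

def pixel_nonuniformity(flat_frames, bias):
    """Estimate fixed-pattern (pixel-to-pixel) variation from a stack of
    even-field frames.

    Returns the coefficient of variation of the per-pixel mean response.
    """
    mean_img = flat_frames.mean(axis=0) - bias
    # Temporal noise per pixel shrinks as 1/sqrt(n_frames) in the mean
    # image; the remaining spread across pixels is mostly fixed pattern.
    return mean_img.std() / mean_img.mean()

# Synthetic even-field stack: 1% fixed-pattern variation plus large
# temporal noise that averages away over 400 frames.
rng = np.random.default_rng(2)
shape = (64, 64)
response = 1.0 + rng.normal(0.0, 0.01, size=shape)
frames = 1000.0 * response + rng.normal(0.0, 30.0, size=(400,) + shape)
cv = pixel_nonuniformity(frames, bias=0.0)
print(f"pixel-to-pixel CV = {cv:.4f}")   # close to the simulated 1%
```

The resulting coefficient of variation can then be compared directly against the manufacturer's interpixel-variation specification.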
On some cheaper cameras one would expect more pixel-to-pixel variability, because chip yield (the fraction of EMCCD chips purchased at the supplier's specification that can pass the camera maker's specification and go into a camera someone buys) is higher if the inter-pixel variation tolerance isn't as tight. In this case it makes sense to be careful about overstating precision. Best, Karl
On Wed, Apr 21, 2010 at 3:26 PM, Mark Cannell <[hidden email]> wrote:

A flat is the image obtained with a uniformly illuminated field. Uniform illumination is not always easy to achieve, but images of a thin uniform dye layer give a reasonable approximation. When you average many such frames you should have captured the non-uniformities in your optical system that affect the flatness of your fluorescence image. This link may help you:
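For completeness, the dark-frames-and-flats (mean-variance, or photon-transfer) calibration mentioned earlier in the thread can also be sketched. This is a simplified NumPy illustration assuming an ordinary shot-noise-limited CCD model: for Poisson-distributed photoelectrons, variance = gain × mean (in ADU), so the slope of variance against bias-subtracted mean gives the conversion gain. The gain and bias values below are invented; real data would use measured pairs of flat frames at several illumination levels.

```python
import numpy as np

def gain_from_mean_variance(frame_pairs, bias):
    """Estimate conversion gain (ADU per photoelectron) from pairs of
    flat frames taken at several illumination levels.

    Differencing each pair cancels fixed-pattern nonuniformity; the
    variance of the difference is twice the temporal variance.
    """
    means, variances = [], []
    for a, b in frame_pairs:
        means.append(0.5 * (a + b).mean() - bias)
        variances.append(0.5 * (a - b).var())  # halve: var of a-b is doubled
    slope, _intercept = np.polyfit(means, variances, 1)
    return slope

# Synthetic flats: Poisson photoelectrons, gain 2.0 ADU/e-, small read noise.
rng = np.random.default_rng(3)
shape = (64, 64)
true_gain, bias_level = 2.0, 100.0
pairs = []
for level in (50, 100, 200, 400, 800):
    a = bias_level + true_gain * rng.poisson(level, shape) + rng.normal(0, 1.0, shape)
    b = bias_level + true_gain * rng.poisson(level, shape) + rng.normal(0, 1.0, shape)
    pairs.append((a, b))
g = gain_from_mean_variance(pairs, bias_level)
print(f"estimated gain = {g:.2f} ADU/e-")   # close to the simulated 2.0
```

As the Ghosh and Webb caveat quoted earlier notes, this yields a lower bound on detected photons when other noise sources with similar square-root dependencies are present.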
Nico Stuurman
I did, and nobody knew what I was talking about ;). Can anyone recommend a source for an integrating sphere and stabilized light source? Karl inspired me to take camera calibration more seriously, and, regrettably, my cameras do not have an even illumination source built in. Thanks! Nico
Karl Garsha-2
Thanks Nico,
CVI Melles Griot, Newport, and Oriel are good sources; there are others. It helps to have a breadboard and some filter wheels for neutral density and wavelength selection. I'll try to provide more supporting detail when I manage to get caught up. Best, Karl

On Fri, Apr 30, 2010 at 10:00 PM, Nico Stuurman <[hidden email]> wrote: