
Mark Cannell

Re: photons vs. photoelectrons?

No, that's not correct. If the EM gain is high enough, read noise is
negligible and a threshold (at, say, >5 s.d.) is reliable. The number
of events above background induced by a low-light signal is therefore
the photo-electron rate, in events/frame.
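A minimal sketch of that thresholded counting, assuming dark frames are on hand to estimate the read-noise s.d. (function names, array shapes and numbers here are illustrative, not from any particular camera):

```python
import numpy as np

def count_events(frame, dark_stack, k_sigma=5.0):
    """Count thresholded single-photon events: pixels exceeding the
    mean dark level by k_sigma read-noise standard deviations."""
    bias = dark_stack.mean(axis=0)           # per-pixel dark level
    sigma = dark_stack.std()                 # read-noise s.d. estimate
    return int(((frame - bias) > k_sigma * sigma).sum())

# Synthetic check: pure read noise plus one large EM-amplified event
rng = np.random.default_rng(0)
darks = rng.normal(100.0, 2.0, size=(50, 64, 64))
frame = rng.normal(100.0, 2.0, size=(64, 64))
frame[10, 10] += 60.0                        # ~30 s.d. above bias: a real event
print(count_events(frame, darks))
```

Note the threshold is set from the dark-frame statistics rather than tuned by eye; that is what makes the count reproducible.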

Cheers

Andreas Bruckbauer wrote:

> Hi Mark,
> sorry if anyone is bored by this topic now, but I think counting
> single-photon events after thresholding is a bad idea: the
> result depends very much on the threshold settings, and the signal for one
> photon and the read-out noise are not well enough separated. I would rather
> trust the established methods.
>
> best wishes
>
> Andreas
>
>
>
> -----Original Message-----
> From: Mark Cannell <[hidden email]>
> To: [hidden email]
> Sent: Fri, 23 Apr 2010 22:37
> Subject: Re: photons vs. photoelectrons?
>
> Hi Andreas
>  
> As I said at the beginning, there are very few cases where actual
> photon numbers are needed, but it adds a veneer of precision/expertise
> to put out an image "calibrated" in photoelectrons. Now I don't mind
> that, but if it's to be done that way I would like it to be
> correct/honest. I hope you can see my point.
>  
> As in other areas, the purpose of calibration is to allow reference to
> others. But in my experience it is hard to do a good calibration of
> most complex measurements so it's better if a result can be expressed
> in terms of a change... The only cases I can think of where actual
> quantum numbers are needed are for some statistical tests or fitting
> to theory.
>  
> The trouble with EMCCD is that the multiplicative noise reduces the
> S/N so it's as if you actually got about half the number of photons.
> (So, if you are in a regime where your signal for the exposure time is
> much greater than the read noise you should not use an EMCCD. While
> most EMCCDs also allow you not to use the EM register, the read out
> amplifier for the CCD shift register is also very noisy by good CCD
> standards. )
> But with an EMCCD, 'accurate' calibration is actually easier when you
> can detect a signal with a mean per pixel of <<1 photon. Now when you
> _count_ (by thresholding) events you have removed the problem of
> multiplicative noise so when you take the average signal intensity
> (minus dark frames of course) you know how many photoelectron events
> are associated with it. As far as I know, no software/camera does this
> -but you can.
>  
> Cheers
>  
> Andreas Bruckbauer wrote:
> > I have a few questions regarding this:
> >
> > - What is the point in knowing how many photoelectrons have been
> > detected when photons get lost all the way through the microscope and
> > the number of photons depends on other parameters like illumination
> > intensity and environment of the dye?
> >
> > - Mark, you seem to be so confident about your way to calibrate the
> > camera, how do you do it?
> >
> > - The method with dark frames and flats is described by Ghosh and
> > Webb in Biophysical Journal Volume 66 May 1994 1301-1318, they
> > write: "this provides a lower
> > boundary for the actual number of photons detected, because other
> > noise contributions with similar square-root dependencies may exist."
> >
> > - Has anyone actually compared the results of these calibrations with
> > a result of an illumination of a known number of photons?
> >
> > best wishes
> >
> > Andreas
> >
> >
> >
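The quoted point about multiplicative noise ("as if you actually got about half the number of photons") follows from the EM excess noise factor F ≈ √2 at high gain; a minimal numerical sketch, with the read noise and photoelectron count chosen for illustration:

```python
import math

def snr_analog_em(n_pe, read_noise_e=0.1, excess_factor=math.sqrt(2)):
    """SNR for analog EMCCD readout: multiplicative (EM register) noise
    inflates the shot-noise variance by excess_factor**2 (~2)."""
    return n_pe / math.sqrt(excess_factor**2 * n_pe + read_noise_e**2)

def snr_counting(n_pe):
    """Thresholded photon counting: multiplicative noise is removed,
    leaving pure shot noise."""
    return math.sqrt(n_pe)

# 100 photoelectrons: analog EM readout behaves like ~50 real photons
print(round(snr_analog_em(100), 2))  # 7.07, i.e. about sqrt(50)
print(snr_counting(100))             # 10.0, i.e. sqrt(100)
```

This is why counting events (rather than averaging the analog EM signal) recovers the full shot-noise-limited S/N.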
Karl Garsha-2

Re: photons vs. photoelectrons?

In reply to this post by Mark Cannell

Dear All,

 

To clarify, there is an important distinction between flat-field corrections and the type of bias-subtracted field illumination that will resolve pixel-to-pixel variation on the camera. It is important to understand this difference, lest a reviewer challenge our veneers of expertise. Keep in mind the microscope is a closed system, and we must consider many sources of error including the illumination and sample. The considerations for astronomical observations are not entirely the same.

 

'Flat-field' corrections, usually applied to compensate for uneven illumination, are a division operation multiplied by a scaling factor; it's an image-processing operation.

 

The other type of 'flat' that will resolve pixel response is an averaged even field illumination with the bias level (average pixel value of image taken with zero exposure time) subtracted from the image.  
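The two kinds of 'flat' can be sketched in a few lines (hypothetical NumPy arrays stand in for acquired frames; a real measurement would average far more frames):

```python
import numpy as np

def pixel_response_flat(flat_stack, bias_stack):
    """Second kind of 'flat': averaged even-field illumination with the
    bias level (zero-exposure average) subtracted."""
    bias = bias_stack.mean(axis=0)
    return flat_stack.mean(axis=0) - bias

def flat_field_correct(image, flat):
    """First kind: an image-processing division, rescaled so the
    corrected image keeps roughly the original intensity range."""
    return image / flat * flat.mean()

rng = np.random.default_rng(1)
bias_stack = rng.normal(100, 1, size=(20, 32, 32))   # zero-exposure frames
flat_stack = rng.normal(600, 5, size=(20, 32, 32))   # even-field frames
flat = pixel_response_flat(flat_stack, bias_stack)
img = rng.normal(300, 10, size=(32, 32))
corrected = flat_field_correct(img, flat)
print(corrected.mean())
```

The first operation is cosmetic unless the flat itself was measured properly; the second is the measurement that actually resolves pixel response.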

 

As long as we are basking in good old fashioned hellfire and brimstone, let us turn to the gospel of isolation...

 

An even field illumination is not properly achieved with the camera mounted to the microscope. You will not measure camera 'flatness' this way. This convolves error from the optical train and illumination source with the camera chip... the error introduced will be significant and the results have a very good chance of being imprecise at best, perhaps even sloppy. With a nice fluorescent sea there is the potential for refractive index mismatch, spherical aberrations, possible issues with fluorochrome solubility, differences in lamp coupling, lens properties, wandering arcs... who knows what. A good deal of real error from the practical world. We can do better, even within the limits of a superficial understanding.

 

So in order to really get down to these sources of variation in our camera and not simply measure noise in the optical train, light source and lamp, we need to isolate the camera. Different components of our closed system have tolerances and tolerances stack. If we don't isolate, then we aren't being rigorous. For even field illumination to measure camera response and inter-pixel variation, pull the camera and use the proper equipment.

 

If we want to make statements about our pixel-to-pixel variation and the impact of such variability, we need to use the right method; this actually makes things easier. An integrating sphere and analytical light source are one proper way to do this, or if your camera has a built-in field illumination that isolates it from the microscope, you can use that.

 

The integrating sphere with stabilized light source and camera mount is tried and true, so just proudly ask someone in the lab where yours is. This is an important part of our simple calibration, we should confirm and report our differential pixel response with zeal.

 

Thankfully, on a very good EMCCD the pixel variation should be within the manufacturer's designated specification, and this specification is hopefully below the variability one would expect from sources of noise that are ostensibly intractable, such as shot noise and multiplicative noise. You can check this and make sure your camera is within specification, if you know how to achieve proper even field illumination. Some more costly cameras hold precise pixel response tolerances, but the manufacturer has to cover the cost of non-conforming chips; one benefit of the additional price premium is usually that interpixel variation is very well controlled.

 

On some cheaper cameras one would expect more pixel-to-pixel variability, because the chip yield (the number of EMCCD chips purchased from the supplier at the supplier's specification that can pass the camera maker's specification to be put in a camera that someone buys) is higher if the inter-pixel variation tolerance isn't as tight. In this case it makes sense to be careful about overstating precision.


Best,

Karl 


On Wed, Apr 21, 2010 at 3:26 PM, Mark Cannell <[hidden email]> wrote:
A flat is the image obtained with a uniformly illuminated field. Uniform illumination is not always easy to achieve, but you could take images of a thin uniform dye layer as a reasonable approximation. When you average many such frames you should have captured the non-uniformities in your optical system that affect the flatness of your fluorescence image. This link may help you:

http://www.aavso.org/observing/programs/ccd/manual/3.shtml

Regards, Mark

Hi All,

By « flat images », what do you mean, compared to dark images? And what is the right procedure to acquire them? Thanks a lot,

/Monique Vasseur/

tel. (514) 343-6111 ext. 5148

*From:* Confocal Microscopy List [mailto:[hidden email]] *On behalf of* Karl Garsha
*Sent:* 20 April 2010 22:51
*To:* [hidden email]
*Subject:* Re: photons vs. photoelectrons?

Hello All,

The photo-electron measurement can be considered to be the electrons which are registered by the camera pixel; the conversion to photons is a calculation that takes the quantum efficiency into account. The conversion to photons makes some assumptions about the wavelength and bandwidth of the photon population that delivered the photoelectron count (consider fluorescence emission and objective transmission curves and filters convolved with the quantum efficiency of the camera chip).
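That conversion can be sketched as follows; the QE value is a made-up illustration, and a single effective QE over the detected band is an assumption:

```python
def photoelectrons_to_photons(n_electrons, quantum_efficiency):
    """Estimate incident photons from detected photoelectrons.
    Assumes one effective QE for the emission band actually reaching
    the chip (filters and optics already folded in)."""
    if not 0 < quantum_efficiency <= 1:
        raise ValueError("QE must be in (0, 1]")
    return n_electrons / quantum_efficiency

# e.g. 450 photoelectrons at an assumed effective QE of 0.5
print(photoelectrons_to_photons(450, 0.5))  # 900.0 photons
```

In practice the effective QE is itself a weighted integral over the emission spectrum, which is where the stated assumptions come in.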

In my experience the Evolve calibration technology is defensible from an analytical standpoint; it is also valuable in a practical context. I have no commercial interest in making this statement. I concur that it's advisable to understand what such tools do, and I don't think there is any reason to believe that the technology obfuscates the theory behind it. Most of us probably don't contemplate how our mass air flow sensors affect spark timing in our automobiles on our way to work, yet the information is available, and it can be empowering under the right circumstances.

Because my cameras have to be calibrated, and I work with several cameras, I submit that rigorous gain calibrations are not at all painless. The situation with even the most advanced EMCCD technology can be substantially less trivial. The type of automated gain calibration under discussion can take a number of noise factors into account and make a non-trivial situation much more manageable, accurate and precise.

With the Evolve tool, the calibration is handled responsibly. I've made the effort to convince myself of this. The automated calibration produces a more precise calibration than I'm likely to produce manually in the absence of such automated tools, but the big advantage is convenience. The calibration is handled at every gain level (in multiple replicates) using a uniform field illumination built into the camera. There is indeed quite a bit more to it (mean-variance / photon transfer curve calculations using biases and flats, bias stability management, etc., as well as sophisticated voltage management of the EM gain register), but my point is that this is done in minutes. It would be prohibitive for me, or many other busy scientists, to do this routinely. This technology makes it straightforward to have a summer undergraduate intern, junior research associate or senior scientist all collaborating to gather advanced quantitative data in the context of the 'big picture' (no pun intended) without worrying about whether someone calibrated the camera at a given gain state correctly.
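The mean-variance (photon transfer curve) step mentioned here can be illustrated with synthetic data. This is a sketch of the conventional-CCD case only (no EM excess noise): for a linear camera the variance of the bias-subtracted signal grows linearly with its mean, and the slope is the gain in ADU per photoelectron:

```python
import numpy as np

def gain_from_mean_variance(means, variances):
    """Slope of variance vs mean over a series of flat exposures;
    for a linear, shot-noise-limited camera this slope is the gain
    in ADU per photoelectron."""
    slope, _intercept = np.polyfit(means, variances, 1)
    return slope

# Synthetic photon-transfer data: true gain 2 ADU/e-, Poisson photoelectrons
rng = np.random.default_rng(2)
true_gain = 2.0
means, variances = [], []
for n_e in (50, 100, 200, 400, 800):        # mean photoelectrons per pixel
    frames = true_gain * rng.poisson(n_e, size=(200, 10000))
    means.append(frames.mean())
    variances.append(frames.var())
print(gain_from_mean_variance(np.array(means), np.array(variances)))
```

With EM gain engaged, the excess noise factor roughly doubles the slope, which is one reason an automated, per-gain-state calibration is convenient.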

If others have opinions that depart from my experience, then it's worth discussion; it can be healthy to challenge new tools and pose questions. But we should do so based on evidence. Data I gathered using an Evolve clearly indicates the calibration performed by the camera is accurate: when I tested the linearity of the EM gain on a calibrated unit, the least squares fit I recorded had an R-squared value of 0.9995. The gain reported is the measurable gain, to the best of my ability to verify. This isn't an exercise I would repeat for fun, but I can speak to the results. The technology does work, quite well. Quantitative work with EM cameras brings responsibilities and considerations beyond those typical of interline cameras. There are different sources of error, noise, etc.

I can put a slide prepared a year ago on one of my instruments and tell if it changed and by how much. I require this level of instrument characterization. This brings up an important point, however: analytical imaging requires a system-level calibration. Fluorescence is a real-time photochemical phenomenon, and variability can arise from both the instrument and the sample. If you want to truly resolve sample differences, both the illumination and the camera need to be well characterized (assuming standardized optics). I've witnessed 30% discrepancy between instruments because of light guide aging (all other things being equal, new arc lamps etc.). Technologies like closed-loop illumination and sample plane calibration can be enormously helpful in efficiently assuring data integrity. The recent introduction of practical quantitative illumination and calibration tools is an important advance that makes quantitative work more accessible, reliable and convenient.

So, in the spirit of informative discussion, I've added my input as well.

Best Regards,
Karl Garsha

On Mon, Apr 19, 2010 at 9:27 PM, Mark Cannell <[hidden email] <mailto:[hidden email]>> wrote:

Hi Steve

As EM gain calibration is so trivial, I couldn't help but be unimpressed :-P To calibrate in terms of average photoelectrons across the image is also trivial when you reduce the signal to << 1 photon per pixel and take plenty of frames. But that is still not accounting for the pixel-to-pixel sensor variation. My point is that you can't calibrate an _image_ by assuming that the gain and offset of every pixel is the same; you need darks and flats to do this, and only then can you provide an image calibrated in 'photons' captured. I may be getting old, but I don't like seeing quite complicated ideas being distilled by "turn key" solutions to the point where a user thinks they have something accurately calibrated but never knows what the calibration means or its assumptions.
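The darks-and-flats recipe can be sketched end to end. This is an illustration under stated assumptions (the e-/ADU factor comes from a separate mean-variance calibration, any EM gain is taken as already divided out, and all names are hypothetical):

```python
import numpy as np

def calibrate_to_photoelectrons(raw, dark_stack, flat_stack, e_per_adu):
    """Subtract the averaged dark, divide by a unity-mean flat to undo
    pixel-to-pixel gain variation, then scale ADU to photoelectrons."""
    dark = dark_stack.mean(axis=0)
    flat = flat_stack.mean(axis=0) - dark   # dark-subtracted flat
    flat_norm = flat / flat.mean()          # unity-mean response map
    return (raw - dark) / flat_norm * e_per_adu

# Idealised example: perfectly uniform chip, so the answer is exact
dark_stack = np.full((10, 8, 8), 100.0)
flat_stack = np.full((10, 8, 8), 600.0)
raw = np.full((8, 8), 400.0)
pe = calibrate_to_photoelectrons(raw, dark_stack, flat_stack, e_per_adu=0.5)
print(pe.mean())  # 150.0 photoelectrons per pixel
```

On a real sensor the flat map is not uniform, which is exactly why the per-pixel division matters before calling the result 'photons captured'.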

I know that many folks these days don't seem to want to know anything about the limitations of the methods they use, because they think results are more important (than understanding what their machine actually does). But as you know, my view is that unless you "understand the process of imaging you risk imagination" (c).

another 2c.

Cheers Mark



Stephen Cody wrote:

Dear List,

I just checked the Evolve web page again, it is not a "Dark
Calibration" but a light calibration. A shutter is closed, and an
internal light source in the camera activated to calibrate the EMCCD.
Below I've extracted the relevant text..... I have no commercial
affiliation with Photometrics.

From the Photometrics Web page (this is from a commercial company; I have
no affiliation and I have no personal experience of this product).

"EMCCD cameras are subject to aging of the EMCCD register as a result
of its usage. The Evolve has a simple calibration feature that
performs the industry’s most accurate EM calibration within 3 minutes.

A simple turn of the camera’s nose-piece closes a shutter and
activates a light source which the detector uses to calibrate its EM
gain. This ensures that users will receive the most accurate EM gain
and EM gain applied matches what the user requests.

Simple software control will allow the user to use this feature as a
manual shutter in order to block all light from the sensor in order to
take dark reference frames if necessary."

On 20 April 2010 09:37, Stephen Cody <[hidden email] <mailto:[hidden email]>> wrote:

Dear Mark et al,

As I understand from the promotional material for this camera, there
is a dark calibration procedure built into the camera. The eVolve web
site, while very glitzy, is quite informative (if you can stand the
hype).

Stephen Cody

On Saturday, April 17, 2010, Mark Cannell <[hidden email] <mailto:[hidden email]>> wrote:

Hi All

I must admit to being unimpressed by this 'improvement'. It removes (from the researcher) the need to understand what a camera really does, and I doubt that it is accurate. Before someone howls at this, I would point out that astronomers who routinely produce calibrated images use a dark and a flat frame to achieve this. Without a dark, you cannot calibrate the camera image, even if you assume it is flat (which it isn't). The problem is that the camera changes its properties (especially the EM register), so no single calibration is going to be accurate. Since it is easy to actually use darks and flats to calculate actual photon numbers, why rely on a manufacturer calibration? I suggest it's a bit like assuming your Gilson/Eppendorf is still correct, and everyone knows that's not GLP, right? But let's be clear: most people don't give a damn about how many photoelectrons there are; they just want a pretty image. For the few cases where photo-electron numbers are needed, the time taken to take darks and flats is trivial compared to the time taken in precise experiments.

my 2c

Mark Cannell





*From:* Confocal Microscopy List [mailto:[hidden email] <mailto:[hidden email]>] *On behalf of* John Oreopoulos

*Sent:* Friday, 16 April 2010 16:04
*To:* [hidden email] <mailto:[hidden email]>

*Subject:* photons vs. photoelectrons?

The recent release of the Photometrics EMCCD "eVolve" camera which has the ability to output images with pixel values that correspond to photoelectron counts......

--
Stephen H. Cody





Nico Stuurman

Re: photons vs. photoelectrons?


The integrating sphere with stabilized light source and camera mount is tried and true, so just proudly ask someone in the lab where yours is.

I did, and nobody knew what I was talking about ;). Can anyone recommend a source for an integrating sphere and stabilized light source? Karl inspired me to take camera calibration more seriously, and, regretfully, my cameras do not have an even illumination source built in.

Thanks!

Nico

Karl Garsha-2

Re: photons vs. photoelectrons?

Thanks Nico,

CVI Melles Griot, Newport, and Oriel are good sources; there are others. It helps to have a breadboard and some filter wheels for neutral density and wavelength selection. I'll try to provide more supporting detail when I manage to get caught up.

Best,
Karl 

On Fri, Apr 30, 2010 at 10:00 PM, Nico Stuurman <[hidden email]> wrote:

The integrating sphere with stabilized light source and camera mount is tried and true, so just proudly ask someone in the lab where yours is.

I did, and nobody knew what I was talking about ;). Can anyone recommend a source for an integrating sphere and stabilized light source? Karl inspired me to take camera calibration more seriously, and, regretfully, my cameras do not have an even illumination source built in.

Thanks!

Nico

