Posted by Nuno Moreno
URL: http://confocal-microscopy-list.275.s1.nabble.com/not-a-confocal-question-features-of-a-widefield-tp591195p591201.html

Search the CONFOCAL archive at
http://listserv.acsu.buffalo.edu/cgi-bin/wa?S1=confocal

I'm very happy with all these replies. I would never have guessed that this
was such an interesting and topical subject. Below you can find some answers
to very important points.
I think a new concept has been born here: ADAPTIVE MICROSCOPY. The
name looks nice, and I hope that in the near future all manufacturers will
offer a similar package.
;)
Regards,
NM
Julio Vazquez wrote:
> Hi Nuno,
>
> I have some thoughts (but not any useful answer, I'm afraid)...
>
> 1. Do you want to track cells in x,y too, or only in z? In addition
> to the Nikon implementation already mentioned, on the Zeiss 510 one can
> "autofocus" on the inner surface of the coverslip, and then move the
> focal plane to a specified distance from that surface. This will correct
> for drift of the focus drive. On the DeltaVision, the autofocus
> function takes a series of images around the previous z location, finds
> the plane of highest intensity, and then recenters the stack at the new
> z position. In both cases, the systems look for the most intense image
> and use that as the new reference. By using a combination of both
> approaches, in iteration, one could compensate both for mechanical
> drift, as well as for drift of the cell inside the sample. The problem
> is that the most intense plane is not necessarily the one you want,
> and/or may change within the sample over time. In photography, autofocus
> is achieved by looking for maximum contrast, rather than max intensity,
> and I could imagine something like this being implemented in
> microscopy. I think these approaches will work fine if you have a
> single cell in your field of view, but once you have many, how does one
> teach the software to pay attention to one specific cell and ignore the
> others, so that focus doesn't keep jumping between cells? Some image
> analysis software have tools for object tracking, where individual
> objects (cells) are identified based on total intensity, and possibly
> morphological parameters. I guess one could use such an approach to
> force the microscope to stay on one specific object and track it in
> x,y,z over time, but complex samples where there are many cells changing
> shape and intensity over time would be very difficult for the software
> to track... We have experienced this when trying to track objects for
> analysis purposes... it works OK with good images and few objects, but
> gets messy rather quickly otherwise.
Yes, I'm not saying the system is perfect. It does a correction based on a
kind of "contrast normalization", but when you have too many cells it can
get confused. It is for specific applications, but it is so simple that I
was just wondering why it has not been implemented yet.
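Just to make the contrast-based autofocus that Julio mentions concrete, here is a minimal sketch in Python/NumPy of what such a routine could look like: scan a small z-range around the previous position, score each plane with a sharpness metric, and recentre on the best one. The acquire_plane callback and the variance-of-Laplacian metric are only illustrative assumptions on my part, not any vendor's actual routine.

import numpy as np
from scipy.ndimage import laplace

def focus_score(img):
    # Variance of the Laplacian: higher means more local contrast (sharper).
    return laplace(img.astype(float)).var()

def autofocus(acquire_plane, z_center, z_range=2.0, n_steps=9):
    # acquire_plane(z) is a hypothetical callback that moves the focus
    # drive to z (in microns) and returns a 2-D image as a NumPy array.
    z_positions = np.linspace(z_center - z_range, z_center + z_range, n_steps)
    scores = [focus_score(acquire_plane(z)) for z in z_positions]
    return z_positions[int(np.argmax(scores))]

Restricting the metric to a region of interest around the cell you care about would be one simple way to keep the focus from jumping between cells.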
>
> 2. Regarding the autoexposure issue, again DeltaVision has a function
> where a brief series of short exposures is taken and then exposure time
> is set to a value that gives a preset max intensity. This is probably how
> most autoexposure routines work with microscopy acquisition software,
> and while it is true that some implementations use quite heavy doses of
> exposure, the DeltaVision implementation generally uses only a small
> fraction of the exposure time you would use for normal acquisition.
> Bleaching is certainly increased, but not outrageously.
I absolutely agree!
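Just as a sketch of how such a test-exposure routine could work (simplified here to a single short test image and an assumed linear detector response; the acquire callback and the numbers are my assumptions, not DeltaVision's actual code):

import numpy as np

def autoexpose(acquire, test_exposure_ms=50, target_fraction=0.7,
               full_scale=4095, max_exposure_ms=5000):
    # acquire(exposure_ms) is a hypothetical callback returning a NumPy image.
    test = acquire(test_exposure_ms).astype(float)
    peak = np.percentile(test, 99.9)   # robust "max", ignores a few hot pixels
    if peak <= 0:
        return max_exposure_ms
    # Scale the exposure so the peak lands near target_fraction of full scale,
    # assuming intensity grows roughly linearly with exposure time.
    scale = (target_fraction * full_scale) / peak
    return float(np.clip(test_exposure_ms * scale, 1, max_exposure_ms))

The cost is only the one short test exposure per time point, which is why the bleaching penalty stays small.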
>
> What you suggest is a system where images are collected, and based on
> post-acquisition analysis of one given image, exposure would be adjusted
> for the next time point, therefore avoiding the need for extra exposure
> required by a conventional autoexposure routine. The major problem I see
> with this is that if one time point is grossly overexposed (saturated),
> how does the software calculate the correction factor for the next time
> point?
It goes gradually toward the target value. You will get the one overexposed
frame, but I don't see how it would get to such a big difference. Here you
can find an example of a time lapse of more than 13 hours where the exposure
time of one channel decreased from ~6000 ms to ~1500 ms with no
post-processing. I must admit that without post-processing, shorter exposure
times, or a longer interval between frames, you will get some flickering.
http://uic.igc.gulbenkian.pt/pics/adaptExpTimeXvid.avi
> In addition, such a system is clearly most important in cases
> where the intensity of the sample is changing significantly. But then,
> how can the software predict the rate of change? It might work if the
> rate is linear, but even so, one has to wait for an image to deviate
> from the desired exposure level to implement a change to bring the
> next exposure to a desired value... we would still end up with stacks of
> varying intensities cycling around an optimal value...
Yes, but you can correct those small variations in software afterwards.
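Roughly, the kind of feedback loop being discussed could look like this: adjust the next exposure from the previous frame only, move just part of the way toward the target, and clamp the step so a single saturated frame cannot cause a wild swing. A rough Python sketch with hypothetical names, not the actual implementation:

import numpy as np

def next_exposure(prev_image, prev_exposure_ms, target=3000,
                  gain=0.5, max_step=1.5, full_scale=4095):
    # Peak intensity of the previous frame (robust against a few hot pixels).
    peak = np.percentile(prev_image.astype(float), 99.9)
    if peak >= 0.98 * full_scale:
        # Saturated: the true intensity is unknown, so just step down.
        ratio = 1.0 / max_step
    else:
        # Move only part of the way toward the target (gain < 1 damps the loop).
        ratio = (target / max(peak, 1.0)) ** gain
    ratio = float(np.clip(ratio, 1.0 / max_step, max_step))
    return prev_exposure_ms * ratio

Because the step is clamped, the exposure can only drift gradually, which is exactly why the residual flickering is small enough to correct in software afterwards.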
> Finally, one
> problem we have seen with systems based on feedback from average image
> intensities, is that the object you are interested in may be a minor
> contributor to the total image intensity, and therefore your autofocus,
> or autoexposure, may be responding to extraneous things that are
> irrelevant for your experiment.
Yes. If there are external factors, like a bit of crap floating around,
you are done.
> For instance, if you base your
> autoexposure on total image intensity, and your cell is quite small, the
> autoexposure may be following the changes in background fluorescence,
> and not the changes in your cell. On the other hand, if it is adjusting
> to the max intensity, then you have to find a field where the cell of
> interest is also the brightest object...
Tonight there is a time lapse running with a malaria parasite and a cell
stained with LysoTracker. It will not work very well for the parasite
because it is very small. The focus adaptation is done on the cell's
LysoTracker channel.
>
> 3. Most tracking systems I can think of use some sort of live feedback:
> the autofocus on your photo camera estimates the distance to the object
> just before you click the shutter, or, if in "continuous" mode, keeps
> measuring and estimating the distance, so that when you click the
> shutter, the camera will focus where it thinks the object will be. You
> still need to tell the camera which object to focus on (by keeping it in
> the crosshair), or use some fancy algorithm that makes assumptions as to
> what the object of interest looks like. I suppose a missile tracking
> system would also rely on continuous feedback in real time to
> anticipate the next location of the missile.
The system does not make predictions, yet.
> I think such a system will
> most likely fail if the time delay between measurement and action
> increases, if the object has a highly non-linear trajectory (changes of
> direction and velocity), and if there is crowding (1 missile to track
> among 100 identical decoys). Unfortunately, most of these caveats seem
> to apply to some extent in real-life microscopy, and that is perhaps why
> an autoexpose/autofocus function just before acquisition might be the
> most reliable... On the other hand, if you can implement a system such
> as the one you describe, I would love to invest in your business
> (although I don't have that much to invest, unfortunately)! We actually
> had a user who wanted to follow yeast cells as they underwent mitosis,
> and those guys do jump around like crazy.
In this case it will not work properly unless you are imaging a single
yeast cell or just a few. What you can set is a sensitivity value that will
adjust the focus more often, but you might reach a point where it is always
trying to get the best focus and will bleach more than it should. In that
case the DeltaVision solution will probably work better.
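The sensitivity idea could be as simple as this: only trigger a new autofocus scan when the focus metric of the latest frame has dropped by more than a user-set fraction of a reference value. A rough sketch, not the actual implementation; the names and the metric are only illustrative:

def should_refocus(current_score, reference_score, sensitivity=0.2):
    # A small sensitivity refocuses often (more bleaching); a large one
    # tolerates more drift before triggering a new autofocus scan.
    if reference_score <= 0:
        return True
    drop = (reference_score - current_score) / reference_score
    return drop > sensitivity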
> Eventually, she did what you
> suggest, except that she was part of the feed back loop: she just kept
> looking at the images on the monitor as they were being acquired and
> manually refocusing the microscope. Couldn't find any software that
> would do that better than she did...
Yes, but if your time lapse runs for more than 10 hours...
>
>
>
> --
> Julio Vazquez
> Fred Hutchinson Cancer Research Center
> Seattle, WA 98109-1024
>
>
>
> http://www.fhcrc.org/
>
>
> On Dec 7, 2007, at 8:41 AM, Nuno Moreno wrote:
>
>> Autoexpose will bleach everything, right?
>>
>> Regarding the adaptive focus that I mentioned before, there are
>> commercial systems that, with minimal light and before an acquisition,
>> "measure" the cell position and adapt the focus. But this is only half an
>> adaptation. It could be that it does not need to readjust the focus at all.
>>
>> What I was counting on would be that, after the acquisition, if it is out
>> of focus, it makes the adjustment based on some kind of sensitivity
>> parameter. This could be after 10 time points, but it might be that it
>> would never need such an adjustment.
>>
>>
>> About the intensity variations, I'm not talking about post-processing
>> adjustments. If it gets saturated, there is no post-processing that
>> can help you.
>>
>> Regards,
>> NM
>>
>>
>>
>>
>>
>> Shalin Mehta wrote:
>>> Dear Nuno,
>>> Wouldn't auto-exposure on cameras suffice for maintaining constant
>>> intensity?
>>> Apparently most of the commercial adaptive optics systems are geared
>>> towards astronomy. Perhaps you knew this already:
>>> http://cfao.ucolick.org/
>>> Interesting to note that the James Webb Space Telescope will have
>>> hardware and intelligence for adaptive optics evolved from algorithms
>>> developed for correcting aberrations in the Hubble telescope.
>>> Regards,
>>> Shalin
>>> On Dec 7, 2007 10:43 PM, Nuno Moreno <[hidden email]> wrote:
>>> Does anyone know of any commercial widefield SYSTEM that does
>>> adaptive focus? And I mean adaptive (it follows the cell).
>>> The other feature is a commercial system that keeps intensities, i.e.,
>>> if you have something with different protein expression levels over
>>> time, the system will correct the exposure time so that at the end the
>>> intensities are constant.
>>> Many thanks,
>>> --
>>> Nuno Moreno
>>> Cell Imaging Unit
>>> Instituto Gulbenkian de Ciência
>>>
>>> http://uic.igc.gulbekian.pt
>>> http://www.igc.gulbekian.pt
>>> phone +351 214464606
>>> fax +351 214407970
>>> --
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~
>>> Shalin Mehta
>>> Graduate Student in Bioengineering, NUS
>>> mobile: +65-90694182
>>> blog: shalin.wordpress.com
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> --
>> Nuno Moreno
>> Cell Imaging Unit
>> Instituto Gulbenkian de Ciência
>> http://uic.igc.gulbekian.pt
>> http://www.igc.gulbekian.pt
>> phone +351 214464606
>> fax +351 214407970
>
--
Nuno Moreno
Cell Imaging Unit
Instituto Gulbenkian de Ciência
http://uic.igc.gulbekian.pt
http://www.igc.gulbekian.pt
phone +351 214464606
fax +351 214407970