Checking for flat field illumination

Sean Speese

Checking for flat field illumination

I have recently been trying to determine how flat the illumination field is in our spinning disk system, to make sure things are "fairly well" aligned, but have had some strange results that I do not understand. I would like some feedback on what is acceptable, as I am new to these types of measurements. I started by using a fluor-ref slide from Microscopy Education, which I assume is a good diagnostic slide for such an operation. I will also mention that I was using a Zeiss C-Apochromat 40x/1.2 water immersion lens with an adjustable collar. With the fluor-ref slide image, the field looks very flat, with only about a 3% intensity drop at the edges (determined via a line profile across the image). However, if I take a Molecular Probes bead and move it to different areas of the field, I get a very different answer. I find that I have a relatively large area (~20% of the total field) in the bottom left corner that registers a 22% drop in intensity when measuring the bead. I focused up and down to make sure I was at the brightest point of the bead, in case the focus had changed in a different area of the field, but that did not seem to make a difference. I have two questions I would be interested in getting feedback on: 1) What is the reason for the difference between the fluor-ref slide and the beads? and 2) What kinds of percent changes in intensity over the field are considered acceptable?
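One way to put numbers on such a line-profile check is sketched below in Python; this is a minimal illustration only, assuming the slide image was saved as a TIFF, and the file name is hypothetical.

```python
import numpy as np
import tifffile

img = tifffile.imread("fluor_ref_slide.tif").astype(float)

# Horizontal line profile through the image centre, averaged over a few
# rows to suppress shot noise.
centre = img.shape[0] // 2
profile = img[centre - 5:centre + 6, :].mean(axis=0)

# Percentage drop from the brightest point of the profile to its edges.
peak = profile.max()
drop_left = 100 * (peak - profile[0]) / peak
drop_right = 100 * (peak - profile[-1]) / peak
print(f"edge drop: {drop_left:.1f}% (left), {drop_right:.1f}% (right)")
```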

Thanks,


Sean Speese, Ph.D.
UMASS Medical School
Department of Neurobiology
Chris Tully

Re: Checking for flat field illumination

Sean,

The beads are likely much smaller in diameter than the thickness of your reference slide.  I am speculating here, because I don't have ready access to a scope and have not taken the time to draw this up or do any math, but in a uniform field of fluorescent material that is much larger than your FOV, any given point will receive _some_ contribution of light from neighboring regions, even in a confocal.  By virtue of its small size, a bead effectively eliminates this effect.  The next step I would take is to image a field of dispersed beads (you want 100 or more beads in a FOV, but none touching...).  You can then plot intensity versus spatial position (one of the few real uses for Excel's 3-axis graphs!).  I would also mount a piece of, say, gut tissue and look at several fields of autofluorescence.  You won't get the nice smooth line that a uniform field gives on a line profile, but it will still show any non-uniformity of fluorescence.
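A rough sketch of that bead-field measurement in Python rather than Excel follows; the file name, threshold, and plotting choices are hypothetical and would need tuning to the actual data.

```python
import numpy as np
import tifffile
from scipy import ndimage
import matplotlib.pyplot as plt

img = tifffile.imread("bead_field.tif").astype(float)

# Simple global threshold to find beads; assumes a dark, even background.
mask = img > img.mean() + 5 * img.std()
labels, n = ndimage.label(mask)

# Per-bead peak intensity and centroid position.
idx = range(1, n + 1)
peaks = ndimage.maximum(img, labels, index=idx)
centroids = np.array(ndimage.center_of_mass(img, labels, index=idx))

# Intensity versus field position (a colour-coded scatter in place of a 3D graph).
plt.scatter(centroids[:, 1], centroids[:, 0], c=peaks, cmap="viridis")
plt.colorbar(label="bead peak intensity")
plt.gca().invert_yaxis()
plt.xlabel("x (pixels)")
plt.ylabel("y (pixels)")
plt.show()
```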

What's acceptable really depends on what you plan to do with your images.  If you are just looking to document what you saw, it is surprising how much non-flatness is acceptable before people start to notice it in the published image.  If, on the other hand, you want to use automatic methods for finding and analyzing objects in your images, the flatter your field, the easier it will be to extract all of your objects.

Chris Tully

P.S. I am still looking for a job; if anyone knows of an opening in NC, SC or VA, please let me know!

Chris Tully
Microscopy and Image Analysis Expert
[hidden email]
240-888-1021
http://www.linkedin.com/in/christully



Jon Ekman

Re: Checking for flat field illumination

Here, in a pinch, I will scavenge a yellow Sharpie Accent highlighter, pulling off the white cap with hemostats and using a second pair to squeeze dye out onto a coverslip/slide, or, in the case of inverted scopes, I drop the dye into a Lab-Tek II multi-well coverglass system. The 488 nm line is then used to image the entire FOV, and we then process the image in whatever flavor of software the user prefers. I like the Sharpie dye because there is particulate in it that helps with quickly focusing. The dye works just a smidge better at reproducing a FOV image than fluorescent plastic slides. If we need a really clean FOV image, we will make up some stock fluorescein in buffer and image that.
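For what it's worth, one way to turn such a dye-layer image into a shading reference is sketched below: smooth it heavily to suppress the particulate and noise, normalise it, and divide it out of subsequent images. File names, the optional dark frame, and the smoothing width are hypothetical.

```python
import tifffile
from scipy import ndimage

flat = tifffile.imread("dye_layer.tif").astype(float)
dark = tifffile.imread("camera_dark.tif").astype(float)   # optional dark frame

# Heavy smoothing removes particulate; normalising to mean 1.0 preserves
# the overall intensity scale of the corrected images.
shading = ndimage.gaussian_filter(flat - dark, sigma=25)
shading /= shading.mean()

sample = tifffile.imread("sample.tif").astype(float)
corrected = (sample - dark) / shading
```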

 

Jon Ekman

 


Kurt Thorn

Re: Checking for flat field illumination

In reply to this post by Sean Speese
A really nice method for flat field correction is described in the
following papers:
http://www3.interscience.wiley.com/journal/121446923/abstract
http://www3.interscience.wiley.com/journal/118757058/abstract

They record the photobleaching rate of a thin dye layer to measure the
illumination intensity over the field of view.  Variation in sample
brightness that can't be explained by the illumination intensity
distribution then must come from variations in detection efficiency.  
Separating the excitation and emission components of the spatial
nonuniformity is useful because, if there is significant photobleaching,
the two need to be corrected for differently.
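A much-simplified sketch of that idea follows, assuming single-exponential bleaching of a thin dye layer recorded as a (t, y, x) time series; the file name is hypothetical and the cited papers describe the full method.

```python
import numpy as np
import tifffile

stack = tifffile.imread("dye_bleach_series.tif").astype(float)  # shape (T, Y, X)
t = np.arange(stack.shape[0], dtype=float)                      # frame index as time

# For I(t) = I0 * exp(-k t), a straight-line fit to log(I) gives the rate k,
# which to first order is proportional to the local excitation intensity.
logI = np.log(np.clip(stack, 1e-6, None))
t_c = t - t.mean()
k = -(t_c[:, None, None] * (logI - logI.mean(axis=0))).sum(axis=0) / (t_c ** 2).sum()

excitation_map = k / k.max()   # relative illumination intensity across the field
```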

Kurt


--
Kurt Thorn, PhD
Director, Nikon Imaging Center
University of California San Francisco

UCSF MC 2140
Genentech Hall Room S252
600 16th St.
San Francisco, CA 94158-2517

http://nic.ucsf.edu
phone 415.514.9709
fax   415.514.4300
James Pawley

Re: Checking for flat field illumination

In reply to this post by Sean Speese

Dear Sean,

The fact that the intensity loss seems much worse in one corner than
in the other three does suggest an alignment issue. I am only
guessing, but it might be that, in that corner, the laser light is not
being as accurately focused into the pinholes by the microlenses.
(But I have no idea how to adjust this.)

However, one should always expect some drop off in the corners for a
variety of reasons.

Aberrations and field curvature that are corrected almost perfectly
on axis are usually not corrected so well off axis. The result is a
larger spot, and therefore a lower intensity of excitation
illumination. The higher aberrations also reduce optical performance
on the fluorescence side, making the larger spot larger again and
causing less of it to pass through the pinhole. This will be more
obvious with the beads than with a bulk specimen where, to some
extent, light originating nearby will still make it to the image
plane. With the beads, there is no fluorescent material to make
"nearby" light.

Finally, large low-mag, high-NA lenses often suffer from vignetting.
This can be seen clearly in Fig. 11.9 on page 246 of the Handbook. The
lower row of images represents performance in the back focal plane of
4 objectives, with a point light source that images 10 mm off axis at
the primary image plane. You can see that the BFP is not fully filled
(i.e., not circular, as it is in the top row). The black area represents
light that was not collected, in this case probably 25% of the total
for the NA 1.2 water lens.

This last point fits in with the recent discussion on the "Re:
Recommendations for commercial multi-photon system purchase" thread
about an NA 1.2 20x objective. If there really isn't room in the old
RMS objective mount for all the light paths from the edges of the
field of view of an NA 1.2 40x, then the matter will be 2x worse with
a 20x. Therefore, one will need a much larger diameter tube and tube
lens to capture all the data from such an objective.

Originally such high-NA, low-mag objectives were not contemplated
because they were harder to make and, besides, one could not see this
much information with the unaided eye. Now that we decode position
information from either CCDs or mirror position, this is no longer a
limitation and, partially for this reason, we have recently seen the
introduction of a variety of "non-standard" microscope configurations
such as the Agilent/TILL and the new Olympus box scopes.

The brave new world awaits.

Cheers,

Jim P.

PS: We still have 3 open places at the UBC 3D Live-cells Course,
http://www.3dcourse.ubc.ca/
--
               **********************************************
Prof. James B. Pawley                           Ph.  608-263-3147
Room 223, Zoology Research Building             FAX  608-265-5315
1117 Johnson Ave., Madison, WI, 53706
[hidden email]
3D Microscopy of Living Cells Course, June 13-25, 2009, UBC, Vancouver Canada
Info: http://www.3dcourse.ubc.ca/             Applications still being accepted
               "If it ain't diffraction, it must be statistics." Anon.
Richard Berman

Re: Checking for flat field illumination

In reply to this post by Sean Speese
***commercial interest***

Sean, your question does not specify the type of spinning disk system
you are using, but I am going to assume that it is a Yokogawa CSU.

First, regarding your observation that different measurement techniques
give different field flatness: we have observed similar puzzling
behavior. Although not definitive, we attribute the difference to
non-confocal light contributing to the response in the case of the
fluorescent slide, as described in the previous response by Chris Tully.
We use the fluorescent ink technique for most of our work, as described
well in the response by Jon Ekman. Moving a bead around, as you have
already done, is also an excellent method. In the end, we feel that a
thin sample or bead is much more representative of an actual imaging
situation than the plastic slide.

The more general observation of non-uniformity in the Yokogawa is
inherent in its design and something we have spent a great deal of time
characterizing. Generally we find that the non-uniformity is a direct
result of non-uniform excitation. The output from the fiber that is used
for the Yokogawa has a Gaussian distribution. The optics are such that
only the top of the Gaussian is used. The goal is to use the flattest
part of the distribution, but even so, the result is a hot spot in the
center of the image and roll-off on all sides. A roll-off of 20% is
fairly typical for a well aligned system but it can be much worse. If
the peak of the Gaussian is off-center then the drop-off on the opposite
side can be quite high. In such cases we have measured differences of
more than 50%. Further non-uniformities can result from poor alignment
of the microlenses to pinholes.
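As a rough illustration of the scale of this effect, the relative intensity at a radius r within a Gaussian profile of 1/e^2 radius w is exp(-2 r^2 / w^2). The numbers in the sketch below are hypothetical, chosen only to show how quickly the roll-off grows when the peak is decentred.

```python
import numpy as np

w = 10.5         # 1/e^2 beam radius at the disk, mm (assumed)
r_edge = 3.5     # half-diagonal of the illuminated field, mm (assumed)

# Roll-off at the field corner for a well-centred beam (~20% here).
rolloff = 1.0 - np.exp(-2.0 * r_edge**2 / w**2)
print(f"corner roll-off, centred beam: {100 * rolloff:.0f}%")

# Decentring the peak makes the far side much worse (~50% here).
offset = 2.5     # mm of decentration (assumed)
worst = 1.0 - np.exp(-2.0 * (r_edge + offset)**2 / w**2)
print(f"far-corner roll-off, decentred beam: {100 * worst:.0f}%")
```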

By the way, a second result of using only the center of the Gaussian
distribution is that there is a loss of excitation light.

With proper alignment the hot spot should be well centered in the field
of view and the light from the microlenses should be well centered on
the pinholes. Although this adjustment may be good when it ships from
Yokogawa, we have measured new systems that were less than optimum. Some
changes may occur in shipping but there are other causes. A change of
dichroic can lead to misalignment as can a change in the fiber. The
adjustments needed to align the CSU require experience and additional
optics to aid alignment. Most systems can be improved for uniformity,
throughput, and some other less noticed effects just with alignment but
the Gaussian distribution is a limiting factor.

The newer CSU-X1 has improved optics that decrease the non-uniformity.
However, we find that the uniformity is wavelength dependent and what
may look good at one wavelength is not necessarily good at another. For
the X1 there is also greater dependence on the exact fiber used.

OK, now the commercial part (living dangerously). At Spectral we have
been aligning these systems for years. The result is that we have
recently developed some new optics for the Yokogawa CSU10, CSU22 and
CSU-X1 which improve uniformity and throughput beyond what even the X1
achieves. We call the system modifications Borealis. We are typically
better than 5% uniformity and aiming for 3%. We also improve the
excitation throughput by up to a factor of 4 over the CSU-10. I won't
turn this into an ad, but in just a few weeks we will have more
information about these changes on our website. If I don't get flamed,
I will post a link to the information when it is ready.

Of course you are always welcome to contact us directly off line.

Richard


--
Richard Berman
Spectral Applied Research
9078 Leslie St., Unit 11
Richmond Hill, Ontario
L4B 3L8

905-326-5040 ext. 444

www.spectral.ca
Robert Peterson

Re: Checking for flat field illumination

In reply to this post by James Pawley


Hi,

We do a lot of tiling on multiple systems. When the illumination field is not flat, you get a checkerboard effect that ruins your images, so I have spent a fair amount of time trying to achieve flat field illumination. With the Zeiss there are a couple of things I would suggest.
  1. As was mentioned earlier in the thread, there is always an edge effect, and we have been told to zoom to at least 2 with our Zeiss objectives to "cut off" this edge effect.
  2. The stage alignment and stage insert alignment have to be perfect. The Zeiss service tech in our area uses a slide with a grid along the entire length to make sure that it is in the same focal plane at all points, or as near to it as possible.
Don't know if this will be helpful or not, but wanted to chime in with some personal experiences.
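One retrospective way to tame the checkerboard is sketched below, under the assumption that the individual tiles are available as separate TIFFs: estimate the shading as a per-pixel median over many tiles (sparse structures average out, the illumination pattern does not) and divide it out before stitching. File names and layout are hypothetical, and this is not Zeiss's alignment procedure.

```python
import glob
import numpy as np
import tifffile

names = sorted(glob.glob("tiles/*.tif"))
tiles = np.stack([tifffile.imread(f).astype(float) for f in names])

# Median over tiles approximates the shading pattern when structures are
# sparse and vary from tile to tile; normalise to mean 1.0.
shading = np.median(tiles, axis=0)
shading /= shading.mean()

corrected = tiles / shading            # broadcasts over the tile axis
for tile, name in zip(corrected, names):
    tifffile.imwrite(name.replace(".tif", "_flat.tif"), tile.astype(np.float32))
```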

Thanks,
Robert Peterson



mahogny

Re: Checking for flat field illumination

This problem can be trivially fixed in software: just apply a range
adjustment, bilinearly interpolated over the image. You can find the
optimal parameters using linear least squares, or manually. But before
that, make sure the hardware setup is optimal. I had this problem on a
simpler microscope; in that case it was due to a bad connection making
the light come in at a slight angle.
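A minimal sketch of that kind of correction: fit a bilinear surface a + b*x + c*y + d*x*y to a reference image by ordinary least squares and divide it out. The reference could be a dye layer or an averaged background; the file names are hypothetical.

```python
import numpy as np
import tifffile

ref = tifffile.imread("shading_reference.tif").astype(float)
ny, nx = ref.shape
y, x = np.mgrid[0:ny, 0:nx]

# Design matrix for the bilinear model, solved with ordinary least squares.
A = np.column_stack([np.ones(ref.size), x.ravel(), y.ravel(), (x * y).ravel()])
coeffs, *_ = np.linalg.lstsq(A, ref.ravel(), rcond=None)
surface = (A @ coeffs).reshape(ny, nx)
surface /= surface.mean()              # keep overall intensity scale

img = tifffile.imread("sample.tif").astype(float)
corrected = img / surface
```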

/Johan



--
--
------------------------------------------------
Johan Henriksson
MSc Engineering
PhD student, Karolinska Institutet
http://mahogny.areta.org http://www.endrov.net
Robert Peterson

Question about larger issue (was: Checking for flat field illumination)

Hi,

First, I would like to thank Johan for the response. He was correct that in cases where hardware is the issue, doing post-capture processing would not help. For example, when our stage and stage insert were not well aligned, we had different focal depths across the sample. We would see something like an axon running from right to left across 4 or 5 tiles, where on the right-hand side of each tile it was in focus, but then it would dip out of focus by the left-hand side of that tile. Then it would pop back into focus at the right-hand edge of the next tile.

On the other hand, I like the idea of the range adjustment for a shading problem I have been having on a different system.

However, Johan's suggestion brought up something I had been wanting to ask for quite some time. How much post-capture image manipulation do you think is okay? I know there have been threads about this before, but I was wondering specifically what people do and do not suggest to their users/students/postdocs. In particular, I'm wondering what kind of processing you find yourself doing fairly often. I have a feeling I err on the conservative side and I am definitely willing to broaden my range.

Thanks in advance!

Robert


mahogny

Re: Question about larger issue

My opinion, as a PhD student specializing in image processing:

Always think of the purpose of the image.
* Is it to show the quality of a new type of optics? Minimal corrections.
* Is it to show some biological features? Any manipulation is OK, but
(1) always keep the raw data and (2) always state exactly what you did.
If a paper only says "image processed with Photoshop" then it should go
straight back for revision. Ultimately I think software should be able
to record how things were done so they can easily be repeated by others,
like this: http://endrov.net/images/a/ac/Flowwindow.png
There is no doubt what has been done or how to redo it. No one can
accuse you of research falsification if you are open about what you
are doing.

Manipulation is easy to miss: e.g., storing your data as TIFF will
remove valuable information about the recording conditions. It also
cannot be avoided entirely; printing an image on paper will resample
it and change the colors.

I also doubt people's success with (1). I have seen people store
images only on USB sticks, with almost random names, who then have had
trouble finding the images when I asked for them. There are image
server solutions, but only discipline helps in the end.

/Johan



--
--
------------------------------------------------
Johan Henriksson
MSc Engineering
PhD student, Karolinska Institutet
http://mahogny.areta.org http://www.endrov.net
Guy Cox

Re: Question about larger issue

What I nowadays put in the Materials and Methods section is something along the lines of "contrast adjustment and image scaling are not specifically noted but other manipulations are mentioned in the figure captions."

The rationale as I see it is that in the days of photography one always adjusted contrast by grade of paper, and set the enlarger to give the size, and maybe rotation, you needed.  So doing these things digitally is no different.

A median filter, on the other hand (which I often use), is not part of standard expectations and so should be noted. (I used to do unsharp masking photographically, but I would always mention it, whether done digitally or photographically.)

One interesting point which isn't often noted is that if one scans a photographic image (which even now I quite often have to do with EM pictures), the scan software will by default apply a sharpening filter without telling you. I turn it off (if I remember), but I wonder how many others even think of it?

                                    Guy



Optical Imaging Techniques in Cell Biology
by Guy Cox    CRC Press / Taylor & Francis
    http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon)
Electron Microscope Unit, Madsen Building F09,
University of Sydney, NSW 2006
______________________________________________
Phone +61 2 9351 3176     Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
     http://www.guycox.net
-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Johan Henriksson
Sent: Tuesday, 23 June 2009 4:49 PM
To: [hidden email]
Subject: Re: Question about larger issue

Robert Peterson wrote:

> Hi,
>
> First, I would like to thank Johan for the response. He was correct
> that in cases where hardware is the issue, doing post-capture processing would not help.
> For example, when our stage and stage insert were not well aligned we
> had different focal depths across the sample. We would see something
> like an axon running from right to left across 4 or 5 tiles where in
> the right hand side of each tile it was in focus, but then it would
> dip out of focus by the left hand side of that tile. Then, it would
> pop back into focus at the right-hand edge of the next tile.
>
> On the other hand, I like the idea of the range adjustment for a
> shading problem I have been having on a different system.
>
> However, Johan's suggestion brought up something I had been wanting to
> ask for quite some time. How much post-capture image manipulation do you think is okay?
> I know there have been threads about this before, but I was wondering
> specifically what people do and do not suggest to their users/students/postdocs.
> In particular, I'm wondering what kind of processing you find yourself
> doing fairly often. I have a feeling I err on the conservative side
> and I am definitely willing to broaden my range.
>  
my opinion, being a phd student but specializing on image processing:

always think of the purpose of the image.
* is it to show the quality of a new type of optics? minimal corrections

* is it to show some biological features? any manipulation is ok.
but 1. always keep raw data 2. always state exactly what you did.
if a paper only says "image processed with photoshop" then it should go straight back for revision.
ultimately I think software should have the ability to store down how things are done so it easily can be repeated like others, like this:
http://endrov.net/images/a/ac/Flowwindow.png
there is no doubt what has been done or how to redo it. no one can accuse you of research falsification if you're open about what you are doing.

manipulation is easy to miss. e.g. storing your data as TIFF will remove valuable information about the recording conditions. it also cannot be avoided, writing an image on paper will resample it and change the colors.

I'm also doubting people's success with #1. I have seen people store images on USB sticks only, with almost random names, who then have had trouble finding the images when I've asked for them.
there are image server solutions but only discipline helps in the end.

/Johan

> Thanks in advance!
>
> Robert
>
>
> Johan Henriksson wrote:
> > this problem can be trivially fixed by software, just apply a range
> > adjustment bi-linearly interpolated over the image. you can find the
> > optimal parameters using linear least squares or manually. but
> > before that, make sure the hardware setup is optimal. had this
> > problem on a simpler microscope, in that case due to a bad
> > connection making the light come in at a slight angle.
> >
> > /Johan
> >  
> >> Robert Peterson wrote:
> >>  
> >>    
> >>> Hi,
> >>>
> >>> We do a lot of tiling on multiple systems. When there is not a
> >>> flat field of illumination you get a checkerboard effect that
> >>> ruins your images. So, I have spent a fair amount of time trying
> >>> to achieve a flat field illumination. With the Zeiss there are a couple things I would suggest.
> >>>
> >>>    1. As was mentioned below there is always an edge effect and we have been
> >>>       told to zoom to at least 2 with our Zeiss objectives to "cut-off" this
> >>>       edge effect.
> >>>    2. The stage alignment and stage insert alignment have to be perfect. The
> >>>       Zeiss service tech in our area uses a slide with a grid along the entire
> >>>       length to make sure that it is in the same focal plane at all points. Or,
> >>>       as near to it as possible.
> >>>
> >>> Don't know if this will be helpful or not, but wanted to chime in
> >>> with some personal experiences.
> >>>
> >>> Thanks,
> >>> Robert Peterson
> >>>
> >>>
> >>> James Pawley wrote:
> >>>    
> >>>      
> >>>>> I have recently been trying to determine how flat the
> >>>>> illumination field is in our spinning disk system to make sure
> >>>>> things are "fairly well" aligned, but have had some strange
> >>>>> results that I do not understand.  I would like some feedback on
> >>>>> what is acceptable, as I am new to these type of measurements.  
> >>>>> I started by using a fluor-ref slide from microscopy education, which I assume is a good diagnostic slide for such an operation.
> >>>>>  I will also mention that I was using a Zeiss C-apochromat
> >>>>> 40x/1.2 water immersion lens with an adjustable collar. With the
> >>>>> fluor-ref slide image, the field looks very flat, with only
> >>>>> about a 3% intensity drop on the edges (determined via a line
> >>>>> profile across the image).  However, if I take a molecular
> >>>>> probes bead and move it to different areas of the field, I get a
> >>>>> very different answer.  I find that I have a relatively large
> >>>>> area (~20% of the total field) on the bottom left corner that
> >>>>> registers a 22% drop in intensity via measuring the bead.  I
> >>>>> focused up and down to make sure I was at the brightest point of
> >>>>> the bead, in case the focus had changed in a different area of
> >>>>> the field, but that did not seem to make a difference. I have
> >>>>> two questions I would be interested in getting feedback on. 1)
> >>>>> What is the reason for the difference between the flou-ref slide
> >>>>> and the beads? and
> >>>>> 2) What kinds of percent changes in intensity over the field are
> >>>>> considered acceptable?
> >>>>> Thanks,
> >>>>>
> >>>>>
> >>>>> Sean Speese, Ph.D.
> >>>>> UMASS Medical School
> >>>>> Department of Neurobiology
> >>>>>        
> >>>>>          
> >>>> Dear Sean,
> >>>>
> >>>> The fact that the intensity loss  seems much worse in one corner
> >>>> than in the other three does suggest an alignment issue. I am
> >>>> only guessing but it might be that, in that corner, the laser
> >>>> light is not being as accurately focused into the pinholes by the
> >>>> microlenses. (But I have no idea how to adjust this.)
> >>>>
> >>>> However, one should always expect some drop off in the corners
> >>>> for a variety of reasons.
> >>>>
> >>>> Aberrations and field curvature, that are corrected almost
> >>>> perfectly on axis, are usually not corrected so well off the
> >>>> axis. The result is a larger spot, and therefor a lower intensity
> >>>> of excitation illumination. The higher aberrations also reduce
> >>>> optical performance on the fluorescence side, making the larger
> >>>> spot larger again, and causing more of it to be intercepted by
> >>>> the pinhole. This will be more obvious with the beads than with a
> >>>> bulk specimen where, to some extent, light originating nearby will still make it to the image plane. With the beads, there is no fluorescent material to make "nearby"
> >>>> light.
> >>>>
> >>>> Finally, large low-mag, high-NA lenses often suffer from
> >>>> vignetting. This can be seen clearly in Fig 11.9 on page 246 of
> >>>> the Handbook. The lower row of images represent performance in
> >>>> the back focal plane of 4 objectives, with a point light source
> >>>> that images 10mm off axis at the primary image plane. You can see
> >>>> that the BFP in not fully filled (i.e., not circular, as the top
> >>>> row is). The black area represents light that was not collected, in this case probably 25% of the total for the NA 1.2 water lens.
> >>>>
> >>>> This last point fits in with the recent discussion on the "Re:
> >>>> Recommendations for commercial multi-photon system purchase"
> >>>> thread about an NA 1.2 20x objective. If there really isn't room
> >>>> in the old RMS objective mount for all the light paths from the
> >>>> edges of the field of view of an NA 1.2 40x, then the matter will
> >>>> be 2x worse with a 20x. Therefore, one will need a much larger diameter tube and tube lens to capture all the data from such an objective.
> >>>>
> >>>> Originally such high-NA, low-mag objectives were not contemplated
> >>>> because they were harder to make and besides one could not see
> >>>> this much information with the unaided eye. Now that we decode
> >>>> position information from either CCDs or mirror position, this is
> >>>> no longer a limitation and, partially for this reason, we have recently seen the introduction a variety of "non-standard"
> >>>> microscope configurations such as the the Agilent/TILL and the
> >>>> new Olympus box scopes.
> >>>>
> >>>> The brave new world awaits.
> >>>>
> >>>> Cheers,
> >>>>
> >>>> Jim P.
> >>>>
> >>>> PS: We still have 3 open places at the UBC 3D Live-cells Course,
> >>>> http://www.3dcourse.ubc.ca/
> >>>>      
> >>>>        
> >>>  
> >>>    
> >>>      
> >>  
> >>    
> >
> >
> >  
>  


--
--
------------------------------------------------
Johan Henriksson
MSc Engineering
PhD student, Karolinska Institutet
http://mahogny.areta.org http://www.endrov.net

Jeremy Adler-2 Jeremy Adler-2
Reply | Threaded
Open this post in threaded view
|

Re: Question about larger issue

Digital rotation, unless by an integer multiple of 90 degrees, does differ from rotating film. In a digital rotation there is not an exact one-to-one mapping between the original and rotated pixels, and 'new' intensities are created from some combination of the original neighbouring pixels.
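To illustrate the point, a minimal sketch (not from the original message, assuming NumPy and SciPy): rotating a two-level test image by 10 degrees with bilinear interpolation produces intensity values that never existed in the original, whereas np.rot90 merely re-addresses existing pixels.

import numpy as np
from scipy import ndimage

img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0                                      # only two grey levels: 0 and 100

rot = ndimage.rotate(img, angle=10.0, reshape=False, order=1)  # bilinear interpolation

print(np.unique(img))        # [  0. 100.]
print(np.unique(rot).size)   # many 'new' intermediate intensities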
 

Dr Jeremy Adler

F451a

Cell Biologi

Wenner-Gren Inst.

The Arhenius Lab

Stockholm University

S-106 91 Stockholm

Sweden

tel +46 (0)8 16 2759
-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On
Behalf Of Guy Cox
Sent: den 23 juni 2009 09:13
To: [hidden email]
Subject: Re: Question about larger issue

What I nowadays put in the Materials and Methods section is something along
the lines of "contrast adjustment and image scaling are not specifically
noted but other manipulations are mentioned in the figure captions."

The rationale as I see it is that in the days of photography one always
adjusted contrast by grade of paper, and set the enlarger to give the size,
and maybe rotation, you needed.  So doing these things digitally is no
different.

A median filter, on the other hand (which I often use) is not part of
standard expectations and so should be noted.  (I did use to do unsharp
masking photographically, but I would always mention that whether done
digitally or photographically).

One interesting point which isn't often noted is that if one scans a
photographic image (which even now I quite often have to do with EM
pictures) the scan software will by default use a sharpening filter without
telling you.  I turn it off (if I remember) but I wonder how many others
even think of it?

                                    Guy



Optical Imaging Techniques in Cell Biology
by Guy Cox    CRC Press / Taylor & Francis
    http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon)
Electron Microscope Unit, Madsen Building F09,
University of Sydney, NSW 2006
______________________________________________
Phone +61 2 9351 3176     Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
     http://www.guycox.net
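As an illustration of the median filter mentioned in the quoted message above (a non-linear neighbourhood operation, which is why it goes beyond routine contrast adjustment and should be declared), a minimal sketch assuming NumPy and SciPy:

import numpy as np
from scipy import ndimage

# synthetic Poisson-noise image standing in for a real micrograph
noisy = np.random.default_rng(0).poisson(lam=20, size=(256, 256)).astype(np.float32)

# each pixel is replaced by the median of its 3x3 neighbourhood
filtered = ndimage.median_filter(noisy, size=3)
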
Guy Cox Guy Cox
Reply | Threaded
Open this post in threaded view
|

Re: Question about larger issue

The same is true of digital enlargement and reduction.  In fact there are considerable differences between different algorithms for enlarging and reducing.  (See my chapter in Jim Pawley's Handbook).  But the fact is that no paper journal can reproduce your image pixel for pixel, and the printing process will have a very different gamma from your screen, so what's the issue?  If you print on grade 3 paper instead of grade 1 you will have very different intensity values in your image.  So should we insist on grade 2 (so-called normal)?  But the contrast differences between a cold-cathode enlarger (very low), a condenser enlarger (high) and a point-source enlarger (very high) are equally substantial.
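A minimal sketch of that difference (not from the original message, assuming NumPy and SciPy): the same 2x enlargement computed with three common interpolation orders gives three different sets of pixel values.

import numpy as np
from scipy import ndimage

img = np.random.default_rng(1).integers(0, 256, size=(32, 32)).astype(float)

nearest  = ndimage.zoom(img, 2, order=0)   # nearest neighbour
bilinear = ndimage.zoom(img, 2, order=1)   # linear interpolation
cubic    = ndimage.zoom(img, 2, order=3)   # cubic spline interpolation

print(np.abs(bilinear - cubic).max())      # non-zero: the algorithms disagree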

Numerical measurements should be made from unmodified images.  But the printed image is by definition modified, so our aim should be to get it to look like what we see.

                                    Guy



Optical Imaging Techniques in Cell Biology
by Guy Cox    CRC Press / Taylor & Francis
    http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon)
Electron Microscope Unit, Madsen Building F09,
University of Sydney, NSW 2006
______________________________________________
Phone +61 2 9351 3176     Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
     http://www.guycox.net
Jason Swedlow Jason Swedlow
Reply | Threaded
Open this post in threaded view
|

Re: Question about larger issue

Just my two cents:

In most cases, the electronic document is now the publication of record, so arguments about the contrast of paper, etc. have been replaced by arguments about compression, color mapping, etc.  Same general problem, different tech.

IMHO, Ken Yamada and Mike Rossner have done the best job of delineating what is reasonable in image data presentation in a publication.  See http://jcb.rupress.org/cgi/content/full/166/1/11.  Note most other journals now have similar guidelines.  See for example, http://www.nature.com/authors/editorial_policies/image.html.

Ideally, data would be published alongside a publication, and held in a public repository.  One step towards this is the JCB DataViewer:  http://jcb-dataviewer.rupress.org/.  DECLARED CONFLICT:

--
**************************
Wellcome Trust Centre for Gene Regulation & Expression
College of Life Sciences
MSI/WTB/JBC Complex
University of Dundee
Dow Street
Dundee  DD1 5EH
United Kingdom

phone (01382) 385819
Intl phone:  44 1382 385819
FAX   (01382) 388072
email: [hidden email]

Lab Page: http://www.dundee.ac.uk/lifesciences/swedlow/
Open Microscopy Environment: http://openmicroscopy.org
**************************
Jason Swedlow Jason Swedlow
Reply | Threaded
Open this post in threaded view
|

Re: Question about larger issue

Apologies... some weird combination of keystrokes made Gmail send prematurely.  Full text:

Just my two cents:

In most cases, the electronic document is now the publication of record, so arguments about the contrast of paper, etc. have been replaced by arguments about compression, color mapping, etc.  Same general problem, different tech.
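The compression point is easy to demonstrate. A minimal sketch, not from the original message, assuming NumPy and Pillow: a lossy JPEG round trip changes the stored pixel values, which is one reason data-integrity guidelines generally favour lossless formats for quantitative figures.

import io
import numpy as np
from PIL import Image

img = np.random.default_rng(2).integers(0, 256, size=(128, 128), dtype=np.uint8)

buf = io.BytesIO()
Image.fromarray(img).save(buf, format="JPEG", quality=80)   # lossy compression
buf.seek(0)
decoded = np.asarray(Image.open(buf))

print(np.abs(decoded.astype(int) - img.astype(int)).max())  # > 0: values have changed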

IMHO, Ken Yamada and Mike Rossner have done the best job of delineating what is reasonable in image data presentation in a publication.  See http://jcb.rupress.org/cgi/content/full/166/1/11.  Note most other journals now have similar guidelines.  See for example, http://www.nature.com/authors/editorial_policies/image.html.

Ideally, data would be published alongside a publication, and held in a public repository.  One step towards this is the JCB DataViewer:  http://jcb-dataviewer.rupress.org/.  DECLARED CONFLICT: I FOUNDED GLENCOE SOFTWARE, WHICH BUILT THE JCB DATAVIEWER IN COLLABORATION WITH ROCKEFELLER UNIVERSITY PRESS.  A description of this, and what we are trying to do, is at http://jcb.rupress.org/cgi/content/full/jcb.200811132.

Cheers,

Jason



2009/6/23 Guy Cox <[hidden email]>

The same is true of digital enlargement and reduction.  In fact there are considerable differences between different algorithms for enlarging and reducing.  (See my chapter in Jim Pawley's Handbook).  But the fact is no paper journal can reproduce your image pixel for pixel, and the printing process will have a very different gamma from your screen, so what's the issue?  If you print on grade 3 paper instead of grade 1 you will have very different intensity values in your image.  So should we insist on grade 2 (so-called normal)?  But the contrast difference between a cold-cathode enlarger (very low), a condenser enlarger (high) and a point-source enlarger (very high) are equally substantial.

Numerical measurements should be made from unmodified images.  But the printed image is by definition modified, so our aim should be to get it to look like what we see.

                                   Guy



Optical Imaging Techniques in Cell Biology
by Guy Cox    CRC Press / Taylor & Francis
   http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon)
Electron Microscope Unit, Madsen Building F09,
University of Sydney, NSW 2006
______________________________________________
Phone +61 2 9351 3176     Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
    http://www.guycox.net
-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Jeremy Adler
Sent: Tuesday, 23 June 2009 6:22 PM
To: [hidden email]
Subject: Re: Question about larger issue

Digital rotation, unless by an integer multiple of 90 degrees, does differ from rotating film. In a digital rotation there is not an exact mapping between the original and rotated pixels and 'new' intensities are created using the some combination of the original neighbouring pixels.


Dr Jeremy Adler

F451a

Cell Biologi

Wenner-Gren Inst.

The Arhenius Lab

Stockholm University

S-106 91 Stockholm

Sweden

tel +46 (0)8 16 2759
-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Guy Cox
Sent: den 23 juni 2009 09:13
To: [hidden email]
Subject: Re: Question about larger issue

What I nowadays put in the Materials and Methods section is something along the lines of "contrast adjustment and image scaling are not specifically noted but other manipulations are mentioned in the figure captions."

The rationale as I see it is that in the days of photography one always adjusted contrast by grade of paper, and set the enlarger to give the size, and maybe rotation, you needed.  So doing these things digitally is no different.

A median filter, on the other hand (which I often use) is not part of standard expectations and so should be noted.  (I did use to do unsharp masking photographically, but I would always mention that whether done digitally or photographically).

One interesting point which isn't often noted is that if one scans a photographic image (which even now I quite often have to do with EM
pictures) the scan software will by default use a sharpening filter without telling you.  I turn it off (if I remember) but I wonder how many others even think of it?

                                   Guy



Optical Imaging Techniques in Cell Biology
by Guy Cox    CRC Press / Taylor & Francis
   http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon) Electron Microscope Unit, Madsen Building F09, University of Sydney, NSW 2006 ______________________________________________
Phone +61 2 9351 3176     Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
    http://www.guycox.net
-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Johan Henriksson
Sent: Tuesday, 23 June 2009 4:49 PM
To: [hidden email]
Subject: Re: Question about larger issue

Robert Peterson wrote:
> Hi,
>
> First, I would like to thank Johan for the response. He was correct
> that in cases where hardware is the issue, doing post-capture
> processing
would not help.
> For example, when our stage and stage insert were not well aligned we
> had different focal depths across the sample. We would see something
> like an axon running from right to left across 4 or 5 tiles where in
> the right hand side of each tile it was in focus, but then it would
> dip out of focus by the left hand side of that tile. Then, it would
> pop back into focus at the right-hand edge of the next tile.
>
> On the other hand, I like the idea of the range adjustment for a
> shading problem I have been having on a different system.
>
> However, Johan's suggestion brought up something I had been wanting to
> ask for quite some time. How much post-capture image manipulation do
> you
think is okay?
> I know there have been threads about this before, but I was wondering
> specifically what people do and do not suggest to their
users/students/postdocs.
> In particular, I'm wondering what kind of processing you find yourself
> doing fairly often. I have a feeling I err on the conservative side
> and I am definitely willing to broaden my range.
>
my opinion, being a phd student but specializing on image processing:

always think of the purpose of the image.
* is it to show the quality of a new type of optics? minimal corrections

* is it to show some biological features? any manipulation is ok.
but 1. always keep raw data 2. always state exactly what you did.
if a paper only says "image processed with photoshop" then it should go straight back for revision.
ultimately I think software should have the ability to store down how things are done so it easily can be repeated like others, like this:
http://endrov.net/images/a/ac/Flowwindow.png
there is no doubt what has been done or how to redo it. no one can accuse you of research falsification if you're open about what you are doing.

manipulation is easy to miss. e.g. storing your data as TIFF will remove valuable information about the recording conditions. it also cannot be avoided, writing an image on paper will resample it and change the colors.

I'm also doubting people's success with #1. I have seen people store images on USB sticks only, with almost random names, who then have had trouble finding the images when I've asked for them.
there are image server solutions but only discipline helps in the end.

/Johan
> Thanks in advance!
>
> Robert
>
>
> Johan Henriksson wrote:
> > this problem can be trivially fixed by software, just apply a range
> > adjustment bi-linearly interpolated over the image. you can find the
> > optimal parameters using linear least squares or manually. but
> > before that, make sure the hardware setup is optimal. had this
> > problem on a simpler microscope, in that case due to a bad
> > connection making the light come in at a slight angle.
> >
> > /Johan
> >
> >> Robert Peterson wrote:
> >>
> >>
> >>> Hi,
> >>>
> >>> We do a lot of tiling on multiple systems. When there is not a
> >>> flat field of illumination you get a checkerboard effect that
> >>> ruins your images. So, I have spent a fair amount of time trying
> >>> to achieve a flat field illumination. With the Zeiss there are a
couple things I would suggest.
> >>>
> >>>    1. As was mentioned below there is always an edge effect and we
have been
> >>>       told to zoom to at least 2 with our Zeiss objectives to
"cut-off" this
> >>>       edge effect.
> >>>    2. The stage alignment and stage insert alignment have to be
perfect. The
> >>>       Zeiss service tech in our area uses a slide with a grid
> >>> along
the entire
> >>>       length to make sure that it is in the same focal plane at
> >>> all
points. Or,
> >>>       as near to it as possible.
> >>>
> >>> Don't know if this will be helpful or not, but wanted to chime in
> >>> with some personal experiences.
> >>>
> >>> Thanks,
> >>> Robert Peterson
> >>>
> >>>
> >>> James Pawley wrote:
> >>>
> >>>
> >>>>> I have recently been trying to determine how flat the
> >>>>> illumination field is in our spinning disk system to make sure
> >>>>> things are "fairly well" aligned, but have had some strange
> >>>>> results that I do not understand.  I would like some feedback on
> >>>>> what is acceptable, as I am new to these type of measurements.
> >>>>> I started by using a fluor-ref slide from microscopy education,
which I assume is a good diagnostic slide for such an operation.
> >>>>>  I will also mention that I was using a Zeiss C-apochromat
> >>>>> 40x/1.2 water immersion lens with an adjustable collar. With the
> >>>>> fluor-ref slide image, the field looks very flat, with only
> >>>>> about a 3% intensity drop on the edges (determined via a line
> >>>>> profile across the image).  However, if I take a molecular
> >>>>> probes bead and move it to different areas of the field, I get a
> >>>>> very different answer.  I find that I have a relatively large
> >>>>> area (~20% of the total field) on the bottom left corner that
> >>>>> registers a 22% drop in intensity via measuring the bead.  I
> >>>>> focused up and down to make sure I was at the brightest point of
> >>>>> the bead, in case the focus had changed in a different area of
> >>>>> the field, but that did not seem to make a difference. I have
> >>>>> two questions I would be interested in getting feedback on. 1)
> >>>>> What is the reason for the difference between the flou-ref slide
> >>>>> and the beads? and
> >>>>> 2) What kinds of percent changes in intensity over the field are
> >>>>> considered acceptable?
> >>>>> Thanks,
> >>>>>
> >>>>>
> >>>>> Sean Speese, Ph.D.
> >>>>> UMASS Medical School
> >>>>> Department of Neurobiology
> >>>>>
> >>>>>
> >>>> Dear Sean,
> >>>>
> >>>> The fact that the intensity loss  seems much worse in one corner
> >>>> than in the other three does suggest an alignment issue. I am
> >>>> only guessing but it might be that, in that corner, the laser
> >>>> light is not being as accurately focused into the pinholes by the
> >>>> microlenses. (But I have no idea how to adjust this.)
> >>>>
> >>>> However, one should always expect some drop off in the corners
> >>>> for a variety of reasons.
> >>>>
> >>>> Aberrations and field curvature, that are corrected almost
> >>>> perfectly on axis, are usually not corrected so well off the
> >>>> axis. The result is a larger spot, and therefor a lower intensity
> >>>> of excitation illumination. The higher aberrations also reduce
> >>>> optical performance on the fluorescence side, making the larger
> >>>> spot larger again, and causing more of it to be intercepted by
> >>>> the pinhole. This will be more obvious with the beads than with a
> >>>> bulk specimen where, to some extent, light originating nearby
> >>>> will
still make it to the image plane. With the beads, there is no fluorescent material to make "nearby"
> >>>> light.
> >>>>
> >>>> Finally, large low-mag, high-NA lenses often suffer from
> >>>> vignetting. This can be seen clearly in Fig 11.9 on page 246 of
> >>>> the Handbook. The lower row of images represent performance in
> >>>> the back focal plane of 4 objectives, with a point light source
> >>>> that images 10mm off axis at the primary image plane. You can see
> >>>> that the BFP in not fully filled (i.e., not circular, as the top
> >>>> row is). The black area represents light that was not collected,
> >>>> in
this case probably 25% of the total for the NA 1.2 water lens.
> >>>>
> >>>> This last point fits in with the recent discussion on the "Re:
> >>>> Recommendations for commercial multi-photon system purchase"
> >>>> thread about an NA 1.2 20x objective. If there really isn't room
> >>>> in the old RMS objective mount for all the light paths from the
> >>>> edges of the field of view of an NA 1.2 40x, then the matter will
> >>>> be 2x worse with a 20x. Therefore, one will need a much larger
> >>>> diameter tube and tube lens to capture all the data from such
> >>>> an objective.
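As a back-of-envelope check on the "2x worse" scaling (the field number of 22 mm below is an assumed typical value, not a figure from the thread): for the same image-side field number, halving the magnification doubles the object-side field of view, so at the same NA the ray bundles from the field edge leave the objective correspondingly farther off axis and need a larger-diameter tube and tube lens.

# Assumed field number; only the 2x ratio matters here, not the absolute values.
field_number_mm = 22.0
for mag in (40, 20):
    fov_mm = field_number_mm / mag       # object-side field-of-view diameter
    print(f"{mag}x at NA 1.2: object-side field of view ~ {fov_mm:.2f} mm across")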
> >>>>
> >>>> Originally, such high-NA, low-mag objectives were not contemplated
> >>>> because they were harder to make and, besides, one could not see
> >>>> this much information with the unaided eye. Now that we decode
> >>>> position information from either CCDs or mirror position, this is
> >>>> no longer a limitation and, partially for this reason, we have
> >>>> recently seen the introduction of a variety of "non-standard"
> >>>> microscope configurations such as the Agilent/TILL and the
> >>>> new Olympus box scopes.
> >>>>
> >>>> The brave new world awaits.
> >>>>
> >>>> Cheers,
> >>>>
> >>>> Jim P.
> >>>>
> >>>> PS: We still have 3 open places at the UBC 3D Live-cells Course,
> >>>> http://www.3dcourse.ubc.ca/
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>
> >>
> >
> >
> >
>


--
--
------------------------------------------------
Johan Henriksson
MSc Engineering
PhD student, Karolinska Institutet
http://mahogny.areta.org http://www.endrov.net





--
**************************
Wellcome Trust Centre for Gene Regulation & Expression
College of Life Sciences
MSI/WTB/JBC Complex
University of Dundee
Dow Street
Dundee  DD1 5EH
United Kingdom

phone (01382) 385819
Intl phone:  44 1382 385819
FAX   (01382) 388072
email: [hidden email]

Lab Page: http://www.dundee.ac.uk/lifesciences/swedlow/
Open Microscopy Environment: http://openmicroscopy.org
**************************



Renato A. Mortara Renato A. Mortara
Reply | Threaded
Open this post in threaded view
|

How thick can samples be to be imaged on a Spinning Disk Confocal ?

Hello,
 
I am in the process of deciding the best possible configuration to assemble a spinning disk confocal with the Yokogawa CSU-X1 scanning head.
 
It is common knowledge that imaging 'thick' samples can be tricky or simply not feasible with spinning disk confocals.
 
Does anyone out there have practical experience, for instance, with mouse brain sections?
 
Many thanks for the input,
 
Best
 
Renato
 
Renato A. Mortara
Parasitology Division
UNIFESP - Escola Paulista de Medicina
Rua Botucatu, 862, 6th floor
São Paulo, SP
04023-062
Brazil
Phone: 55 11 5579-8306
Fax:     55 11 5571-1095
home page: www.ecb.epm.br/~ramortara
 
 
EricMarino EricMarino
Reply | Threaded
Open this post in threaded view
|

Re: How thick can samples be to be imaged on a Spinning Disk Confocal ?

I would think spherical aberration would be more of a concern than the spinning disk. What type of objective are you using?
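A paraxial sketch of why depth, rather than the disk, is usually the limit (indices assumed: air 1.00, water 1.33, tissue roughly 1.38; none of these come from Renato's post): focusing through a refractive-index mismatch shifts the focus in proportion to the nominal depth, and the depth-dependent spherical aberration grows with the same mismatch, so a dry lens degrades much faster with depth than a matched immersion lens.

# Paraxial focal-shift estimate: actual focus depth ~ nominal stage travel
# multiplied by n_sample / n_immersion. Indices are assumed, illustrative values.
n_tissue = 1.38
for name, n_immersion in (("dry (air) lens", 1.00), ("water-immersion lens", 1.33)):
    factor = n_tissue / n_immersion
    for nominal_um in (50, 100):
        actual_um = factor * nominal_um
        print(f"{name:22s} nominal {nominal_um:3d} um -> actual ~ {actual_um:5.0f} um "
              f"(mismatch factor {factor:.2f})")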

 

 

Eric Marino

[hidden email]

 


Eric Marino
Senior Imaging Specialist
Program in Cellular and Molecular Medicine
Boston Children's Hospital
kspencer007 kspencer007
Reply | Threaded
Open this post in threaded view
|

Re: How thick can samples be to be imaged on a Spinning Disk Confocal ?

In reply to this post by Renato A. Mortara

Hi Renato;

            We routinely image 200 micron mouse brain sections with our Yokogawa CSU-10 spinning disk; effectively, we can see about 100 microns well. We are using a 20x LUCPlan FluorN 0.45 dry objective, mostly because of the constraints of using Millipore chambers for culturing the mouse brains; we need the extra working distance. I realize there is a mismatch between the pinholes and the objective NA. Our four-day time-lapse images are really very nice with this objective, although our expression levels need to be rather high for the cell processes to be seen.

            Best,

            Kathy
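The pinhole/NA mismatch mentioned above can be put in rough numbers (assuming the usual 50 µm Yokogawa pinholes, 520 nm emission and no extra relay magnification; none of these figures are from the post): back-project the pinhole to the sample and compare it with the Airy disk of the stated 20x / NA 0.45 objective.

# Back-of-envelope pinhole matching for a CSU-type disk (assumed numbers).
wavelength_um = 0.52        # assumed emission wavelength
pinhole_um = 50.0           # assumed physical pinhole diameter on the disk
mag, na = 20.0, 0.45        # objective quoted in the post

backprojected_um = pinhole_um / mag                    # pinhole referred to the sample
airy_diameter_um = 2.0 * 0.61 * wavelength_um / na     # diffraction-limited spot diameter
print(f"back-projected pinhole ~ {backprojected_um:.2f} um")
print(f"Airy disk diameter     ~ {airy_diameter_um:.2f} um")
print(f"pinhole is ~ {backprojected_um / airy_diameter_um:.1f} Airy units wide")

At nearly two Airy units the pinholes pass noticeably more out-of-focus light than a matched configuration would, which is the mismatch alluded to above.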

 

 


George Peeters-2 George Peeters-2
Reply | Threaded
Open this post in threaded view
|

Re: How thick can samples be to be imaged on a Spinning Disk Confocal ?

Commercial interest:

We have set up several CSU-10 and CSU-X systems for similar purposes. One paper describing this system was published last year:

Disease Models & Mechanisms 1, 155 (2008), cited as an Editors' Choice in Science magazine.

Imaging 80 to 100 µm into live tissue was obtainable in 4 colors using an air lens.
A water or glycerin immersion lens should improve this.

Best regards

George A. Peeters MD, MS

President,  Solamere Technology Group Inc

1427 Perry Ave

Salt Lake City, UT 84103

www.solameretech.com

801 322-2645 office          801 322-2645 fax

801 232-6911 cell



anurag pandey anurag pandey
Reply | Threaded
Open this post in threaded view
|

Re: How thick can samples be to be imaged on a Spinning Disk Confocal ?

In reply to this post by kspencer007
Hi Kathy,
Do you really need to use Millicells during the imaging process? I feel that if you use
"confetti" membranes from Millipore for culturing the slices and image with a water
immersion lens, you can improve on this.
All the best,
Anurag






--
ANURAG PANDEY
Electro physiology Group
M.B.U.
I.I.Sc.


