Plenoptics: Some Questions

*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

As I mentioned, I still have many questions about light-field imaging
technology, even after doing some introductory reading on the topic. I
know there are plenty of knowledgeable folks on this list who might be
able to help me with at least some of them:

1) Could somebody provide me with, or point me to, a good introduction
to the concept of "light fields"? (Either a web or book source is fine.
There's a Wikipedia article, but I don't think it introduces the
concept to beginners particularly well.) My understanding is that it is
a 4-dimensional, or even 5-dimensional, model of light that, in
addition to x, y, and z spatial information, also includes one (or
sometimes two) vectors for the direction of travel of a photon/light
ray. But that's about the extent of what I understand it to be; I'd
love to learn more, but will need to start from simple geometric optics
and work up.
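
To check whether my mental model is even close, here's a tiny sketch of
the two-plane ("light slab") parameterization I keep running into in
the literature. Everything here is illustrative: the array sizes are
made up, and the free-space remark in the comments is my paraphrase of
what I've read.

    import numpy as np

    # Discretized 4D light field: radiance indexed by where a ray crosses
    # the (u, v) plane (e.g. the lens aperture) and the (s, t) plane
    # (e.g. the sensor). Sample counts below are arbitrary.
    N_U, N_V = 9, 9      # angular samples (positions on the aperture plane)
    N_S, N_T = 64, 64    # spatial samples (positions on the sensor plane)
    L = np.zeros((N_U, N_V, N_S, N_T))  # L[u, v, s, t] = radiance of one ray

    # In free space, radiance is constant along a ray, so the 5D plenoptic
    # function (3 spatial + 2 angular coordinates) collapses to this 4D form.

    # A conventional photograph integrates over the whole aperture, i.e. it
    # sums away the two angular dimensions, leaving a 2D spatial image:
    conventional_image = L.sum(axis=(0, 1))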

2) I've read that light-field microscopes and cameras have not only
spatial resolution, like any optical instrument, but also angular
resolution, which is inherent to the ability to record the light
field, and that there is a trade-off between the two. I'm a bit
confused by this, as I've long been taught that spatial resolution is
essentially a function of the angular resolution of a lens system
(hence why the same-sized object appears smaller at a greater
distance: it subtends a smaller angle of the eye's field of view). Any
idea how angular and spatial resolution are distinguished when
describing plenoptic sensing?
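
As a sanity check on how I think the trade-off works, here is some
back-of-the-envelope arithmetic; the sensor and lenslet numbers are
made up, not taken from any real camera.

    sensor_px = (4096, 4096)      # total pixels on the chip (hypothetical)
    px_per_lenslet = (14, 14)     # pixels covered by each microlens (hypothetical)

    # Spatial resolution of the rendered image: one sample per lenslet.
    spatial_res = (sensor_px[0] // px_per_lenslet[0],
                   sensor_px[1] // px_per_lenslet[1])   # 292 x 292

    # Angular resolution: each pixel under a lenslet sees the main-lens
    # aperture from a different direction, so pixels-per-lenslet is the
    # number of resolvable ray directions.
    angular_res = px_per_lenslet                        # 14 x 14 directions

So the same pixels get split between the two kinds of resolution: finer
angular sampling (more pixels per lenslet) means fewer lenslets, i.e.
coarser spatial sampling. Is that the right way to think about it?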

3) The key component in plenoptic microscopes and cameras is a
microlens array. (Although there are a few alternative plenoptic
camera designs that use coded masks or other patterns to derive
angular information.) Of course, many standard CCDs have microlens
arrays associated with them, designed to focus light onto the
photosensitive area of each pixel rather than onto the non-sensing
parts of the chip. However, the microlens array in a light-field
sensor is arranged so as to provide angular information. How does this
microlens setup differ from a standard CCD microlens array, and how
does it yield angular/4D information?
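
My current (possibly wrong) picture is that a standard CCD microlens
covers a single pixel and exists only to improve fill factor, whereas a
plenoptic lenslet covers a whole patch of pixels, so each pixel under
it receives light from a different sub-region of the main-lens
aperture, i.e. from a different direction. If that's right, then in an
idealized raw frame where each lenslet covers exactly K x K pixels
(real devices need calibration for rotation, hexagonal packing, etc.),
the directional information could be pulled out like this:

    import numpy as np

    K = 14                                   # pixels under each lenslet (hypothetical)
    raw = np.random.rand(292 * K, 292 * K)   # stand-in for a raw plenoptic frame

    # Reshape so the axes are (lenslet_row, pixel_row_under_lenslet,
    #                          lenslet_col, pixel_col_under_lenslet):
    lf = raw.reshape(292, K, 292, K)

    # Fixing the within-lenslet pixel (u, v) and varying the lenslet index
    # gives a "sub-aperture image": the scene as seen through one small
    # region of the main lens, i.e. from a single direction.
    u, v = 7, 7
    sub_aperture_view = lf[:, u, :, v]       # one 292 x 292 directional view

Is that roughly how the angular decoding works?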

4) 4D light-field information is readily focusable using 3D
deconvolution algorithms. I'm not sure why this should be the case and
would appreciate an explanation. Also, I'm told that the 4D
ray-tracing algorithms used in, for example, the Lytro camera will not
work for light-field microscopy, because optical sections in the
latter are too thin to provide sufficient angular information for such
algorithms. Am I correct about this?
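
For what it's worth, the ray-tracing approach I have in mind is the
shift-and-add style of synthetic refocusing: shift each sub-aperture
view in proportion to its aperture offset, then average. This is only
my schematic understanding, and the shift scaling below is illustrative
rather than any camera's actual algorithm:

    import numpy as np

    def refocus(lf, slope):
        # lf: 4D array (U, V, S, T) of sub-aperture views.
        # slope: pixels of shift per unit of aperture offset; changing it
        # moves the synthetic focal plane through the scene.
        U, V, S, T = lf.shape
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                du = int(round(slope * (u - U // 2)))
                dv = int(round(slope * (v - V // 2)))
                out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

If only a handful of distinct directions survive the narrow optical
sections of a microscope, I can see how this kind of averaging would
have too few views to work with, which is presumably where
deconvolution comes in instead.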

5) Has anybody used, or heard of anyone using, a Raytrix light-field
camera as the camera on an otherwise standard trinocular microscope?
If so, did it work for capturing light-field microscope images, or was
further modification of the microscope itself needed? And was
deconvolution used for focusing the images?

A lot of questions, I know, but answers to any of the above would be  
greatly appreciated.

Peter G. Werner
Program Assistant, Merritt College Microscopy Program