yet. How could IT know what is unknown???
> *****
> To join, leave or search the confocal microscopy listserv, go to:
> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
> Post images on http://www.imgur.com and include the link in your posting.
> *****
>
> Here are my two cents: I think the simplest rubric for using an AI
> algorithm in science is, "Have I ever seen a paper where this was done by
> hand?"
>
> For example, tracking objects, identifying subsets of objects, and counting
> objects are all image processing tasks that have been done by hand, and
> they are therefore excellent candidates for AI. These are valid
> applications because a human can look at the data and immediately ascertain
> the result; the challenge is in coming up with a computational algorithm
> that defines the problem as robustly as we perceive it.
> Whenever I teach people computational image processing, I always point out
> that there are certain tasks that are trivial for computers but
> non-trivial for humans (convolutions, counting and measuring thousands of
> objects, measuring distances between all pairs of objects, etc.) and tasks
> that are often trivial for humans but non-trivial for computers (image
> segmentation in a complex feature space, tracking objects in a 2D image
> that can cross in 3D space, etc.). Therefore, the best solution is
> sometimes to write an algorithm that lets the human do the things they
> are good at, while the computer does the rest.
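To make the "trivial for computers" side concrete, here is a toy sketch in Python (my own illustrative code, not from any paper mentioned in the thread): counting and measuring objects reduces to a plain flood fill plus some arithmetic. A real pipeline would use something like scipy.ndimage.label, but the idea is the same.

```python
from math import dist  # Python 3.8+

# Tiny binary mask standing in for a thresholded microscopy image.
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
]

def label_objects(mask):
    """Return a list of objects, each a list of (row, col) pixels."""
    seen, objects = set(), []
    rows, cols = len(mask), len(mask[0])
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                # Flood fill (4-connectivity) to collect one object.
                stack, pixels = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                objects.append(pixels)
    return objects

objects = label_objects(mask)
centroids = [(sum(y for y, _ in o) / len(o), sum(x for _, x in o) / len(o))
             for o in objects]
print(len(objects))               # 3 objects
print([len(o) for o in objects])  # [4, 4, 1] -- areas in pixels
# All pairwise centroid distances -- tedious by hand, trivial here:
pairs = [(i, j, dist(centroids[i], centroids[j]))
         for i in range(len(objects)) for j in range(i + 1, len(objects))]
```

The same loop scales unchanged to thousands of objects, which is exactly where the computer-trivial / human-tedious asymmetry shows up.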
>
> To me, the goal of AI in image processing is to replicate what we as
> researchers would have done anyway, just in a manner where the computer
> does it on its own. For example, using a well-designed convolutional
> neural network to learn to segment an image is an excellent application.
> You could find a segmentation algorithm by hand, tuning bandpass filters,
> trying different convolutions, and adjusting both the magnitude and order
> of these parameters to reach some lowest-energy state. With convolutional
> neural networks, computers can effectively perform this same task, and
> even in roughly the same manner that we would have, freeing us to
> go do something else. On top of that, they can search a much broader
> parameter space than anyone would ever want to explore by hand, and
> therefore possibly arrive at an even more robust algorithm than we could.
> Therefore, if it will likely be faster to build a training dataset than to
> try and explore the entire parameter space needed to segment an image, AI
> is a good candidate for these tasks.
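As an illustration of the hand-tuning described above, here is a hypothetical 1D version of band-pass segmentation. The two Gaussian widths and the threshold are made-up values of exactly the kind one would otherwise tweak by trial and error (and that a network would, in effect, search over):

```python
import math

def gaussian_kernel(sigma, radius):
    # Normalized 1D Gaussian kernel.
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    # Direct convolution, clamping indices at the edges.
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

# Toy line profile: a bright spot sitting on a sloping background.
signal = [0.1 * i for i in range(50)]
for i in range(20, 26):
    signal[i] += 10.0

# Hand-tuned band-pass: difference of two Gaussian blurs. The sigmas and
# threshold below are the parameters one would otherwise fiddle with by hand.
low = convolve(signal, gaussian_kernel(1.0, 4))
high = convolve(signal, gaussian_kernel(5.0, 15))
bandpass = [a - b for a, b in zip(low, high)]

threshold = 2.0
detected = [i for i, v in enumerate(bandpass) if v > threshold]
print(detected)  # indices near the bright spot (roughly 20-25)
```

The band-pass cancels the linear background (both blurs preserve it) while keeping the spot, which is the essence of what a learned convolutional filter bank does at far larger scale.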
>
> Using the same rubric I mentioned at the start, I have never seen a paper
> where someone showed an artist a bunch of high resolution images, asked
> them to draw high resolution interpretations of low resolution images,
> and then published meaningful conclusions from the drawings (for example,
> scientific conclusions should not be drawn from artistic renderings of
> exoplanets). We don't do this because the task is premised on the
> assumption that the low resolution images have the same feature space as
> the high resolution ones. The only case in which the artist's renderings
> would be correct is if our low resolution images carried the same
> information as the high resolution ones, meaning the low res images would
> be completely redundant and useless in the sense that they offer no
> additional information.
>
> Along these lines, asking a neural network to draw in features in images
> based on previous images is not science, because you are forcing the
> assumption that all the data matches the hypothesis. Google's DeepDream is
> an excellent example of this: depending on what the algorithm was
> trained on, it will put those features into any other image. This is
> a great video explaining the impact of using different network layers in
> image reconstruction:
> https://www.youtube.com/watch?v=BsSmBPmPeYQ
>
> Therefore, if you use a neural network in image reconstruction, what you
> are really doing is having a computer or an artist draw a high resolution
> version of your image that matches your hypothesis. While this is
> inherently not science (as you are changing the data to match your
> hypothesis, rather than the other way around), this type of image
> reconstruction still has a place in medical science. For example, a human
> body does have a well-defined and constrained feature space. As such, if
> you took a low-res CT scan of a human, reconstructing that image based on
> a training set of human scans would create a meaningful image that could
> guide diagnoses, and allow for significantly lower X-ray exposure to the
> patient. Therefore, while AI image reconstruction in science appears to be
> a case of circular logic, in medicine it can have very meaningful
> applications.
>
> Just my own two cents, and looking forward to hearing other people's
> insights,
> Ben Smith
>
> On Sun, Nov 18, 2018 at 3:55 AM Andreas Bruckbauer <[hidden email]> wrote:
>
>>
>> Thanks John for bringing us up to date on image processing; these are
>> indeed very important developments. I think there will be great changes
>> coming over the next few years driven by the AI revolution in image and
>> video processing. But the fundamental limit is that one cannot increase
>> the information content of an image beyond what was originally recorded.
>> Of course the missing information can be replaced by knowledge derived
>> from other images, but then the new AI algorithms will have flaws similar
>> to those of human perception (optical illusions). Science should be about
>> measuring accurately instead of guessing.
>> My criticism of current publications and promotional videos on AI-based
>> image processing is that they show the cases where the algorithms
>> actually work well (which might be most of the time); the important
>> question is when do they fail and produce wrong results? With the fast
>> development in the field, today's problems are often solved a few days
>> later by a new generation of the algorithm, so detecting flaws in these
>> algorithms is a thankless task. But I think it is important, and we will
>> need to come up with very good quality control standards before accepting
>> these results for medical imaging or scientific publications.
>> A few years ago I was very excited about using compressed sensing in
>> microscopy to "break the Nyquist barrier", but after looking into this in
>> more detail, I came to the conclusion that it only works well when images
>> are already heavily oversampled, as in normal photography (more megapixels
>> sell better). When microscopy data is taken at the resolution limit there
>> is not much room for further compression. I would expect the same for
>> neural network approaches: they work well when you have a lot of pixels
>> and the information content is not so high, or when accuracy is not so
>> important.
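For readers who want to check the "not much room for further compression" point on their own data, a back-of-the-envelope Nyquist calculation looks like this (the wavelength, NA, and pixel size are illustrative numbers, not from the thread):

```python
# Compare actual pixel size against the Nyquist pixel size implied by the
# diffraction limit. All numbers below are illustrative assumptions.
wavelength_nm = 520.0   # emission wavelength (GFP-ish)
na = 1.4                # objective numerical aperture

abbe_limit_nm = wavelength_nm / (2 * na)  # lateral resolution ~ lambda / (2 NA)
nyquist_pixel_nm = abbe_limit_nm / 2      # >= 2 samples per resolvable period

print(f"Abbe limit:    {abbe_limit_nm:.0f} nm")
print(f"Nyquist pixel: {nyquist_pixel_nm:.0f} nm")

# A 65 nm image pixel (e.g. a 6.5 um camera pixel behind a 100x objective)
# is already close to Nyquist -- little oversampling, hence little slack
# for compressed sensing or learned upsampling to exploit.
pixel_nm = 65.0
oversampling = nyquist_pixel_nm / pixel_nm
print(f"Oversampling factor: {oversampling:.2f}")
```

With these numbers the oversampling factor comes out close to 1, which is the quantitative version of the argument above: data acquired at the resolution limit carries little redundancy.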
>> So the question is: what is actually necessary for a given experiment? If
>> one wants to track some cells in a fluorescence time-lapse movie, maybe
>> noisy (low exposure), JPEG-compressed data combined with the latest AI
>> algorithm trained on this problem is better than the perfect exposure
>> needed for current segmentation methods and raw data recording, since in
>> the latter case the higher light exposure might kill the cells shortly
>> after the experiment starts.
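A minimal sketch of the tracking step mentioned above: greedy nearest-neighbour linking of cell centroids between two frames. The coordinates and the max_jump cutoff are invented for illustration; real trackers also handle cells that appear, disappear, or divide, and typically use a globally optimal assignment (e.g. the Hungarian algorithm) instead of this greedy loop.

```python
from math import dist

# Hypothetical detected cell centroids (x, y) in two consecutive frames.
frame_a = [(10.0, 12.0), (40.0, 41.0), (80.0, 15.0)]
frame_b = [(12.0, 13.5), (41.0, 44.0), (78.5, 16.0)]

def link(frame_a, frame_b, max_jump=10.0):
    """Greedily link each cell in frame_a to its nearest unused
    detection in frame_b, rejecting implausibly large jumps."""
    links, taken = [], set()
    for i, p in enumerate(frame_a):
        candidates = [(dist(p, q), j) for j, q in enumerate(frame_b)
                      if j not in taken]
        if not candidates:
            continue
        d, j = min(candidates)
        if d <= max_jump:
            links.append((i, j))
            taken.add(j)
    return links

print(link(frame_a, frame_b))  # [(0, 0), (1, 1), (2, 2)]
```

Note that this step tolerates noisy positions quite well, which is why a noisy low-exposure movie plus robust linking can beat a photobleached "perfect" one.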
>> best wishes
>> Andreas
>>
>>
>>
>> -----Original Message-----
>> From: John Oreopoulos <[hidden email]>
>> To: CONFOCALMICROSCOPY <[hidden email]>
>> Sent: Sat, 17 Nov 2018 2:33
>> Subject: Digital imaging ethics as pertaining to the enhancement of
>> microscopy images with artificial intelligence
>>
>>
>> Earlier today a few people (including myself) brought up Doug Cromey's
>> excellent treatise on digital imaging ethics in a related thread that dealt
>> with training new microscope users within a research setting. Lately I've
>> been hearing a lot about applications of machine learning and artificial
>> intelligence to "improve", "de-noise", or "fix" images (microscopy or
>> otherwise), to extract new information from low-resolution images, and
>> even to create new 3D views of samples from very little information. Here
>> is just
>> one such example from Nvidia and MIT:
>>
>>
>>
>> https://news.developer.nvidia.com/ai-can-now-fix-your-grainy-photos-by-only-looking-at-grainy-photos/
>>
>> https://www.youtube.com/watch?time_continue=84&v=pp7HdI0-MIo
>>
>> It's clear that the microscopy world will eventually have to confront
>> this technology head-on. I think I've seen a few research articles on
>> this topic now, and this month's issue of Nature Methods has a paper on
>> it too:
>>
>>
>> https://www.nature.com/articles/s41592-018-0194-9
>>
>> I've been wondering if and how Cromey's guide for digital imaging ethics
>> should be altered when it comes to AI-assisted microscope imaging. Should
>> it be allowed or accepted? Other things I've read about AI show that
>> machine learning algorithms can produce biased results if the training
>> datasets are incomplete in some way, and the very nature of machine
>> learning makes it difficult to understand why an algorithm produced a
>> certain result, since the deep learning neural networks used to generate
>> the results are essentially black boxes that can't easily be probed. But
>> on the other hand,
>> I'm constantly blown away by what I've seen so far online for other various
>> applications of AI (facial recognition, translation, etc.).
>>
>> I also just finished a good read about AI from the perspective of
>> economics:
>>
>>
>> https://www.predictionmachines.ai/
>>
>> https://youtu.be/5G0PbwtiMJk
>>
>> The basic message of this book is that AI makes prediction cheap. When
>> something is cheap, we use more of it, and other processes that
>> complement prediction, like judgement (by a human or otherwise), become
>> more valuable. It's easy to see how the lessons of this book could be
>> re-framed for imaging science.
>>
>> Curious to know the community's opinion on this matter. I used to laugh at
>> the following video, but now I'm not laughing:
>>
>>
>> https://www.youtube.com/watch?v=LhF_56SxrGk
>>
>> John Oreopoulos
>>
>
Speaker Scientific Advisory Board "German Society for Microscopy and Image Analysis"
Hartmannstr. 14