http://confocal-microscopy-list.275.s1.nabble.com/Lightsheet-imaging-analysis-Workstation-Specs-tp7590391p7590418.html
algebra (MATLAB is a prominent example of this). As a consequence, many of
stably. It's a sad state of affairs, but I would generally advise that you
stick with Intel platforms for now. Hopefully the future will bring better support.
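For what it's worth, if your analysis stack is Python-based, a quick way to
see which linear-algebra backend you are actually running (purely as an
illustrative sketch) is to ask NumPy for its build configuration; Intel MKL
shows up by name if it is the backend:

    # print NumPy's build configuration; an "mkl" entry means the
    # Intel Math Kernel Library provides BLAS/LAPACK
    import numpy as np
    np.show_config()

In MATLAB, version('-blas') reports the equivalent information.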
>
> Hi, quick follow up: Has anyone built a workstation for microscopy analysis
> using AMD hardware? Their price/thread is way better than Intel's (for both
> consumer- and server-grade CPUs) and the only downside seems to be that
> they run a little hotter. Is there limited support in the various software
> packages? Or is it just a matter of legacy infrastructure? Thanks,
> Francesco
>
>
> On Thu, Jan 16, 2020 at 9:50 AM Olaf Selchow <[hidden email]> wrote:
>
> >
> > Hello everybody,
> >
> > Happy to share a bit of my experience on this:
> >
> > - Re the multiview deconvolution by Preibisch et al. with GPU
> > acceleration: it works, but my experience is also that the installation
> > could be a bit easier to get done. CUDA support is not deprecated
> > though, to my knowledge.
> >
> > - Typical data set sizes in what I have done in light sheet microscopy in
> > recent years are 500 GB to 10 TB, both in live imaging (long time series
> > with multiple views) and in imaging of optically cleared specimens.
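> >
> > (As a rough back-of-the-envelope sketch with assumed acquisition
> > parameters, that order of magnitude follows directly from the image
> > dimensions; the numbers below are hypothetical, not from a specific
> > instrument:
> >
> >     # assumed light sheet acquisition: 4 views, 300 time points,
> >     # 2048 x 2048 x 400 voxels per stack, 16-bit camera depth
> >     views, timepoints = 4, 300
> >     voxels_per_stack = 2048 * 2048 * 400
> >     total_bytes = views * timepoints * voxels_per_stack * 2
> >     print(total_bytes / 1e12)   # ~4.0 TB of raw data
> >
> > so a single multi-view time lapse easily lands in the terabyte range.)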
> >
> > - Re the specs of a suitable machine, I would consider the following
> > (it depends on how many people will use it, how many microscopes are
> > connected, and where the processing computer is installed):
> >
> > 1. With Imaris and/or arivis Vision4D to be run on the same machine as
> > Fiji and maybe the manufacturers' software (Zeiss ZEN, Leica LAS X, 3i
> > Slidebook, etc.), you want to run Windows, I guess.
> >
> > 2. If you want to be able to use it with multiple users in parallel, you
> > want/need Windows Server.
> >
> > 3. I would make sure that the microscopes are connected with a direct 10
> > Gbit link. Over an institutional network, even at 10 Gbit, you might be
> > limited in bandwidth, so you can migrate the data from the acquisition
> > PC to the storage only after acquisition. This costs time, blocks the
> > acquisition PC, and essentially duplicates the storage capacity you need
> > (see the rough timing sketch below). I always try to put the microscopes
> > and the processing & analysis computers in a 10 Gbit subnet that I can
> > manage with a dedicated router/firewall/switch. That prevents
> > interference with all the other traffic in your institute.
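> >
> > (A rough, hypothetical timing sketch for that migration step, assuming
> > an effective throughput of about 1 GB/s on a quiet 10 Gbit link and a
> > quarter of that on a busy shared one:
> >
> >     dataset_tb = 5                        # assumed acquisition size in TB
> >     for gbytes_per_s in (1.0, 0.25):      # quiet 10 GbE vs. busy shared link
> >         hours = dataset_tb * 1000 / gbytes_per_s / 3600
> >         print(f"{gbytes_per_s} GB/s -> {hours:.1f} h")
> >
> > i.e. roughly 1.5 h in the best case and over 5 h on a congested
> > network, during which the acquisition PC is tied up.)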
> >
> > 4. I would not use a NAS. Processing data by loading it into memory /
> > software over the network (and saving the results / temp data over the
> > same line) can take ages; 10 Gbit is far too slow in my view (see the
> > comparison sketched below). I'd use a processing machine with a strong
> > RAID controller and a large RAID array directly attached to the
> > processing unit, and I would save the data from the microscopes directly
> > to this volume.
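> >
> > (Again only a hedged back-of-the-envelope comparison, with assumed
> > sustained throughputs of ~1 GB/s for 10 GbE, a few GB/s for a local HDD
> > RAID, and more for an NVMe RAID:
> >
> >     volume_gb = 500                              # assumed working data set
> >     for name, gbytes_per_s in [("10 GbE NAS", 1.0),
> >                                ("local HDD RAID", 3.0),
> >                                ("local NVMe RAID", 8.0)]:
> >         minutes = volume_gb / gbytes_per_s / 60
> >         print(f"{name}: {minutes:.1f} min per full read")
> >
> > so every full pass over a 500 GB data set costs minutes over the network
> > that a direct-attached array largely avoids.)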
> >
> > 5. Whether you need super large RAM depends on how many people are
> > supposed to work in parallel on this machine, whether you want to use
> > VMs, and what software you use (arivis needs much less RAM than others,
> > Fiji benefits a lot from super large RAM, etc.; a rough sizing sketch
> > follows below). But if you choose the right motherboard and OS, you can
> > always and easily upgrade the RAM later.
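> >
> > (A hypothetical sizing sketch: tools that hold a full volume in memory
> > need at least the raw stack size, and often several working copies of
> > it:
> >
> >     voxels = 2048 * 2048 * 400      # assumed single-view stack
> >     stack_gb = voxels * 2 / 1e9     # 16-bit data -> ~3.4 GB per stack
> >     print(stack_gb * 4)             # ~13 GB with a few working copies
> >
> > multiply that by parallel users or VMs and even 256 GB fills up
> > quickly.)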
> >
> > 6. CPU: generally, large multicore CPUs speed things up. But some
> > software, even today, doesn't make much use of parallelization. If you
> > buy a very expensive dual 44-core CPU setup for thousands of $/€, you
> > might end up with the software not using it. Do check out the actual
> > workflows (a simple way to do this is sketched below). Some vendors
> > might say "yes, our software uses all available cores", but in the end
> > the processing function you use most might still be running on a single
> > core or only two.
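> >
> > (One simple, hedged way to check this yourself: run your most common
> > processing step and watch per-core utilization while it executes, e.g.
> > with the psutil package:
> >
> >     import psutil   # pip install psutil
> >
> >     for _ in range(30):                       # sample for ~30 seconds
> >         load = psutil.cpu_percent(interval=1, percpu=True)
> >         busy = sum(1 for p in load if p > 50)
> >         print(f"cores >50% busy: {busy}/{len(load)}")
> >
> > if only one or two cores ever show up as busy, the extra cores of a
> > dual high-core-count setup will mostly sit idle for that workflow.)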
> >
> > 7. Monitor: I have worked with a number of 32-inch UHD 4K (3840 x 2160,
> > 16:9) and 32-inch WQHD (2560 x 1440, 16:9) monitors and never had a real
> > problem. But this might depend on the GPU you use.
> >
> > 8. GPUs:
> > - If you want to use 3D renderings over remote access (e.g., RDP)
> > sessions, I strongly recommend professional GPUs. I know they are
> > expensive, but the drivers on GeForce or other gaming-grade GPUs can
> > give you a hard time when working remotely. I have good experience with
> > NVIDIA Quadro RTX boards (in some CUDA processing tasks they are 2x
> > faster than the previous P-series, which is otherwise also perfectly
> > fine). For 3D viewing / rendering / analyzing data, make sure the VRAM
> > on the GPU is 11 GB / 16 GB or larger.
> > - For some software and for VMs, it makes sense to think about the
> > option of multiple GPUs. Maybe you just want to make sure you can fit a
> > 2nd or 3rd later on. In SVI Huygens you can, for example, assign
> > instances of deconvolution (DCV) to certain GPUs, so multiple GPUs can
> > speed up your work. You need to buy the respective licenses from SVI,
> > though (a small sketch of pinning jobs to specific GPUs follows below).
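> >
> > (As a generic, hedged illustration of the multi-GPU idea, independent
> > of any particular package: most CUDA-based tools respect the
> > CUDA_VISIBLE_DEVICES environment variable, so you can pin one job per
> > GPU; the tool name below is hypothetical:
> >
> >     import os
> >     import subprocess
> >
> >     # one instance of a hypothetical command-line decon tool per GPU
> >     jobs = [(0, "timepoints_000-099"), (1, "timepoints_100-199")]
> >     for gpu_id, batch in jobs:
> >         env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
> >         subprocess.Popen(["my_decon_tool", "--input", batch], env=env)
> >
> > nvidia-smi on the command line shows per-GPU VRAM, which is the number
> > to check against the 11 GB / 16 GB recommendation above.)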
> >
> > 9. Storage volumes: make sure you have multiple (2 or more) fast volumes
> > (RAIDs of HDDs or SSDs) so that the software has space to save temp data
> > on an independent volume. Multiple simultaneous read/write processes can
> > slow down even fast RAID configs. Also keep in mind that SSDs are more
> > convenient and faster, but still more expensive and still have a higher
> > failure rate. Make sure you consider hard drives and SSDs as
> > consumables. In a RAID of 15 HDDs it is perfectly normal to have, on
> > average, 1 HDD per year fail and need replacement; SSDs maybe even more
> > often (the simple expectation behind that number is sketched below).
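> >
> > (The one-drive-per-year figure is just the expected value for an
> > assumed annualized failure rate of a few percent per drive:
> >
> >     drives = 15
> >     annual_failure_rate = 0.05            # assumed ~5% AFR per HDD
> >     print(drives * annual_failure_rate)   # ~0.75 expected failures/year
> >
> > with a 10% AFR it would be ~1.5 per year, so budgeting one replacement
> > drive per year for a 15-disk array is a reasonable planning number.)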
> >
> > The Lenovo ThinkStation P920 is certainly great hardware. You'll still
> > have to invest a bit of time and money to get it ready to work for your
> > applications (networking, etc.).
> >
> > I would also point out that there are commercial options that provide
> > you with a turnkey solution, with support, that can scale / grow with
> > your needs. I have worked with ACQUIFER HIVEs a lot. Check with them or
> > a similar provider if your budget allows a high-end solution for 60k (+)
> > $/€ and if you are looking for a solution provider that saves you from
> > configuring network adapters yourself …
> > Note: I used to have consultancy projects with ACQUIFER and might have
> > more in the future. So I am a bit biased, but mostly because I think
> > they have great hardware and services and I worked with them to position
> > their products and improve them. I do not (!) have a direct financial
> > interest in them selling a platform unit to you!
> >
> > I hope this helps.
> > best regards,
> > Olaf
> >
> > ———
> > Dr. Olaf Selchow
> > --
> > Microscopy & BioImaging Consulting
> > Image Processing & Large Data Handling
> > --
> >
> > [hidden email]
> >
>
>
> --
> Francesco S. Pasqualini
> Visiting Professor University of Pavia
> Associate Harvard University
>
> tel: +39 351-521-7788 (IT)
> tel: +1 617-401-5243 (USA)
>