I'm in the midst of finally deciding how to deal with different pixel
formats in my program (e.g. 8-bit, 16-bit). It's a painstaking trade-off, and I wish to start a discussion here to get the best result. I am grateful for responses from both users and commercial/open-source developers.

On one hand, performance is higher if the right format is used: 8-bit takes half the memory of 16-bit, so transfer is twice as fast.

On the other hand, we have unsigned 8-bit, 12-bit, 16-bit and 32-bit, signed 8-bit, 12-bit, 16-bit and 32-bit, float, double, 8-bit palette color, 5+6+5 packed color, HD formats, etc. That's 10+ formats! If you write an algorithm, you need to produce 10 versions of it, and sometimes conversion is needed, with potential loss of information. For every format we can slay, complexity is reduced; less complexity means fewer bugs and more features, faster.

So what is everyone's position on this? Here is mine:

* The need for easy scientific manipulation is totally separate from fast rendering. Packed formats and other arcane solutions belong in games only.

* We need floating point for calculations. Having both float and double is a minor complication, since the range is "virtually the same".

* We need integers because algorithms like level sets rely on a proper ordering of numbers; that is possible but hard to achieve in floating point. Storage of common images is also simpler in integer, since many compression algorithms rely on the quantization.

* Even trivial operations like subtraction and the Laplacian spit out negative numbers. Java does not cope well with unsigned integers, and conversion to signed ones is messy. Hence all formats should be signed; in particular, all unsigned integers should be banned. This is a radical decision!

* Supporting 4 integer formats is as easy as supporting 2, but not as easy as supporting only 1. If we go for 1, it should be 16-bit or 32-bit.

* Metaprogramming is the only way out if we are to support all formats while keeping performance. Most programmers do not know it, and many languages have no built-in support for it.

How do we deal with all the unsigned data in existence? Most of it is unsigned 8-bit, to my knowledge. Do we drop one bit to fit it in 7, or put it in 16-bit? Here I would say use 16 (a rough sketch of both conversions follows after my signature). For 16-bit data, dropping one bit does not hurt that much. I would like to know how many people actually use the entire range of 16 bits; a loss of 0.00001% precision is not much to cry over.

Once again, thankful for any comments.

/Johan

--
------------------------------------------------
Johan Henriksson
MSc Engineering, PhD student, Karolinska Institutet
http://mahogny.areta.org
http://www.endrov.net
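P.S. To make the unsigned question concrete, here is a minimal sketch in Java of the two conversions I have in mind. The class and method names are made up for illustration, not taken from any existing library:

    // Minimal sketch: widening unsigned data into signed containers.
    // Java has no unsigned integer types, so the usual trick is masking.
    public class PixelConvert {

        // Unsigned 8-bit -> signed 16-bit: no information is lost,
        // every value 0..255 fits comfortably in a short.
        public static short[] unsigned8ToSigned16(byte[] src) {
            short[] dst = new short[src.length];
            for (int i = 0; i < src.length; i++) {
                dst[i] = (short) (src[i] & 0xFF);   // mask recovers the unsigned value
            }
            return dst;
        }

        // Unsigned 16-bit -> signed 16-bit: here we really do drop the top bit,
        // clamping 32768..65535 down to 32767 (the "15-bit" compromise).
        public static short[] unsigned16ToSigned16(short[] src) {
            short[] dst = new short[src.length];
            for (int i = 0; i < src.length; i++) {
                int v = src[i] & 0xFFFF;            // recover unsigned 0..65535
                dst[i] = (short) Math.min(v, 32767);
            }
            return dst;
        }
    }

The alternative for full 16-bit data would of course be to widen to signed 32-bit (int) instead, at twice the memory cost.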
Johan,
Some of these formats automatically drop out. 5-6-5 is not to my knowledge used for anything scientific and indeed would not give adequate results. Palette color is something quite different in scientific imaging from 'popular' applications - GIF, for example, uses palette color to approximate full color, but in science it's just a false-color palette on a grey-scale image, so it's essentially just an 8-bit grey-scale image and doesn't need any separate treatment.

If you are going into Fourier space you automatically need floating point; if not, you can probably forget it. It is certainly unfortunate that so many cameras and confocals produce 12-bit data, but almost everyone just stores that in a 16-bit format. You DO need the full 16 bits if you are going to do ratio imaging or other similar math.

It seems to me that if you really do need signed integers, dropping 16-bit data to 15 bits wouldn't be such a big deal, but dropping 8 bits to 7 would be. In general, though, negative values in an image are regarded with suspicion!

I'd say that if high speed is important (e.g. real-time rendering) and you have data which is anyway 8-bit (which a computer display generally is), then that part should be done in 8-bit. So it really comes round to what you are doing. I wouldn't write each algorithm multiple times - but I would suggest deciding what is actually needed for the particular operation.

Guy

Optical Imaging Techniques in Cell Biology
by Guy Cox    CRC Press / Taylor & Francis
http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon)
Electron Microscope Unit, Madsen Building F09,
University of Sydney, NSW 2006
______________________________________________
Phone +61 2 9351 3176     Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
http://www.guycox.net
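P.S. To make the ratio-imaging point concrete, a minimal sketch (in Java, since that is what Johan mentioned; the class and method names are made up, not from any real library):

    // Ratio of two 16-bit channels. The inputs use the full unsigned
    // 16-bit range, and the result is kept in floating point because
    // the interesting ratios are rarely whole numbers.
    public class RatioImaging {
        public static float[] ratio(short[] numerator, short[] denominator) {
            float[] out = new float[numerator.length];
            for (int i = 0; i < numerator.length; i++) {
                int n = numerator[i] & 0xFFFF;   // treat stored values as unsigned
                int d = denominator[i] & 0xFFFF;
                out[i] = (d == 0) ? 0f : (float) n / d;
            }
            return out;
        }
    }

Integer division here would collapse most pixels to 0 or 1, which is why the floating-point result matters.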