Ugh, pathology image processing is really annoying.
IF Philips is going to stick to the DICOM format, and not add lots of proprietary stuff, _and_ it's the format that it uses internally, then this will be good.
For example, folks can check out OpenSlide (https://openslide.org) and have a look at all the different slide formats that exist. If you dig in to Philips' entry, you'll see that OpenSlide does not support Philips' non-TIFF format (iSyntax), and that the TIFF format is an "export format".
If you have a Philips microscope that uses iSyntax, you are very limited in what non-Philips software you can use. If you want files in TIFF format, you (the lab tech) have to take an action to export a slide in TIFF. It can take up a fair amount of lab tech time.
Ideally, the microscope should immediately store the images in an open format, with metadata that workflow software can use to check if a scanning run is complete. I _hope_ that will be able to happen here!
> If you want files in TIFF format, you (the lab tech) have to take an action to export a slide in TIFF. It can take up a fair amount of lab tech time.
Worse, you have to do it manually, one by one, in their interface. It takes something like 30 minutes per slide, and you only have about 20 minutes after it's done to pick the file up and save it somewhere useful, otherwise the temporary file gets lost.
DICOM is of course the way to go, but it does have its rough edges - stupid multiple files, sparse shit, concatenated levels - and now Philips is the only vendor producing JPEG XL (next to JPEG, JPEG 2000 and JPEG XR).
We learnt to live with iSyntax (and iSyntax2), if you can get access to them that is. In most deployments the whole system is a closed appliance and you have no access to the filesystem to get the damn files out.
In case others are not aware of what a "pathology scanner" is: apparently it is a device to scan/image microscope slides. I found some specs; apparently these Philips units do 0.25um/px over a 15mm x 15mm imaging area, making the output images presumably 60000 x 60000 pixels in size. Apparently Philips previously used their own "iSyntax" format, and also JPEG 2000 DICOM files, for these devices.
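Back-of-the-envelope math on those numbers (a rough sketch under my own assumptions, not anything from Philips' spec sheets): 15 mm at 0.25 um per pixel is indeed 60,000 pixels per side, i.e. several gigapixels per slide before compression.

```python
# Rough sanity check of the scanner numbers quoted above; the 8-bit RGB assumption is mine.
resolution_um_per_px = 0.25      # 0.25 um/px scan resolution
field_mm = 15.0                  # 15 mm x 15 mm imaging area

pixels_per_side = int(field_mm * 1000 / resolution_um_per_px)   # 60,000 px per side
total_pixels = pixels_per_side ** 2                              # 3.6 gigapixels

raw_bytes = total_pixels * 3     # uncompressed 8-bit RGB, before any pyramid levels
print(f"{pixels_per_side} x {pixels_per_side} px, ~{raw_bytes / 1e9:.1f} GB uncompressed")
```

Which is presumably why strong compression (JPEG 2000 historically, now JPEG XL) matters so much for whole-slide imaging.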
https://connect.mozilla.org/t5/ideas/support-jpeg-xl/idi-p/1...
vote for this feature to be natively supported in browsers
It's nice to see Safari lead the pack: https://caniuse.com/jpegxl
It's already under consideration but needs some work first: https://github.com/mozilla/standards-positions/pull/1064
Strange that Mozilla is going to rely on an internal team at Google to build a decoder for them in Rust, when Google is the one trying to kill JPEGXL.
Those writing the new Rust decoder are largely people who worked on the standard and on the original C++ implementation, + contributions from the author of jxl-oxide (who is not at Google).
Sheesh. Google isn't trying to kill JXL, they just think it's a bad fit for their product.
There is a huge difference between deciding not to do something because the benefit vs complexity trade off doesn't make sense, and actively trying to kill something.
FWIW I agree with Google: AVIF is a much better format for the web. Pathology imaging is a bit of a different use case, where JPEG XL is a better fit than AVIF would be.
It's two different teams inside Google. Some part of the Chrome team is trying to quash JPEG XL.
Sure, but if it becomes political I expect the Chrome team to fully quash the JPEG XL team to hurt Firefox and JPEG XL in one go.
Other than Jon at Cloudinary, everyone involved with JXL development, from the creation of the standard to the libjxl library, works at Google Research in Zurich. The Chrome team in California has zero authority over them. They've also made a lot of stuff that's in Chrome, like lossless WebP, Brotli, WOFF2, and the Highway SIMD library (actually created for libjxl and later spun off).
It's more likely related to security: image formats are a huge attack surface for browsers, and they are hard to remove once added.
JPEG XL was written in C++ in a completely different part of Google, without any of the memory-safe, Wuffs-style code, and the Chrome team has probably had its share of trouble with half-baked compression formats (WebP).
I'd argue the thread up through the comment you are replying to is fact-free gossiping. I'm wondering if it was an invitation to repeat the fact-free gossip, but the comment doesn't read that way. It reads to me as more exasperated, so exasperated that they're willing to speak publicly and establish facts.
My $0.02, since the gap here on perception of the situation fascinates me:
JPEG XL as a technical project was a real nightmare; I am not surprised at all to find Mozilla is waiting for a real decoder.
If you get _any_ FAANG engineer involved in this mess a beer || truth serum, they'll have 0 idea why this has so much mindshare, modulo it sounding like something familiar (JPEG), and people invented nonsense like "Chrome want[s] to kill it" while it has the attention of an absurd number of engineers trying to get it into shipping shape.
(surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
> JPEG XL as a technical project was a real nightmare
Why?
> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
There is no waiting on Chrome involved in: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393
> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)
The fuck are you talking about? The jxl-rs library Firefox is waiting on is developed mostly by the exact same people who made libjxl, which you say sucks so much.
In any case, JXL obviously has mindshare due to the features it has as a format, not the merits of the reference decoder.
> they'll have 0 idea why this has so much mindshare
Considering the amount of storage all of these companies are likely allocating to storing jpegs + the bandwidth of it all - maybe the instant file size wins?
Hard disk and bandwidth costs of JPEGs are almost certainly negligible in the era of streaming video. The biggest selling point is probably client-side latency from downloading the file.
We barely even have movement to WebP & AVIF; if this was a critical issue I would expect a lot more movement on that front, since those formats already exist. From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.
> We barely even have movement to WebP & AVIF
If you look at CDNs, WebP and AVIF are very popular.
> From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.
AVIF is better at low to medium quality, and JXL is better at medium to high quality. JXL decoding speed is pretty much constant regardless of how you vary the quality parameter, but AVIF gets faster and faster to decode as you reduce the quality; it's only faster to decode than JXL for low quality images. And about half of all JPEG images on the web are high quality.
The Chrome team really dislikes the concept of high quality images on the web for some reason though, that's why they only push formats that are optimized for low quality. WebP beats JPEG at low quality, but is literally incapable of very high quality[1] and is worse than JPEG at high quality. AVIF is really good at low quality but fails to be much of an improvement at high quality. For high resolution in combination with high quality, AVIF even manages to be worse than JPEG.
[1] Except for the lossless mode which was developed by Jyrki at Google Zurich in response to Mozilla's demand that any new web image format should have good lossless support.
>AVIF is better at low to medium quality,
>The Chrome team really dislikes the concept of high quality images on the web for some reason though, that's why they only push formats that are optimized for low quality.
It would be more accurate to say bits per pixel (BPP) rather than quality. And that is despite the Chrome team themselves showing that 80%+ of images served online are in the medium BPP range or above, where JPEG XL excels.
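For anyone unfamiliar with the metric, bits per pixel is just the compressed file size expressed per pixel, so it sidesteps each encoder's own idea of a "quality" setting. A tiny illustration (the example numbers are mine, not from the Chrome team's data):

```python
def bits_per_pixel(file_size_bytes: int, width: int, height: int) -> float:
    """Compressed size expressed as bits spent per image pixel (BPP)."""
    return file_size_bytes * 8 / (width * height)

# e.g. a 250 kB 1920x1080 JPEG works out to roughly one bit per pixel:
print(round(bits_per_pixel(250_000, 1920, 1080), 2))  # ~0.96
```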
Isn't medium quality the thing to optimize for? If you are doing high quality, you've already made the tradeoff that you care about quality more than latency, so the perceived benefit of a mild latency improvement is going to be lower.
JXL lets you further compress existing JPEG files without additional artifacting, which is significant given how many JPEG files already exist.
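For concreteness, a minimal sketch of that lossless recompression using the reference cjxl/djxl command-line tools (my example; it assumes the libjxl tools are installed and on PATH, and that cjxl's default of keeping JPEG reconstruction data still applies):

```python
# Losslessly recompress an existing JPEG to JPEG XL and verify the round trip.
import subprocess
from pathlib import Path

src = Path("photo.jpg")                  # hypothetical existing JPEG
jxl = src.with_suffix(".jxl")
roundtrip = Path("roundtrip.jpg")

# With a JPEG input, cjxl by default re-packs the DCT coefficients losslessly
# (typically ~20% smaller) instead of re-encoding pixels, so no new artifacts appear.
subprocess.run(["cjxl", str(src), str(jxl)], check=True)

# djxl can then reconstruct the original JPEG bit stream from that .jxl file.
subprocess.run(["djxl", str(jxl), str(roundtrip)], check=True)

print(f"{src.stat().st_size} -> {jxl.stat().st_size} bytes")
print("bit-identical JPEG back out:", src.read_bytes() == roundtrip.read_bytes())
```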
And then to actually support the HDR images that can be encoded with JPEG XL, they'd have to implement HDR in the browser graphics pipeline.
Any decade now, any decade...
Voting for tickets does nothing when there are real reasons not to implement.
Seeing JPEG XL integrated into the DICOM standard was a particularly proud moment for me as the manager of the effort at Google.
It felt like closing a major circle in my career. I spent the first 16 years of my career in the medical industry, working on neurosurgical robots (the Oulu Neuronavigator System), and one of the first tools I built was an ACR-NEMA 1.0 parser, ACR-NEMA being the direct predecessor to DICOM. I then went on to radiation treatment planning systems with plenty of DICOM work in them. To now contribute back to that very standard is incredibly rewarding.
I find Philips quite an interesting player in the imaging, PACS and related space. On the innovation side they might be coming up with new technologies and solutions, but on the service and delivery side they provide quite an awful service, at least in my personal experience in Singapore. I have friends at various healthcare institutions, and we were all surprised at how the service could be so bad for a vendor that is considered such a valuable one in the industry.
Their RIS and PACS software is also objectively poor, and they actively promote vendor lockin with solutions like iSyntax in the old IntelliSpace, and a horrifically bad and non-conforming IHE SWF implementation in Vue (which is partly Carestream’s fault to be fair).
They will also prefer to gaslight their clients rather than fix issues, and good luck if you’re already committed to an (un)managed service from them.
My first ever job in software was working for PathXL (a Belfast startup implementing digital pathology software). Lots of fond memories working there, including how cool it was working on what was effectively Google Maps but for massive tissue sample images. PathXL actually ended up getting acquired by Philips, seems like a great match if they're building the hardware for this.
They sold them off to Cirdan; they are not doing much with the software...
Oh interesting, I missed that. Pity that Philips sold them off.
Can someone comment on what is newsworthy about this?
JPEG XL is alive despite Google trying their best to kill it and is used to treat cancer
Someone using JPEGXL in a real world product
JPEG XL is supported by pretty much every relevant program that deals with images. The web situation is purely because of Google's monopoly.
But exceedingly few cameras (this is the only one I'm aware of). If I had to guess it's probably encoding in software but still, it's a start.
There has to be someone else, since my dad just emailed me a JPEGXL image less than 15 minutes ago. No idea how he produced or procured it.
Basically there is a conspiracy theory that google is trying to kill jpeg xl, so the anti-google crowd is excited someone is using it.
The truth is that every image format added to a web browser has to be supported forever, so the Chrome team is wary of adding new file formats unless it's an above-and-beyond improvement. JPEG XL isn't (relative to AVIF), so Google decided not to implement it. It's not some malicious conspiracy; it just didn't make sense from a product perspective.
From what I understand, https://storage.googleapis.com/avif-comparison/index.html is what was used to justify Google choosing AVIF over JPEG XL. JPEG XL was better at lossless images but AVIF was better at lossy, and lossy is the use case that matters more to the web.
A medical device that outputs a standard image format instead of proprietary garbage
The cluster fuck that is DICOM and HL7 once vendors go to town is far from the ‘open’ utopia we dream of.
nerds desperately clinging to any hope that jpeg xl will be revived
Nerds like JPEG XL but Google is trying to kill it.
Why does it try to kill it?
Because they can't control it
Hardly, it's a Google team that made the thing.
Then why does Google not want JPEG XL support in Chrome?
Google is not a monolith. The Chrome team doesn't want it in Chrome, but many other parts of Google like it.
Always impressed when someone does anything with DICOM; it's a bit of a complex format IMHO.
Image data is just encapsulated: you take a JPEG file, write it out as bytes, and wrap it a little.
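To make the "wrap it a little" part concrete, here is a minimal sketch using pydicom (my own illustration, not any scanner's actual pipeline); a conformant object still needs file meta information, a transfer syntax UID, SOP class/instance UIDs and a long list of mandatory attributes on top of this:

```python
# Minimal sketch of DICOM "encapsulated" pixel data with pydicom; a real, conformant
# dataset needs file meta info, UIDs and many more mandatory attributes than shown here.
from pydicom.dataset import Dataset
from pydicom.encaps import encapsulate

jpeg_bytes = open("tile.jpg", "rb").read()       # hypothetical pre-compressed JPEG frame

ds = Dataset()
ds.Rows, ds.Columns = 512, 512                   # must match the compressed frame
ds.SamplesPerPixel = 3
ds.BitsAllocated = 8
ds.PhotometricInterpretation = "YBR_FULL_422"

# Encapsulation wraps each compressed frame in its own item after a basic offset
# table item; the JPEG bit stream itself is stored untouched.
ds.PixelData = encapsulate([jpeg_bytes])
ds["PixelData"].VR = "OB"

print(len(jpeg_bytes), "JPEG bytes ->", len(ds.PixelData), "bytes of encapsulated PixelData")
```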
That’s misleading.
- medicine chooses lossless formats
- there are security concerns with decoders and operating systems
- once you build a medical device, the future of your company depends on being able to expensively patch it
After reading a bit about JPEG XL [0], the bit depth, channel count and pixel count seem promising. The devil's in the details. How will multiple focal planes be implemented?
https://www.abyssmedia.com/heic-converter/avif-heic-jpegxl.s...
WebP artifacts not pathological enough?