JPEG (actually the name of the committee that created the format) tried as early as the year 2000 to replace the original standard with “JPEG 2000”. It was ultimately unsuccessful in dethroning the original JPEG, despite large improvements in compression: JPEG 2000 caused less information loss, and looked better than a JPEG image at an equivalent file size. It also offered a large range of extensions, such as support for motion and for lossless compression. In the end it was probably patents and the associated licensing costs that prevented JP2 from going mainstream, a lesson which would later be learnt…

Progressive decoding is a useful feature for the perception of image loading. It structures the file so that the first couple of KB contain a complete (but very low-detail) image, and further detail is then layered on top of that base, rather than the full-quality image loading top-to-bottom. The feature is specific to image codecs, so it’s no surprise that both JPEG and JPEG XL support it. Again, this is only about perception; it doesn’t actually make the image load faster. Still, I believe this will play a role in JPEG XL becoming preferred on the web.
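In classic JPEG, whether a file is progressive is visible in its marker segments: a baseline image starts its frame with an SOF0 marker (0xFFC0), while a progressive one uses SOF2 (0xFFC2). As a minimal sketch (not a full parser; real files carry APPn, DQT and other segments before the frame marker, which the segment-length walk below simply skips), one could detect this with:

```python
def is_progressive(data: bytes) -> bool:
    """Walk JPEG marker segments; SOF2 (0xFFC2) means progressive DCT."""
    i = 2  # skip the SOI marker (FF D8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not a marker; bail out of this simple sketch
        marker = data[i + 1]
        if marker == 0xC2:
            return True   # SOF2: progressive DCT frame
        if marker in (0xC0, 0xC1):
            return False  # SOF0/SOF1: baseline / extended sequential
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seg_len  # jump over this segment to the next marker

    return False

# Tiny synthetic headers (not full JPEGs), just to exercise the walk:
assert is_progressive(b"\xff\xd8\xff\xc2\x00\x02") is True
assert is_progressive(b"\xff\xd8\xff\xc0\x00\x02") is False
```

The same idea is what image viewers rely on: on seeing SOF2 they know each successive scan refines the whole picture, so they can paint a rough version as soon as the first scan arrives.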
This is a fascinating write-up on the “history” of image formats. The terms “lossy” and “lossless” are often confused, but they describe opposite properties of a compression format. Computer vision and on-device image recognition software use these formats as a baseline. Image extraction and its details are exciting to understand, especially how images can be calibrated (or “reconstructed”) for viewing and for training datasets. It is no surprise that some pure research is focused on imaging formats and their effective utilisation in patient workflows.