Study Finds Novel Method of Resolving JPEG Compression Defects in Computer Vision Datasets

Artificial intelligence (AI) is only as good as the data you give it. That’s a foundational rule for applying any kind of data analytics effectively, yet it remains a challenge in many cases. It’s particularly challenging with computer vision, thanks to JPEG compression defects.

Compression produces low-quality images full of visual distortions, often called JPEG artifacts. These artifacts hinder machine vision’s efficacy: low-quality data produces low-quality results.

A new study from the University of Maryland and Facebook AI proposes a solution to this problem.

Why Do JPEG Artifacts Exist?

JPEGs are the most common image file type on the web, but their “lossy” compression poses challenges for machine vision. The format shrinks file size by discarding detail, so images lose quality every time they’re compressed. As a result, after a few saves, they’ll look fuzzy or pixelated.
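You can see this generation loss directly. The following sketch, which assumes the Pillow imaging library is installed, repeatedly saves and reloads a synthetic gradient image as a JPEG and measures how far the pixels drift from the original:

```python
from io import BytesIO
from PIL import Image

# Build a simple gradient test image to stand in for a real photograph.
img = Image.new("RGB", (64, 64))
img.putdata([(x * 4, y * 4, 128) for y in range(64) for x in range(64)])

def jpeg_roundtrip(im, quality=50):
    """Save to JPEG in memory and reload -- one lossy save/open cycle."""
    buf = BytesIO()
    im.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Repeated saves compound the loss: each cycle quantizes away more detail.
copy = img
for _ in range(10):
    copy = jpeg_roundtrip(copy)

# Total per-channel pixel difference from the original.
drift = sum(abs(a - b)
            for p1, p2 in zip(img.getdata(), copy.getdata())
            for a, b in zip(p1, p2))
print(drift > 0)  # True: the saved copy no longer matches the original
```

Lowering the `quality` argument or adding more save cycles increases the drift, which is exactly the degradation a computer vision model inherits when it trains on heavily re-saved images.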

Lossless file types avoid this compromise in quality, but JPEGs have become synonymous with online images. Many organizations are also slow to adopt newer technologies and systems. Some government offices still run 35-year-old legacy software, which likely won’t support newer, lossless image formats.

Considering how much data it takes to train any AI model, it’s easy to see how this creates a problem for machine vision. JPEGs are the most readily available images, so teams training computer vision algorithms must account for this quality loss.

The University of Maryland study emphasized how impactful JPEG artifacts can be. Even moderate compression comes with a significant performance penalty. These artifacts make it difficult for AI to identify image contents and reduce the quality of image-creation systems like deepfakes.


Resolving JPEG Compression Defects

In this study, researchers compared their new method to two existing compression mitigation techniques: supervised fine-tuning and artifact correction.

Supervised fine-tuning uses pre-trained weights and labeled data to help the model make more informed decisions. This metadata helps the algorithm recognize when images are low-quality, letting it adjust as necessary to make the most of the data it has.

While this method produces impressive results and acts quickly, it sacrifices performance on uncompressed images. It’s also hard to implement on a consumer level because of the need for labeled data.

Artifact correction involves removing JPEG artifacts from low-quality images before giving them to the machine vision model. Many commercial products now include artifact removal tools that work by blurring rough edges, enhancing contrast and saturation, or reversing the compression algorithm. While this option is fairly easy to implement, the extra step can take time, and it doesn’t produce the best performance.

The new method – task-targeted artifact correction – offers a better solution. In this technique, the algorithm analyzes each image’s quality to determine whether it needs artifact correction. It then passes high-quality images along untouched while correcting the compressed ones. As the machine vision algorithm learns, it also fine-tunes the correction process based on the repeated errors it notices.
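The control flow described above can be sketched roughly as follows. This is only an illustration of the idea, not the study’s implementation: `estimate_quality` and `correct_artifacts` are hypothetical placeholders standing in for real quality-estimation and correction models, and images are represented as simple dictionaries rather than pixel arrays.

```python
def estimate_quality(image):
    # Placeholder: assume a precomputed quality score in [0, 1].
    # A real version would estimate compression severity from pixel data.
    return image["quality"]

def correct_artifacts(image):
    # Placeholder for an artifact-correction model.
    return {**image, "quality": 1.0, "corrected": True}

def preprocess(images, threshold=0.8):
    """Correct only images whose estimated quality falls below the
    threshold; pass high-quality images through untouched to save time."""
    out = []
    for image in images:
        if estimate_quality(image) < threshold:
            image = correct_artifacts(image)
        out.append(image)
    return out

batch = [{"quality": 0.95}, {"quality": 0.4}]
processed = preprocess(batch)
print([im.get("corrected", False) for im in processed])  # [False, True]
```

The key design point is the branch: clean images skip the correction step entirely, which is where the time savings over always-on artifact correction come from.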

Since task-targeted artifact correction doesn’t attempt to fix images that don’t need it, it saves time and maintains consistent quality. The fine-tuning aspect of its correction also helps produce better results than traditional artifact correction. It also avoids the need for expensive, hard-to-gather metadata, making it a good fit for consumer applications.

Applications for Removing JPEG Artifacts

This improved artifact removal technique has substantial potential for machine vision applications. For example, if it proves effective at scale, it could help program self-driving cars for inclement weather. Snow and rain obscure and confuse sensors, lowering image quality and hindering safe navigation. But if machine vision can account for this lower quality, these vehicles could drive safely.

Removing the need for higher-quality image datasets will make it far easier to train machine vision algorithms, too. Teams will no longer have to look for relevant metadata or use smaller, lower-quality image databases to train their models. Task-targeted artifact correction could let them use larger datasets without concerns over their quality, streamlining development.

Task-targeted artifact correction has applications beyond machine vision, too. Its efficient approach to artifact removal could help fix widespread image quality issues throughout the web. Artists and web designers could make their images and user interfaces more appealing without spending as much time or as many resources removing artifacts.

Higher-Quality Images Unlock Computer Vision’s Potential

Task-targeted artifact correction helps manage one of machine vision’s most pressing challenges. Finding high-quality data from the beginning is less of a concern when the algorithm can improve the quality of the data it has. With this advantage, computer vision could reach its full potential in far less time and with far less investment.

April Miller

April Miller is a staff writer at ReHack Magazine who specializes in AI and machine learning while writing on topics across the technology sphere. You can find her work on ReHack.com and by following ReHack’s Twitter page.