Concerns about the risks posed by tampered images have surfaced regularly in research over the past few years, particularly in light of a new surge of AI-based image-editing frameworks that can modify existing images, rather than generating them from scratch.
Most of the proposed detection systems addressing this type of content fall into one of two camps: invisible watermarking, and embedded provenance metadata, with the image-embedded watermark promoted as a fallback approach by the Coalition for Content Provenance and Authenticity (C2PA).

In the C2PA scheme, the invisible watermark serves as a fallback for cases where the image content becomes separated from its original provenance ‘manifest’. Source: https://www.imatag.com/blog/enhancing-content-integrity-c2painvisible-watermarking
These ‘secret signals’ must be robust to the automatic re-encoding and optimization procedures that images often undergo as they transit social networks and platforms, yet many are not resilient to the kind of lossy re-encoding applied via JPEG compression (which is used on an estimated 74.5% of all websites, despite competition from newer formats such as WebP).
The second approach is to make images tamper-evident, as originally proposed in the 2013 paper Image Integrity Authentication Scheme Based On Fixed Point Theory. Instead of relying on watermarks or digital signatures, this method used a mathematical transformation called Gaussian Convolution and Deconvolution (GCD) to push the image toward a stable state that breaks if it is subsequently changed.
From the paper Image Integrity Authentication Scheme Based On Fixed Point Theory: tamper localization using fixed point images with a peak signal-to-noise ratio (PSNR) of 59.7802 dB. A white rectangle indicates the area exposed to attack. Panel A (left) shows the applied perturbations, including localized noise, filtering, and copy-based attacks. Panel B (right) shows the corresponding detection output, highlighting the tampered area identified by the authentication process. Source: https://arxiv.org/pdf/1308.0679
The concept is perhaps most easily understood in the context of repairing a delicate lace fabric: no matter how fine the skill applied to mending the filigree, the repaired sections are inevitably identifiable.
Repeatedly applying this kind of transformation to a grayscale image gradually pushes it toward a state in which further applications produce no change at all.

This stable version of the image is called a fixed point. Fixed points are rare and extremely sensitive to change: any small modification to a fixed point image will almost certainly break its fixed point status, making the tampering easy to detect.
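To make the idea concrete, here is a minimal sketch of a fixed point iteration. The transform used here (a 3×3 box blur with edge replication, rounded back to integers) is my own illustrative stand-in, not the paper's GCD transform; the point is only that repeatedly applying such a map to a random grayscale patch eventually yields an image that no longer changes:

```python
import numpy as np

def smooth_round(img):
    """3x3 box blur with edge replication, rounded back to integers.
    NOT the paper's GCD transform: a simple stand-in whose repeated
    application drives an image toward an unchanging state."""
    p = np.pad(img, 1, mode="edge")
    acc = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3))
    return np.round(acc / 9.0)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (16, 16)).astype(float)

for steps in range(300):
    nxt = smooth_round(img)
    if np.array_equal(nxt, img):   # nothing changed: a fixed point
        break
    img = nxt

print("stabilized after", steps, "iterations")
```

Once the loop exits, feeding the result back through the transform leaves it untouched; any edit to the stabilized patch would almost certainly knock it off that fixed point.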
As so often with approaches of this kind, artifacts from JPEG compression can threaten the integrity of the scheme.
On the left, a watermark is applied to the surface of the iconic ‘Lenna’ image, and is clearly legible under normal conditions. On the right, under 90% JPEG compression, the watermark becomes increasingly difficult to distinguish from growing JPEG noise. After multiple resaves, or at the highest compression settings, the majority of watermarking schemes face problems from JPEG compression artifacts. Source: https://arxiv.org/pdf/2106.14150
What if, instead, JPEG compression artifacts could be used as the central means of obtaining fixed points? In that case, no additional bolt-on systems would be required, because the same mechanism that normally causes trouble for watermarking and tamper detection would form the basis of the tamper detection framework itself.
JPEG compression as a security baseline
Such a system is proposed in a new paper from two researchers at the University at Buffalo, part of the State University of New York. Titled Tamper-Evident Images Using JPEG Fixed Points, the work builds on the 2013 paper and related efforts, formulating their central principle rigorously for the first time, and exploiting JPEG compression itself as a way to generate potentially ‘self-authenticating’ images.
The authors explain:
‘This study reveals that an image becomes unchanged after undergoing several rounds of the same JPEG compression and decompression process.

‘In other words, if a single cycle of JPEG compression and decompression is considered as an image transformation, referred to as a JPEG transform, then this transform exhibits the property of having fixed points, i.e., images that remain unchanged when the JPEG transform is applied.’
An illustration of convergence to JPEG fixed points, from the new paper. The top row shows an image undergoing repeated JPEG compression, with the number and location of changed pixels marked at each iteration. The bottom row plots the per-pixel L2 distance between successive iterations under different compression quality settings. Ironically, a better-resolution version of this image is not available. Source: https://arxiv.org/pdf/2504.17594
Rather than introducing external transformations or watermarks, the new paper defines the JPEG process itself as a dynamical system. In this model, each compression and decompression cycle moves the image toward a fixed point, and the authors demonstrate that, after a finite number of iterations, any image reaches or approximates a state where further compression produces no change.
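The dynamic can be sketched with a toy JPEG-style transform. The code below is my own simplified illustration, not the paper's implementation: it uses the standard orthonormal 8×8 DCT, but substitutes a uniform quantization step of 16 for a real JPEG quantization table, and applies the compress/decompress cycle to one random block until it stops changing:

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix, the transform at the heart of JPEG.
k = np.arange(8)
D = 0.5 * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / 16)
D[0] /= np.sqrt(2)

Q = 16.0  # illustrative uniform quantization step (real JPEG uses an 8x8 table)

def jpeg_cycle(block):
    """One compress/decompress cycle: DCT -> quantize -> dequantize -> inverse
    DCT, plus the integer rounding and [0, 255] clipping of the pixel domain."""
    coeffs = np.round(D @ (block - 128.0) @ D.T / Q) * Q
    return np.clip(np.round(D.T @ coeffs @ D + 128.0), 0.0, 255.0)

rng = np.random.default_rng(42)
img = rng.integers(0, 256, (8, 8)).astype(float)

for step in range(1, 51):
    nxt = jpeg_cycle(img)
    if np.array_equal(nxt, img):   # no pixel changed: a fixed point of the cycle
        break
    img = nxt

print(f"block stopped changing after {step} cycles")
```

In practice a block typically settles within a handful of cycles: after quantization snaps the coefficients to a lattice, the residual pixel-domain rounding error is usually too small to move any coefficient across a rounding boundary on the next pass.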
The researchers state*:
‘Changing the image will result in deviations from the JPEG fixed point, which can be detected as changes in the JPEG blocks after one round of JPEG compression and decompression.
‘The proposed tamper-evident images based on JPEG fixed points have two advantages. First, tamper-evident images eliminate the need for the external storage of verifiable features, such as image fingerprints (schemes), or the embedding of hidden traces, such as image watermarks. The image itself serves as its own proof of authenticity, making the scheme inherently self-evident.

‘Secondly, since JPEG is a widely-used format and often the final step in an image processing pipeline, the proposed method is robust to JPEG operations. This contrasts with the original (approaches), which can lose their integrity traces through JPEG re-compression.’
An important insight from the paper is that JPEG convergence is not just a by-product of the format's design, but a mathematically inevitable outcome of its operations: the discrete cosine transform, quantization, rounding, and truncation together form a transformation that (under appropriate conditions) leads to predictable fixed points.
A schema for the JPEG compression/decompression process, as formulated in the new work.
Unlike watermarking, this method requires no embedded signal. The only reference is the image's own consistency under further compression: if recompression produces no change, the image is presumed authentic; if it does, tampering is indicated by the deviation.
Tests
The authors tested this behavior using one million randomly-generated eight-by-eight patches of 8-bit grayscale image data. Applying repeated JPEG compression and decompression to these synthetic patches, they observed that convergence to a fixed point occurs within a finite number of steps. The process was monitored by measuring the per-pixel L2 distance between successive iterations, with the differences shrinking until the patches stabilized.
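The monitoring protocol can be imitated at a much smaller scale. The sketch below again uses my simplified uniform quantization step rather than a real JPEG table, and tracks the mean per-patch L2 distance between successive iterations for 500 random 8×8 patches (the authors used one million):

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix.
k = np.arange(8)
D = 0.5 * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / 16)
D[0] /= np.sqrt(2)

Q = 16.0  # illustrative uniform quantization step (real JPEG uses an 8x8 table)

def jpeg_cycle(block):
    """One JPEG-style compress/decompress cycle on an 8x8 block."""
    coeffs = np.round(D @ (block - 128.0) @ D.T / Q) * Q
    return np.clip(np.round(D.T @ coeffs @ D + 128.0), 0.0, 255.0)

# As in the paper's plots, start from patches that have already been
# compressed once, then track mean L2 distance between iterations.
rng = np.random.default_rng(0)
patches = [jpeg_cycle(rng.integers(0, 256, (8, 8)).astype(float))
           for _ in range(500)]

dists = []
for _ in range(6):
    nxt = [jpeg_cycle(p) for p in patches]
    dists.append(float(np.mean([np.linalg.norm(a - b)
                                for a, b in zip(nxt, patches)])))
    patches = nxt

print("mean L2 distance per iteration:", [round(d, 3) for d in dists])
```

The averaged distance shrinks toward zero as more and more patches lock onto their fixed points, mirroring the convergence curves in the paper's figure.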
L2 differences between successive iterations for one million 8×8 patches, measured under different JPEG compression qualities. Each process starts from a single JPEG-compressed patch and tracks the reduction in difference across repeated compressions.
To assess tamper detection, the authors created tamper-evident JPEG images and applied four types of attack: salt and pepper noise; copy-move operations; splicing from an external source; and double JPEG compression with a different quantization table.
Examples of fixed point RGB images with tampering detection and localization, covering the four perturbation methods used by the authors. In the bottom row, we can see that each perturbation style betrays itself against the generated fixed point image.
After tampering, the images were recompressed using the original quantization matrix. Deviations from the fixed point were detected by identifying the image blocks that exhibited non-zero differences after recompression, enabling both detection and localization of the tampered regions.
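That detection logic can be sketched end to end: build a small image from blocks driven to their fixed points, splice foreign content into one block, recompress once, and flag any block that changes. As before, this is my own illustration using a uniform quantization step in place of a real JPEG table:

```python
import numpy as np

# Orthonormal 8x8 DCT-II matrix and a uniform illustrative quantization step.
k = np.arange(8)
D = 0.5 * np.cos((2 * k[None, :] + 1) * k[:, None] * np.pi / 16)
D[0] /= np.sqrt(2)
Q = 16.0

def jpeg_cycle(block):
    """One JPEG-style compress/decompress cycle on an 8x8 block."""
    coeffs = np.round(D @ (block - 128.0) @ D.T / Q) * Q
    return np.clip(np.round(D.T @ coeffs @ D + 128.0), 0.0, 255.0)

def to_fixed_point(block, max_iter=100):
    """Iterate until the block stops changing; fall back to a flat block
    (a guaranteed fixed point) in the unlikely event it never settles."""
    for _ in range(max_iter):
        nxt = jpeg_cycle(block)
        if np.array_equal(nxt, block):
            return block
        block = nxt
    return np.full((8, 8), 128.0)

# Build a 32x32 "tamper-evident" image out of 8x8 fixed point blocks.
rng = np.random.default_rng(7)
img = np.zeros((32, 32))
for i in range(0, 32, 8):
    for j in range(0, 32, 8):
        img[i:i+8, j:j+8] = to_fixed_point(
            rng.integers(0, 256, (8, 8)).astype(float))

# Tamper: splice random external content into one block.
tampered = img.copy()
tampered[8:16, 16:24] = rng.integers(0, 256, (8, 8))

# Verification: recompress once and flag blocks that deviate.
flagged = []
for i in range(0, 32, 8):
    for j in range(0, 32, 8):
        blk = tampered[i:i+8, j:j+8]
        if not np.array_equal(jpeg_cycle(blk), blk):
            flagged.append((i, j))

print("blocks deviating from their fixed point:", flagged)
```

Because the untouched blocks are exact fixed points, recompression leaves them bit-identical, so only the spliced block should appear in the flagged list, giving both detection and localization from a single extra compression pass.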
Since the method is based entirely on standard JPEG operations, fixed point images work fine with normal JPEG viewers and editors; but the authors note that recompressing an image at a different quality level can destroy its fixed point status and break authentication, a limitation that must be handled with caution in real-world use.
This is not just a tool for analyzing JPEG output, and neither is it complicated to adopt: as a rule, the method can slot into existing workflows with minimal cost and disruption.
The paper acknowledges that a sophisticated adversary might attempt to craft alterations that preserve fixed point status; however, the researchers contend that such efforts would likely introduce visible artifacts, undermining the attack.
The authors do not argue that fixed point JPEGs could replace broader provenance systems such as C2PA, but suggest that fixed point methods could complement external metadata frameworks by providing an additional layer of tamper evidence that persists even when the metadata is stripped or lost.
Conclusion
The JPEG fixed point approach offers a simple, self-contained alternative to conventional authentication systems. It requires no embedded metadata, watermarks, or external reference files, instead deriving authenticity directly from the predictable behavior of the compression process.
In this way, the method reclaims JPEG compression, a frequent source of data degradation, as a mechanism for integrity verification. In this respect, the new paper is one of the most innovative and original approaches to the problem that I have come across over the past several years.
The new work points to a shift away from layered security add-ons, toward approaches that draw on the built-in characteristics of the media itself. As tampering methods grow more sophisticated, techniques that test an image's own internal structure may become increasingly important.
Furthermore, many of the alternative systems proposed to address this problem introduce considerable friction by requiring changes to image processing workflows that have been established over many years.
* My conversion of the authors' inline citations into hyperlinks.
First published Friday, April 25, 2025