<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Deep Learning on Hanguangwu</title><link>https://hanguangwu.github.io/blog/en/tags/deep-learning/</link><description>Recent content in Deep Learning on Hanguangwu</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>hanguangwu</copyright><lastBuildDate>Mon, 23 Mar 2026 13:34:25 -0800</lastBuildDate><atom:link href="https://hanguangwu.github.io/blog/en/tags/deep-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>GitHub Repo Deep-Learning-Based-Image-Compression</title><link>https://hanguangwu.github.io/blog/en/p/github-repo-deep-learning-based-image-compression/</link><pubDate>Mon, 23 Mar 2026 13:34:25 -0800</pubDate><guid>https://hanguangwu.github.io/blog/en/p/github-repo-deep-learning-based-image-compression/</guid><description>&lt;h1 id="awesome-public-datasets"&gt;Deep-Learning-Based Image Compression
&lt;/h1&gt;&lt;h2 id="introduction"&gt;Introduction
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/ppingzhang/Deep-Learning-Based-Image-Compression" target="_blank" rel="noopener"
&gt;A curated paper list on deep-learning-based image compression&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="paper-list"&gt;Paper List
&lt;/h2&gt;&lt;h3 id="generative-compression"&gt;Generative compression
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07338.pdf" target="_blank" rel="noopener"
&gt;Rate-Distortion-Cognition Controllable Versatile Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07844.pdf" target="_blank" rel="noopener"
&gt;Lossy Image Compression with Foundation Diffusion Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05155.pdf" target="_blank" rel="noopener"
&gt;EGIC: Enhanced Low-Bit-Rate Generative Image Compression Guided by Semantic Segmentation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2507.04947" target="_blank" rel="noopener"
&gt;DC-AR: Efficient Masked Autoregressive Image Generation with Deep Compression Hybrid Tokenizer&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2506.21977" target="_blank" rel="noopener"
&gt;StableCodec: Taming One-Step Diffusion for Extreme Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://iccv.thecvf.com/virtual/2025/poster/577" target="_blank" rel="noopener"
&gt;DLF: Extreme Image Compression with Dual-generative Latent Fusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://iccv.thecvf.com/virtual/2025/poster/2681" target="_blank" rel="noopener"
&gt;Cross-Granularity Online Optimization with Masked Compensated Information for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Xu_Decouple_Distortion_from_Perception_Region_Adaptive_Diffusion_for_Extreme-low_Bitrate_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Decouple Distortion from Perception: Region Adaptive Diffusion for Extreme-low Bitrate Perception Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=xiVuqZZ59O" target="_blank" rel="noopener"
&gt;Ultra Lowrate Image Compression with Semantic Residual Coding and Compression-aware Diffusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=qi7udwV66M" target="_blank" rel="noopener"
&gt;Zero-Shot Image Compression with Diffusion-Based Posterior Sampling&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=z0hUsPhwUN" target="_blank" rel="noopener"
&gt;Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICLR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/view/33403" target="_blank" rel="noopener"
&gt;Conditional Latent Coding with Learnable Synthesized Reference for Deep Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/view/33175" target="_blank" rel="noopener"
&gt;GLIC: General Format Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.10185v3" target="_blank" rel="noopener"
&gt;Efficient Progressive Image Compression with Variance-aware Masking&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="None" &gt;UniMIC: Towards Universal Multi-modality Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.10935v2" target="_blank" rel="noopener"
&gt;Progressive Compression with Universally Quantized Diffusion Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.11379v1" target="_blank" rel="noopener"
&gt;Controllable Distortion-Perception Tradeoff Through Latent Diffusion for Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.19651" target="_blank" rel="noopener"
&gt;ComNeck: Bridging Compressed Image Latents and Multimodal LLMs via Universal Transform-Neck&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.12982v1" target="_blank" rel="noopener"
&gt;Stable Diffusion is a Natural Cross-Modal Decoder for Layered AI-generated Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2407.12538v1" target="_blank" rel="noopener"
&gt;Linearly transformed color guide for low-bitrate diffusion based image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.08459" target="_blank" rel="noopener"
&gt;JPEG-LM: LLMs as Image Generators with Canonical Codec Representations&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566414&amp;amp;casa_token=4U0sgUNsxyQAAAAA:0ayUIqrQmKrwfM8v1sE67ZZaS48OiReJjRZdRqHyTlnCHI4zm_PSEqwM4QsvNI7qccQzSXg" target="_blank" rel="noopener"
&gt;Image Encryption and Compression Based on Reversed Diffusion Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.00758" target="_blank" rel="noopener"
&gt;Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10570244&amp;amp;casa_token=xkZkXmlgP3wAAAAA:DYmBBrPQf2IwWoUAF70Te7XtdfSg85ud771PVI_vkfwCbjPUTB1cGuM3k_levF40o4NmV-s" target="_blank" rel="noopener"
&gt;Machine Perception-Driven Facial Image Compression: A Layered Generative Approach&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.07723" target="_blank" rel="noopener"
&gt;Understanding is Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.17060" target="_blank" rel="noopener"
&gt;High Efficiency Image Compression for Large Visual-Language Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=nSUMQhITdd" target="_blank" rel="noopener"
&gt;Consistency Guided Diffusion Model with Neural Syntax for Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACM MM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.09896" target="_blank" rel="noopener"
&gt;Zero-Shot Image Compression with Diffusion-Based Posterior Sampling&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.12538" target="_blank" rel="noopener"
&gt;High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.12295" target="_blank" rel="noopener"
&gt;Exploiting Inter-Image Similarity Prior for Low-Bitrate Remote Sensing Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.03961" target="_blank" rel="noopener"
&gt;LDM-RSIC: Exploring Distortion Prior with Latent Diffusion Models for Remote Sensing Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2024/papers/Jia_Generative_Latent_Coding_for_Ultra-Low_Bitrate_Image_Compression_CVPR_2024_paper.pdf" target="_blank" rel="noopener"
&gt;Generative Latent Coding for Ultra-Low Bitrate Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2406.09356" target="_blank" rel="noopener"
&gt;CMC-Bench: Towards a New Paradigm of Visual Signal Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="lossless-compression"&gt;Lossless Compression
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Fitted_Neural_Lossless_Image_Compression_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Fitted Neural Lossless Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2411.12448" target="_blank" rel="noopener"
&gt;Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPS 2026&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.07704" target="_blank" rel="noopener"
&gt;SEEC: Segmentation-Assisted Multi-Entropy Models for Learned Lossless Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2412.17464" target="_blank" rel="noopener"
&gt;CALLIC: Content Adaptive Learning for Lossless Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.17814" target="_blank" rel="noopener"
&gt;Learning Lossless Compression for High Bit-Depth Volumetric Medical Image&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10647302&amp;amp;casa_token=xbYddfRSqMoAAAAA:19cLT7kxdjVYv0j84IsNlUYujos72wpW_2phbqj45fjq-mNwLktHwGzZwENu4faVl1nvkhA" target="_blank" rel="noopener"
&gt;Rate-Complexity Optimization in Lossless Neural-Based Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.00369v1" target="_blank" rel="noopener"
&gt;Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.sciencedirect.com/science/article/abs/pii/S0031320324003832" target="_blank" rel="noopener"
&gt;Hybrid-context-based multi-prior entropy modeling for learned lossless image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Pattern Recognition 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Learned_Lossless_Image_Compression_based_on_Bit_Plane_Slicing_CVPR_2024_paper.pdf" target="_blank" rel="noopener"
&gt;Learned Lossless Image Compression based on Bit Plane Slicing&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Pattern Recognition 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="variable-rate--scalable-compression"&gt;Variable Rate / Scalable Compression
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=1groaXTrKo" target="_blank" rel="noopener"
&gt;Towards Scalable Compression with Universally Quantized Diffusion Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPSW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://neurips.cc/virtual/2024/98246" target="_blank" rel="noopener"
&gt;Flexible image decoding in learned image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPSW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/1907.07875v1" target="_blank" rel="noopener"
&gt;Variable-size Symmetry-based Graph Fourier Transforms for image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2207.04324v2" target="_blank" rel="noopener"
&gt;Latent Variables Coding for Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACM MM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.00557" target="_blank" rel="noopener"
&gt;STanH: Parametric Quantization for Variable Rate Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2405.14222" target="_blank" rel="noopener"
&gt;RAQ-VAE: Rate-Adaptive Vector-Quantized Variational Autoencoder&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="quantization"&gt;Quantization
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Relic_Bridging_the_Gap_between_Gaussian_Diffusion_Models_and_Universal_Quantization_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Bridging the Gap between Gaussian Diffusion Models and Universal Quantization for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=wqN6rWwYsr" target="_blank" rel="noopener"
&gt;Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPSW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.16119v1" target="_blank" rel="noopener"
&gt;Learning Optimal Lattice Vector Quantizers for End-to-end Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10689618" target="_blank" rel="noopener"
&gt;Convolution Filter Compression via Sparse Linear Combinations of Quantized Basis&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TNNLS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.09488v1" target="_blank" rel="noopener"
&gt;Lossy Image Compression with Stochastic Quantization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.arxiv.org/pdf/2408.12691" target="_blank" rel="noopener"
&gt;Quantization-free Lossy Image Compression Using Integer Matrix Factorization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.12150" target="_blank" rel="noopener"
&gt;DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566339&amp;amp;casa_token=6MnXai1ergEAAAAA:98ttJhOF_UU12y_KPlwG0kWpI35xBScxcKz4gIbyAdOow-5pe4hasuqIPeC7nBrnavlgr7Y" target="_blank" rel="noopener"
&gt;A Quantization Loss Compensation Network for Remote Sensing Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.07548" target="_blank" rel="noopener"
&gt;Image and Video Tokenization with Binary Spherical Quantization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10531761" target="_blank" rel="noopener"
&gt;NLIC: Non-uniform Quantization based Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="entropy-model"&gt;Entropy Model
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2507.19125" target="_blank" rel="noopener"
&gt;Learned Image Compression with Hierarchical Progressive Context Modeling&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=bsnRUkVn63" target="_blank" rel="noopener"
&gt;Test-time Adaptation for Image Compression with Distribution Regularization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICLR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.05169" target="_blank" rel="noopener"
&gt;Exploring Autoregressive Vision Foundation Models for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=J28aP5HsRJ" target="_blank" rel="noopener"
&gt;Learned Image Compression Framework with Quad-Prior Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.18815" target="_blank" rel="noopener"
&gt;FlashGMM: Fast Gaussian Mixture Entropy Model for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.19320v1" target="_blank" rel="noopener"
&gt;Generalized Gaussian Model for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.12330v1" target="_blank" rel="noopener"
&gt;The Gap Between Principle and Practice of Lossy Image Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2405.09152v5" target="_blank" rel="noopener"
&gt;Group Image Compression for Dual Use of Machine and Human Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.05832v1" target="_blank" rel="noopener"
&gt;Diversify, Contextualize, and Adapt: Efficient Entropy Modeling for Neural Image Codec&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.07669" target="_blank" rel="noopener"
&gt;Delta-ICM: Entropy Modeling with Delta Function for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.04847" target="_blank" rel="noopener"
&gt;Causal Context Adjustment Loss for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=YTNN0mOPQN" target="_blank" rel="noopener"
&gt;Spatial-Temporal Context Model for Remote Sensing Imagery Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACM MM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.09983" target="_blank" rel="noopener"
&gt;WeConvene: Learned Image Compression with Wavelet-Domain Convolution and Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.11590" target="_blank" rel="noopener"
&gt;Rethinking Learned Image Compression: Context is All You Need&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.10632" target="_blank" rel="noopener"
&gt;Bidirectional Stereo Image Compression with Cross-Dimensional Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="achitecture"&gt;Achitecture
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06635.pdf" target="_blank" rel="noopener"
&gt;WeConvene: Learned Image Compression with Wavelet-Domain Convolution and Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06270.pdf" target="_blank" rel="noopener"
&gt;Region-Adaptive Transform with Segmentation Prior for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/03640.pdf" target="_blank" rel="noopener"
&gt;BaSIC: BayesNet Structure Learning for Computational Scalable Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.10366" target="_blank" rel="noopener"
&gt;Efficient Learned Image Compression Through Knowledge Distillation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://iccv.thecvf.com/virtual/2025/poster/2181" target="_blank" rel="noopener"
&gt;Cassic: Towards Content-Adaptive State-Space Models for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Xu_PICD_Versatile_Perceptual_Image_Compression_with_Diffusion_Rendering_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;PICD: Versatile Perceptual Image Compression with Diffusion Rendering&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Zeng_MambaIC_State_Space_Models_for_High-Performance_Learned_Image_Compression_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;MambaIC: State Space Models for High-Performance Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=gIrVoQEDQv" target="_blank" rel="noopener"
&gt;Unraveling Neural Cellular Automata for Lightweight Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=Tv36j85SqR" target="_blank" rel="noopener"
&gt;Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICLR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.18494v1" target="_blank" rel="noopener"
&gt;Learning Optimal Linear Block Transform by Rate Distortion Minimization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.15752v1" target="_blank" rel="noopener"
&gt;Sparse Point Clouds Assisted Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.13751v1" target="_blank" rel="noopener"
&gt;On Disentangled Training for Nonlinear Transform in Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.12191" target="_blank" rel="noopener"
&gt;Test-time adaptation for image compression with distribution regularization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.18730" target="_blank" rel="noopener"
&gt;Effectiveness of learning-based image codecs on fingerprint storage&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2410.02981" target="_blank" rel="noopener"
&gt;GABIC: Graph-Based Attention Block for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.17134" target="_blank" rel="noopener"
&gt;Streaming Neural Images&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.arxiv.org/pdf/2408.03842" target="_blank" rel="noopener"
&gt;Bi-Level Spatial and Channel-aware Transformer for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10743248&amp;amp;casa_token=uVcLjjVsiIAAAAAA:umWqK3-lWEAaYZLS6bGRwU83D_HltSVBFOPPF547AAOr-fKWKk4cWWscip13hDKI1ZYlPoc" target="_blank" rel="noopener"
&gt;Extreme Low Bitrate Image Compression System for Mobile Deployment&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;MMSP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.14090" target="_blank" rel="noopener"
&gt;Window-based Channel Attention for Wavelet-enhanced Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10647907&amp;amp;casa_token=_xL4m5ekrn0AAAAA:c7C1H9icT_KyIsjmgCz2uuikwvp8ukPivv5cDm_3V5nCspElz4BQXWWPxnrtmZmGv4pYddY" target="_blank" rel="noopener"
&gt;Feature Enhanced Learning Image Compression With Recurrent Criss-Cross Attention&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.17073" target="_blank" rel="noopener"
&gt;Approximately Invertible Neural Network for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.14127" target="_blank" rel="noopener"
&gt;Rate-Distortion-Perception Controllable Joint Source-Channel Coding for High-Fidelity Generative Communications&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10648236/authors#authors" target="_blank" rel="noopener"
&gt;Structured Pruning and Quantization for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566341&amp;amp;casa_token=5IwoTIplk3sAAAAA:qmSZUREE9iZFM3FtnOzIscEwUAonnBfKeBw8tRob7l35ZWuRRaxxcKx68NXw8vRraaBVmrU" target="_blank" rel="noopener"
&gt;Practical Learned Image Compression with Online Encoder Optimization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.10361" target="_blank" rel="noopener"
&gt;On Efficient Neural Network Architectures for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.13709" target="_blank" rel="noopener"
&gt;A Study on the Effect of Color Spaces in Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10558571&amp;amp;casa_token=7OHwnFHkwDUAAAAA:fZ9rVL-B_QI8BT4AWEJkS8-M07rg9VWUxSY3Z1MBlWqoNQtpc4l9wDjz4uchHFS2SPZErEI" target="_blank" rel="noopener"
&gt;Learning-Based Conditional Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ISCAS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10558635&amp;amp;casa_token=iR30sgfqXX0AAAAA:CygeYdTY8WGiAaUw68kNTiQAcmmiu1nSCbQ13daszhrMk4SO72ODDxLDgjAmHnlCXWRBwBs" target="_blank" rel="noopener"
&gt;Asymmetric Neural Image Compression with High-Preserving Information&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ISCAS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566428&amp;amp;casa_token=wYpGkb8wjkQAAAAA:xImfyLYnypOrxhvo6O4UHwHGsOVstRa_6jbBbmRMPdlJLMkBZsULXdcdHJ2wWnVIxkZkmsI" target="_blank" rel="noopener"
&gt;Wavelet-like Transform with Subbands Fusion in Decoupled Structure for Deep Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10559830&amp;amp;casa_token=FWJQglVJO3MAAAAA:BTaIvWu6YnP42QFsGfQak48wjhoAfmxhLVSZjJX-kgjRJ-2dH3y3tteKQn8h5-U-YCZP-IE" target="_blank" rel="noopener"
&gt;FDNet: Frequency Decomposition Network for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.09853" target="_blank" rel="noopener"
&gt;Image Compression for Machine and Human Vision with Spatial-Frequency Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.11700" target="_blank" rel="noopener"
&gt;Rate-Distortion-Cognition Controllable Versatile Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2405.15413" target="_blank" rel="noopener"
&gt;MambaVC: Learned Visual Compression with Selective State Spaces&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="screen-content-image"&gt;Screen Content Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06526.pdf" target="_blank" rel="noopener"
&gt;Learned HDR Image Compression for Perceptually Optimal Storage and Display&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ijcai.org/proceedings/2024/0134.pdf" target="_blank" rel="noopener"
&gt;Efficient Screen Content Image Compression via Superpixel-based Content Aggregation and Dynamic Feature Fusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;IJCAI 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10577165&amp;amp;casa_token=ddUlyV468d4AAAAA:Ep5T9S4nD7zCZWS-ml46aRYuuKqAYMW518K3gLntWQ7GDCjuPpxRY5M7B7UtF42qZ_KiiuU&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;DSCIC: Deep Screen Content Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="hdr-image"&gt;HDR Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2407.13179v1" target="_blank" rel="noopener"
&gt;Breaking Boundaries: Unifying Imaging and Compression for HDR Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TIP 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.13179" target="_blank" rel="noopener"
&gt;Learned HDR Image Compression for Perceptually Optimal Storage and Display&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="image-coding-for-machine-vision"&gt;Image coding for machine vision
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06823.pdf" target="_blank" rel="noopener"
&gt;Image Compression for Machine and Human Vision With Spatial-Frequency Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/09009.pdf" target="_blank" rel="noopener"
&gt;A Unified Image Compression Method for Human Perception and Multiple Vision Tasks&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://dl.acm.org/doi/10.1145/3708347" target="_blank" rel="noopener"
&gt;Neural Image Compression with Regional Decoding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ToMM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2402.08862v1" target="_blank" rel="noopener"
&gt;Saliency Segmentation Oriented Deep Image Compression With Novel Bit Allocation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="None" &gt;LL-ICM: Image Compression for Low-level Machine Vision via Large Vision-Language Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2310.09382v1" target="_blank" rel="noopener"
&gt;Task-Adapted Learnable Embedded Quantization for Scalable Human-Machine Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.04329v1" target="_blank" rel="noopener"
&gt;An Efficient Adaptive Compression Method for Human Perception and Machine Vision Tasks&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.04579v1" target="_blank" rel="noopener"
&gt;Unified Coding for Both Human Perception and Generalized Machine Analytics with CLIP Supervision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.19660v1" target="_blank" rel="noopener"
&gt;All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.08575" target="_blank" rel="noopener"
&gt;Tell Codec What Worth Compressing: Semantically Disentangled Image Coding for Machine with LMMs&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.07028" target="_blank" rel="noopener"
&gt;Feature-Preserving Rate-Distortion Optimization in Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="" &gt;Group Image Compression for Dual Use of Machine and Human Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.07028" target="_blank" rel="noopener"
&gt;Feature-Preserving Rate-Distortion Optimization in Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;MMSP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10743309&amp;amp;casa_token=zKA0n7bsqFUAAAAA:HAwTji45HCcml__D27xCp29vhfB8Im2TXKbHm29ObXI80UW3kiaW4ckTorJJC7p1cZGUS5Y" target="_blank" rel="noopener"
&gt;Compression of Self-Supervised Representations for Machine Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10647464&amp;amp;casa_token=____eHFo8BMAAAAA:U-jtu0xTn0RWA80FDfNvfith5yJz0sdvRTl5UhTQBhG_J874g9eNBXllfFgFRByMqDnY1zI&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;Learned Image Compression for Both Humans and Machines via Dynamic Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10648033?casa_token=H_-iMbpng6oAAAAA:zbDs9boDRETBQINfnLEbkz31FcWDyoORoBTCrmmlqXzN86tKR6sqdmXIAA-uHmVH1agtBxsCZw" target="_blank" rel="noopener"
&gt;Image Coding For Machine Via Analytics-Driven Appearance Redundancy Reduction&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10574324&amp;amp;casa_token=3hjufBt4DOEAAAAA:ZVH9S11WP5wB3eRmfHs02WCpHHe4_7cHo1SWnMNBuwaCoOJgkxOWk3UXhyUBlAVpCW4fgy4" target="_blank" rel="noopener"
&gt;Saliency Map-Guided End-to-End Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;SPL 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10557851&amp;amp;casa_token=Fu-eEJDIq1gAAAAA:ap6uExZfQWevfhbLwgq3NoH-Q3SR4UBhsSFF7tnnAMTTsZjDPpUz73J0dSMhwR0B0iwQgH8" target="_blank" rel="noopener"
&gt;Redundancy Removal Module for Reducing the Bitrates of Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ISCAS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="medical-image"&gt;Medical Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.09231v1" target="_blank" rel="noopener"
&gt;Versatile Volumetric Medical Image Coding for Human-Machine Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2405.16850" target="_blank" rel="noopener"
&gt;UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="implicit-neural-representation"&gt;Implicit Neural Representation
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=9u5hPIcr6j" target="_blank" rel="noopener"
&gt;LotteryCodec: Searching the Implicit Representation in a Random Network for Low-Complexity Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.18748" target="_blank" rel="noopener"
&gt;HyperCool: Reducing Encoding Cost in Overfitted Codecs with Hypernetworks&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10647328" target="_blank" rel="noopener"
&gt;Redefining Visual Quality: The Impact of Loss Functions on INR-Based Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10647328?casa_token=4zOGbEd8ye4AAAAA:HK-ntiQYpO25P-fk_Dob31eeKFZOJ4CFqwOTT5ZaivzBkAUTfcXvoLWxHeaPhoH6K2_BtZHF-A" target="_blank" rel="noopener"
&gt;Implicit Neural Image Field for Biological Microscopy Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="panoramicstereo-image"&gt;Panoramic/stereo Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="eccv2024.ecva.net//virtual/2024/poster/1797" &gt;Bidirectional Stereo Image Compression with Cross-Dimensional Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10721338/authors#authors" target="_blank" rel="noopener"
&gt;Learning Content-Weighted Pseudocylindrical Representation for 360° Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="benchmark--dataset--survey"&gt;Benchmark &amp;amp; Dataset &amp;amp; Survey
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/document/10807668/" target="_blank" rel="noopener"
&gt;JPEG AI: The First International Standard for Image Coding Based on an End-to-End Learning-Based Approach&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;IEEE MultiMedia 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3664647.3685519" target="_blank" rel="noopener"
&gt;OpenDIC: An Open-Source Library and Performance Evaluation for Deep-learning-based Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACMMM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="others"&gt;Others
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2507.17221" target="_blank" rel="noopener"
&gt;Dataset Distillation as Data Compression: A Rate-Utility Perspective&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Balanced_Rate-Distortion_Optimization_in_Learned_Image_Compression_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Balanced Rate-Distortion Optimization in Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=olzs3zVsE7" target="_blank" rel="noopener"
&gt;Privacy-Shielded Image Compression: Defending Against Exploitation from Vision-Language Pretrained Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=ialr09SfeJ" target="_blank" rel="noopener"
&gt;Synonymous Variational Inference for Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/view/33111/35266" target="_blank" rel="noopener"
&gt;CAMSIC: Content-aware Masked Image Modeling Transformer for Stereo Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.01646v1" target="_blank" rel="noopener"
&gt;Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.06810v1" target="_blank" rel="noopener"
&gt;JPEG AI Image Compression Visual Artifacts: Detection Methods and Dataset&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.16727v2" target="_blank" rel="noopener"
&gt;An Information-Theoretic Regularizer for Lossy Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.10650v1" target="_blank" rel="noopener"
&gt;Deep Learning-Based Image Compression for Wireless Communications: Impacts on Reliability, Throughput, and Latency&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/document/10814661/" target="_blank" rel="noopener"
&gt;HNR-ISC: Hybrid Neural Representation for Image Set Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TMM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.03261v1" target="_blank" rel="noopener"
&gt;Is JPEG AI going to change image forensics?&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.nowpublishers.com/article/OpenAccessDownload/SIP-20240025" target="_blank" rel="noopener"
&gt;2D Gaussian Splatting for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ATSIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.20145" target="_blank" rel="noopener"
&gt;Cross-Platform Neural Video Coding: A Case Study&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=zIrvyQdIG4" target="_blank" rel="noopener"
&gt;Gone With the Bits: Benchmarking Bias in Facial Phenotype Degradation Under Low-Rate Neural Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICMLW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.11111" target="_blank" rel="noopener"
&gt;Few-Shot Domain Adaptation for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="2024"&gt;✔2024
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(SPL 2024) &lt;strong&gt;OMR-NET: A Two-Stage Octave Multi-Scale Residual Network for Screen Content Image Compression&lt;/strong&gt; Jiang S, Ren T, Fu C, et al. &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10552293&amp;amp;casa_token=HZozj0vMXvkAAAAA:_7rf8zPrb-WjgI1-i9BoraOqIEMGQdTWcvj2NUfc-3GEtogq1VavMVzi2kKx8yF3hrNoAX6lfg&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TPAMI 2024) &lt;strong&gt;I2C: Invertible Continuous Codec for High-Fidelity Variable-Rate Image Compression&lt;/strong&gt; Cai, Shilv and Chen, Liqun and Zhang, Zhijun and Zhao, Xiangyun and Zhou, Jiahuan and Peng, Yuxin and Yan, Luxin and Zhong, Sheng and Zou, Xu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10411123" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Leveraging Redundancy in Feature for Efficient Learned Image Compression&lt;/strong&gt; Qin, Peng and Bao, Youneng and Meng, Fanyang and Tan, Wen and Li, Chao and Wang, Genhong and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10447424" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Rate-Quality Based Rate Control Model for Neural Video Compression&lt;/strong&gt; Liao, Shuhong and Jia, Chuanmin and Fan, Hongfei and Yan, Jingwen and Ma, Siwei &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10447777" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression&lt;/strong&gt; Zhi, Cao and Youneng, Bao and Fanyang, Meng and Chao, Li and Wen, Tan and Genhong, Wang and Yongsheng, Liang &lt;a class="link" href="https://arxiv.org/pdf/2403.06700v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2024) &lt;strong&gt;Make Lossy Compression Meaningful for Low-Light Images&lt;/strong&gt; Cai, Shilv and Chen, Liqun and Zhong, Sheng and Yan, Luxin and Zhou, Jiahuan and Zou, Xu &lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/download/28664/29289" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2024) &lt;strong&gt;End-to-End RGB-D Image Compression via Exploiting Channel-Modality Redundancy&lt;/strong&gt; Zheng, Huiming and Gao, Wei &lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/download/28588/29143" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2024) &lt;strong&gt;Towards Backward-Compatible Continual Learning of Image Compression&lt;/strong&gt; Duan, Zhihao and Lu, Ming and Yang, Justin and He, Jiangpeng and Ma, Zhan and Zhu, Fengqing &lt;a class="link" href="https://arxiv.org/pdf/2402.18862v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2023) &lt;strong&gt;Compression with Bayesian Implicit Neural Representations&lt;/strong&gt; Guo, Zongyu and Flamich, Gergely and He, Jiajun and Chen, Zhibo and Hernández-Lobato, José Miguel &lt;a class="link" href="https://arxiv.org/pdf/2305.19185.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2024) &lt;strong&gt;Bilateral Context Modeling for Residual Coding in Lossless 3D Medical Image Compression&lt;/strong&gt; Liu, Xiangrui and Wang, Meng and Wang, Shiqi and Kwong, Sam &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10478821" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TMM 2024) &lt;strong&gt;Neural Network Coding of Difference Updates for Efficient Distributed Learning Communication&lt;/strong&gt; Sheng, Xihua and Li, Li and Liu, Dong and Li, Houqiang &lt;a class="link" href="https://arxiv.org/pdf/2401.15864.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;FICNet: An End to End Network for Free-view Image Coding&lt;/strong&gt; Yang, Chunhui and Yang, Jiayu and Zhai, Yongqi and Wang, Ronggang &lt;a class="link" href="https://ieeexplore.ieee.org/document/10504389?denied=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;GroupedMixer: An Entropy Model with Group-wise Token-Mixers for Learned Image Compression&lt;/strong&gt; Li, Daxin and Bai, Yuanchao and Wang, Kai and Jiang, Junjun and Liu, Xianming and Gao, Wen &lt;a class="link" href="https://arxiv.org/pdf/2405.01170" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;Multirate Progressive Entropy Model for Learned Image Compression&lt;/strong&gt; Li, Chao and Yin, Shanzhi and Jia, Chuanmin and Meng, Fanyang and Tian, Yonghong and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/document/10471618" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;EUICN: An Efficient Underwater Image Compression Network&lt;/strong&gt; Li, Mengyao and Shen, Liquan and Hua, Xia and Tian, Zhaoyi &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10445326" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;Rate-Distortion Optimized Cross Modal Compression with Multiple Domains&lt;/strong&gt; Gao, Junlong and Jia, Chuanmin and Huang, Zhimeng and Wang, Shanshe and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10430161" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ToMM 2024) &lt;strong&gt;Perceptual Quality-Oriented Rate Allocation via Distillation from End-to-End Image Compression&lt;/strong&gt; Yang, Runyu and Liu, Dong and Ma, Siwei and Wu, Feng and Gao, Wen &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3650034" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TGRS 2024) &lt;strong&gt;Remote Sensing Image Compression Based on High-Frequency and Low-Frequency Components&lt;/strong&gt; Xiang, Shao and Liang, Qiaokang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10379598&amp;amp;casa_token=8o7Rvla9bkIAAAAA:BdM70h2rnznpm8AjLpmF2OaaY4LOyj96msdVfnJyaYeQ-EVVWgoAz8YSFYoxbq2tG6L95AQr" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2024) &lt;strong&gt;Neural Image Compression Using Masked Sparse Visual Representation&lt;/strong&gt; Jiang, Wei and Wang, Wei and Chen, Yue &lt;a class="link" href="https://arxiv.org/pdf/2309.11661.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(PCS 2024) &lt;strong&gt;CoCliCo: Extremely low bitrate image compression based on CLIP semantic and tiny color map&lt;/strong&gt; Bachard, Tom and Bordin, Tom and Maugey, Thomas &lt;a class="link" href="https://inria.hal.science/hal-04478601/file/PCS_2024-2-1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(IEVC 2024) &lt;strong&gt;The Effect of Edge Information in Stable Diffusion Applied to Image Coding&lt;/strong&gt; Watanabe, Hiroshi and Chujoh, Takeshi and Fan, Zheming and Jin, Luoxu and Yasugi, Yukinobu and Ikai, Tomohiro and Hayami, Taiga and Hong, Sujun &lt;a class="link" href="https://www.ams.giti.waseda.ac.jp/data/pdf-files/2024IEVC_LBP-15.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(SPL 2024) &lt;strong&gt;Enhancing High-Resolution Image Compression Through Local-Global Joint Attention Mechanism&lt;/strong&gt; Jiang, Zeyu and Liu, Xiaohong and Li, Aini and Wang, Guangyu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10487886" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(SPL 2024) &lt;strong&gt;Learning-Based Image Compression With Parameter-Adaptive Rate-Constrained Loss&lt;/strong&gt; Guerin, Nilson D and da Silva, Renam Castro and Macchiavello, Bruno &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10487041?casa_token=knUB41_TmBsAAAAA:a-OvI58YlhHCqICs5ondcAnowi-IGX2nx0TgWqjjp_VfILwGajk6aEbDfqpUAqvF6--XxzsqGQ" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Fine color guidance in diffusion models and its application to image compression at extremely low bitrates&lt;/strong&gt; Bordin, Tom and Maugey, Thomas &lt;a class="link" href="https://ieeexplore.ieee.org/document/10445837" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;On the Adversarial Robustness of Learning-based Image Compression Against Rate-Distortion Attacks&lt;/strong&gt; Wu, Chenhao and Wu, Qingbo and Wei, Haoran and Chen, Shuai and Wang, Lei and Ngan, King Ngi and Meng, Fanman and Li, Hongliang &lt;a class="link" href="https://arxiv.org/pdf/2405.07717" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Scalable Image Coding for Humans and Machines Using Feature Fusion Network&lt;/strong&gt; Li, Junhui and Hou, Xingsong &lt;a class="link" href="https://arxiv.org/pdf/2405.09152" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Towards Task-Compatible Compressible Representations&lt;/strong&gt; de Andrade, Anderson and Bajić, Ivan &lt;a class="link" href="https://arxiv.org/pdf/2405.10244" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Enhancing Perception Quality in Remote Sensing Image Compression via Invertible Neural Network&lt;/strong&gt; Li, Junhui and Hou, Xingsong &lt;a class="link" href="https://arxiv.org/pdf/2405.10518" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;NLIC: Non-uniform Quantization based Learned Image Compression&lt;/strong&gt; Ge, Ziqing and Ma, Siwei and Gao, Wen and Pan, Jingshan and Jia, Chuanmin &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10531761" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Domain Adaptation for Learned Image Compression with Supervised Adapters&lt;/strong&gt; Presta, Alberto and Spadaro, Gabriele and Tartaglione, Enzo and Fiandrotti, Attilio and Grangetto, Marco &lt;a class="link" href="https://arxiv.org/pdf/2404.15591" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;2D Gaussian Splatting for Image Compression&lt;/strong&gt; Zhang, Pingping and Liu, Xiangrui and Wang, Meng and Wang, Shiqi and Kwong, Sam &lt;a class="link" href="https://github.com/ppingzhang/2DGS_ImageCompression/blob/main/2DGS_APSIPA.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior&lt;/strong&gt; Li, Zhiyuan and Zhou, Yanhui and Wei, Hao and Ge, Chenyang and Jiang, Jingwen &lt;a class="link" href="https://arxiv.org/pdf/2404.18820" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;S2LIC: Learned Image Compression with the SwinV2 Block, Adaptive Channel-wise and Global-inter Attention Context&lt;/strong&gt; Wang, Yongqiang and Liang, Feng and Liang, Jie and Fu, Haisheng &lt;a class="link" href="https://arxiv.org/pdf/2403.14471.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Lossy Image Compression with Foundation Diffusion Models&lt;/strong&gt; Relic, Lucas and Azevedo, Roberto and Gross, Markus and Schroers, Christopher &lt;a class="link" href="https://arxiv.org/pdf/2404.08580.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder&lt;/strong&gt; Ma, Yiyang and Yang, Wenhan and Liu, Jiaying &lt;a class="link" href="https://arxiv.org/html/2404.04916v1" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Fine color guidance in diffusion models and its application to image compression at extremely low bitrates&lt;/strong&gt; Bordin, Tom and Maugey, Thomas &lt;a class="link" href="https://arxiv.org/pdf/2404.06865.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Human-Machine Collaborative Image Compression Method Based on Implicit Neural Representations&lt;/strong&gt; Li, Huanyang and Zhang, Xinfeng &lt;a class="link" href="https://arxiv.org/pdf/2112.04267.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Generative Refinement for Low Bitrate Image Coding Using Vector Quantized Residual&lt;/strong&gt; Kong, Yuzhuo and Lu, Ming and Ma, Zhan &lt;a class="link" href="https://ieeexplore.ieee.org/document/10493033?denied=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Image and Video Compression using Generative Sparse Representation with Fidelity Controls&lt;/strong&gt; Jiang, Wei and Wang, Wei &lt;a class="link" href="https://arxiv.org/pdf/2404.06076.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Content-aware Masked Image Modeling Transformer for Stereo Image Compression&lt;/strong&gt; Zhang, Xinjie and Gao, Shenyuan and Liu, Zhening and Ge, Xingtong and He, Dailan and Xu, Tongda and Wang, Yan and Zhang, Jun &lt;a class="link" href="https://arxiv.org/pdf/2403.08505v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Super-High-Fidelity Image Compression via Hierarchical-ROI and Adaptive Quantization&lt;/strong&gt; Luo, Jixiang and Wang, Yan and Qin, Hongwei &lt;a class="link" href="https://arxiv.org/pdf/2403.13030.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Theoretical Bound-Guided Hierarchical VAE for Neural Image Codecs&lt;/strong&gt; Zhang, Yichi and Duan, Zhihao and Huang, Yuning and Zhu, Fengqing &lt;a class="link" href="https://arxiv.org/pdf/2403.18535v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Unifying Generation and Compression: Ultra-low bitrate Image Coding Via Multi-stage Transformer&lt;/strong&gt; Xue, Naifu and Mao, Qi and Wang, Zijian and Zhang, Yuan and Ma, Siwei &lt;a class="link" href="https://arxiv.org/pdf/2403.03736.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Enhancing the Rate-Distortion-Perception Flexibility of Learned Image Codecs with Conditional Diffusion Decoders&lt;/strong&gt; Mari, Daniele and Milani, Simone &lt;a class="link" href="https://arxiv.org/pdf/2403.02887v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Channel-wise Feature Decorrelation for Enhanced Learned Image Compression&lt;/strong&gt; Pakdaman, Farhad and Gabbouj, Moncef &lt;a class="link" href="https://arxiv.org/pdf/2403.10936.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Overfitted image coding at reduced complexity&lt;/strong&gt; Blard, Théophile and Ladune, Théo and Philippe, Pierrick and Clare, Gordon and Jiang, Xiaoran and Déforges, Olivier &lt;a class="link" href="https://arxiv.org/pdf/2403.11651v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity&lt;/strong&gt; Lee, Hagyeong and Kim, Minkyu and Kim, Jun-Hyuk and Kim, Seungeon and Oh, Dokwan and Lee, Jaeho &lt;a class="link" href="https://arxiv.org/pdf/2403.02944.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Transformer-based Learned Image Compression for Joint Decoding and Denoising&lt;/strong&gt; Chen, Yi-Hsin and Ho, Kuan-Wei and Tsai, Shiau-Rung and Lin, Guan-Hsun and Gnutti, Alessandro and Peng, Wen-Hsiao and Leonardi, Riccardo &lt;a class="link" href="https://arxiv.org/pdf/2402.12888v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Probing Image Compression For Class-Incremental Learning&lt;/strong&gt; Yang, Justin and Duan, Zhihao and Peng, Andrew and Huang, Yuning and He, Jiangpeng and Zhu, Fengqing&lt;a class="link" href="https://arxiv.org/pdf/2403.06288.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Variable-Rate Learned Image Compression with Multi-Objective Optimization and Quantization-Reconstruction Offsets&lt;/strong&gt; Kamisli, Fatih and Racape, Fabien and Choi, Hyomin &lt;a class="link" href="https://arxiv.org/pdf/2402.18930v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Exploration of Learned Lifting-Based Transform Structures for Fully Scalable and Accessible Wavelet-Like Image Compression&lt;/strong&gt; Li, Xinyue and Naman, Aous and Taubman, David &lt;a class="link" href="https://arxiv.org/pdf/2402.18761v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Powerful Lossy Compression for Noisy Images&lt;/strong&gt; Cai, Shilv and Liang, Xiaoguo and Cao, Shuning and Yan, Luxin and Zhong, Sheng and Chen, Liqun and Zou, Xu &lt;a class="link" href="https://arxiv.org/pdf/2403.14135v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression&lt;/strong&gt; Cao, Zhi and Bao, Youneng and Meng, Fanyang and Li, Chao and Tan, Wen and Wang, Genhong and Liang, Yongsheng &lt;a class="link" href="https://arxiv.org/pdf/2403.06700v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Image Coding for Machines with Edge Information Learning Using Segment Anything&lt;/strong&gt; Shindo, Takahiro and Yamada, Kein and Watanabe, Taiju and Watanabe, Hiroshi &lt;a class="link" href="https://arxiv.org/pdf/2403.04173v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Resilience of Entropy Model in Distributed Neural Networks&lt;/strong&gt; Zhang, Milin and Abdi, Mohammad and Rifat, Shahriar and Restuccia, Francesco &lt;a class="link" href="https://arxiv.org/pdf/2403.00942v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;GaussianImage: 1000 FPS Image Representation and Compression by 2D Gaussian Splatting&lt;/strong&gt; Zhang, Xinjie and Ge, Xingtong and Xu, Tongda and He, Dailan and Wang, Yan and Qin, Hongwei and Lu, Guo and Geng, Jing and Zhang, Jun &lt;a class="link" href="https://arxiv.org/pdf/2403.08551v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Wavelet-Like Transform-Based Technology in Response to the Call for Proposals on Neural Network-Based Image Coding&lt;/strong&gt; Dong, Cunhui and Ma, Haichuan and Zhang, Haotian and Gao, Changsheng and Li, Li and Liu, Dong &lt;a class="link" href="https://arxiv.org/pdf/2403.05937v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Region-Adaptive Transform with Segmentation Prior for Image Compression&lt;/strong&gt; Liu, Yuxi and Yang, Wenhan and Bai, Huihui and Wei, Yunchao and Zhao, Yao &lt;a class="link" href="https://arxiv.org/pdf/2403.00628.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Learned Image Compression with Text Quality Enhancement&lt;/strong&gt; Lai, Chih-Yu and Tran, Dung and Koishida, Kazuhito &lt;a class="link" href="https://arxiv.org/pdf/2402.08643.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;End-to-End Optimized Image Compression with the Frequency-Oriented Transform&lt;/strong&gt; Zhang, Yuefeng and Lin, Kai &lt;a class="link" href="https://arxiv.org/pdf/2401.08194.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Learned Image Compression with ROI-Weighted Distortion and Bit Allocation&lt;/strong&gt; Jiang, Wei and Zhai, Yongqi and Li, Hangyu and Wang, Ronggang &lt;a class="link" href="https://arxiv.org/pdf/2401.08154.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression&lt;/strong&gt; Li, Daxin and Bai, Yuanchao and Wang, Kai and Jiang, Junjun and Liu, Xianming &lt;a class="link" href="https://arxiv.org/pdf/2401.14007.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;FLLIC: Functionally Lossless Image Compression&lt;/strong&gt; Zhang, Xi and Wu, Xiaolin &lt;a class="link" href="https://arxiv.org/pdf/2401.13616.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Fast Implicit Neural Representation Image Codec in Resource-limited Devices&lt;/strong&gt; Liu, Xiang and Chen, Jiahong and Chen, Bin and Liu, Zimo and An, Baoyi and Xia, Shu-Tao &lt;a class="link" href="https://arxiv.org/pdf/2401.12587.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2024) &lt;strong&gt;Robustly overfitting latents for flexible neural image compression&lt;/strong&gt; Perugachi-Diaz, Yura and Gansekoele, Arwin and Bhulai, Sandjai &lt;a class="link" href="https://arxiv.org/pdf/2401.17789.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Saliency-aware End-to-end Learned Variable-Bitrate 360-degree Image Compression&lt;/strong&gt; Gungordu, Oguzhan and Tekalp, A Murat &lt;a class="link" href="https://arxiv.org/pdf/2402.08862.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Joint End-to-End Image Compression and Denoising: Leveraging Contrastive Learning and Multi-Scale Self-ONNs&lt;/strong&gt; Xie, Yuxin and Yu, Li and Pakdaman, Farhad and Gabbouj, Moncef &lt;a class="link" href="https://arxiv.org/pdf/2402.05582.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Learned Compression of Encoding Distributions&lt;/strong&gt; Ulhaq, Mateen and Bajić, Ivan V &lt;a class="link" href="https://www.sfu.ca/~mulhaq/assets/pdf/2024-icip-learned-compression-of-encoding-distributions.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Flexible Coding Order for Learned Image Compression&lt;/strong&gt; Li, Yuqi and Zhang, Haotian and Liu, Dong &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10402631" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Variable-rate Learned Image Compression with Adaptive Quantization Step Size&lt;/strong&gt; Mei, Feihong and Li, Li and Liu, Dong &lt;a class="link" href="https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&amp;amp;arnumber=10402767&amp;amp;ref=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Learned Progressive Image Compression With Spatial Autoregression&lt;/strong&gt; Li, Hangyu and Jiang, Wei and Li, Litian and Zhai, Yongqi and Wang, Ronggang &lt;a class="link" href="https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&amp;amp;arnumber=10402651&amp;amp;ref=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Hybrid Implicit Neural Image Compression with Subpixel Context Model and Iterative Pruner&lt;/strong&gt; Tian, Wenxin and Li, Shaohui and Dai, Wenrui and Lu, Cewu and Hu, Weisheng and Zhang, Lin and Du, Junfeng and Xiong, Hongkai &lt;a class="link" href="https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&amp;amp;arnumber=10402791&amp;amp;ref=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2023"&gt;✔2023
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2023) &lt;strong&gt;Towards efficient image compression without autoregressive models&lt;/strong&gt; Ali, Muhammad Salman and Kim, Yeongwoong and Qamar, Maryam and Lim, Sung-Chang and Kim, Donghyun and Zhang, Chaoning and Bae, Sung-Ho and Kim, Hui Yong &lt;a class="link" href="https://openreview.net/pdf?id=1ihGy9vAIg" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2023) &lt;strong&gt;LUT-LIC: Look-Up Table-Assisted Learned Image Compression&lt;/strong&gt; Yu, SeungEun and Lee, Jong-Seok &lt;a class="link" href="https://link.springer.com/chapter/10.1007/978-981-99-8148-9_34" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach&lt;/strong&gt; Guo, Sha and Chen, Zhuo and Zhao, Yang and Zhang, Ning and Li, Xiaotong and Duan, Lingyu &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3611851?casa_token=mNmCMwSt2NcAAAAA:pYJtS3-8nkQdv-d0hp5N3OptJqtnjFcfBNOohVR0SqCbdP9mF4tFuAZEN5_WiTkVaxttfYUdfyqJHw" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;Nif: A fast implicit image compression with bottleneck layers and modulated sinusoidal activations&lt;/strong&gt; Catania, Lorenzo and Allegra, Dario &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3613834" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;Lambda-Domain Rate Control for Neural Image Compression&lt;/strong&gt; Xue, Naifu and Zhang, Yuan &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3595916.3626372?casa_token=ZQoUWGi2J6UAAAAA:3NWoCPBC-hhmWmMgcu3uPf_UFg0eSN3fLoeBi_8S0GKRJaW78mnXjkxBesKBwfe30nzHI0PEXGfAVQ" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;MLIC: Multi-Reference Entropy Model for Learned Image Compression&lt;/strong&gt; Jiang, Wei and Yang, Jiayu and Zhai, Yongqi and Ning, Peirong and Gao, Feng and Wang, Ronggang &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3611694" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;ELFIC: A Learning-based Flexible Image Codec with Rate-Distortion-Complexity Optimization&lt;/strong&gt; Zhang, Zhichen and Chen, Bolin and Lin, Hongbin and Lin, Jielian and Wang, Xu and Zhao, Tiesong &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3612540" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;ICMH-Net: Neural Image Compression Towards both Machine Vision and Human Vision&lt;/strong&gt; Liu, Lei and Hu, Zhihao and Chen, Zhenghao and Xu, Dong &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3612041?casa_token=S1tEOBghRlUAAAAA:3QJByYZssGAMLB6Yloy9eCwEEkI7RrZQ_kuaJfIjBCaWH45RJomJC4uQN1StEi_UplaboXcyaEASvA" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2023) &lt;strong&gt;Learned Image Compression Using Cross-Component Attention Mechanism&lt;/strong&gt; Duan, Wenhong and Chang, Zheng and Jia, Chuanmin and Wang, Shanshe and Ma, Siwei and Song, Li and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/document/10268865/" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2023) &lt;strong&gt;Scalable Face Image Coding via StyleGAN Prior: Towards Compression for Human-Machine Collaborative Vision&lt;/strong&gt; Mao, Qi and Wang, Chongyu and Wang, Meng and Wang, Shiqi and Chen, Ruijie and Jin, Libiao and Ma, Siwei &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10372532&amp;amp;casa_token=tefNsn9cqyIAAAAA:iNI1vVcH9m8rW3GLAj-yB_6FC_eiNBGUUiIzVaAlYC7JHRxGElmSd1MdVYHKD0P-9FtPMq5aEw" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;Dec-Adapter: Exploring Efficient Decoder-Side Adapter for Bridging Screen Content and Natural Image Compression&lt;/strong&gt; Shen, Sheng and Yue, Huanjing and Yang, Jingyu &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Shen_Dec-Adapter_Exploring_Efficient_Decoder-Side_Adapter_for_Bridging_Screen_Content_and_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2023) &lt;strong&gt;Context-Based Trit-Plane Coding for Progressive Image Compression&lt;/strong&gt; Jeon, Seungmin and Choi, Kwang Pyo and Park, Youngo and Kim, Chang-Su &lt;a class="link" href="https://arxiv.org/pdf/2303.05715.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;TransTIC: Transferring Transformer-based Image Compression from Human Perception to Machine Perception&lt;/strong&gt; Chen, Yi-Hsin and Weng, Ying-Chieh and Kao, Chia-Hao and Chien, Cheng and Chiu, Wei-Chen and Peng, Wen-Hsiao &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_TransTIC_Transferring_Transformer-based_Image_Compression_from_Human_Perception_to_Machine_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TAI 2023) &lt;strong&gt;Manipulation Attacks on Learned Image Compression&lt;/strong&gt; Liu, Kang and Wu, Di and Wu, Yangyu and Wang, Yiru and Feng, Dan and Tan, Benjamin and Garg, Siddharth &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10352982&amp;amp;casa_token=7J9wZTEfvZUAAAAA:A4rT0GYrKkWQ8h1hhnQxyazt_2kunYTDE1vn73nQD5RDms-6eoJ_ZUppgHNr3WTBk143oCWW6Q" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;A Decoupled Spatial-Channel Inverted Bottleneck For Image Compression&lt;/strong&gt; Hu, Yuting and Tan, Wen and Meng, Fanyang and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222366" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;NUCQ: Non-Uniform Conditional Quantization for Learned Image Compression&lt;/strong&gt; Ge, Ziqing and Jia, Chuanmin and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222198" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;End-to-End Learning-based Image Compression with A Decoupled Framework&lt;/strong&gt; Zhang, Zhaobin and Esenlik, Semih and Wu, Yaojun and Wang, Meng and Zhang, Kai and Zhang, Li &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10247017/metrics#metrics" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Advancing the Rate-Distortion-Computation Frontier for Neural Image Compression&lt;/strong&gt; Minnen, David and Johnston, Nick &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222381" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Efficient Pruning Method for Learned Lossy Image Compression Models Based on Side Information&lt;/strong&gt; Chen, Weixuan and Yang, Qianqian &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222822" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Content-Adaptive Parallel Entropy Coding for End-to-End Image Compression&lt;/strong&gt; Li, Shujia and Wang, Dezhao and Fan, Zejia and Liu, Jiaying &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222067" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Edge-Guided Remote-Sensing Image Compression&lt;/strong&gt; Han, Pengfei and Zhao, Bin and Li, Xuelong &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10247080" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Learned Image Compression Guided Adaptive Quantization for Perceptual Quality&lt;/strong&gt; Chen, Cheng and Geng, Ruiqi and Li, Bohan and Ustarroz-Calonge, Maryla and Galligan, Frank and Han, Jingning and Xu, Yaowu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222637" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Unified Learning-Based Lossy and Lossless Jpeg Recompression&lt;/strong&gt; J. Zhang et al. &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222354" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;ULcompress: A Unified low bit-rate image Compression Framework via Invertible Image Representation&lt;/strong&gt; F. Gao, X. Deng, C. Gao and M. Xu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222242" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Learned Image Compression with Multi-Scan Based Channel Fusion&lt;/strong&gt; Y. Li, W. Zhou, P. Lu and S. -i. Kamata, &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222127" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Integer Quantized Learned Image Compression&lt;/strong&gt; G. -W. Jeon, S. Yu and J. -S. Lee &lt;a class="link" href="https://ieeexplore.ieee.org/document/10222336" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;A Decoupled Spatial-Channel Inverted Bottleneck For Image Compression&lt;/strong&gt; Hu, Yuting and Tan, Wen and Meng, Fanyang and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222381" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Learned Image Compression with Large Capacity and Low Redundancy of Latent Representation&lt;/strong&gt; Meng, Xiandong and Zhu, Shuyuan and Ma, Siwei and Zeng, Bing &lt;a class="link" href="https://ieeexplore.ieee.org/document/10222366" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;An Improved Upper Bound on the Rate-Distortion Function of Images&lt;/strong&gt; Duan, Zhihao and Ma, Jack and He, Jiangpeng and Zhu, Fengqing&lt;a class="link" href="https://arxiv.org/pdf/2309.02574.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;AICT: An Adaptive Image Compression Transformer&lt;/strong&gt; Ghorbel, Ahmed and Hamidouche, Wassim and Morin, Luce&lt;a class="link" href="https://arxiv.org/pdf/2307.06091.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2023) &lt;strong&gt;Neural Distributed Image Compression with Cross-Attention Feature Alignment&lt;/strong&gt; Mital, Nitish and Özyilkan, Ezgi and Garjani, Ali and Gündüz, Deniz &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2023/papers/Mital_Neural_Distributed_Image_Compression_With_Cross-Attention_Feature_Alignment_WACV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2023) &lt;strong&gt;Image Data Hiding in Neural Compressed Latent Representations&lt;/strong&gt; Huang, Chen-Hsiu and Wu, Ja-Ling&lt;a class="link" href="https://ieeexplore.ieee.org/document/10402627" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2023) &lt;strong&gt;EVC: Towards Real-Time Neural Image Compression with Mask Decay&lt;/strong&gt; Wang, Guo-Hua and Li, Jiahao and Li, Bin and Lu, Yan &lt;a class="link" href="https://arxiv.org/pdf/2302.05071.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2023) &lt;strong&gt;A Near Lossless Learned Image Coding Network Quantization Approach for Cross-Platform Inference&lt;/strong&gt; Hang, Xinyu and Jia, Chuanmin and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10402704&amp;amp;casa_token=SpFz9g7TeT8AAAAA:GNVUj1Qv03LvWGp3bF9iyCSr_-ZLx6-HNZM4vxYXFqs_yTFitBKet3htVPIc1LR4uKboCvnL" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2023) &lt;strong&gt;A Novel Cross-Component Context Model for End-to-End Wavelet Image Coding&lt;/strong&gt; Meyer, Anna and Kaup, André &lt;a class="link" href="https://arxiv.org/pdf/2303.05121.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;Lightweight Context Model Equipped aiWave in Response to the AVS Call for Evidence on Volumetric Medical Image Coding&lt;/strong&gt; Xue, Dongmei and Li, Li and Liu, Dong and Li, Houqiang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10453226" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;MASIC: Deep Mask Stereo Image Compression&lt;/strong&gt; Deng, Xin and Deng, Yufan and Yang, Ren and Yang, Wenzhe and Timofte, Radu and Xu, Mai &lt;a class="link" href="https://scholar.google.com/scholar_url?url=https://ieeexplore.ieee.org/iel7/76/4358651/10061473.pdf%3Fcasa_token%3DyxaR8FAUmccAAAAA:NZVDcw8yyjkyl1jR53FSSfUBKSAUxSgFwjNl6n3E3gjtklYQ7e6KLBD0sY9rtdPDj3cMxRyjb3w&amp;amp;hl=zh-CN&amp;amp;sa=T&amp;amp;oi=ucasa&amp;amp;ct=ucasa&amp;amp;ei=C7HBZbBn6NLL1g_hqbWgCw&amp;amp;scisig=AFWwaeauyyBtBEhlO7xzS3SgL_l_" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Extremely Low Bit-rate Image Compression via Invertible Image Generation&lt;/strong&gt; Gao, Fangyuan and Deng, Xin and Jing, Junpeng and Zou, Xin and Xu, Mai &lt;a class="link" href="https://ieeexplore.ieee.org/document/10256132" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Task-Switchable Pre-Processor for Image Compression for Multiple Machine Vision Tasks&lt;/strong&gt; Yang, Mingyi and Yang, Fei and Murn, Luka and Blanch, Marc Gorriz and Sock, Juil and Wan, Shuai and Yang, Fuzheng and Herranz, Luis &lt;a class="link" href="https://ieeexplore.ieee.org/document/10256132" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Rethinking semantic image compression: Scalable representation with cross-modality transfer&lt;/strong&gt; Zhang, Pingping and Wang, Shiqi and Wang, Meng and Li, Jiguo and Wang, Xu and Kwong, Sam &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10032603&amp;amp;casa_token=jUWiQNkyzn4AAAAA:sB3n5iqEj4xbTgiOrrXxsI5lbXizq0V9wxvkaZ71ik2nPah0yHZ8WzHwbkrp-URvTMuHukK3&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Facial Image Compression via Neural Image Manifold Compression&lt;/strong&gt; Yang, Wenhan and Huang, Haofeng and Liu, Jiaying and Kot, Alex C. &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10122667" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Sketch Assisted Face Image Coding for Human and Machine Vision: a Joint Training Approach.&lt;/strong&gt; Fang, Xin and Duan, Yiping and Du, Qiyuan and Tao, Xiaoming and Li, Fan &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10082973&amp;amp;casa_token=bXnEBK4JjLcAAAAA:JO0euK8CEhYZUGE70J9G-3WUZVOVeh5DkXdHQRnWQCSrgg4ybixUxy1J0tFCcYyZWvvggncp" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;COMPASS: High-Efficiency Deep Image Compression with Arbitrary-scale Spatial Scalability&lt;/strong&gt; Park, Jongmin and Lee, Jooyoung and Kim, Munchurl &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Park_COMPASS_High-Efficiency_Deep_Image_Compression_with_Arbitrary-scale_Spatial_Scalability_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;AdaNIC: Towards Practical Neural Image Compression via Dynamic Transform Routing&lt;/strong&gt; Tao, Lvfang and Gao, Wei and Li, Ge and Zhang, Chenhao &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Tao_AdaNIC_Towards_Practical_Neural_Image_Compression_via_Dynamic_Transform_Routing_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2024) &lt;strong&gt;Controlling Rate, Distortion, and Realism: Towards a Single Comprehensive Neural Image Compression Model&lt;/strong&gt; Iwai, Shoma and Miyazaki, Tomo and Omachi, Shinichiro &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2024/papers/Iwai_Controlling_Rate_Distortion_and_Realism_Towards_a_Single_Comprehensive_Neural_WACV_2024_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;EGIC: Enhanced Low-Bit-Rate Generative Image Compression Guided by Semantic Segmentation&lt;/strong&gt; Körber, Nikolai and Kromer, Eduard and Siebert, Andreas and Hauke, Sascha and Mueller-Gritschneder, Daniel &lt;a class="link" href="https://arxiv.org/pdf/2309.03244.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;A Training-Free Defense Framework for Robust Learned Image Compression&lt;/strong&gt; Song, Myungseo and Choi, Jinyoung and Han, Bohyung &lt;a class="link" href="https://arxiv.org/pdf/2401.11902.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;FFCA-Net: Stereo Image Compression via Fast Cascade Alignment of Side Information&lt;/strong&gt; Xia, Yichong and Huang, Yujun and Chen, Bin and Wang, Haoqian and Wang, Yaowei &lt;a class="link" href="https://arxiv.org/pdf/2312.16963.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Another Way to the Top: Exploit Contextual Clustering in Learned Image Coding&lt;/strong&gt; Zhang, Yichi and Duan, Zhihao and Lu, Ming and Ding, Dandan and Zhu, Fengqing and Ma, Zhan &lt;a class="link" href="https://arxiv.org/pdf/2401.11615.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Attack and Defense Analysis of Learned Image Compression&lt;/strong&gt; Zhu, Tianyu and Sun, Heming and Xiong, Xiankui and Zhu, Xuanpeng and Gong, Yong and Fan, Yibo and others &lt;a class="link" href="https://arxiv.org/pdf/2401.10345.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Fast and High-Performance Learned Image Compression With Improved Checkerboard Context Model, Deformable Residual Module, and Knowledge Distillation&lt;/strong&gt; Fu, Haisheng and Liang, Feng and Liang, Jie and Wang, Yongqiang and Zhang, Guohe and Han, Jingning &lt;a class="link" href="https://arxiv.org/pdf/2309.02529.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Multi-Context Dual Hyper-Prior Neural Image Compression&lt;/strong&gt; Khoshkhahtinat, Atefeh and Zafari, Ali and Mehta, Piyush M and Akyash, Mohammad and Kashiani, Hossein and Nasrabadi, Nasser M &lt;a class="link" href="https://arxiv.org/pdf/2309.10799.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;On Uniform Scalar Quantization for Learned Image Compression&lt;/strong&gt; Zhang, Haotian and Li, Li and Liu, Dong&lt;a class="link" href="https://arxiv.org/pdf/2309.17051.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Frequency-Aware Transformer for Learned Image Compression&lt;/strong&gt; Li, Han and Li, Shaohui and Dai, Wenrui and Li, Chenglin and Zou, Junni and Xiong, Hongkai&lt;a class="link" href="https://arxiv.org/pdf/2310.16387.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Perceptual Image Compression with Cooperative Cross-Modal Side Information&lt;/strong&gt; Qin, Shiyu and Chen, Bin and Huang, Yujun and An, Baoyi and Dai, Tao and Xia, Shu-Tao &lt;a class="link" href="https://arxiv.org/pdf/2311.13847.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Progressive Learning with Visual Prompt Tuning for Variable-Rate Image Compression&lt;/strong&gt; Qin, Shiyu and Zhou, Yimin and Wang, Jinpeng and Chen, Bin and An, Baoyi and Dai, Tao and Xia, Shu-Tao&lt;a class="link" href="https://arxiv.org/pdf/2311.17350.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Exploring the Rate-Distortion-Complexity Optimization in Neural Image Compression&lt;/strong&gt; Gao, Yixin and Feng, Runsen and Guo, Zongyu and Chen, Zhibo&lt;a class="link" href="https://arxiv.org/pdf/2305.07678.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(JVCIR 2023) &lt;strong&gt;Corner-to-Center long-range context model for efficient learned image compression&lt;/strong&gt; Sui, Yang and Ding, Ding and Pan, Xiang and Xu, Xiaozhong and Liu, Shan and Yuan, Bo and Chen, Zhenzhong &lt;a class="link" href="https://arxiv.org/pdf/2311.18103.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2022"&gt;✔2022
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(PCS 2022) &lt;strong&gt;Reducing The Amortization Gap of Entropy Bottleneck In End-to-End Image Compression&lt;/strong&gt; Balcilar, Muhammet and Damodaran, Bharath and Hellier, Pierre &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10018064&amp;amp;casa_token=T3OEyA4gC_UAAAAA:hV74ZEkQEKKE940LsRyDFRFIhIQcATSnQKZsc8mTr2UTT6jLIMAyBijHG1pTfFJG-8VxRRn7XuA" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR workshop 2022) &lt;strong&gt;Self-Supervised Variable Rate Image Compression using Visual Attention&lt;/strong&gt; Sinha, Abhishek Kumar and Moorthi, S Manthira and Dhar, Debajyoti&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Sinha_Self-Supervised_Variable_Rate_Image_Compression_Using_Visual_Attention_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR workshop 2022) &lt;strong&gt;RDONet: Rate-Distortion Optimized Learned Image Compression with Variable Depth&lt;/strong&gt; Brand, Fabian and Fischer, Kristian and Kopte, Alexander and Windsheimer, Marc and Kaup, André &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Brand_RDONet_Rate-Distortion_Optimized_Learned_Image_Compression_With_Variable_Depth_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Transformations in Learned Image Compression from Modulation Perspective&lt;/strong&gt; Bao, Youneng and Meng, Fangyang and Tan, Wen and Li, Chao and Tian, Yonghong and Liang, Yongsheng &lt;a class="link" href="https://arxiv.org/pdf/2203.02158.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Flexible Neural Image Compression via Code Editing&lt;/strong&gt; Gao, Chenjian and Xu, Tongda and He, Dailan and Qin, Hongwei and Wang, Yan &lt;a class="link" href="https://arxiv.org/pdf/2209.09244.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Attention-Based Generative Neural Image Compression on Solar Dynamics Observatory&lt;/strong&gt; Zafari, Ali and Khoshkhahtinat, Atefeh and Mehta, Piyush M and Nasrabadi, Nasser M and Thompson, Barbara J and da Silva, Daniel and Kirk, Michael SF&lt;a class="link" href="https://arxiv.org/pdf/2210.06478.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Progressive Deep Image Compression for Hybrid Contexts of Image Classification and Reconstruction&lt;/strong&gt; Lei, Zhongyue and Duan, Peng and Hong, Xuemin and Mota, João FC and Shi, Jianghong and Wang, Cheng-Xiang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9970515&amp;amp;casa_token=wr2tdLJpoSQAAAAA:yxNRSlqMzqo0libGY0kbkrP79VRTccC5BmKEzCC5ziY9shpizVudordovWx5BOFOgQSHC7dxrZs" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Universal Deep Image Compression via Content-Adaptive Optimization with Adapters&lt;/strong&gt; Tsubota, Koki and Akutsu, Hiroaki and Aizawa, Kiyoharu &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2023/papers/Tsubota_Universal_Deep_Image_Compression_via_Content-Adaptive_Optimization_With_Adapters_WACV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;User-Guided Variable Rate Learned Image Compression&lt;/strong&gt; Gupta, Rushil and BV, Suryateja and Kapoor, Nikhil and Jaiswal, Rajat and Nangi, Sharmila Reddy and Kulkarni, Kuldeep&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Gupta_User-Guided_Variable_Rate_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Adaptive Bitrate Quantization Scheme Without Codebook for Learned Image Compression&lt;/strong&gt; Löhdefink, Jonas and Sitzmann, Jonas and Bär, Andreas and Fingscheidt, Tim &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Lohdefink_Adaptive_Bitrate_Quantization_Scheme_Without_Codebook_for_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2022) &lt;strong&gt;OSLO: On-the-Sphere Learning for Omnidirectional images and its application to 360-degree image compression&lt;/strong&gt; Bidgoli, Navid Mahmoudian and Roberto, G de A and Maugey, Thomas and Roumy, Aline and Frossard, Pascal &lt;a class="link" href="https://arxiv.org/pdf/2107.09179.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2022) &lt;strong&gt;Two-Stage Octave Residual Network for End-to-End Image Compression&lt;/strong&gt; Chen, Fangdong and Xu, Yumeng and Wang, Li &lt;a class="link" href="https://scholar.google.com/scholar?hl=zh-CN&amp;amp;as_sdt=0%2C5&amp;amp;q=Two-Stage&amp;#43;Octave&amp;#43;Residual&amp;#43;Network&amp;#43;for&amp;#43;End-to-End&amp;#43;Image&amp;#43;Compression&amp;amp;btnG=#:~:text=%E5%B9%B4%E4%BB%BD-,%5BPDF%5D%20aaai.org,-Two%2DStage%20Octave" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Preprocessing Enhanced Image Compression for Machine Vision&lt;/strong&gt; Lu, Guo and Ge, Xingtong and Zhong, Tianxiong and Geng, Jing and Hu, Qiang &lt;a class="link" href="https://arxiv.org/pdf/2206.05650.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Learning-Driven Lossy Image Compression: A Comprehensive Survey&lt;/strong&gt; Jamil, Sonain and Piran, Md and others &lt;a class="link" href="https://arxiv.org/pdf/2201.09240.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Estimating the Resize Parameter in End-to-end Learned Image Compression&lt;/strong&gt; Chen, Li-Heng and Bampis, Christos G and Li, Zhi and Krasula, Lukáš and Bovik, Alan C &lt;a class="link" href="https://arxiv.org/pdf/2204.12022.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Image Compression with Product Quantized Masked Image Modeling&lt;/strong&gt; El-Nouby, Alaaeldin and Muckley, Matthew J and Ullrich, Karen and Laptev, Ivan and Verbeek, Jakob and Jégou, Hervé &lt;a class="link" href="https://arxiv.org/pdf/2212.07372.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ITJ 2022) &lt;strong&gt;Human–Machine Interaction-Oriented Image Coding for Resource-Constrained Visual Monitoring in IoT&lt;/strong&gt; Wang, Zixi and Li, Fan and Xu, Jing and Cosman, Pamela C &lt;a class="link" href="" &gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TGRS 2022) &lt;strong&gt;Towards simultaneous image compression and indexing for scalable content-based retrieval in remote sensing&lt;/strong&gt; Sumbul, Gencer and Xiang, Jun and Demir, Begüm &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9878355" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(SPI 2022) &lt;strong&gt;Rate-constrained learning-based image compression&lt;/strong&gt; &lt;a class="link" href="" &gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2022) &lt;strong&gt;Exploiting Intra-Slice and Inter-Slice Redundancy for Learning-Based Lossless Volumetric Image Compression&lt;/strong&gt; Chen, Zhenghao and Gu, Shuhang and Lu, Guo and Xu, Dong &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9694511&amp;amp;casa_token=_INFRj8nkRkAAAAA:_4VWc5Q56n7hHUi5xnIS3Yyno0YRwyVWQdEnU2XqmAV6Sv_XnG7SgBnO0DfYUnoLuNP-3iKOivk" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt; lossless&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Entroformer: A transformer-based entropy model for learned image compression&lt;/strong&gt; Qian, Yichen and Lin, Ming and Sun, Xiuyu and Tan, Zhiyu and Jin, Rong &lt;a class="link" href="https://arxiv.org/pdf/2202.05492.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Multi-Sample Training for Neural Image Compression&lt;/strong&gt; Xu, Tongda and Wang, Yan and He, Dailan and Gao, Chenjian and Gao, Han and Liu, Kunzan and Qin, Hongwei &lt;a class="link" href="https://arxiv.org/pdf/2209.13834.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding&lt;/strong&gt; He, Dailan and Yang, Ziming and Peng, Weikun and Ma, Rui and Qin, Hongwei and Wang, Yan &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/He_ELIC_Efficient_Learned_Image_Compression_With_Unevenly_Grouped_Space-Channel_Contextual_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Contextformer: A Transformer with Spatio-Channel Attention for Context Modeling in Learned Image Compression&lt;/strong&gt; Koyuncu, A Burakhan and Gao, Han and Boev, Atanas and Gaikov, Georgii and Alshina, Elena and Steinbach, Eckehard &lt;a class="link" href="https://arxiv.org/pdf/2203.02452.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Content-Oriented Learned Image Compression&lt;/strong&gt; Li, Meng and Gao, Shangyin and Feng, Yihui and Shi, Yibo and Wang, Jing &lt;a class="link" href="https://link.springer.com/content/pdf/10.1007/978-3-031-19800-7_37.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Implicit Neural Representations for Image Compression&lt;/strong&gt; Strümpler, Yannick and Postels, Janis and Yang, Ren and Gool, Luc Van and Tombari, Federico &lt;a class="link" href="https://arxiv.org/pdf/2112.04267.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Content Adaptive Latents and Decoder for Neural Image Compression&lt;/strong&gt; Pan, Guanbo and Lu, Guo and Hu, Zhihao and Xu, Dong &lt;a class="link" href="https://arxiv.org/pdf/2212.10132.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Optimizing Image Compression via Joint Learning with Denoising&lt;/strong&gt; Cheng, Ka Leong and Xie, Yueqi and Chen, Qifeng &lt;a class="link" href="https://link.springer.com/content/pdf/10.1007/978-3-031-19800-7_4.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt; denoising&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(2022) &lt;strong&gt;2C-Net: Integrate Image Compression and Classification via Deep Neural Network&lt;/strong&gt; Liu, Linfeng and Chen, Tong and Liu, Haojie and Pu, Shiliang and Wang, Li and Shen, Qiu &lt;a class="link" href="https://assets.researchsquare.com/files/rs-2049607/v1_covered.pdf?c=1663278884" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2022) &lt;strong&gt;High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation&lt;/strong&gt; Cai, Shilv and Zhang, Zhijun and Chen, Liqun and Yan, Luxin and Zhong, Sheng and Zou, Xu [&lt;a class="link" href="https://arxiv.org/pdf/2209.05054.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression&lt;/strong&gt; Bai, Yuanchao and Liu, Xianming and Wang, Kai and Ji, Xiangyang and Wu, Xiaolin and Gao, Wen [&lt;a class="link" href="https://arxiv.org/pdf/2209.04847.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2022) &lt;strong&gt;End-to-End Optimized Image Compression With Deep Gaussian Process Regression&lt;/strong&gt; Cao, Maida and Dai, Wenrui and Li, Shaohui and Li, Chenglin and Zou, Junni and Chen, Ying and Xiong, Hongkai [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9903432" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2022) &lt;strong&gt;End-to-end optimized 360° image compression&lt;/strong&gt; Li, Mu and Li, Jinxing and Gu, Shuhang and Wu, Feng and Zhang, David [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9904466" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Lossy Compression with Gaussian Diffusion&lt;/strong&gt; Theis, Lucas and Salimans, Tim and Hoffman, Matthew D and Mentzer, Fabian [&lt;a class="link" href="https://arxiv.org/pdf/2206.08889.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Joint Image Compression and Denoising via Latent-Space Scalability&lt;/strong&gt; Alvar, Saeed Ranjbar and Ulhaq, Mateen and Choi, Hyomin and Bajić, Ivan V [&lt;a class="link" href="https://arxiv.org/pdf/2205.01874.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Post-Training Quantization for Cross-Platform Learned Image Compression&lt;/strong&gt; He, Dailan and Yang, Ziming and Chen, Yuan and Zhang, Qi and Qin, Hongwei and Wang, Yan [&lt;a class="link" href="https://arxiv.org/pdf/2202.07513.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2022) &lt;strong&gt;Satellite Image Compression and Denoising With Neural Networks&lt;/strong&gt; Yin, Shanzhi and Li, Chao and Bao, Youneng and Liang, Yongsheng and Meng, Fanyang and Liu, Wei [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9747854" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2022) &lt;strong&gt;AdderIC: Towards Low Computation Cost Image Compression&lt;/strong&gt; Li, Bowen and Xin, Yao and Li, Chao and Bao, Youneng and Meng, Fanyang and Liang, Yongsheng [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9747652" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(IEEE Geoscience and Remote Sensing Letters 2022) &lt;strong&gt;Universal Efficient Variable-Rate Neural Image Compression&lt;/strong&gt; de Oliveira, Vinicius Alves and Chabert, Marie and Oberlin, Thomas and Poulliat, Charly and Bruno, Mickael and Latry, Christophe and Carlavan, Mikael and Henrot, Simon and Falzon, Frederic and Camarero, Roberto [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9690871" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;The Devil Is in the Details: Window-Based Attention for Image Compression&lt;/strong&gt; Zou, Renjie and Song, Chunfeng and Zhang, Zhaoxiang &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zou_The_Devil_Is_in_the_Details_Window-Based_Attention_for_Image_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Joint Global and Local Hierarchical Priors for Learned Image Compression&lt;/strong&gt; Kim, Jun-Hyuk and Heo, Byeongho and Lee, Jong-Seok &lt;a class="link" href="https://arxiv.org/pdf/2112.04487.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;RIDDLE: Lidar Data Compression with Range Image Deep Delta Encoding&lt;/strong&gt; Zhou, Xuanyu and Qi, Charles R and Zhou, Yin and Anguelov, Dragomir [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zhou_RIDDLE_Lidar_Data_Compression_With_Range_Image_Deep_Delta_Encoding_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Neural Data-Dependent Transform for Learned Image Compression&lt;/strong&gt; Wang, Dezhao and Yang, Wenhan and Hu, Yueyu and Liu, Jiaying [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Neural_Data-Dependent_Transform_for_Learned_Image_Compression_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Self-Supervised Variable Rate Image Compression using Visual Attention&lt;/strong&gt; Sinha, Abhishek Kumar and Moorthi, S Manthira and Dhar, Debajyoti [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Sinha_Self-Supervised_Variable_Rate_Image_Compression_Using_Visual_Attention_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;User-Guided Variable Rate Learned Image Compression&lt;/strong&gt; Gupta, Rushil and BV, Suryateja and Kapoor, Nikhil and Jaiswal, Rajat and Nangi, Sharmila Reddy and Kulkarni, Kuldeep [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Gupta_User-Guided_Variable_Rate_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;RDONet: Rate-Distortion Optimized Learned Image Compression With Variable Depth&lt;/strong&gt; Brand, Fabian and Fischer, Kristian and Kopte, Alexander and Windsheimer, Marc and Kaup, André. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Brand_RDONet_Rate-Distortion_Optimized_Learned_Image_Compression_With_Variable_Depth_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;LC-FDNet: Learned Lossless Image Compression with Frequency Decomposition Network&lt;/strong&gt; Rhee, Hochang and Jang, Yeong Il and Kim, Seyun and Cho, Nam Ik. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Rhee_LC-FDNet_Learned_Lossless_Image_Compression_With_Frequency_Decomposition_Network_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;PO-ELIC: Perception-Oriented Efficient Learned Image Coding&lt;/strong&gt; He, Dailan and Yang, Ziming and Yu, Hongjiu and Xu, Tongda and Luo, Jixiang and Chen, Yuan and Gao, Chenjian and Shi, Xinjie and Qin, Hongwei and Wang, Yan. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/He_PO-ELIC_Perception-Oriented_Efficient_Learned_Image_Coding_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Online Meta Adaptation for Variable-Rate Learned Image Compression&lt;/strong&gt; Jiang, Wei and Wang, Wei and Li, Songnan and Liu, Shan. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/NTIRE/papers/Jiang_Online_Meta_Adaptation_for_Variable-Rate_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression&lt;/strong&gt; Zhu, Xiaosu and Song, Jingkuan and Gao, Lianli and Zheng, Feng and Shen, Heng Tao. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zhu_Unified_Multivariate_Gaussian_Mixture_for_Efficient_Neural_Image_Compression_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Split Hierarchical Variational Compression&lt;/strong&gt; Ryder, Tom and Zhang, Chen and Kang, Ning and Zhang, Shifeng. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Ryder_Split_Hierarchical_Variational_Compression_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;SASIC: Stereo Image Compression With Latent Shifts and Stereo Attention&lt;/strong&gt; Wödlinger, Matthias and Kotera, Jan and Xu, Jan and Sablatnig, Robert. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Wodlinger_SASIC_Stereo_Image_Compression_With_Latent_Shifts_and_Stereo_Attention_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Deep Stereo Image Compression via Bi-directional Coding&lt;/strong&gt;, Lei, Jianjun and Liu, Xiangrui and Peng, Bo and Jin, Dengchao and Li, Wanqing and Gu, Jingxiao [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Lei_Deep_Stereo_Image_Compression_via_Bi-Directional_Coding_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2022) &lt;strong&gt;OoDHDR-Codec: Out-of-Distribution Generalization for HDR Image Compression&lt;/strong&gt;, Cao, Linfeng and Jiang, Aofan and Li, Wei and Wu, Huaying and Ye, Nanyang &lt;a class="link" href="https://www.aaai.org/AAAI22Papers/AAAI-8610.CaoL.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (HDR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression&lt;/strong&gt;, Zhu, Xiaosu and Song, Jingkuan and Gao, Lianli and Zheng, Feng and Shen, Heng Tao &lt;a class="link" href="https://arxiv.org/pdf/2203.10897.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;a class="link" href="https://github.com/xiaosu-zhu/McQuic" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Estimating the Resize Parameter in End-to-end Learned Image Compression&lt;/strong&gt;, Chen, Li-Heng and Bampis, Christos G and Li, Zhi and Krasula, Lukáš and Bovik, Alan C &lt;a class="link" href="https://arxiv.org/pdf/2204.12022.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Sa)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression&lt;/strong&gt;, Ma, Yi and Zhai, Yongqi and Wang, Ronggang &lt;a class="link" href="https://arxiv.org/pdf/2201.01173.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;(Sa)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;End-to-End Learned Block-Based Image Compression with Block-Level Masked Convolutions and Asymptotic Closed Loop Training&lt;/strong&gt;, Kamisli, Fatih &lt;a class="link" href="https://arxiv.org/pdf/2203.11686.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (T+E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Transformations in Learned Image Compression from Modulation Perspective&lt;/strong&gt;, Bao, Youneng and Meng, Fangyang and Tan, Wen and Li, Chao and Tian, Yonghong and Liang, Yongsheng &lt;a class="link" href="https://arxiv.org/pdf/2203.02158.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Identity Preserving Loss for Learned Image Compression&lt;/strong&gt;, Xiao, Jiuhong and Aggarwal, Lavisha and Banerjee, Prithviraj and Aggarwal, Manoj and Medioni, Gerard &lt;a class="link" href="https://arxiv.org/pdf/2204.10869.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation&lt;/strong&gt;, Lu, Ming and Ma, Zhan &lt;a class="link" href="https://arxiv.org/pdf/2204.11448.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Learning Weighting Map for Bit-Depth Expansion within a Rational Range&lt;/strong&gt;, Liu, Yuqing and Jia, Qi and Zhang, Jian and Fan, Xin and Wang, Shanshe and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://arxiv.org/pdf/2204.12039.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/yuqing-liu-dut/bit-depth-expansion" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Joint Image Compression and Denoising via Latent-Space Scalability&lt;/strong&gt;, Ranjbar Alvar, Saeed and Ulhaq, Mateen and Choi, Hyomin and Bajić, Ivan V &lt;a class="link" href="https://arxiv.org/pdf/2205.01874.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2021"&gt;✔2021
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(TPAMI 2021) &lt;strong&gt;Learning end-to-end lossy image compression: A benchmark&lt;/strong&gt;, Hu, Yueyu and Yang, Wenhan and Ma, Zhan and Liu, Jiaying &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9376651" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/huzi96/Coarse2Fine-PyTorch" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;(Benchmark)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(IJCV 2021) &lt;strong&gt;Semantics-to-signal scalable image compression with learned revertible representations&lt;/strong&gt;, Liu, Kang and Liu, Dong and Li, Li and Yan, Ning and Li, Houqiang &lt;a class="link" href="https://link.springer.com/content/pdf/10.1007/s11263-021-01491-7.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Scalable)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2021) &lt;strong&gt;Semantic Perceptual Image Compression With a Laplacian Pyramid of Convolutional Networks&lt;/strong&gt;, Wang, Juan and Duan, Yiping and Tao, Xiaoming and Xu, Mai and Lu, Jianhua &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9381614" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICLR 2021) &lt;strong&gt;Hierarchical Image Compression Framework&lt;/strong&gt;, Ge, Yunying and Wang, Jing and Shi, Yibo and Gao, Shangyin &lt;a class="link" href="https://openreview.net/pdf?id=8rPXT-SVgjh" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2021) &lt;strong&gt;Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform&lt;/strong&gt;, Song, Myungseo and Choi, Jinyoung and Han, Bohyung &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2021/papers/Song_Variable-Rate_Deep_Image_Compression_Through_Spatially-Adaptive_Feature_Transform_ICCV_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2021) &lt;strong&gt;Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation&lt;/strong&gt;, Cui, Ze and Wang, Jing and Gao, Shangyin and Guo, Tiansheng and Feng, Yihui and Bai, Bo &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021/papers/Cui_Asymmetric_Gained_Deep_Image_Compression_With_Continuous_Rate_Adaptation_CVPR_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/mmSir/GainedVAE" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;(VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2021) &lt;strong&gt;Checkerboard context model for efficient learned image compression&lt;/strong&gt;, He, Dailan and Zheng, Yaoyan and Sun, Baocheng and Wang, Yan and Qin, Hongwei &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021/papers/He_Checkerboard_Context_Model_for_Efficient_Learned_Image_Compression_CVPR_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/leelitian/Checkerboard-Context-Model-Pytorch" target="_blank" rel="noopener"
&gt;[code1]&lt;/a&gt; &lt;a class="link" href="https://github.com/JiangWeibeta/Checkerboard-Context-Model-for-Efficient-Learned-Image-Compression" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2021) &lt;strong&gt;Learning Scalable ℓ∞-Constrained Near-Lossless Image Compression via Joint Lossy Image and Residual Compression&lt;/strong&gt;, Bai, Yuanchao and Liu, Xianming and Zuo, Wangmeng and Wang, Yaowei and Ji, Xiangyang &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021/papers/Bai_Learning_Scalable_lY-Constrained_Near-Lossless_Image_Compression_via_Joint_Lossy_Image_CVPR_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (lossless)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;End-to-end optimized image compression with competition of prior distributions&lt;/strong&gt;, Brummer, Benoit and De Vleeschouwer, Christophe &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Brummer_End-to-End_Optimized_Image_Compression_With_Competition_of_Prior_Distributions_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/trougnouf/Manypriors" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;(E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Subjective Quality Optimized Efficient Image Compression&lt;/strong&gt;, Wang, Xining and Chen, Tong and Ma, Zhan &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Wang_Subjective_Quality_Optimized_Efficient_Image_Compression_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Variable Rate ROI Image Compression Optimized for Visual Quality&lt;/strong&gt;, Ma, Yi and Zhai, Yongqi and Yang, Chunhui and Yang, Jiayu and Wang, Ruofan and Zhou, Jing and Li, Kai and Chen, Ying and Wang, Ronggang &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Ma_Variable_Rate_ROI_Image_Compression_Optimized_for_Visual_Quality_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;(VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Image Compression with Recurrent Neural Network and Generalized Divisive Normalization&lt;/strong&gt;, Islam, Khawar and Dang, L Minh and Lee, Sujin and Moon, Hyeonjoon &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Islam_Image_Compression_With_Recurrent_Neural_Network_and_Generalized_Divisive_Normalization_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;a class="link" href="https://github.com/khawar-islam/cvpr" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;End-to-End Learned Image Compression with Augmented Normalizing Flows&lt;/strong&gt;, Ho, Yung-Han and Chan, Chih-Chun and Peng, Wen-Hsiao and Hang, Hsueh-Ming &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Islam_Image_Compression_With_Recurrent_Neural_Network_and_Generalized_Divisive_Normalization_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;a class="link" href="https://github.com/dororojames/anfic" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Learned Image Compression with Super-Resolution Residual Modules and DISTS Optimization&lt;/strong&gt;, Suzuki, Akifumi and Akutsu, Hiroaki and Naruko, Takahiro and Tsubota, Koki and Aizawa, Kiyoharu &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Suzuki_Learned_Image_Compression_With_Super-Resolution_Residual_Modules_and_DISTS_Optimization_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Perceptual Friendly Variable Rate Image Compression&lt;/strong&gt;, Gao, Yixin and Wu, Yaojun and Guo, Zongyu and Zhang, Zhizheng and Chen, Zhibo &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Gao_Perceptual_Friendly_Variable_Rate_Image_Compression_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR+Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2021) &lt;strong&gt;Saliency Driven Perceptual Image Compression&lt;/strong&gt;, Patel, Yash and Appalaraju, Srikar and Manmatha, R &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2021/papers/Patel_Saliency_Driven_Perceptual_Image_Compression_WACV_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2021) &lt;strong&gt;Causal contextual prediction for learned image compression&lt;/strong&gt;, Guo, Zongyu and Zhang, Zhizheng and Feng, Runsen and Chen, Zhibo &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9455349" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2021) &lt;strong&gt;Learned Block-based Hybrid Image Compression&lt;/strong&gt;, Wu, Yaojun and Li, Xin and Zhang, Zhizheng and Jin, Xin and Chen, Zhibo &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9455349" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (T+E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2021) &lt;strong&gt;Enhanced Invertible Encoding for Learned Image Compression&lt;/strong&gt;, Yueqi Xie, Ka Leong Cheng, Qifeng Chen &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3474085.3475213" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/xyq7/InvCompress" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt; (Invertible)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2021) &lt;strong&gt;Semantic Scalable Image Compression with Cross-Layer Priors&lt;/strong&gt;, Tu, Hanyue and Li, Li and Zhou, Wengang and Li, Houqiang &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3474085.3475533" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Scalable)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2021) &lt;strong&gt;Interpolation Variable Rate Image Compression&lt;/strong&gt;, Sun, Zhenhong and Tan, Zhiyu and Sun, Xiuyu and Zhang, Fangyi and Qian, Yichen and Li, Dongyang and Li, Hao &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3474085.3475698" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TMM 2021) &lt;strong&gt;Learned Multi-Resolution Variable-Rate Image Compression With Octave-Based Residual Blocks&lt;/strong&gt;, Akbari, Mohammad and Liang, Jie and Han, Jingning and Tu, Chengjie &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9385968&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(DCC 2021) &lt;strong&gt;Accelerate Neural Image Compression with Channel-adaptive Arithmetic Coding&lt;/strong&gt;, Guo, Zongyu and Fu, Jun and Feng, Runsen and Chen, Zhibo &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9401277" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2021) &lt;strong&gt;Graph-Convolution Network for Image Compression&lt;/strong&gt;, Yang, Chunhui and Ma, Yi and Yang, Jiayu and Liu, Shiyi and Wang, Ronggang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9506704" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(PMLR 2021) &lt;strong&gt;Soft then hard: Rethinking the quantization in neural image compression&lt;/strong&gt;, Guo, Zongyu and Zhang, Zhizheng and Feng, Runsen and Chen, Zhibo &lt;a class="link" href="http://proceedings.mlr.press/v139/guo21c/guo21c.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Learned Image Compression for Machine Perception&lt;/strong&gt;, Codevilla, Felipe and Simard, Jean Gabriel and Goroshin, Ross and Pal, Chris &lt;a class="link" href="https://arxiv.org/pdf/2111.02249.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Substitutional Neural Image Compression&lt;/strong&gt;, Wang, Xiao and Jiang, Wei and Wang, Wei and Liu, Shan and Kulis, Brian and Chin, Peter &lt;a class="link" href="https://arxiv.org/pdf/2105.07512.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;DPICT: Deep Progressive Image Compression Using Trit-Planes&lt;/strong&gt;, Lee, Jae-Han and Jeon, Seungmin and Choi, Kwang Pyo and Park, Youngo and Kim, Chang-Su &lt;a class="link" href="https://arxiv.org/pdf/2112.06334.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Implicit Neural Representations for Image Compression&lt;/strong&gt;, Strümpler, Yannick and Postels, Janis and Yang, Ren and Van Gool, Luc and Tombari, Federico &lt;a class="link" href="https://arxiv.org/pdf/2112.04267.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;A Novel Framework for Image-to-image Translation and Image Compression&lt;/strong&gt;, Yang, Fei and Wang, Yaxing and Herranz, Luis and Cheng, Yongmei and Mozerov, Mikhail &lt;a class="link" href="https://arxiv.org/pdf/2111.13105.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (I2I)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Semantic-assisted image compression&lt;/strong&gt;, Sun, Qizheng and Guo, Caili and Yang, Yang and Chen, Jiujiu and Xue, Xijun &lt;a class="link" href="https://arxiv.org/pdf/2201.12599.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;End-to-End Learned Image Compression with Quantized Weights and Activations&lt;/strong&gt;, Sun, Heming and Yu, Lu and Katto, Jiro &lt;a class="link" href="https://arxiv.org/pdf/2111.09348.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;End-to-End Image Compression with Probabilistic Decoding&lt;/strong&gt;, Ma, Haichuan and Liu, Dong and Dong, Cunhui and Li, Li and Wu, Feng &lt;a class="link" href="https://arxiv.org/pdf/2109.14837.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Towards End-to-End Image Compression and Analysis with Transformers&lt;/strong&gt;, Bai, Yuanchao and Yang, Xu and Liu, Xianming and Jiang, Junjun and Wang, Yaowei and Ji, Xiangyang and Gao, Wen &lt;a class="link" href="https://arxiv.org/pdf/2112.09300.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;A Cross Channel Context Model for Latents in Deep Image Compression&lt;/strong&gt;, Ma, Changyue and Wang, Zhao and Liao, Ruling and Ye, Yan &lt;a class="link" href="https://arxiv.org/pdf/2103.02884.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Online Meta Adaptation for Variable-Rate Learned Image Compression&lt;/strong&gt;, Wei Jiang, Wei Wang, Songnan Li, Shan Liu &lt;a class="link" href="https://arxiv.org/abs/2111.08256" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Transformer-based Image Compression&lt;/strong&gt;, Ming Lu, Peiyao Guo, Huiqing Shi, Chuntong Cao, Zhan Ma [&lt;a class="link" href="https://arxiv.org/abs/2111.06707" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2020"&gt;✔2020
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;(arXiv preprint 2020) &lt;strong&gt;Lossless Image Compression through Super-Resolution&lt;/strong&gt;, Sheng Cao, Chao-Yuan Wu, Philipp Krähenbühl [&lt;a class="link" href="https://arxiv.org/abs/2004.02872" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2019"&gt;✔2019
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(PCS 2019) &lt;strong&gt;A novel deep progressive image compression framework&lt;/strong&gt;, Cai, Chunlei and Chen, Li and Zhang, Xiaoyun and Lu, Guo and Gao, Zhiyong. &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8954500" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2019) &lt;strong&gt;Learning image and video compression through spatial-temporal energy compaction&lt;/strong&gt;, Cheng, Zhengxue and Sun, Heming and Takeuchi, Masaru and Katto, Jiro. &lt;a class="link" href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Cheng_Learning_Image_and_Video_Compression_Through_Spatial-Temporal_Energy_Compaction_CVPR_2019_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2018"&gt;✔2018
&lt;/h2&gt;&lt;hr&gt;</description></item></channel></rss>