<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>GitHub on Hanguangwu</title><link>https://hanguangwu.github.io/blog/en/categories/github/</link><description>Recent content in GitHub on Hanguangwu</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>hanguangwu</copyright><lastBuildDate>Mon, 23 Mar 2026 13:34:25 -0800</lastBuildDate><atom:link href="https://hanguangwu.github.io/blog/en/categories/github/index.xml" rel="self" type="application/rss+xml"/><item><title>GitHub Repo Deep-Learning-Based-Image-Compression</title><link>https://hanguangwu.github.io/blog/en/p/github-repo-deep-learning-based-image-compression/</link><pubDate>Mon, 23 Mar 2026 13:34:25 -0800</pubDate><guid>https://hanguangwu.github.io/blog/en/p/github-repo-deep-learning-based-image-compression/</guid><description>&lt;h1 id="awesome-public-datasets"&gt;Deep-Learning-Based Image Compression
&lt;/h1&gt;&lt;h2 id="introduction"&gt;Introduction
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/ppingzhang/Deep-Learning-Based-Image-Compression" target="_blank" rel="noopener"
&gt;A curated list of papers on deep-learning-based image compression&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="paper-list"&gt;Paper List
&lt;/h2&gt;&lt;h3 id="generative-compression"&gt;Generative compression
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07338.pdf" target="_blank" rel="noopener"
&gt;Rate-Distortion-Cognition Controllable Versatile Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07844.pdf" target="_blank" rel="noopener"
&gt;Lossy Image Compression with Foundation Diffusion Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05155.pdf" target="_blank" rel="noopener"
&gt;EGIC: Enhanced Low-Bit-Rate Generative Image Compression Guided by Semantic Segmentation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2507.04947" target="_blank" rel="noopener"
&gt;DC-AR: Efficient Masked Autoregressive Image Generation with Deep Compression Hybrid Tokenizer&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2506.21977" target="_blank" rel="noopener"
&gt;StableCodec: Taming One-Step Diffusion for Extreme Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://iccv.thecvf.com/virtual/2025/poster/577" target="_blank" rel="noopener"
&gt;DLF: Extreme Image Compression with Dual-generative Latent Fusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://iccv.thecvf.com/virtual/2025/poster/2681" target="_blank" rel="noopener"
&gt;Cross-Granularity Online Optimization with Masked Compensated Information for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Xu_Decouple_Distortion_from_Perception_Region_Adaptive_Diffusion_for_Extreme-low_Bitrate_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Decouple Distortion from Perception: Region Adaptive Diffusion for Extreme-low Bitrate Perception Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=xiVuqZZ59O" target="_blank" rel="noopener"
&gt;Ultra Lowrate Image Compression with Semantic Residual Coding and Compression-aware Diffusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=qi7udwV66M" target="_blank" rel="noopener"
&gt;Zero-Shot Image Compression with Diffusion-Based Posterior Sampling&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=z0hUsPhwUN" target="_blank" rel="noopener"
&gt;Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICLR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/view/33403" target="_blank" rel="noopener"
&gt;Conditional Latent Coding with Learnable Synthesized Reference for Deep Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/view/33175" target="_blank" rel="noopener"
&gt;GLIC: General Format Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.10185v3" target="_blank" rel="noopener"
&gt;Efficient Progressive Image Compression with Variance-aware Masking&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;UniMIC: Towards Universal Multi-modality Perceptual Image Compression&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.10935v2" target="_blank" rel="noopener"
&gt;Progressive Compression with Universally Quantized Diffusion Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.11379v1" target="_blank" rel="noopener"
&gt;Controllable Distortion-Perception Tradeoff Through Latent Diffusion for Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.19651" target="_blank" rel="noopener"
&gt;ComNeck: Bridging Compressed Image Latents and Multimodal LLMs via Universal Transform-Neck&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.12982v1" target="_blank" rel="noopener"
&gt;Stable Diffusion is a Natural Cross-Modal Decoder for Layered AI-generated Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2407.12538v1" target="_blank" rel="noopener"
&gt;Linearly transformed color guide for low-bitrate diffusion based image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.08459" target="_blank" rel="noopener"
&gt;JPEG-LM: LLMs as Image Generators with Canonical Codec Representations&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566414&amp;amp;casa_token=4U0sgUNsxyQAAAAA:0ayUIqrQmKrwfM8v1sE67ZZaS48OiReJjRZdRqHyTlnCHI4zm_PSEqwM4QsvNI7qccQzSXg" target="_blank" rel="noopener"
&gt;Image Encryption and Compression Based on Reversed Diffusion Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.00758" target="_blank" rel="noopener"
&gt;Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10570244&amp;amp;casa_token=xkZkXmlgP3wAAAAA:DYmBBrPQf2IwWoUAF70Te7XtdfSg85ud771PVI_vkfwCbjPUTB1cGuM3k_levF40o4NmV-s" target="_blank" rel="noopener"
&gt;Machine Perception-Driven Facial Image Compression: A Layered Generative Approach&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.07723" target="_blank" rel="noopener"
&gt;Understanding is Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.17060" target="_blank" rel="noopener"
&gt;High Efficiency Image Compression for Large Visual-Language Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=nSUMQhITdd" target="_blank" rel="noopener"
&gt;Consistency Guided Diffusion Model with Neural Syntax for Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACM MM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.09896" target="_blank" rel="noopener"
&gt;Zero-Shot Image Compression with Diffusion-Based Posterior Sampling&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.12538" target="_blank" rel="noopener"
&gt;High Frequency Matters: Uncertainty Guided Image Compression with Wavelet Diffusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.12295" target="_blank" rel="noopener"
&gt;Exploiting Inter-Image Similarity Prior for Low-Bitrate Remote Sensing Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.03961" target="_blank" rel="noopener"
&gt;LDM-RSIC: Exploring Distortion Prior with Latent Diffusion Models for Remote Sensing Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2024/papers/Jia_Generative_Latent_Coding_for_Ultra-Low_Bitrate_Image_Compression_CVPR_2024_paper.pdf" target="_blank" rel="noopener"
&gt;Generative Latent Coding for Ultra-Low Bitrate Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2406.09356" target="_blank" rel="noopener"
&gt;CMC-Bench: Towards a New Paradigm of Visual Signal Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="lossless-compression"&gt;Lossless Compression
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Fitted_Neural_Lossless_Image_Compression_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Fitted Neural Lossless Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2411.12448" target="_blank" rel="noopener"
&gt;Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPS 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.07704" target="_blank" rel="noopener"
&gt;SEEC: Segmentation-Assisted Multi-Entropy Models for Learned Lossless Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2412.17464" target="_blank" rel="noopener"
&gt;CALLIC: Content Adaptive Learning for Lossless Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.17814" target="_blank" rel="noopener"
&gt;Learning Lossless Compression for High Bit-Depth Volumetric Medical Image&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10647302&amp;amp;casa_token=xbYddfRSqMoAAAAA:19cLT7kxdjVYv0j84IsNlUYujos72wpW_2phbqj45fjq-mNwLktHwGzZwENu4faVl1nvkhA" target="_blank" rel="noopener"
&gt;Rate-Complexity Optimization in Lossless Neural-Based Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.00369v1" target="_blank" rel="noopener"
&gt;Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.sciencedirect.com/science/article/abs/pii/S0031320324003832" target="_blank" rel="noopener"
&gt;Hybrid-context-based multi-prior entropy modeling for learned lossless image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Pattern Recognition 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Learned_Lossless_Image_Compression_based_on_Bit_Plane_Slicing_CVPR_2024_paper.pdf" target="_blank" rel="noopener"
&gt;Learned Lossless Image Compression based on Bit Plane Slicing&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="variable-rate--scalable-compression"&gt;Variable Rate / Scalable Compression
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=1groaXTrKo" target="_blank" rel="noopener"
&gt;Towards Scalable Compression with Universally Quantized Diffusion Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPSW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://neurips.cc/virtual/2024/98246" target="_blank" rel="noopener"
&gt;Flexible image decoding in learned image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPSW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/1907.07875v1" target="_blank" rel="noopener"
&gt;Variable-size Symmetry-based Graph Fourier Transforms for image compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2019&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2207.04324v2" target="_blank" rel="noopener"
&gt;Latent Variables Coding for Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACM MM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.00557" target="_blank" rel="noopener"
&gt;STanH: Parametric Quantization for Variable Rate Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2405.14222" target="_blank" rel="noopener"
&gt;RAQ-VAE: Rate-Adaptive Vector-Quantized Variational Autoencoder&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="quantization"&gt;Quantization
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Relic_Bridging_the_Gap_between_Gaussian_Diffusion_Models_and_Universal_Quantization_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Bridging the Gap between Gaussian Diffusion Models and Universal Quantization for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=wqN6rWwYsr" target="_blank" rel="noopener"
&gt;Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPSW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.16119v1" target="_blank" rel="noopener"
&gt;Learning Optimal Lattice Vector Quantizers for End-to-end Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10689618" target="_blank" rel="noopener"
&gt;Convolution Filter Compression via Sparse Linear Combinations of Quantized Basis&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TNNLS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.09488v1" target="_blank" rel="noopener"
&gt;Lossy Image Compression with Stochastic Quantization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.arxiv.org/pdf/2408.12691" target="_blank" rel="noopener"
&gt;Quantization-free Lossy Image Compression Using Integer Matrix Factorization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.12150" target="_blank" rel="noopener"
&gt;DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566339&amp;amp;casa_token=6MnXai1ergEAAAAA:98ttJhOF_UU12y_KPlwG0kWpI35xBScxcKz4gIbyAdOow-5pe4hasuqIPeC7nBrnavlgr7Y" target="_blank" rel="noopener"
&gt;A Quantization Loss Compensation Network for Remote Sensing Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.07548" target="_blank" rel="noopener"
&gt;Image and Video Tokenization with Binary Spherical Quantization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10531761" target="_blank" rel="noopener"
&gt;NLIC: Non-uniform Quantization based Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="entropy-model"&gt;Entropy Model
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2507.19125" target="_blank" rel="noopener"
&gt;Learned Image Compression with Hierarchical Progressive Context Modeling&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=bsnRUkVn63" target="_blank" rel="noopener"
&gt;Test-time Adaptation for Image Compression with Distribution Regularization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICLR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.05169" target="_blank" rel="noopener"
&gt;Exploring Autoregressive Vision Foundation Models for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=J28aP5HsRJ" target="_blank" rel="noopener"
&gt;Learned Image Compression Framework with Quad-Prior Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.18815" target="_blank" rel="noopener"
&gt;FlashGMM: Fast Gaussian Mixture Entropy Model for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.19320v1" target="_blank" rel="noopener"
&gt;Generalized Gaussian Model for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.12330v1" target="_blank" rel="noopener"
&gt;The Gap Between Principle and Practice of Lossy Image Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2405.09152v5" target="_blank" rel="noopener"
&gt;Group Image Compression for Dual Use of Machine and Human Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.05832v1" target="_blank" rel="noopener"
&gt;Diversify, Contextualize, and Adapt: Efficient Entropy Modeling for Neural Image Codec&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.07669" target="_blank" rel="noopener"
&gt;Delta-ICM: Entropy Modeling with Delta Function for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.04847" target="_blank" rel="noopener"
&gt;Causal Context Adjustment Loss for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;NeurIPS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=YTNN0mOPQN" target="_blank" rel="noopener"
&gt;Spatial-Temporal Context Model for Remote Sensing Imagery Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACM MM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.09983" target="_blank" rel="noopener"
&gt;WeConvene: Learned Image Compression with Wavelet-Domain Convolution and Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.11590" target="_blank" rel="noopener"
&gt;Rethinking Learned Image Compression: Context is All You Need&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.10632" target="_blank" rel="noopener"
&gt;Bidirectional Stereo Image Compression with Cross-Dimensional Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="achitecture"&gt;Architecture
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06635.pdf" target="_blank" rel="noopener"
&gt;WeConvene: Learned Image Compression with Wavelet-Domain Convolution and Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06270.pdf" target="_blank" rel="noopener"
&gt;Region-Adaptive Transform with Segmentation Prior for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/03640.pdf" target="_blank" rel="noopener"
&gt;BaSIC: BayesNet Structure Learning for Computational Scalable Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.10366" target="_blank" rel="noopener"
&gt;Efficient Learned Image Compression Through Knowledge Distillation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://iccv.thecvf.com/virtual/2025/poster/2181" target="_blank" rel="noopener"
&gt;Cassic: Towards Content-Adaptive State-Space Models for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Xu_PICD_Versatile_Perceptual_Image_Compression_with_Diffusion_Rendering_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;PICD: Versatile Perceptual Image Compression with Diffusion Rendering&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Zeng_MambaIC_State_Space_Models_for_High-Performance_Learned_Image_Compression_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;MambaIC: State Space Models for High-Performance Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=gIrVoQEDQv" target="_blank" rel="noopener"
&gt;Unraveling Neural Cellular Automata for Lightweight Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=Tv36j85SqR" target="_blank" rel="noopener"
&gt;Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICLR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.18494v1" target="_blank" rel="noopener"
&gt;Learning Optimal Linear Block Transform by Rate Distortion Minimization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.15752v1" target="_blank" rel="noopener"
&gt;Sparse Point Clouds Assisted Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.13751v1" target="_blank" rel="noopener"
&gt;On Disentangled Training for Nonlinear Transform in Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.12191" target="_blank" rel="noopener"
&gt;Test-time adaptation for image compression with distribution regularization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.18730" target="_blank" rel="noopener"
&gt;Effectiveness of learning-based image codecs on fingerprint storage&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/abs/2410.02981" target="_blank" rel="noopener"
&gt;GABIC: Graph-Based Attention Block for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.17134" target="_blank" rel="noopener"
&gt;Streaming Neural Images&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.arxiv.org/pdf/2408.03842" target="_blank" rel="noopener"
&gt;Bi-Level Spatial and Channel-aware Transformer for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10743248&amp;amp;casa_token=uVcLjjVsiIAAAAAA:umWqK3-lWEAaYZLS6bGRwU83D_HltSVBFOPPF547AAOr-fKWKk4cWWscip13hDKI1ZYlPoc" target="_blank" rel="noopener"
&gt;Extreme Low Bitrate Image Compression System for Mobile Deployment&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;MMSP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.14090" target="_blank" rel="noopener"
&gt;Window-based Channel Attention for Wavelet-enhanced Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10647907&amp;amp;casa_token=_xL4m5ekrn0AAAAA:c7C1H9icT_KyIsjmgCz2uuikwvp8ukPivv5cDm_3V5nCspElz4BQXWWPxnrtmZmGv4pYddY" target="_blank" rel="noopener"
&gt;Feature Enhanced Learning Image Compression With Recurrent Criss-Cross Attention&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.17073" target="_blank" rel="noopener"
&gt;Approximately Invertible Neural Network for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.14127" target="_blank" rel="noopener"
&gt;Rate-Distortion-Perception Controllable Joint Source-Channel Coding for High-Fidelity Generative Communications&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10648236/authors#authors" target="_blank" rel="noopener"
&gt;Structured Pruning and Quantization for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566341&amp;amp;casa_token=5IwoTIplk3sAAAAA:qmSZUREE9iZFM3FtnOzIscEwUAonnBfKeBw8tRob7l35ZWuRRaxxcKx68NXw8vRraaBVmrU" target="_blank" rel="noopener"
&gt;Practical Learned Image Compression with Online Encoder Optimization&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.10361" target="_blank" rel="noopener"
&gt;On Efficient Neural Network Architectures for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2406.13709" target="_blank" rel="noopener"
&gt;A Study on the Effect of Color Spaces in Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10558571&amp;amp;casa_token=7OHwnFHkwDUAAAAA:fZ9rVL-B_QI8BT4AWEJkS8-M07rg9VWUxSY3Z1MBlWqoNQtpc4l9wDjz4uchHFS2SPZErEI" target="_blank" rel="noopener"
&gt;Learning-Based Conditional Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ISCAS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10558635&amp;amp;casa_token=iR30sgfqXX0AAAAA:CygeYdTY8WGiAaUw68kNTiQAcmmiu1nSCbQ13daszhrMk4SO72ODDxLDgjAmHnlCXWRBwBs" target="_blank" rel="noopener"
&gt;Asymmetric Neural Image Compression with High-Preserving Information&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ISCAS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10566428&amp;amp;casa_token=wYpGkb8wjkQAAAAA:xImfyLYnypOrxhvo6O4UHwHGsOVstRa_6jbBbmRMPdlJLMkBZsULXdcdHJ2wWnVIxkZkmsI" target="_blank" rel="noopener"
&gt;Wavelet-like Transform with Subbands Fusion in Decoupled Structure for Deep Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;PCS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10559830&amp;amp;casa_token=FWJQglVJO3MAAAAA:BTaIvWu6YnP42QFsGfQak48wjhoAfmxhLVSZjJX-kgjRJ-2dH3y3tteKQn8h5-U-YCZP-IE" target="_blank" rel="noopener"
&gt;FDNet: Frequency Decomposition Network for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.09853" target="_blank" rel="noopener"
&gt;Image Compression for Machine and Human Vision with Spatial-Frequency Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.11700" target="_blank" rel="noopener"
&gt;Rate-Distortion-Cognition Controllable Versatile Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2405.15413" target="_blank" rel="noopener"
&gt;MambaVC: Learned Visual Compression with Selective State Spaces&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="screen-content-image"&gt;Screen Content Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06526.pdf" target="_blank" rel="noopener"
&gt;Learned HDR Image Compression for Perceptually Optimal Storage and Display&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ijcai.org/proceedings/2024/0134.pdf" target="_blank" rel="noopener"
&gt;Efficient Screen Content Image Compression via Superpixel-based Content Aggregation and Dynamic Feature Fusion&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;IJCAI 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10577165&amp;amp;casa_token=ddUlyV468d4AAAAA:Ep5T9S4nD7zCZWS-ml46aRYuuKqAYMW518K3gLntWQ7GDCjuPpxRY5M7B7UtF42qZ_KiiuU&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;DSCIC: Deep Screen Content Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="hdr-image"&gt;HDR Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2407.13179v1" target="_blank" rel="noopener"
&gt;Breaking Boundaries: Unifying Imaging and Compression for HDR Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TIP 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2407.13179" target="_blank" rel="noopener"
&gt;Learned HDR Image Compression for Perceptually Optimal Storage and Display&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="image-coding-for-machine-vision"&gt;Image coding for machine vision
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06823.pdf" target="_blank" rel="noopener"
&gt;Image Compression for Machine and Human Vision With Spatial-Frequency Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/09009.pdf" target="_blank" rel="noopener"
&gt;A Unified Image Compression Method for Human Perception and Multiple Vision Tasks&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://dl.acm.org/doi/10.1145/3708347" target="_blank" rel="noopener"
&gt;Neural Image Compression with Regional Decoding&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ToMM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2402.08862v1" target="_blank" rel="noopener"
&gt;Saliency Segmentation Oriented Deep Image Compression With Novel Bit Allocation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="None" &gt;LL-ICM: Image Compression for Low-level Machine Vision via Large Vision-Language Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2310.09382v1" target="_blank" rel="noopener"
&gt;Task-Adapted Learnable Embedded Quantization for Scalable Human-Machine Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.04329v1" target="_blank" rel="noopener"
&gt;An Efficient Adaptive Compression Method for Human Perception and Machine Vision Tasks&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2501.04579v1" target="_blank" rel="noopener"
&gt;Unified Coding for Both Human Perception and Generalized Machine Analytics with CLIP Supervision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.19660v1" target="_blank" rel="noopener"
&gt;All-in-One Image Coding for Joint Human-Machine Vision with Multi-Path Aggregation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.08575" target="_blank" rel="noopener"
&gt;Tell Codec What Worth Compressing: Semantically Disentangled Image Coding for Machine with LMMs&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="" &gt;Group Image Compression for Dual Use of Machine and Human Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TCSVT 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2408.07028" target="_blank" rel="noopener"
&gt;Feature-Preserving Rate-Distortion Optimization in Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;MMSP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10743309&amp;amp;casa_token=zKA0n7bsqFUAAAAA:HAwTji45HCcml__D27xCp29vhfB8Im2TXKbHm29ObXI80UW3kiaW4ckTorJJC7p1cZGUS5Y" target="_blank" rel="noopener"
&gt;Compression of Self-Supervised Representations for Machine Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10647464&amp;amp;casa_token=____eHFo8BMAAAAA:U-jtu0xTn0RWA80FDfNvfith5yJz0sdvRTl5UhTQBhG_J874g9eNBXllfFgFRByMqDnY1zI&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;Learned Image Compression for Both Humans and Machines via Dynamic Adaptation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10648033?casa_token=H_-iMbpng6oAAAAA:zbDs9boDRETBQINfnLEbkz31FcWDyoORoBTCrmmlqXzN86tKR6sqdmXIAA-uHmVH1agtBxsCZw" target="_blank" rel="noopener"
&gt;Image Coding For Machine Via Analytics-Driven Appearance Redundancy Reduction&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10574324&amp;amp;casa_token=3hjufBt4DOEAAAAA:ZVH9S11WP5wB3eRmfHs02WCpHHe4_7cHo1SWnMNBuwaCoOJgkxOWk3UXhyUBlAVpCW4fgy4" target="_blank" rel="noopener"
&gt;Saliency Map-Guided End-to-End Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;SPL 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10557851&amp;amp;casa_token=Fu-eEJDIq1gAAAAA:ap6uExZfQWevfhbLwgq3NoH-Q3SR4UBhsSFF7tnnAMTTsZjDPpUz73J0dSMhwR0B0iwQgH8" target="_blank" rel="noopener"
&gt;Redundancy Removal Module for Reducing the Bitrates of Image Coding for Machines&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ISCAS 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="medical-image"&gt;Medical Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.09231v1" target="_blank" rel="noopener"
&gt;Versatile Volumetric Medical Image Coding for Human-Machine Vision&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2405.16850" target="_blank" rel="noopener"
&gt;UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="implicit-neural-representation"&gt;Implicit Neural Representation
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=9u5hPIcr6j" target="_blank" rel="noopener"
&gt;LotteryCodec: Searching the Implicit Representation in a Random Network for Low-Complexity Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2509.18748" target="_blank" rel="noopener"
&gt;HyperCool: Reducing Encoding Cost in Overfitted Codecs with Hypernetworks&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;Arxiv 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10647328" target="_blank" rel="noopener"
&gt;Redefining Visual Quality: The Impact of Loss Functions on INR-Based Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10647328?casa_token=4zOGbEd8ye4AAAAA:HK-ntiQYpO25P-fk_Dob31eeKFZOJ4CFqwOTT5ZaivzBkAUTfcXvoLWxHeaPhoH6K2_BtZHF-A" target="_blank" rel="noopener"
&gt;Implicit Neural Image Field for Biological Microscopy Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="panoramicstereo-image"&gt;Panoramic/stereo Image
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="eccv2024.ecva.net//virtual/2024/poster/1797" &gt;Bidirectional Stereo Image Compression with Cross-Dimensional Entropy Model&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ECCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10721338/authors#authors" target="_blank" rel="noopener"
&gt;Learning Content-Weighted Pseudocylindrical Representation for 360° Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="benchmark--dataset--survey"&gt;Benchmark &amp;amp; Dataset &amp;amp; Survey
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/document/10807668/" target="_blank" rel="noopener"
&gt;JPEG AI: The First International Standard for Image Coding Based on an End-to-End Learning-Based Approach&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;IEEE MultiMedia 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3664647.3685519" target="_blank" rel="noopener"
&gt;OpenDIC: An Open-Source Library and Performance Evaluation for Deep-learning-based Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ACMMM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="others"&gt;Others
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Title&lt;/th&gt;
&lt;th style="text-align: center"&gt;Pub. &amp;amp; Date&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2507.17221" target="_blank" rel="noopener"
&gt;Dataset Distillation as Data Compression: A Rate-Utility Perspective&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2025/papers/Zhang_Balanced_Rate-Distortion_Optimization_in_Learned_Image_Compression_CVPR_2025_paper.pdf" target="_blank" rel="noopener"
&gt;Balanced Rate-Distortion Optimization in Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;CVPR 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/forum?id=olzs3zVsE7" target="_blank" rel="noopener"
&gt;Privacy-Shielded Image Compression: Defending Against Exploitation from Vision-Language Pretrained Models&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=ialr09SfeJ" target="_blank" rel="noopener"
&gt;Synonymous Variational Inference for Perceptual Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICML 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/view/33111/35266" target="_blank" rel="noopener"
&gt;CAMSIC: Content-aware Masked Image Modeling Transformer for Stereo Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.01646v1" target="_blank" rel="noopener"
&gt;Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.06810v1" target="_blank" rel="noopener"
&gt;JPEG AI Image Compression Visual Artifacts: Detection Methods and Dataset&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.16727v2" target="_blank" rel="noopener"
&gt;An Information-Theoretic Regularizer for Lossy Neural Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICCV 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2411.10650v1" target="_blank" rel="noopener"
&gt;Deep Learning-Based Image Compression for Wireless Communications: Impacts on Reliability, Throughput, and Latency&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://ieeexplore.ieee.org/document/10814661/" target="_blank" rel="noopener"
&gt;HNR-ISC: Hybrid Neural Representation for Image Set Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;TMM 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="http://arxiv.org/abs/2412.03261v1" target="_blank" rel="noopener"
&gt;Is JPEG AI going to change image forensics?&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://www.nowpublishers.com/article/OpenAccessDownload/SIP-20240025" target="_blank" rel="noopener"
&gt;2D Gaussian Splatting for Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ATSIP 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2410.20145" target="_blank" rel="noopener"
&gt;Cross-Platform Neural Video Coding: A Case Study&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;arXiv 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://openreview.net/pdf?id=zIrvyQdIG4" target="_blank" rel="noopener"
&gt;Gone With the Bits: Benchmarking Bias in Facial Phenotype Degradation Under Low-Rate Neural Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;ICMLW 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://arxiv.org/pdf/2409.11111" target="_blank" rel="noopener"
&gt;Few-Shot Domain Adaptation for Learned Image Compression&lt;/a&gt;&lt;/td&gt;
&lt;td style="text-align: center"&gt;AAAI 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="2024"&gt;✔2024
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(SPL 2024) &lt;strong&gt;OMR-NET: A Two-Stage Octave Multi-Scale Residual Network for Screen Content Image Compression&lt;/strong&gt; Jiang S, Ren T, Fu C, et al. &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10552293&amp;amp;casa_token=HZozj0vMXvkAAAAA:_7rf8zPrb-WjgI1-i9BoraOqIEMGQdTWcvj2NUfc-3GEtogq1VavMVzi2kKx8yF3hrNoAX6lfg&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TPAMI 2024) &lt;strong&gt;I2C: Invertible Continuous Codec for High-Fidelity Variable-Rate Image Compression&lt;/strong&gt; Cai, Shilv and Chen, Liqun and Zhang, Zhijun and Zhao, Xiangyun and Zhou, Jiahuan and Peng, Yuxin and Yan, Luxin and Zhong, Sheng and Zou, Xu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10411123" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Leveraging Redundancy in Feature for Efficient Learned Image Compression&lt;/strong&gt; Qin, Peng and Bao, Youneng and Meng, Fanyang and Tan, Wen and Li, Chao and Wang, Genhong and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10447424" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Rate-Quality Based Rate Control Model for Neural Video Compression&lt;/strong&gt; Liao, Shuhong and Jia, Chuanmin and Fan, Hongfei and Yan, Jingwen and Ma, Siwei &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10447777" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression&lt;/strong&gt; Zhi, Cao and Youneng, Bao and Fanyang, Meng and Chao, Li and Wen, Tan and Genhong, Wang and Yongsheng, Liang &lt;a class="link" href="https://arxiv.org/pdf/2403.06700v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2024) &lt;strong&gt;Make Lossy Compression Meaningful for Low-Light Images&lt;/strong&gt; Cai, Shilv and Chen, Liqun and Zhong, Sheng and Yan, Luxin and Zhou, Jiahuan and Zou, Xu &lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/download/28664/29289" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2024) &lt;strong&gt;End-to-End RGB-D Image Compression via Exploiting Channel-Modality Redundancy&lt;/strong&gt; Zheng, Huiming and Gao, Wei &lt;a class="link" href="https://ojs.aaai.org/index.php/AAAI/article/download/28588/29143" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2024) &lt;strong&gt;Towards Backward-Compatible Continual Learning of Image Compression&lt;/strong&gt; Duan, Zhihao and Lu, Ming and Yang, Justin and He, Jiangpeng and Ma, Zhan and Zhu, Fengqing &lt;a class="link" href="https://arxiv.org/pdf/2402.18862v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2024) &lt;strong&gt;Compression with Bayesian Implicit Neural Representations&lt;/strong&gt; Guo, Zongyu and Flamich, Gergely and He, Jiajun and Chen, Zhibo and Hernández-Lobato, José Miguel &lt;a class="link" href="https://arxiv.org/pdf/2305.19185.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2024) &lt;strong&gt;Bilateral Context Modeling for Residual Coding in Lossless 3D Medical Image Compression&lt;/strong&gt; Liu, Xiangrui and Wang, Meng and Wang, Shiqi and Kwong, Sam &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10478821" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TMM 2024) &lt;strong&gt;Neural Network Coding of Difference Updates for Efficient Distributed Learning Communication&lt;/strong&gt; Sheng, Xihua and Li, Li and Liu, Dong and Li, Houqiang &lt;a class="link" href="https://arxiv.org/pdf/2401.15864.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;FICNet: An End to End Network for Free-view Image Coding&lt;/strong&gt; Yang, Chunhui and Yang, Jiayu and Zhai, Yongqi and Wang, Ronggang &lt;a class="link" href="https://ieeexplore.ieee.org/document/10504389?denied=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;GroupedMixer: An Entropy Model with Group-wise Token-Mixers for Learned Image Compression&lt;/strong&gt; Li, Daxin and Bai, Yuanchao and Wang, Kai and Jiang, Junjun and Liu, Xianming and Gao, Wen &lt;a class="link" href="https://arxiv.org/pdf/2405.01170" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;Multirate Progressive Entropy Model for Learned Image Compression&lt;/strong&gt; Li, Chao and Yin, Shanzhi and Jia, Chuanmin and Meng, Fanyang and Tian, Yonghong and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/document/10471618" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;EUICN: An Efficient Underwater Image Compression Network&lt;/strong&gt; Li, Mengyao and Shen, Liquan and Hua, Xia and Tian, Zhaoyi &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=10445326" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;Rate-Distortion Optimized Cross Modal Compression with Multiple Domains&lt;/strong&gt; Gao, Junlong and Jia, Chuanmin and Huang, Zhimeng and Wang, Shanshe and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10430161" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ToMM 2024) &lt;strong&gt;Perceptual Quality-Oriented Rate Allocation via Distillation from End-to-End Image Compression&lt;/strong&gt; Yang, Runyu and Liu, Dong and Ma, Siwei and Wu, Feng and Gao, Wen &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3650034" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TGRS 2024) &lt;strong&gt;Remote Sensing Image Compression Based on High-Frequency and Low-Frequency Components&lt;/strong&gt; Xiang, Shao and Liang, Qiaokang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10379598&amp;amp;casa_token=8o7Rvla9bkIAAAAA:BdM70h2rnznpm8AjLpmF2OaaY4LOyj96msdVfnJyaYeQ-EVVWgoAz8YSFYoxbq2tG6L95AQr" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2024) &lt;strong&gt;Neural Image Compression Using Masked Sparse Visual Representation&lt;/strong&gt; Jiang, Wei and Wang, Wei and Chen, Yue &lt;a class="link" href="https://arxiv.org/pdf/2309.11661.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(PCS 2024) &lt;strong&gt;CoCliCo: Extremely low bitrate image compression based on CLIP semantic and tiny color map&lt;/strong&gt; Bachard, Tom and Bordin, Tom and Maugey, Thomas &lt;a class="link" href="https://inria.hal.science/hal-04478601/file/PCS_2024-2-1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(IEVC 2024) &lt;strong&gt;The Effect of Edge Information in Stable Diffusion Applied to Image Coding&lt;/strong&gt; Watanabe, Hiroshi and Chujoh, Takeshi and Fan, Zheming and Jin, Luoxu and Yasugi, Yukinobu and Ikai, Tomohiro and Hayami, Taiga and Hong, Sujun &lt;a class="link" href="https://www.ams.giti.waseda.ac.jp/data/pdf-files/2024IEVC_LBP-15.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(SPL 2024) &lt;strong&gt;Enhancing High-Resolution Image Compression Through Local-Global Joint Attention Mechanism&lt;/strong&gt; Jiang, Zeyu and Liu, Xiaohong and Li, Aini and Wang, Guangyu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10487886" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(SPL 2024) &lt;strong&gt;Learning-Based Image Compression With Parameter-Adaptive Rate-Constrained Loss&lt;/strong&gt; Guerin, Nilson D and da Silva, Renam Castro and Macchiavello, Bruno &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10487041?casa_token=knUB41_TmBsAAAAA:a-OvI58YlhHCqICs5ondcAnowi-IGX2nx0TgWqjjp_VfILwGajk6aEbDfqpUAqvF6--XxzsqGQ" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2024) &lt;strong&gt;Fine color guidance in diffusion models and its application to image compression at extremely low bitrates&lt;/strong&gt; Bordin, Tom and Maugey, Thomas &lt;a class="link" href="https://ieeexplore.ieee.org/document/10445837" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;On the Adversarial Robustness of Learning-based Image Compression Against Rate-Distortion Attacks&lt;/strong&gt; Wu, Chenhao and Wu, Qingbo and Wei, Haoran and Chen, Shuai and Wang, Lei and Ngan, King Ngi and Meng, Fanman and Li, Hongliang &lt;a class="link" href="https://arxiv.org/pdf/2405.07717" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Scalable Image Coding for Humans and Machines Using Feature Fusion Network&lt;/strong&gt; Li, Junhui and Hou, Xingsong &lt;a class="link" href="https://arxiv.org/pdf/2405.09152" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Towards Task-Compatible Compressible Representations&lt;/strong&gt; de Andrade, Anderson and Bajić, Ivan &lt;a class="link" href="https://arxiv.org/pdf/2405.10244" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Enhancing Perception Quality in Remote Sensing Image Compression via Invertible Neural Network&lt;/strong&gt; Li, Junhui and Hou, Xingsong &lt;a class="link" href="https://arxiv.org/pdf/2405.10518" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;NLIC: Non-uniform Quantization based Learned Image Compression&lt;/strong&gt; Ge, Ziqing and Ma, Siwei and Gao, Wen and Pan, Jingshan and Jia, Chuanmin &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10531761" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Domain Adaptation for Learned Image Compression with Supervised Adapters&lt;/strong&gt; Presta, Alberto and Spadaro, Gabriele and Tartaglione, Enzo and Fiandrotti, Attilio and Grangetto, Marco &lt;a class="link" href="https://arxiv.org/pdf/2404.15591" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;2D Gaussian Splatting for Image Compression&lt;/strong&gt; Pingping Zhang, Xiangrui Liu, Meng Wang, Shiqi Wang, Sam Kwong &lt;a class="link" href="https://github.com/ppingzhang/2DGS_ImageCompression/blob/main/2DGS_APSIPA.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Towards Extreme Image Compression with Latent Feature Guidance and Diffusion Prior&lt;/strong&gt; Li, Zhiyuan and Zhou, Yanhui and Wei, Hao and Ge, Chenyang and Jiang, Jingwen &lt;a class="link" href="https://arxiv.org/pdf/2404.18820" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;S2LIC: Learned Image Compression with the SwinV2 Block, Adaptive Channel-wise and Global-inter Attention Context&lt;/strong&gt; Wang, Yongqiang and Liang, Feng and Liang, Jie and Fu, Haisheng &lt;a class="link" href="https://arxiv.org/pdf/2403.14471.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Lossy Image Compression with Foundation Diffusion Models&lt;/strong&gt; Relic, Lucas and Azevedo, Roberto and Gross, Markus and Schroers, Christopher &lt;a class="link" href="https://arxiv.org/pdf/2404.08580.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder&lt;/strong&gt; Ma, Yiyang and Yang, Wenhan and Liu, Jiaying &lt;a class="link" href="https://arxiv.org/html/2404.04916v1" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Human-Machine Collaborative Image Compression Method Based on Implicit Neural Representations&lt;/strong&gt; Li, Huanyang and Zhang, Xinfeng &lt;a class="link" href="https://arxiv.org/pdf/2112.04267.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Generative Refinement for Low Bitrate Image Coding Using Vector Quantized Residual&lt;/strong&gt; Kong, Yuzhuo and Lu, Ming and Ma, Zhan &lt;a class="link" href="https://ieeexplore.ieee.org/document/10493033?denied=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Image and Video Compression using Generative Sparse Representation with Fidelity Controls&lt;/strong&gt; Jiang, Wei and Wang, Wei &lt;a class="link" href="https://arxiv.org/pdf/2404.06076.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Content-aware Masked Image Modeling Transformer for Stereo Image Compression&lt;/strong&gt; Zhang, Xinjie and Gao, Shenyuan and Liu, Zhening and Ge, Xingtong and He, Dailan and Xu, Tongda and Wang, Yan and Zhang, Jun &lt;a class="link" href="https://arxiv.org/pdf/2403.08505v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Super-High-Fidelity Image Compression via Hierarchical-ROI and Adaptive Quantization&lt;/strong&gt; Luo, Jixiang and Wang, Yan and Qin, Hongwei &lt;a class="link" href="https://arxiv.org/pdf/2403.13030.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Theoretical Bound-Guided Hierarchical VAE for Neural Image Codecs&lt;/strong&gt; Zhang, Yichi and Duan, Zhihao and Huang, Yuning and Zhu, Fengqing &lt;a class="link" href="https://arxiv.org/pdf/2403.18535v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Unifying Generation and Compression: Ultra-low bitrate Image Coding Via Multi-stage Transformer&lt;/strong&gt; Xue, Naifu and Mao, Qi and Wang, Zijian and Zhang, Yuan and Ma, Siwei &lt;a class="link" href="https://arxiv.org/pdf/2403.03736.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Enhancing the Rate-Distortion-Perception Flexibility of Learned Image Codecs with Conditional Diffusion Decoders&lt;/strong&gt; Mari, Daniele and Milani, Simone &lt;a class="link" href="https://arxiv.org/pdf/2403.02887v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Channel-wise Feature Decorrelation for Enhanced Learned Image Compression&lt;/strong&gt; Pakdaman, Farhad and Gabbouj, Moncef &lt;a class="link" href="https://arxiv.org/pdf/2403.10936.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Overfitted image coding at reduced complexity&lt;/strong&gt; Blard, Théophile and Ladune, Théo and Philippe, Pierrick and Clare, Gordon and Jiang, Xiaoran and Déforges, Olivier &lt;a class="link" href="https://arxiv.org/pdf/2403.11651v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity&lt;/strong&gt; Lee, Hagyeong and Kim, Minkyu and Kim, Jun-Hyuk and Kim, Seungeon and Oh, Dokwan and Lee, Jaeho &lt;a class="link" href="https://arxiv.org/pdf/2403.02944.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Transformer-based Learned Image Compression for Joint Decoding and Denoising&lt;/strong&gt; Chen, Yi-Hsin and Ho, Kuan-Wei and Tsai, Shiau-Rung and Lin, Guan-Hsun and Gnutti, Alessandro and Peng, Wen-Hsiao and Leonardi, Riccardo &lt;a class="link" href="https://arxiv.org/pdf/2402.12888v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Probing Image Compression For Class-Incremental Learning&lt;/strong&gt; Yang, Justin and Duan, Zhihao and Peng, Andrew and Huang, Yuning and He, Jiangpeng and Zhu, Fengqing &lt;a class="link" href="https://arxiv.org/pdf/2403.06288.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Variable-Rate Learned Image Compression with Multi-Objective Optimization and Quantization-Reconstruction Offsets&lt;/strong&gt; Kamisli, Fatih and Racape, Fabien and Choi, Hyomin &lt;a class="link" href="https://arxiv.org/pdf/2402.18930v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Exploration of Learned Lifting-Based Transform Structures for Fully Scalable and Accessible Wavelet-Like Image Compression&lt;/strong&gt; Li, Xinyue and Naman, Aous and Taubman, David &lt;a class="link" href="https://arxiv.org/pdf/2402.18761v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Powerful Lossy Compression for Noisy Images&lt;/strong&gt; Cai, Shilv and Liang, Xiaoguo and Cao, Shuning and Yan, Luxin and Zhong, Sheng and Chen, Liqun and Zou, Xu &lt;a class="link" href="https://arxiv.org/pdf/2403.14135v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression&lt;/strong&gt; Zhi, Cao and Youneng, Bao and Fanyang, Meng and Chao, Li and Wen, Tan and Genhong, Wang and Yongsheng, Liang &lt;a class="link" href="https://arxiv.org/pdf/2403.06700v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Image Coding for Machines with Edge Information Learning Using Segment Anything&lt;/strong&gt; Shindo, Takahiro and Yamada, Kein and Watanabe, Taiju and Watanabe, Hiroshi &lt;a class="link" href="https://arxiv.org/pdf/2403.04173v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Resilience of Entropy Model in Distributed Neural Networks&lt;/strong&gt; Zhang, Milin and Abdi, Mohammad and Rifat, Shahriar and Restuccia, Francesco &lt;a class="link" href="https://arxiv.org/pdf/2403.00942v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;GaussianImage: 1000 FPS Image Representation and Compression by 2D Gaussian Splatting&lt;/strong&gt; Zhang, Xinjie and Ge, Xingtong and Xu, Tongda and He, Dailan and Wang, Yan and Qin, Hongwei and Lu, Guo and Geng, Jing and Zhang, Jun &lt;a class="link" href="https://arxiv.org/pdf/2403.08551v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Wavelet-Like Transform-Based Technology in Response to the Call for Proposals on Neural Network-Based Image Coding&lt;/strong&gt; Dong, Cunhui and Ma, Haichuan and Zhang, Haotian and Gao, Changsheng and Li, Li and Liu, Dong &lt;a class="link" href="https://arxiv.org/pdf/2403.05937v1.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Region-Adaptive Transform with Segmentation Prior for Image Compression&lt;/strong&gt; Liu, Yuxi and Yang, Wenhan and Bai, Huihui and Wei, Yunchao and Zhao, Yao &lt;a class="link" href="https://arxiv.org/pdf/2403.00628.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Learned Image Compression with Text Quality Enhancement&lt;/strong&gt; Lai, Chih-Yu and Tran, Dung and Koishida, Kazuhito &lt;a class="link" href="https://arxiv.org/pdf/2402.08643.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;End-to-End Optimized Image Compression with the Frequency-Oriented Transform&lt;/strong&gt; Zhang, Yuefeng and Lin, Kai &lt;a class="link" href="https://arxiv.org/pdf/2401.08194.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Learned Image Compression with ROI-Weighted Distortion and Bit Allocation&lt;/strong&gt; Jiang, Wei and Zhai, Yongqi and Li, Hangyu and Wang, Ronggang &lt;a class="link" href="https://arxiv.org/pdf/2401.08154.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression&lt;/strong&gt; Li, Daxin and Bai, Yuanchao and Wang, Kai and Jiang, Junjun and Liu, Xianming &lt;a class="link" href="https://arxiv.org/pdf/2401.14007.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;FLLIC: Functionally Lossless Image Compression&lt;/strong&gt; Zhang, Xi and Wu, Xiaolin &lt;a class="link" href="https://arxiv.org/pdf/2401.13616.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Fast Implicit Neural Representation Image Codec in Resource-limited Devices&lt;/strong&gt; Liu, Xiang and Chen, Jiahong and Chen, Bin and Liu, Zimo and An, Baoyi and Xia, Shu-Tao &lt;a class="link" href="https://arxiv.org/pdf/2401.12587.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2024) &lt;strong&gt;Robustly overfitting latents for flexible neural image compression&lt;/strong&gt; Perugachi-Diaz, Yura and Gansekoele, Arwin and Bhulai, Sandjai &lt;a class="link" href="https://arxiv.org/pdf/2401.17789.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Saliency-aware End-to-end Learned Variable-Bitrate 360-degree Image Compression&lt;/strong&gt; Gungordu, Oguzhan and Tekalp, A Murat &lt;a class="link" href="https://arxiv.org/pdf/2402.08862.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2024) &lt;strong&gt;Joint End-to-End Image Compression and Denoising: Leveraging Contrastive Learning and Multi-Scale Self-ONNs&lt;/strong&gt; Xie, Yuxin and Yu, Li and Pakdaman, Farhad and Gabbouj, Moncef &lt;a class="link" href="https://arxiv.org/pdf/2402.05582.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2024) &lt;strong&gt;Learned Compression of Encoding Distributions&lt;/strong&gt; Ulhaq, Mateen and Bajic, Ivan V &lt;a class="link" href="https://www.sfu.ca/~mulhaq/assets/pdf/2024-icip-learned-compression-of-encoding-distributions.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Flexible Coding Order for Learned Image Compression&lt;/strong&gt; Li, Yuqi and Zhang, Haotian and Liu, Dong &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10402631" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Variable-rate Learned Image Compression with Adaptive Quantization Step Size&lt;/strong&gt; Mei, Feihong and Li, Li and Liu, Dong &lt;a class="link" href="https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&amp;amp;arnumber=10402767&amp;amp;ref=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Learned Progressive Image Compression With Spatial Autoregression&lt;/strong&gt; Li, Hangyu and Jiang, Wei and Li, Litian and Zhai, Yongqi and Wang, Ronggang &lt;a class="link" href="https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&amp;amp;arnumber=10402651&amp;amp;ref=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2024) &lt;strong&gt;Hybrid Implicit Neural Image Compression with Subpixel Context Model and Iterative Pruner&lt;/strong&gt; Tian, Wenxin and Li, Shaohui and Dai, Wenrui and Lu, Cewu and Hu, Weisheng and Zhang, Lin and Du, Junfeng and Xiong, Hongkai &lt;a class="link" href="https://ieeexplore.ieee.org/stampPDF/getPDF.jsp?tp=&amp;amp;arnumber=10402791&amp;amp;ref=" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2023"&gt;✔2023
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(NeurIPS 2023) &lt;strong&gt;Towards efficient image compression without autoregressive models&lt;/strong&gt; Ali, Muhammad Salman and Kim, Yeongwoong and Qamar, Maryam and Lim, Sung-Chang and Kim, Donghyun and Zhang, Chaoning and Bae, Sung-Ho and Kim, Hui Yong &lt;a class="link" href="https://openreview.net/pdf?id=1ihGy9vAIg" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICONIP 2023) &lt;strong&gt;LUT-LIC: Look-Up Table-Assisted Learned Image Compression&lt;/strong&gt; Yu, SeungEun and Lee, Jong-Seok &lt;a class="link" href="https://link.springer.com/chapter/10.1007/978-981-99-8148-9_34" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach&lt;/strong&gt; Guo, Sha and Chen, Zhuo and Zhao, Yang and Zhang, Ning and Li, Xiaotong and Duan, Lingyu &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3611851?casa_token=mNmCMwSt2NcAAAAA:pYJtS3-8nkQdv-d0hp5N3OptJqtnjFcfBNOohVR0SqCbdP9mF4tFuAZEN5_WiTkVaxttfYUdfyqJHw" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;Nif: A fast implicit image compression with bottleneck layers and modulated sinusoidal activations&lt;/strong&gt; Catania, Lorenzo and Allegra, Dario &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3613834" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;Lambda-Domain Rate Control for Neural Image Compression&lt;/strong&gt; Xue, Naifu and Zhang, Yuan &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3595916.3626372?casa_token=ZQoUWGi2J6UAAAAA:3NWoCPBC-hhmWmMgcu3uPf_UFg0eSN3fLoeBi_8S0GKRJaW78mnXjkxBesKBwfe30nzHI0PEXGfAVQ" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;MLIC: Multi-Reference Entropy Model for Learned Image Compression&lt;/strong&gt; Jiang, Wei and Yang, Jiayu and Zhai, Yongqi and Ning, Peirong and Gao, Feng and Wang, Ronggang &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3611694" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;ELFIC: A Learning-based Flexible Image Codec with Rate-Distortion-Complexity Optimization&lt;/strong&gt; Zhang, Zhichen and Chen, Bolin and Lin, Hongbin and Lin, Jielian and Wang, Xu and Zhao, Tiesong &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3612540" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2023) &lt;strong&gt;ICMH-Net: Neural Image Compression Towards both Machine Vision and Human Vision&lt;/strong&gt; Liu, Lei and Hu, Zhihao and Chen, Zhenghao and Xu, Dong &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3581783.3612041?casa_token=S1tEOBghRlUAAAAA:3QJByYZssGAMLB6Yloy9eCwEEkI7RrZQ_kuaJfIjBCaWH45RJomJC4uQN1StEi_UplaboXcyaEASvA" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2023) &lt;strong&gt;Learned Image Compression Using Cross-Component Attention Mechanism&lt;/strong&gt; Duan, Wenhong and Chang, Zheng and Jia, Chuanmin and Wang, Shanshe and Ma, Siwei and Song, Li and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/document/10268865/" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2023) &lt;strong&gt;Scalable Face Image Coding via StyleGAN Prior: Towards Compression for Human-Machine Collaborative Vision&lt;/strong&gt; Mao, Qi and Wang, Chongyu and Wang, Meng and Wang, Shiqi and Chen, Ruijie and Jin, Libiao and Ma, Siwei &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10372532&amp;amp;casa_token=tefNsn9cqyIAAAAA:iNI1vVcH9m8rW3GLAj-yB_6FC_eiNBGUUiIzVaAlYC7JHRxGElmSd1MdVYHKD0P-9FtPMq5aEw" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;Dec-Adapter: Exploring Efficient Decoder-Side Adapter for Bridging Screen Content and Natural Image Compression&lt;/strong&gt; Shen, Sheng and Yue, Huanjing and Yang, Jingyu &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Shen_Dec-Adapter_Exploring_Efficient_Decoder-Side_Adapter_for_Bridging_Screen_Content_and_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2023) &lt;strong&gt;Context-Based Trit-Plane Coding for Progressive Image Compression&lt;/strong&gt; Jeon, Seungmin and Choi, Kwang Pyo and Park, Youngo and Kim, Chang-Su &lt;a class="link" href="https://arxiv.org/pdf/2303.05715.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;TransTIC: Transferring Transformer-based Image Compression from Human Perception to Machine Perception&lt;/strong&gt; Chen, Yi-Hsin and Weng, Ying-Chieh and Kao, Chia-Hao and Chien, Cheng and Chiu, Wei-Chen and Peng, Wen-Hsiao &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_TransTIC_Transferring_Transformer-based_Image_Compression_from_Human_Perception_to_Machine_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TAI 2023) &lt;strong&gt;Manipulation Attacks on Learned Image Compression&lt;/strong&gt; Liu, Kang and Wu, Di and Wu, Yangyu and Wang, Yiru and Feng, Dan and Tan, Benjamin and Garg, Siddharth &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10352982&amp;amp;casa_token=7J9wZTEfvZUAAAAA:A4rT0GYrKkWQ8h1hhnQxyazt_2kunYTDE1vn73nQD5RDms-6eoJ_ZUppgHNr3WTBk143oCWW6Q" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;A Decoupled Spatial-Channel Inverted Bottleneck For Image Compression&lt;/strong&gt; Hu, Yuting and Tan, Wen and Meng, Fanyang and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222366" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;NUCQ: Non-Uniform Conditional Quantization for Learned Image Compression&lt;/strong&gt; Ge, Ziqing and Jia, Chuanmin and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222198" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;End-to-End Learning-based Image Compression with A Decoupled Framework&lt;/strong&gt; Zhang, Zhaobin and Esenlik, Semih and Wu, Yaojun and Wang, Meng and Zhang, Kai and Zhang, Li &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10247017/metrics#metrics" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Advancing the Rate-Distortion-Computation Frontier for Neural Image Compression&lt;/strong&gt; Minnen, David and Johnston, Nick &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222381" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Efficient Pruning Method for Learned Lossy Image Compression Models Based on Side Information&lt;/strong&gt; Chen, Weixuan and Yang, Qianqian &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222822" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Content-Adaptive Parallel Entropy Coding for End-to-End Image Compression&lt;/strong&gt; Li, Shujia and Wang, Dezhao and Fan, Zejia and Liu, Jiaying &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222067" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Edge-Guided Remote-Sensing Image Compression&lt;/strong&gt; Han, Pengfei and Zhao, Bin and Li, Xuelong &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10247080" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Learned Image Compression Guided Adaptive Quantization for Perceptual Quality&lt;/strong&gt; Chen, Cheng and Geng, Ruiqi and Li, Bohan and Ustarroz-Calonge, Maryla and Galligan, Frank and Han, Jingning and Xu, Yaowu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222637" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Unified Learning-Based Lossy and Lossless Jpeg Recompression&lt;/strong&gt; J. Zhang et al. &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222354" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;ULcompress: A Unified low bit-rate image Compression Framework via Invertible Image Representation&lt;/strong&gt; F. Gao, X. Deng, C. Gao and M. Xu &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222242" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Learned Image Compression with Multi-Scan Based Channel Fusion&lt;/strong&gt; Y. Li, W. Zhou, P. Lu and S. -i. Kamata, &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222127" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Integer Quantized Learned Image Compression&lt;/strong&gt; G. -W. Jeon, S. Yu and J. -S. Lee &lt;a class="link" href="https://ieeexplore.ieee.org/document/10222336" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;A Decoupled Spatial-Channel Inverted Bottleneck For Image Compression&lt;/strong&gt; Hu, Yuting and Tan, Wen and Meng, Fanyang and Liang, Yongsheng &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10222381" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;Learned Image Compression with Large Capacity and Low Redundancy of Latent Representation&lt;/strong&gt; Meng, Xiandong and Zhu, Shuyuan and Ma, Siwei and Zeng, Bing &lt;a class="link" href="https://ieeexplore.ieee.org/document/10222366" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;An Improved Upper Bound on the Rate-Distortion Function of Images&lt;/strong&gt; Duan, Zhihao and Ma, Jack and He, Jiangpeng and Zhu, Fengqing &lt;a class="link" href="https://arxiv.org/pdf/2309.02574.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2023) &lt;strong&gt;AICT: An Adaptive Image Compression Transformer&lt;/strong&gt; Ghorbel, Ahmed and Hamidouche, Wassim and Morin, Luce &lt;a class="link" href="https://arxiv.org/pdf/2307.06091.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2023) &lt;strong&gt;Neural Distributed Image Compression with Cross-Attention Feature Alignment&lt;/strong&gt; Mital, Nitish and Özyilkan, Ezgi and Garjani, Ali and Gündüz, Deniz &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2023/papers/Mital_Neural_Distributed_Image_Compression_With_Cross-Attention_Feature_Alignment_WACV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2023) &lt;strong&gt;Image Data Hiding in Neural Compressed Latent Representations&lt;/strong&gt; Huang, Chen-Hsiu and Wu, Ja-Ling &lt;a class="link" href="https://ieeexplore.ieee.org/document/10402627" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2023) &lt;strong&gt;EVC: Towards Real-Time Neural Image Compression with Mask Decay&lt;/strong&gt; Wang, Guo-Hua and Li, Jiahao and Li, Bin and Lu, Yan &lt;a class="link" href="https://arxiv.org/pdf/2302.05071.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(VCIP 2023) &lt;strong&gt;A Near Lossless Learned Image Coding Network Quantization Approach for Cross-Platform Inference&lt;/strong&gt; Hang, Xinyu and Jia, Chuanmin and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10402704&amp;amp;casa_token=SpFz9g7TeT8AAAAA:GNVUj1Qv03LvWGp3bF9iyCSr_-ZLx6-HNZM4vxYXFqs_yTFitBKet3htVPIc1LR4uKboCvnL" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2023) &lt;strong&gt;A Novel Cross-Component Context Model for End-to-End Wavelet Image Coding&lt;/strong&gt; Meyer, Anna and Kaup, André &lt;a class="link" href="https://arxiv.org/pdf/2303.05121.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2024) &lt;strong&gt;Lightweight Context Model Equipped aiWave in Response to the AVS Call for Evidence on Volumetric Medical Image Coding&lt;/strong&gt; Xue, Dongmei and Li, Li and Liu, Dong and Li, Houqiang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10453226" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;MASIC: Deep Mask Stereo Image Compression&lt;/strong&gt; Deng, Xin and Deng, Yufan and Yang, Ren and Yang, Wenzhe and Timofte, Radu and Xu, Mai &lt;a class="link" href="https://scholar.google.com/scholar_url?url=https://ieeexplore.ieee.org/iel7/76/4358651/10061473.pdf%3Fcasa_token%3DyxaR8FAUmccAAAAA:NZVDcw8yyjkyl1jR53FSSfUBKSAUxSgFwjNl6n3E3gjtklYQ7e6KLBD0sY9rtdPDj3cMxRyjb3w&amp;amp;hl=zh-CN&amp;amp;sa=T&amp;amp;oi=ucasa&amp;amp;ct=ucasa&amp;amp;ei=C7HBZbBn6NLL1g_hqbWgCw&amp;amp;scisig=AFWwaeauyyBtBEhlO7xzS3SgL_l_" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Extremely Low Bit-rate Image Compression via Invertible Image Generation&lt;/strong&gt; Gao, Fangyuan and Deng, Xin and Jing, Junpeng and Zou, Xin and Xu, Mai &lt;a class="link" href="https://ieeexplore.ieee.org/document/10256132" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Task-Switchable Pre-Processor for Image Compression for Multiple Machine Vision Tasks&lt;/strong&gt; Yang, Mingyi and Yang, Fei and Murn, Luka and Blanch, Marc Gorriz and Sock, Juil and Wan, Shuai and Yang, Fuzheng and Herranz, Luis &lt;a class="link" href="https://ieeexplore.ieee.org/document/10256132" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Rethinking semantic image compression: Scalable representation with cross-modality transfer&lt;/strong&gt; Zhang, Pingping and Wang, Shiqi and Wang, Meng and Li, Jiguo and Wang, Xu and Kwong, Sam &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10032603&amp;amp;casa_token=jUWiQNkyzn4AAAAA:sB3n5iqEj4xbTgiOrrXxsI5lbXizq0V9wxvkaZ71ik2nPah0yHZ8WzHwbkrp-URvTMuHukK3&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Facial Image Compression via Neural Image Manifold Compression&lt;/strong&gt; Yang, Wenhan and Huang, Haofeng and Liu, Jiaying and Kot, Alex C. &lt;a class="link" href="https://ieeexplore.ieee.org/abstract/document/10122667" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2023) &lt;strong&gt;Sketch Assisted Face Image Coding for Human and Machine Vision: a Joint Training Approach.&lt;/strong&gt; Fang, Xin and Duan, Yiping and Du, Qiyuan and Tao, Xiaoming and Li, Fan &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10082973&amp;amp;casa_token=bXnEBK4JjLcAAAAA:JO0euK8CEhYZUGE70J9G-3WUZVOVeh5DkXdHQRnWQCSrgg4ybixUxy1J0tFCcYyZWvvggncp" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;COMPASS: High-Efficiency Deep Image Compression with Arbitrary-scale Spatial Scalability&lt;/strong&gt; Park, Jongmin and Lee, Jooyoung and Kim, Munchurl &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Park_COMPASS_High-Efficiency_Deep_Image_Compression_with_Arbitrary-scale_Spatial_Scalability_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2023) &lt;strong&gt;AdaNIC: Towards Practical Neural Image Compression via Dynamic Transform Routing&lt;/strong&gt; Tao, Lvfang and Gao, Wei and Li, Ge and Zhang, Chenhao &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2023/papers/Tao_AdaNIC_Towards_Practical_Neural_Image_Compression_via_Dynamic_Transform_Routing_ICCV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2024) &lt;strong&gt;Controlling Rate, Distortion, and Realism: Towards a Single Comprehensive Neural Image Compression Model&lt;/strong&gt; Iwai, Shoma and Miyazaki, Tomo and Omachi, Shinichiro &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2024/papers/Iwai_Controlling_Rate_Distortion_and_Realism_Towards_a_Single_Comprehensive_Neural_WACV_2024_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;EGIC: Enhanced Low-Bit-Rate Generative Image Compression Guided by Semantic Segmentation&lt;/strong&gt; Körber, Nikolai and Kromer, Eduard and Siebert, Andreas and Hauke, Sascha and Mueller-Gritschneder, Daniel &lt;a class="link" href="https://arxiv.org/pdf/2309.03244.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;A Training-Free Defense Framework for Robust Learned Image Compression&lt;/strong&gt; Song, Myungseo and Choi, Jinyoung and Han, Bohyung &lt;a class="link" href="https://arxiv.org/pdf/2401.11902.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;FFCA-Net: Stereo Image Compression via Fast Cascade Alignment of Side Information&lt;/strong&gt; Xia, Yichong and Huang, Yujun and Chen, Bin and Wang, Haoqian and Wang, Yaowei &lt;a class="link" href="https://arxiv.org/pdf/2312.16963.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Another Way to the Top: Exploit Contextual Clustering in Learned Image Coding&lt;/strong&gt; Zhang, Yichi and Duan, Zhihao and Lu, Ming and Ding, Dandan and Zhu, Fengqing and Ma, Zhan &lt;a class="link" href="https://arxiv.org/pdf/2401.11615.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Attack and Defense Analysis of Learned Image Compression&lt;/strong&gt; Zhu, Tianyu and Sun, Heming and Xiong, Xiankui and Zhu, Xuanpeng and Gong, Yong and Fan, Yibo and others &lt;a class="link" href="https://arxiv.org/pdf/2401.10345.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Fast and High-Performance Learned Image Compression With Improved Checkerboard Context Model, Deformable Residual Module, and Knowledge Distillation&lt;/strong&gt; Fu, Haisheng and Liang, Feng and Liang, Jie and Wang, Yongqiang and Zhang, Guohe and Han, Jingning &lt;a class="link" href="https://arxiv.org/pdf/2309.02529.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Multi-Context Dual Hyper-Prior Neural Image Compression&lt;/strong&gt; Khoshkhahtinat, Atefeh and Zafari, Ali and Mehta, Piyush M and Akyash, Mohammad and Kashiani, Hossein and Nasrabadi, Nasser M &lt;a class="link" href="https://arxiv.org/pdf/2309.10799.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;On Uniform Scalar Quantization for Learned Image Compression&lt;/strong&gt; Zhang, Haotian and Li, Li and Liu, Dong &lt;a class="link" href="https://arxiv.org/pdf/2309.17051.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Frequency-Aware Transformer for Learned Image Compression&lt;/strong&gt; Li, Han and Li, Shaohui and Dai, Wenrui and Li, Chenglin and Zou, Junni and Xiong, Hongkai &lt;a class="link" href="https://arxiv.org/pdf/2310.16387.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Perceptual Image Compression with Cooperative Cross-Modal Side Information&lt;/strong&gt; Qin, Shiyu and Chen, Bin and Huang, Yujun and An, Baoyi and Dai, Tao and Xia, Shu-Tao &lt;a class="link" href="https://arxiv.org/pdf/2311.13847.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Progressive Learning with Visual Prompt Tuning for Variable-Rate Image Compression&lt;/strong&gt; Qin, Shiyu and Zhou, Yimin and Wang, Jinpeng and Chen, Bin and An, Baoyi and Dai, Tao and Xia, Shu-Tao &lt;a class="link" href="https://arxiv.org/pdf/2311.17350.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2023) &lt;strong&gt;Exploring the Rate-Distortion-Complexity Optimization in Neural Image Compression&lt;/strong&gt; Gao, Yixin and Feng, Runsen and Guo, Zongyu and Chen, Zhibo &lt;a class="link" href="https://arxiv.org/pdf/2305.07678.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(JVCIR 2023) &lt;strong&gt;Corner-to-Center long-range context model for efficient learned image compression&lt;/strong&gt; Sui, Yang and Ding, Ding and Pan, Xiang and Xu, Xiaozhong and Liu, Shan and Yuan, Bo and Chen, Zhenzhong &lt;a class="link" href="https://arxiv.org/pdf/2311.18103.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2022"&gt;✔2022
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(PCS 2022) &lt;strong&gt;Reducing The Amortization Gap of Entropy Bottleneck In End-to-End Image Compression&lt;/strong&gt; Balcilar, Muhammet and Damodaran, Bharath and Hellier, Pierre &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10018064&amp;amp;casa_token=T3OEyA4gC_UAAAAA:hV74ZEkQEKKE940LsRyDFRFIhIQcATSnQKZsc8mTr2UTT6jLIMAyBijHG1pTfFJG-8VxRRn7XuA" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR workshop 2022) &lt;strong&gt;Self-Supervised Variable Rate Image Compression using Visual Attention&lt;/strong&gt; Sinha, Abhishek Kumar and Moorthi, S Manthira and Dhar, Debajyoti &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Sinha_Self-Supervised_Variable_Rate_Image_Compression_Using_Visual_Attention_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR workshop 2022) &lt;strong&gt;RDONet: Rate-Distortion Optimized Learned Image Compression with Variable Depth&lt;/strong&gt; Brand, Fabian and Fischer, Kristian and Kopte, Alexander and Windsheimer, Marc and Kaup, André &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Brand_RDONet_Rate-Distortion_Optimized_Learned_Image_Compression_With_Variable_Depth_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Transformations in Learned Image Compression from Modulation Perspective&lt;/strong&gt; Bao, Youneng and Meng, Fangyang and Tan, Wen and Li, Chao and Tian, Yonghong and Liang, Yongsheng &lt;a class="link" href="https://arxiv.org/pdf/2203.02158.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Flexible Neural Image Compression via Code Editing&lt;/strong&gt; Gao, Chenjian and Xu, Tongda and He, Dailan and Qin, Hongwei and Wang, Yan &lt;a class="link" href="https://arxiv.org/pdf/2209.09244.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Attention-Based Generative Neural Image Compression on Solar Dynamics Observatory&lt;/strong&gt; Zafari, Ali and Khoshkhahtinat, Atefeh and Mehta, Piyush M and Nasrabadi, Nasser M and Thompson, Barbara J and da Silva, Daniel and Kirk, Michael SF &lt;a class="link" href="https://arxiv.org/pdf/2210.06478.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Progressive Deep Image Compression for Hybrid Contexts of Image Classification and Reconstruction&lt;/strong&gt; Lei, Zhongyue and Duan, Peng and Hong, Xuemin and Mota, João FC and Shi, Jianghong and Wang, Cheng-Xiang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9970515&amp;amp;casa_token=wr2tdLJpoSQAAAAA:yxNRSlqMzqo0libGY0kbkrP79VRTccC5BmKEzCC5ziY9shpizVudordovWx5BOFOgQSHC7dxrZs" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Universal Deep Image Compression via Content-Adaptive Optimization with Adapters&lt;/strong&gt; Tsubota, Koki and Akutsu, Hiroaki and Aizawa, Kiyoharu &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2023/papers/Tsubota_Universal_Deep_Image_Compression_via_Content-Adaptive_Optimization_With_Adapters_WACV_2023_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;User-Guided Variable Rate Learned Image Compression&lt;/strong&gt; Gupta, Rushil and BV, Suryateja and Kapoor, Nikhil and Jaiswal, Rajat and Nangi, Sharmila Reddy and Kulkarni, Kuldeep &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Gupta_User-Guided_Variable_Rate_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Adaptive Bitrate Quantization Scheme Without Codebook for Learned Image Compression&lt;/strong&gt; Löhdefink, Jonas and Sitzmann, Jonas and Bär, Andreas and Fingscheidt, Tim &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Lohdefink_Adaptive_Bitrate_Quantization_Scheme_Without_Codebook_for_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2022) &lt;strong&gt;OSLO: On-the-Sphere Learning for Omnidirectional images and its application to 360-degree image compression&lt;/strong&gt; Bidgoli, Navid Mahmoudian and Roberto, G de A and Maugey, Thomas and Roumy, Aline and Frossard, Pascal &lt;a class="link" href="https://arxiv.org/pdf/2107.09179.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2022) &lt;strong&gt;Two-Stage Octave Residual Network for End-to-End Image Compression&lt;/strong&gt; Chen, Fangdong and Xu, Yumeng and Wang, Li &lt;a class="link" href="https://scholar.google.com/scholar?hl=zh-CN&amp;amp;as_sdt=0%2C5&amp;amp;q=Two-Stage&amp;#43;Octave&amp;#43;Residual&amp;#43;Network&amp;#43;for&amp;#43;End-to-End&amp;#43;Image&amp;#43;Compression&amp;amp;btnG=#:~:text=%E5%B9%B4%E4%BB%BD-,%5BPDF%5D%20aaai.org,-Two%2DStage%20Octave" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Preprocessing Enhanced Image Compression for Machine Vision&lt;/strong&gt; Lu, Guo and Ge, Xingtong and Zhong, Tianxiong and Geng, Jing and Hu, Qiang &lt;a class="link" href="https://arxiv.org/pdf/2206.05650.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Learning-Driven Lossy Image Compression: A Comprehensive Survey&lt;/strong&gt; Jamil, Sonain and Piran, Md and others &lt;a class="link" href="https://arxiv.org/pdf/2201.09240.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Estimating the Resize Parameter in End-to-end Learned Image Compression&lt;/strong&gt; Chen, Li-Heng and Bampis, Christos G and Li, Zhi and Krasula, Lukáš and Bovik, Alan C &lt;a class="link" href="https://arxiv.org/pdf/2204.12022.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Image Compression with Product Quantized Masked Image Modeling&lt;/strong&gt; El-Nouby, Alaaeldin and Muckley, Matthew J and Ullrich, Karen and Laptev, Ivan and Verbeek, Jakob and Jégou, Hervé &lt;a class="link" href="https://arxiv.org/pdf/2212.07372.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ITJ 2022) &lt;strong&gt;Human–Machine Interaction-Oriented Image Coding for Resource-Constrained Visual Monitoring in IoT&lt;/strong&gt;
Wang, Zixi and Li, Fan and Xu, Jing and Cosman, Pamela C &lt;a class="link" href="" &gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TGRS 2022) &lt;strong&gt;Towards simultaneous image compression and indexing for scalable content-based retrieval in remote sensing&lt;/strong&gt; Sumbul, Gencer and Xiang, Jun and Demir, Begüm &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9878355" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(SPI 2022) &lt;strong&gt;Rate-constrained learning-based image compression&lt;/strong&gt; &lt;a class="link" href="" &gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2022) &lt;strong&gt;Exploiting Intra-Slice and Inter-Slice Redundancy for Learning-Based Lossless Volumetric Image Compression&lt;/strong&gt; Chen, Zhenghao and Gu, Shuhang and Lu, Guo and Xu, Dong &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9694511&amp;amp;casa_token=_INFRj8nkRkAAAAA:_4VWc5Q56n7hHUi5xnIS3Yyno0YRwyVWQdEnU2XqmAV6Sv_XnG7SgBnO0DfYUnoLuNP-3iKOivk" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt; lossless&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Entroformer: A transformer-based entropy model for learned image compression&lt;/strong&gt; Qian, Yichen and Lin, Ming and Sun, Xiuyu and Tan, Zhiyu and Jin, Rong &lt;a class="link" href="https://arxiv.org/pdf/2202.05492.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(Arxiv 2022) &lt;strong&gt;Multi-Sample Training for Neural Image Compression&lt;/strong&gt; Xu, Tongda and Wang, Yan and He, Dailan and Gao, Chenjian and Gao, Han and Liu, Kunzan and Qin, Hongwei &lt;a class="link" href="https://arxiv.org/pdf/2209.13834.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;ELIC: Efficient Learned Image Compression with Unevenly Grouped Space-Channel Contextual Adaptive Coding&lt;/strong&gt; He, Dailan and Yang, Ziming and Peng, Weikun and Ma, Rui and Qin, Hongwei and Wang, Yan &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/He_ELIC_Efficient_Learned_Image_Compression_With_Unevenly_Grouped_Space-Channel_Contextual_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Contextformer: A Transformer with Spatio-Channel Attention for Context Modeling in Learned Image Compression&lt;/strong&gt; Koyuncu, A Burakhan and Gao, Han and Boev, Atanas and Gaikov, Georgii and Alshina, Elena and Steinbach, Eckehard &lt;a class="link" href="https://arxiv.org/pdf/2203.02452.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Content-Oriented Learned Image Compression&lt;/strong&gt; Li, Meng and Gao, Shangyin and Feng, Yihui and Shi, Yibo and Wang, Jing &lt;a class="link" href="https://link.springer.com/content/pdf/10.1007/978-3-031-19800-7_37.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Implicit Neural Representations for Image Compression&lt;/strong&gt; Strümpler, Yannick and Postels, Janis and Yang, Ren and Gool, Luc Van and Tombari, Federico &lt;a class="link" href="https://arxiv.org/pdf/2112.04267.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Content Adaptive Latents and Decoder for Neural Image Compression&lt;/strong&gt; Pan, Guanbo and Lu, Guo and Hu, Zhihao and Xu, Dong &lt;a class="link" href="https://arxiv.org/pdf/2212.10132.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ECCV 2022) &lt;strong&gt;Optimizing Image Compression via Joint Learning with Denoising&lt;/strong&gt; Cheng, Ka Leong and Xie, Yueqi and Chen, Qifeng &lt;a class="link" href="https://link.springer.com/content/pdf/10.1007/978-3-031-19800-7_4.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt; denoising&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(2022) &lt;strong&gt;2C-Net: Integrate Image Compression and Classification via Deep Neural Network&lt;/strong&gt; Liu, Linfeng and Chen, Tong and Liu, Haojie and Pu, Shiliang and Wang, Li and Shen, Qiu &lt;a class="link" href="https://assets.researchsquare.com/files/rs-2049607/v1_covered.pdf?c=1663278884" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2022) &lt;strong&gt;High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation&lt;/strong&gt; Cai, Shilv and Zhang, Zhijun and Chen, Liqun and Yan, Luxin and Zhong, Sheng and Zou, Xu [&lt;a class="link" href="https://arxiv.org/pdf/2209.05054.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image Compression&lt;/strong&gt; Bai, Yuanchao and Liu, Xianming and Wang, Kai and Ji, Xiangyang and Wu, Xiaolin and Gao, Wen [&lt;a class="link" href="https://arxiv.org/pdf/2209.04847.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2022) &lt;strong&gt;End-to-End Optimized Image Compression With Deep Gaussian Process Regression&lt;/strong&gt; Cao, Maida and Dai, Wenrui and Li, Shaohui and Li, Chenglin and Zou, Junni and Chen, Ying and Xiong, Hongkai [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9903432" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2022) &lt;strong&gt;End-to-end optimized 360° image compression&lt;/strong&gt; Li, Mu and Li, Jinxing and Gu, Shuhang and Wu, Feng and Zhang, David [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9904466" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Lossy Compression with Gaussian Diffusion&lt;/strong&gt; Theis, Lucas and Salimans, Tim and Hoffman, Matthew D and Mentzer, Fabian [&lt;a class="link" href="https://arxiv.org/pdf/2206.08889.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Joint Image Compression and Denoising via Latent-Space Scalability&lt;/strong&gt; Alvar, Saeed Ranjbar and Ulhaq, Mateen and Choi, Hyomin and Bajić, Ivan V [&lt;a class="link" href="https://arxiv.org/pdf/2205.01874.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arxiv 2022) &lt;strong&gt;Post-Training Quantization for Cross-Platform Learned Image Compression&lt;/strong&gt; He, Dailan and Yang, Ziming and Chen, Yuan and Zhang, Qi and Qin, Hongwei and Wang, Yan [&lt;a class="link" href="https://arxiv.org/pdf/2202.07513.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2022) &lt;strong&gt;Satellite Image Compression and Denoising With Neural Networks&lt;/strong&gt; Yin, Shanzhi and Li, Chao and Bao, Youneng and Liang, Yongsheng and Meng, Fanyang and Liu, Wei [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9747854" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICASSP 2022) &lt;strong&gt;AdderIC: Towards Low Computation Cost Image Compression&lt;/strong&gt; Li, Bowen and Xin, Yao and Li, Chao and Bao, Youneng and Meng, Fanyang and Liang, Yongsheng [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9747652" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(IEEE Geoscience and Remote Sensing Letters 2022) &lt;strong&gt;Universal Efficient Variable-Rate Neural Image Compression&lt;/strong&gt; de Oliveira, Vinicius Alves and Chabert, Marie and Oberlin, Thomas and Poulliat, Charly and Bruno, Mickael and Latry, Christophe and Carlavan, Mikael and Henrot, Simon and Falzon, Frederic and Camarero, Roberto [&lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9690871" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;The Devil Is in the Details: Window-Based Attention for Image Compression&lt;/strong&gt; Zou, Renjie and Song, Chunfeng and Zhang, Zhaoxiang &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zou_The_Devil_Is_in_the_Details_Window-Based_Attention_for_Image_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Joint Global and Local Hierarchical Priors for Learned Image Compression&lt;/strong&gt; Kim, Jun-Hyuk and Heo, Byeongho and Lee, Jong-Seok &lt;a class="link" href="https://arxiv.org/pdf/2112.04487.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;RIDDLE: Lidar Data Compression with Range Image Deep Delta Encoding&lt;/strong&gt; Zhou, Xuanyu and Qi, Charles R and Zhou, Yin and Anguelov, Dragomir [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zhou_RIDDLE_Lidar_Data_Compression_With_Range_Image_Deep_Delta_Encoding_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Neural Data-Dependent Transform for Learned Image Compression&lt;/strong&gt; Wang, Dezhao and Yang, Wenhan and Hu, Yueyu and Liu, Jiaying [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Neural_Data-Dependent_Transform_for_Learned_Image_Compression_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Self-Supervised Variable Rate Image Compression using Visual Attention&lt;/strong&gt; Sinha, Abhishek Kumar and Moorthi, S Manthira and Dhar, Debajyoti [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Sinha_Self-Supervised_Variable_Rate_Image_Compression_Using_Visual_Attention_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;User-Guided Variable Rate Learned Image Compression&lt;/strong&gt; Gupta, Rushil and BV, Suryateja and Kapoor, Nikhil and Jaiswal, Rajat and Nangi, Sharmila Reddy and Kulkarni, Kuldeep [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Gupta_User-Guided_Variable_Rate_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;RDONet: Rate-Distortion Optimized Learned Image Compression With Variable Depth&lt;/strong&gt; Brand, Fabian and Fischer, Kristian and Kopte, Alexander and Windsheimer, Marc and Kaup, André. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/Brand_RDONet_Rate-Distortion_Optimized_Learned_Image_Compression_With_Variable_Depth_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;LC-FDNet: Learned Lossless Image Compression with Frequency Decomposition Network&lt;/strong&gt; Rhee, Hochang and Jang, Yeong Il and Kim, Seyun and Cho, Nam Ik. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Rhee_LC-FDNet_Learned_Lossless_Image_Compression_With_Frequency_Decomposition_Network_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;PO-ELIC: Perception-Oriented Efficient Learned Image Coding&lt;/strong&gt; He, Dailan and Yang, Ziming and Yu, Hongjiu and Xu, Tongda and Luo, Jixiang and Chen, Yuan and Gao, Chenjian and Shi, Xinjie and Qin, Hongwei and Wang, Yan. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/CLIC/papers/He_PO-ELIC_Perception-Oriented_Efficient_Learned_Image_Coding_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2022) &lt;strong&gt;Online Meta Adaptation for Variable-Rate Learned Image Compression&lt;/strong&gt; Jiang, Wei and Wang, Wei and Li, Songnan and Liu, Shan. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022W/NTIRE/papers/Jiang_Online_Meta_Adaptation_for_Variable-Rate_Learned_Image_Compression_CVPRW_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression&lt;/strong&gt; Zhu, Xiaosu and Song, Jingkuan and Gao, Lianli and Zheng, Feng and Shen, Heng Tao. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Zhu_Unified_Multivariate_Gaussian_Mixture_for_Efficient_Neural_Image_Compression_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Split Hierarchical Variational Compression&lt;/strong&gt; Ryder, Tom and Zhang, Chen and Kang, Ning and Zhang, Shifeng. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Ryder_Split_Hierarchical_Variational_Compression_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;SASIC: Stereo Image Compression With Latent Shifts and Stereo Attention&lt;/strong&gt; Wödlinger, Matthias and Kotera, Jan and Xu, Jan and Sablatnig, Robert. [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Wodlinger_SASIC_Stereo_Image_Compression_With_Latent_Shifts_and_Stereo_Attention_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2022) &lt;strong&gt;Deep Stereo Image Compression via Bi-directional Coding&lt;/strong&gt;, Lei, Jianjun and Liu, Xiangrui and Peng, Bo and Jin, Dengchao and Li, Wanqing and Gu, Jingxiao [&lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2022/papers/Lei_Deep_Stereo_Image_Compression_via_Bi-Directional_Coding_CVPR_2022_paper.pdf" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(AAAI 2022) &lt;strong&gt;OoDHDR-Codec: Out-of-Distribution Generalization for HDR Image Compression&lt;/strong&gt;, Cao, Linfeng and Jiang, Aofan and Li, Wei and Wu, Huaying and Ye, Nanyang &lt;a class="link" href="https://www.aaai.org/AAAI22Papers/AAAI-8610.CaoL.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (HDR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression&lt;/strong&gt;, Zhu, Xiaosu and Song, Jingkuan and Gao, Lianli and Zheng, Feng and Shen, Heng Tao &lt;a class="link" href="https://arxiv.org/pdf/2203.10897.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;a class="link" href="https://github.com/xiaosu-zhu/McQuic" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Estimating the Resize Parameter in End-to-end Learned Image Compression&lt;/strong&gt;, Chen, Li-Heng and Bampis, Christos G and Li, Zhi and Krasula, Lukáš and Bovik, Alan C &lt;a class="link" href="https://arxiv.org/pdf/2204.12022.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Sa)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;DeepFGS: Fine-Grained Scalable Coding for Learned Image Compression&lt;/strong&gt;, Ma, Yi and Zhai, Yongqi and Wang, Ronggang &lt;a class="link" href="https://arxiv.org/pdf/2201.01173.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;(Sa)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;End-to-End Learned Block-Based Image Compression with Block-Level Masked Convolutions and Asymptotic Closed Loop Training&lt;/strong&gt;, Kamisli, Fatih &lt;a class="link" href="https://arxiv.org/pdf/2203.11686.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (T+E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Transformations in Learned Image Compression from Modulation Perspective&lt;/strong&gt;, Bao, Youneng and Meng, Fangyang and Tan, Wen and Li, Chao and Tian, Yonghong and Liang, Yongsheng &lt;a class="link" href="https://arxiv.org/pdf/2203.02158.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Identity Preserving Loss for Learned Image Compression&lt;/strong&gt;, Xiao, Jiuhong and Aggarwal, Lavisha and Banerjee, Prithviraj and Aggarwal, Manoj and Medioni, Gerard &lt;a class="link" href="https://arxiv.org/pdf/2204.10869.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;High-Efficiency Lossy Image Coding Through Adaptive Neighborhood Information Aggregation&lt;/strong&gt;, Lu, Ming and Ma, Zhan &lt;a class="link" href="https://arxiv.org/pdf/2204.11448.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Learning Weighting Map for Bit-Depth Expansion within a Rational Range&lt;/strong&gt;, Liu, Yuqing and Jia, Qi and Zhang, Jian and Fan, Xin and Wang, Shanshe and Ma, Siwei and Gao, Wen &lt;a class="link" href="https://arxiv.org/pdf/2204.12039.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/yuqing-liu-dut/bit-depth-expansion" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2022) &lt;strong&gt;Joint Image Compression and Denoising via Latent-Space Scalability&lt;/strong&gt;, Ranjbar Alvar, Saeed and Ulhaq, Mateen and Choi, Hyomin and Bajić, Ivan V &lt;a class="link" href="https://arxiv.org/pdf/2205.01874.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2021"&gt;✔2021
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(TPAMI 2021) &lt;strong&gt;Learning end-to-end lossy image compression: A benchmark&lt;/strong&gt;, Hu, Yueyu and Yang, Wenhan and Ma, Zhan and Liu, Jiaying &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9376651" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/huzi96/Coarse2Fine-PyTorch" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;(Benchmark)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(IJCV 2021) &lt;strong&gt;Semantics-to-signal scalable image compression with learned revertible representations&lt;/strong&gt;, Liu, Kang and Liu, Dong and Li, Li and Yan, Ning and Li, Houqiang &lt;a class="link" href="https://link.springer.com/content/pdf/10.1007/s11263-021-01491-7.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Scalable)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TIP 2021) &lt;strong&gt;Semantic Perceptual Image Compression With a Laplacian Pyramid of Convolutional Networks&lt;/strong&gt;, Wang, Juan and Duan, Yiping and Tao, Xiaoming and Xu, Mai and Lu, Jianhua &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9381614" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICLR 2021) &lt;strong&gt;Hierarchical Image Compression Framework&lt;/strong&gt;, Ge, Yunying and Wang, Jing and Shi, Yibo and Gao, Shangyin &lt;a class="link" href="https://openreview.net/pdf?id=8rPXT-SVgjh" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICCV 2021) &lt;strong&gt;Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform&lt;/strong&gt;, Song, Myungseo and Choi, Jinyoung and Han, Bohyung &lt;a class="link" href="https://openaccess.thecvf.com/content/ICCV2021/papers/Song_Variable-Rate_Deep_Image_Compression_Through_Spatially-Adaptive_Feature_Transform_ICCV_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2021) &lt;strong&gt;Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation&lt;/strong&gt;, Cui, Ze and Wang, Jing and Gao, Shangyin and Guo, Tiansheng and Feng, Yihui and Bai, Bo &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021/papers/Cui_Asymmetric_Gained_Deep_Image_Compression_With_Continuous_Rate_Adaptation_CVPR_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/mmSir/GainedVAE" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;(VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2021) &lt;strong&gt;Checkerboard context model for efficient learned image compression&lt;/strong&gt;, He, Dailan and Zheng, Yaoyan and Sun, Baocheng and Wang, Yan and Qin, Hongwei &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021/papers/He_Checkerboard_Context_Model_for_Efficient_Learned_Image_Compression_CVPR_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/leelitian/Checkerboard-Context-Model-Pytorch" target="_blank" rel="noopener"
&gt;[code1]&lt;/a&gt; &lt;a class="link" href="https://github.com/JiangWeibeta/Checkerboard-Context-Model-for-Efficient-Learned-Image-Compression" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2021) &lt;strong&gt;Learning Scalable ℓ∞-Constrained Near-Lossless Image Compression via Joint Lossy Image and Residual Compression&lt;/strong&gt;, Bai, Yuanchao and Liu, Xianming and Zuo, Wangmeng and Wang, Yaowei and Ji, Xiangyang &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021/papers/Bai_Learning_Scalable_lY-Constrained_Near-Lossless_Image_Compression_via_Joint_Lossy_Image_CVPR_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (lossless)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;End-to-end optimized image compression with competition of prior distributions&lt;/strong&gt;, Brummer, Benoit and De Vleeschouwer, Christophe &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Brummer_End-to-End_Optimized_Image_Compression_With_Competition_of_Prior_Distributions_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/trougnouf/Manypriors" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;(E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Subjective Quality Optimized Efficient Image Compression&lt;/strong&gt;, Wang, Xining and Chen, Tong and Ma, Zhan &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Wang_Subjective_Quality_Optimized_Efficient_Image_Compression_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Variable Rate ROI Image Compression Optimized for Visual Quality&lt;/strong&gt;, Ma, Yi and Zhai, Yongqi and Yang, Chunhui and Yang, Jiayu and Wang, Ruofan and Zhou, Jing and Li, Kai and Chen, Ying and Wang, Ronggang &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Ma_Variable_Rate_ROI_Image_Compression_Optimized_for_Visual_Quality_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;(VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Image Compression with Recurrent Neural Network and Generalized Divisive Normalization&lt;/strong&gt;, Islam, Khawar and Dang, L Minh and Lee, Sujin and Moon, Hyeonjoon &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Islam_Image_Compression_With_Recurrent_Neural_Network_and_Generalized_Divisive_Normalization_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;a class="link" href="https://github.com/khawar-islam/cvpr" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;End-to-End Learned Image Compression with Augmented Normalizing Flows&lt;/strong&gt;, Ho, Yung-Han and Chan, Chih-Chun and Peng, Wen-Hsiao and Hang, Hsueh-Ming &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Islam_Image_Compression_With_Recurrent_Neural_Network_and_Generalized_Divisive_Normalization_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;a class="link" href="https://github.com/dororojames/anfic" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Learned Image Compression with Super-Resolution Residual Modules and DISTS Optimization&lt;/strong&gt;, Suzuki, Akifumi and Akutsu, Hiroaki and Naruko, Takahiro and Tsubota, Koki and Aizawa, Kiyoharu &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Suzuki_Learned_Image_Compression_With_Super-Resolution_Residual_Modules_and_DISTS_Optimization_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPRW 2021) &lt;strong&gt;Perceptual Friendly Variable Rate Image Compression&lt;/strong&gt;, Gao, Yixin and Wu, Yaojun and Guo, Zongyu and Zhang, Zhizheng and Chen, Zhibo &lt;a class="link" href="https://openaccess.thecvf.com/content/CVPR2021W/CLIC/papers/Gao_Perceptual_Friendly_Variable_Rate_Image_Compression_CVPRW_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR+Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(WACV 2021) &lt;strong&gt;Saliency Driven Perceptual Image Compression&lt;/strong&gt;, Patel, Yash and Appalaraju, Srikar and Manmatha, R &lt;a class="link" href="https://openaccess.thecvf.com/content/WACV2021/papers/Patel_Saliency_Driven_Perceptual_Image_Compression_WACV_2021_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2021) &lt;strong&gt;Causal contextual prediction for learned image compression&lt;/strong&gt;, Guo, Zongyu and Zhang, Zhizheng and Feng, Runsen and Chen, Zhibo &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9455349" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TCSVT 2021) &lt;strong&gt;Learned Block-based Hybrid Image Compression&lt;/strong&gt;, Wu, Yaojun and Li, Xin and Zhang, Zhizheng and Jin, Xin and Chen, Zhibo &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9455349" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (T+E)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2021) &lt;strong&gt;Enhanced Invertible Encoding for Learned Image Compression&lt;/strong&gt;, Yueqi Xie, Ka Leong Cheng, Qifeng Chen &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3474085.3475213" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; &lt;a class="link" href="https://github.com/xyq7/InvCompress" target="_blank" rel="noopener"
&gt;[code]&lt;/a&gt; (Invertible)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2021) &lt;strong&gt;Semantic Scalable Image Compression with Cross-Layer Priors&lt;/strong&gt;, Tu, Hanyue and Li, Li and Zhou, Wengang and Li, Houqiang &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3474085.3475533" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Scalable)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ACMMM 2021) &lt;strong&gt;Interpolation Variable Rate Image Compression&lt;/strong&gt;, Sun, Zhenhong and Tan, Zhiyu and Sun, Xiuyu and Zhang, Fangyi and Qian, Yichen and Li, Dongyang and Li, Hao &lt;a class="link" href="https://dl.acm.org/doi/pdf/10.1145/3474085.3475698" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(TMM 2021) &lt;strong&gt;Learned Multi-Resolution Variable-Rate Image Compression With Octave-Based Residual Blocks&lt;/strong&gt;, Akbari, Mohammad and Liang, Jie and Han, Jingning and Tu, Chengjie &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9385968&amp;amp;tag=1" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(DCC 2021) &lt;strong&gt;Accelerate Neural Image Compression with Channel-adaptive Arithmetic Coding&lt;/strong&gt;, Guo, Zongyu and Fu, Jun and Feng, Runsen and Chen, Zhibo &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=9401277" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(ICIP 2021) &lt;strong&gt;Graph-Convolution Network for Image Compression&lt;/strong&gt;, Yang, Chunhui and Ma, Yi and Yang, Jiayu and Liu, Shiyi and Wang, Ronggang &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9506704" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(PMLR 2021) &lt;strong&gt;Soft then hard: Rethinking the quantization in neural image compression&lt;/strong&gt;, Guo, Zongyu and Zhang, Zhizheng and Feng, Runsen and Chen, Zhibo &lt;a class="link" href="http://proceedings.mlr.press/v139/guo21c/guo21c.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Learned Image Compression for Machine Perception&lt;/strong&gt;, Codevilla, Felipe and Simard, Jean Gabriel and Goroshin, Ross and Pal, Chris &lt;a class="link" href="https://arxiv.org/pdf/2111.02249.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (Perceptual)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Substitutional Neural Image Compression&lt;/strong&gt;, Wang, Xiao and Jiang, Wei and Wang, Wei and Liu, Shan and Kulis, Brian and Chin, Peter &lt;a class="link" href="https://arxiv.org/pdf/2105.07512.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;DPICT: Deep Progressive Image Compression Using Trit-Planes&lt;/strong&gt;, Lee, Jae-Han and Jeon, Seungmin and Choi, Kwang Pyo and Park, Youngo and Kim, Chang-Su &lt;a class="link" href="https://arxiv.org/pdf/2112.06334.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Implicit Neural Representations for Image Compression&lt;/strong&gt;, Strümpler, Yannick and Postels, Janis and Yang, Ren and Van Gool, Luc and Tombari, Federico &lt;a class="link" href="https://arxiv.org/pdf/2112.04267.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;A Novel Framework for Image-to-image Translation and Image Compression&lt;/strong&gt;, Yang, Fei and Wang, Yaxing and Herranz, Luis and Cheng, Yongmei and Mozerov, Mikhail &lt;a class="link" href="https://arxiv.org/pdf/2111.13105.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (I2I)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Semantic-assisted image compression&lt;/strong&gt;, Sun, Qizheng and Guo, Caili and Yang, Yang and Chen, Jiujiu and Xue, Xijun &lt;a class="link" href="https://arxiv.org/pdf/2201.12599.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;End-to-End Learned Image Compression with Quantized Weights and Activations&lt;/strong&gt;, Sun, Heming and Yu, Lu and Katto, Jiro &lt;a class="link" href="https://arxiv.org/pdf/2111.09348.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;End-to-End Image Compression with Probabilistic Decoding&lt;/strong&gt;, Ma, Haichuan and Liu, Dong and Dong, Cunhui and Li, Li and Wu, Feng &lt;a class="link" href="https://arxiv.org/pdf/2109.14837.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Towards End-to-End Image Compression and Analysis with Transformers&lt;/strong&gt;, Bai, Yuanchao and Yang, Xu and Liu, Xianming and Jiang, Junjun and Wang, Yaowei and Ji, Xiangyang and Gao, Wen &lt;a class="link" href="https://arxiv.org/pdf/2112.09300.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;A Cross Channel Context Model for Latents in Deep Image Compression&lt;/strong&gt;, Ma, Changyue and Wang, Zhao and Liao, Ruling and Ye, Yan &lt;a class="link" href="https://arxiv.org/pdf/2103.02884.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Online Meta Adaptation for Variable-Rate Learned Image Compression&lt;/strong&gt;, Wei Jiang, Wei Wang, Songnan Li, Shan Liu &lt;a class="link" href="https://arxiv.org/abs/2111.08256" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt; (VR)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(arXiv preprint 2021) &lt;strong&gt;Transformer-based Image Compression&lt;/strong&gt;, Ming Lu, Peiyao Guo, Huiqing Shi, Chuntong Cao, Zhan Ma [&lt;a class="link" href="https://arxiv.org/abs/2111.06707" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2020"&gt;✔2020
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;(arXiv preprint 2020) &lt;strong&gt;Lossless Image Compression through Super-Resolution&lt;/strong&gt;, Sheng Cao, Chao-Yuan Wu, Philipp Krähenbühl [&lt;a class="link" href="https://arxiv.org/abs/2004.02872" target="_blank" rel="noopener"
&gt;paper&lt;/a&gt;]&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2019"&gt;✔2019
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;(PCS 2019) &lt;strong&gt;A novel deep progressive image compression framework&lt;/strong&gt;, Cai, Chunlei and Chen, Li and Zhang, Xiaoyun and Lu, Guo and Gao, Zhiyong. &lt;a class="link" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8954500" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;(CVPR 2019) &lt;strong&gt;Learning image and video compression through spatial-temporal energy compaction&lt;/strong&gt;, Cheng, Zhengxue and Sun, Heming and Takeuchi, Masaru and Katto, Jiro. &lt;a class="link" href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Cheng_Learning_Image_and_Video_Compression_Through_Spatial-Temporal_Energy_Compaction_CVPR_2019_paper.pdf" target="_blank" rel="noopener"
&gt;[paper]&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="2018"&gt;✔2018
&lt;/h2&gt;&lt;hr&gt;</description></item><item><title>Awesome LLM Apps</title><link>https://hanguangwu.github.io/blog/en/p/awesome-llm-apps/</link><pubDate>Tue, 10 Feb 2026 18:34:25 -0800</pubDate><guid>https://hanguangwu.github.io/blog/en/p/awesome-llm-apps/</guid><description>&lt;h1 id="-awesome-llm-apps"&gt;🌟 Awesome LLM Apps
&lt;/h1&gt;&lt;h2 id="introduction"&gt;Introduction
&lt;/h2&gt;&lt;p&gt;A curated collection of &lt;strong&gt;Awesome LLM apps built with RAG, AI Agents, Multi-agent Teams, MCP, Voice Agents, and more.&lt;/strong&gt; This repository features LLM apps that use models from &lt;img src="https://cdn.simpleicons.org/openai" alt="openai logo" width="25" height="15"&gt;&lt;strong&gt;OpenAI&lt;/strong&gt; , &lt;img src="https://cdn.simpleicons.org/anthropic" alt="anthropic logo" width="25" height="15"&gt;&lt;strong&gt;Anthropic&lt;/strong&gt;, &lt;img src="https://cdn.simpleicons.org/googlegemini" alt="google logo" width="25" height="18"&gt;&lt;strong&gt;Google&lt;/strong&gt;, &lt;img src="https://cdn.simpleicons.org/x" alt="X logo" width="25" height="15"&gt;&lt;strong&gt;xAI&lt;/strong&gt; and open-source models like &lt;img src="https://cdn.simpleicons.org/alibabacloud" alt="alibaba logo" width="25" height="15"&gt;&lt;strong&gt;Qwen&lt;/strong&gt; or &lt;img src="https://cdn.simpleicons.org/meta" alt="meta logo" width="25" height="15"&gt;&lt;strong&gt;Llama&lt;/strong&gt; that you can run locally on your computer.&lt;/p&gt;
&lt;p&gt;&lt;a class="link" href="https://github.com/Shubhamsaboo/awesome-llm-apps" target="_blank" rel="noopener"
&gt;GitHub-Awesome LLM Apps&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class="link" href="https://www.theunwindai.com/" target="_blank" rel="noopener"
&gt;Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini and opensource models.&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="-why-awesome-llm-apps"&gt;🤔 Why Awesome LLM Apps?
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;💡 Discover practical and creative ways LLMs can be applied across different domains, from code repositories to email inboxes and more.&lt;/li&gt;
&lt;li&gt;🔥 Explore apps that combine LLMs from OpenAI, Anthropic, Gemini, and open-source alternatives with AI Agents, Agent Teams, MCP &amp;amp; RAG.&lt;/li&gt;
&lt;li&gt;🎓 Learn from well-documented projects and contribute to the growing open-source ecosystem of LLM-powered applications.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="-featured-ai-projects"&gt;📂 Featured AI Projects
&lt;/h2&gt;&lt;h3 id="ai-agents"&gt;AI Agents
&lt;/h3&gt;&lt;h3 id="-starter-ai-agents"&gt;🌱 Starter AI Agents
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_blog_to_podcast_agent/" &gt;🎙️ AI Blog to Podcast Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_breakup_recovery_agent/" &gt;❤️‍🩹 AI Breakup Recovery Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_data_analysis_agent/" &gt;📊 AI Data Analysis Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_medical_imaging_agent/" &gt;🩻 AI Medical Imaging Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_meme_generator_agent_browseruse/" &gt;😂 AI Meme Generator Agent (Browser)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_music_generator_agent/" &gt;🎵 AI Music Generator Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/ai_travel_agent/" &gt;🛫 AI Travel Agent (Local &amp;amp; Cloud)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/gemini_multimodal_agent_demo/" &gt;✨ Gemini Multimodal Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/mixture_of_agents/" &gt;🔄 Mixture of Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/xai_finance_agent/" &gt;📊 xAI Finance Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/opeani_research_agent/" &gt;🔍 OpenAI Research Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="starter_ai_agents/web_scrapping_ai_agent/" &gt;🕸️ Web Scraping AI Agent (Local &amp;amp; Cloud SDK)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-advanced-ai-agents"&gt;🚀 Advanced AI Agents
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/ai_home_renovation_agent" &gt;🏚️ 🍌 AI Home Renovation Agent with Nano Banana Pro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_deep_research_agent/" &gt;🔍 AI Deep Research Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_vc_due_diligence_agent_team" &gt;📊 AI VC Due Diligence Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/research_agent_gemini_interaction_api" &gt;🔬 AI Research Planner &amp;amp; Executor (Google Interactions API)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_consultant_agent" &gt;🤝 AI Consultant Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_system_architect_r1/" &gt;🏗️ AI System Architect Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/ai_financial_coach_agent/" &gt;💰 AI Financial Coach Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_movie_production_agent/" &gt;🎬 AI Movie Production Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_investment_agent/" &gt;📈 AI Investment Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_health_fitness_agent/" &gt;🏋️‍♂️ AI Health &amp;amp; Fitness Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/product_launch_intelligence_agent" &gt;🚀 AI Product Launch Intelligence Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_journalist_agent/" &gt;🗞️ AI Journalist Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/ai_mental_wellbeing_agent/" &gt;🧠 AI Mental Wellbeing Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/single_agent_apps/ai_meeting_agent/" &gt;📑 AI Meeting Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/ai_Self-Evolving_agent/" &gt;🧬 AI Self-Evolving Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_sales_intelligence_agent_team" &gt;👨🏻‍💼 AI Sales Intelligence Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/ai_news_and_podcast_agents/" &gt;🎧 AI Social Media News and Podcast Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/accomplish-ai/openwork" target="_blank" rel="noopener"
&gt;🌐 Openwork - Open Browser Automation Agent&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-autonomous-game-playing-agents"&gt;🎮 Autonomous Game Playing Agents
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/autonomous_game_playing_agent_apps/ai_3dpygame_r1/" &gt;🎮 AI 3D Pygame Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/autonomous_game_playing_agent_apps/ai_chess_agent/" &gt;♜ AI Chess Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/autonomous_game_playing_agent_apps/ai_tic_tac_toe_agent/" &gt;🎲 AI Tic-Tac-Toe Agent&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-multi-agent-teams"&gt;🤝 Multi-agent Teams
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_competitor_intelligence_agent_team/" &gt;🧲 AI Competitor Intelligence Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_finance_agent_team/" &gt;💲 AI Finance Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_game_design_agent_team/" &gt;🎨 AI Game Design Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_legal_agent_team/" &gt;👨‍⚖️ AI Legal Agent Team (Cloud &amp;amp; Local)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_recruitment_agent_team/" &gt;💼 AI Recruitment Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_real_estate_agent_team" &gt;🏠 AI Real Estate Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_services_agency/" &gt;👨‍💼 AI Services Agency (CrewAI)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/ai_teaching_agent_team/" &gt;👨‍🏫 AI Teaching Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_coding_agent_team/" &gt;💻 Multimodal Coding Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_design_agent_team/" &gt;✨ Multimodal Design Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_uiux_feedback_agent_team/" &gt;🎨 🍌 Multimodal UI/UX Feedback Agent Team with Nano Banana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://hanguangwu.github.io/blog/advanced_ai_agents/multi_agent_apps/agent_teams/ai_travel_planner_agent_team/" &gt;🌏 AI Travel Planner Agent Team&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-voice-ai-agents"&gt;🗣️ Voice AI Agents
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="voice_ai_agents/ai_audio_tour_agent/" &gt;🗣️ AI Audio Tour Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="voice_ai_agents/customer_support_voice_agent/" &gt;📞 Customer Support Voice Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="voice_ai_agents/voice_rag_openaisdk/" &gt;🔊 Voice RAG Agent (OpenAI SDK)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/akshayaggarwal99/jarvis-ai-assistant" target="_blank" rel="noopener"
&gt;🎙️ Open-Source Voice Dictation Agent (like Wispr Flow)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="mcp-ai-agents"&gt;&lt;img src="https://cdn.simpleicons.org/modelcontextprotocol" alt="mcp logo" width="25" height="20"&gt; MCP AI Agents
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="mcp_ai_agents/browser_mcp_agent/" &gt;♾️ Browser MCP Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="mcp_ai_agents/github_mcp_agent/" &gt;🐙 GitHub MCP Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="mcp_ai_agents/notion_mcp_agent" &gt;📑 Notion MCP Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="mcp_ai_agents/ai_travel_planner_mcp_agent_team" &gt;🌍 AI Travel Planner MCP Agent&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-rag-retrieval-augmented-generation"&gt;📀 RAG (Retrieval Augmented Generation)
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/agentic_rag_embedding_gemma" &gt;🔥 Agentic RAG with Embedding Gemma&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/agentic_rag_with_reasoning/" &gt;🧐 Agentic RAG with Reasoning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/ai_blog_search/" &gt;📰 AI Blog Search (RAG)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/autonomous_rag/" &gt;🔍 Autonomous RAG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/contextualai_rag_agent/" &gt;🔄 Contextual AI RAG Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/corrective_rag/" &gt;🔄 Corrective RAG (CRAG)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/deepseek_local_rag_agent/" &gt;🐋 Deepseek Local RAG Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/gemini_agentic_rag/" &gt;🤔 Gemini Agentic RAG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/hybrid_search_rag/" &gt;👀 Hybrid Search RAG (Cloud)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/llama3.1_local_rag/" &gt;🔄 Llama 3.1 Local RAG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/local_hybrid_search_rag/" &gt;🖥️ Local Hybrid Search RAG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/local_rag_agent/" &gt;🦙 Local RAG Agent&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/rag-as-a-service/" &gt;🧩 RAG-as-a-Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/rag_agent_cohere/" &gt;✨ RAG Agent with Cohere&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/rag_chain/" &gt;⛓️ Basic RAG Chain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/rag_database_routing/" &gt;📠 RAG with Database Routing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="rag_tutorials/vision_rag/" &gt;🖼️ Vision RAG&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-llm-apps-with-memory-tutorials"&gt;💾 LLM Apps with Memory Tutorials
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_apps_with_memory_tutorials/ai_arxiv_agent_memory/" &gt;💾 AI ArXiv Agent with Memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_apps_with_memory_tutorials/ai_travel_agent_memory/" &gt;🛩️ AI Travel Agent with Memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_apps_with_memory_tutorials/llama3_stateful_chat/" &gt;💬 Llama3 Stateful Chat&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_apps_with_memory_tutorials/llm_app_personalized_memory/" &gt;📝 LLM App with Personalized Memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_apps_with_memory_tutorials/local_chatgpt_with_memory/" &gt;🗄️ Local ChatGPT Clone with Memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_apps_with_memory_tutorials/multi_llm_memory/" &gt;🧠 Multi-LLM Application with Shared Memory&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-chat-with-x-tutorials"&gt;💬 Chat with X Tutorials
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/chat_with_X_tutorials/chat_with_github/" &gt;💬 Chat with GitHub (GPT &amp;amp; Llama3)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/chat_with_X_tutorials/chat_with_gmail/" &gt;📨 Chat with Gmail&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/chat_with_X_tutorials/chat_with_pdf/" &gt;📄 Chat with PDF (GPT &amp;amp; Llama3)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/chat_with_X_tutorials/chat_with_research_papers/" &gt;📚 Chat with Research Papers (ArXiv) (GPT &amp;amp; Llama3)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/chat_with_X_tutorials/chat_with_substack/" &gt;📝 Chat with Substack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/chat_with_X_tutorials/chat_with_youtube_videos/" &gt;📽️ Chat with YouTube Videos&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-llm-optimization-tools"&gt;🎯 LLM Optimization Tools
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_optimization_tools/toonify_token_optimization/" &gt;🎯 Toonify Token Optimization&lt;/a&gt; - Reduce LLM API costs by 30-60% using TOON format&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="advanced_llm_apps/llm_optimization_tools/headroom_context_optimization/" &gt;🧠 Headroom Context Optimization&lt;/a&gt; - Reduce LLM API costs by 50-90% through intelligent context compression for AI agents (includes persistent memory &amp;amp; MCP support)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-llm-fine-tuning-tutorials"&gt;🔧 LLM Fine-tuning Tutorials
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;img src="https://cdn.simpleicons.org/google" alt="google logo" width="20" height="15"&gt; &lt;a class="link" href="advanced_llm_apps/llm_finetuning_tutorials/gemma3_finetuning/" &gt;Gemma 3 Fine-tuning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;img src="https://cdn.simpleicons.org/meta" alt="meta logo" width="25" height="15"&gt; &lt;a class="link" href="advanced_llm_apps/llm_finetuning_tutorials/llama3.2_finetuning/" &gt;Llama 3.2 Fine-tuning&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-ai-agent-framework-crash-course"&gt;🧑‍🏫 AI Agent Framework Crash Course
&lt;/h3&gt;&lt;p&gt;&lt;img src="https://cdn.simpleicons.org/google" alt="google logo" width="25" height="15"&gt; &lt;a class="link" href="ai_agent_framework_crash_course/google_adk_crash_course/" &gt;Google ADK Crash Course&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Starter agent; model‑agnostic (OpenAI, Claude)&lt;/li&gt;
&lt;li&gt;Structured outputs (Pydantic)&lt;/li&gt;
&lt;li&gt;Tools: built‑in, function, third‑party, MCP tools&lt;/li&gt;
&lt;li&gt;Memory; callbacks; Plugins&lt;/li&gt;
&lt;li&gt;Simple multi‑agent; Multi‑agent patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src="https://cdn.simpleicons.org/openai" alt="openai logo" width="25" height="15"&gt; &lt;a class="link" href="ai_agent_framework_crash_course/openai_sdk_crash_course/" &gt;OpenAI Agents SDK Crash Course&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Starter agent; function calling; structured outputs&lt;/li&gt;
&lt;li&gt;Tools: built‑in, function, third‑party integrations&lt;/li&gt;
&lt;li&gt;Memory; callbacks; evaluation&lt;/li&gt;
&lt;li&gt;Multi‑agent patterns; agent handoffs&lt;/li&gt;
&lt;li&gt;Swarm orchestration; routing logic&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="-getting-started"&gt;🚀 Getting Started
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Navigate to the desired project directory&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; awesome-llm-apps/starter_ai_agents/ai_travel_agent
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install the required dependencies&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;pip install -r requirements.txt
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Follow the project-specific instructions&lt;/strong&gt; in each project&amp;rsquo;s &lt;code&gt;README.md&lt;/code&gt; file to set up and run the app.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;</description></item><item><title>New Era ERP System-ERPNext</title><link>https://hanguangwu.github.io/blog/en/p/new-era-erp-system-erpnext/</link><pubDate>Mon, 02 Feb 2026 18:34:25 -0800</pubDate><guid>https://hanguangwu.github.io/blog/en/p/new-era-erp-system-erpnext/</guid><description>&lt;h1 id="new-era-erp-system-erpnext"&gt;New Era ERP System-ERPNext
&lt;/h1&gt;&lt;h2 id="introduction"&gt;Introduction
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/frappe/erpnext" target="_blank" rel="noopener"
&gt;100% Open-Source ERP system to help you run your business.&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="motivation"&gt;Motivation
&lt;/h3&gt;&lt;p&gt;Running a business is a complex task: handling invoices, tracking stock, managing personnel, and countless ad-hoc activities. In a market where software is sold separately for each of these tasks, ERPNext does all of the above and more, for free.&lt;/p&gt;
&lt;h3 id="key-features"&gt;Key Features
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Accounting&lt;/strong&gt;: All the tools you need to manage cash flow in one place, right from recording transactions to summarizing and analyzing financial reports.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Order Management&lt;/strong&gt;: Track inventory levels, replenish stock, and manage sales orders, customers, suppliers, shipments, deliverables, and order fulfillment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Manufacturing&lt;/strong&gt;: Simplifies the production cycle, helps track material consumption, exhibits capacity planning, handles subcontracting, and more!&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Asset Management&lt;/strong&gt;: From purchase to disposal, from IT infrastructure to equipment. Cover every branch of your organization, all in one centralized system.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Projects&lt;/strong&gt;: Deliver both internal and external projects on time and within budget, while keeping them profitable. Track tasks, timesheets, and issues by project.&lt;/li&gt;
&lt;/ul&gt;
&lt;details open&gt;
&lt;summary&gt;More&lt;/summary&gt;
&lt;img src="https://erpnext.com/files/v16_bom.png"/&gt;
&lt;img src="https://erpnext.com/files/v16_stock_summary.png"/&gt;
&lt;img src="https://erpnext.com/files/v16_job_card.png"/&gt;
&lt;img src="https://erpnext.com/files/v16_tasks.png"/&gt;
&lt;/details&gt;
&lt;h3 id="under-the-hood"&gt;Under the Hood
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a class="link" href="https://github.com/frappe/frappe" target="_blank" rel="noopener"
&gt;&lt;strong&gt;Frappe Framework&lt;/strong&gt;&lt;/a&gt;: A full-stack web application framework written in Python and Javascript. The framework provides a robust foundation for building web applications, including a database abstraction layer, user authentication, and a REST API.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a class="link" href="https://github.com/frappe/frappe-ui" target="_blank" rel="noopener"
&gt;&lt;strong&gt;Frappe UI&lt;/strong&gt;&lt;/a&gt;: A Vue-based UI library, to provide a modern user interface. The Frappe UI library provides a variety of components that can be used to build single-page applications on top of the Frappe Framework.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
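The REST API mentioned above is worth a concrete sketch. The snippet below only builds the request URL; the &lt;code&gt;/api/resource/&amp;lt;DocType&amp;gt;&lt;/code&gt; endpoint and the token authorization header follow Frappe&amp;rsquo;s documented REST API, while &lt;code&gt;SITE&lt;/code&gt; and the key/secret names are placeholders of my own:

```shell
# Sketch only: SITE is a placeholder for your own running Frappe/ERPNext instance.
SITE="${SITE:-http://erpnext.localhost:8000}"

frappe_url() {
  # /api/resource/<DocType> is Frappe's generic CRUD endpoint;
  # limit_page_length caps how many documents are returned.
  echo "${SITE}/api/resource/$1?limit_page_length=$2"
}

# Against a running site you would authenticate with an API key/secret pair, e.g.:
#   curl -s -H "Authorization: token ${API_KEY}:${API_SECRET}" "$(frappe_url ToDo 5)"
frappe_url ToDo 5
```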
&lt;h2 id="production-setup"&gt;Production Setup
&lt;/h2&gt;&lt;h3 id="managed-hosting"&gt;Managed Hosting
&lt;/h3&gt;&lt;p&gt;You can try &lt;a class="link" href="https://frappecloud.com" target="_blank" rel="noopener"
&gt;Frappe Cloud&lt;/a&gt;, a simple, user-friendly and sophisticated &lt;a class="link" href="https://github.com/frappe/press" target="_blank" rel="noopener"
&gt;open-source&lt;/a&gt; platform to host Frappe applications with peace of mind.&lt;/p&gt;
&lt;p&gt;It takes care of installation, setup, upgrades, monitoring, maintenance and support of your Frappe deployments. It is a fully featured developer platform with the ability to manage and control multiple Frappe deployments.&lt;/p&gt;
&lt;div&gt;
&lt;a href="https://erpnext-demo.frappe.cloud/app/home" target="_blank"&gt;
&lt;picture&gt;
&lt;source media="(prefers-color-scheme: dark)" srcset="https://frappe.io/files/try-on-fc-white.png"&gt;
&lt;img src="https://frappe.io/files/try-on-fc-black.png" alt="Try on Frappe Cloud" height="28" /&gt;
&lt;/picture&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;h3 id="self-hosted"&gt;Self-Hosted
&lt;/h3&gt;&lt;h4 id="docker"&gt;Docker
&lt;/h4&gt;&lt;p&gt;Prerequisites: docker, docker-compose, git. Refer &lt;a class="link" href="https://docs.docker.com" target="_blank" rel="noopener"
&gt;Docker Documentation&lt;/a&gt; for more details on Docker setup.&lt;/p&gt;
&lt;p&gt;Run the following commands:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;git clone https://github.com/frappe/frappe_docker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;cd frappe_docker
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;docker compose -f pwd.yml up -d
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;After a couple of minutes, the site should be accessible at &lt;code&gt;http://localhost:8080&lt;/code&gt;. Use the default login credentials below to access the site.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Username: Administrator&lt;/li&gt;
&lt;li&gt;Password: admin&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See &lt;a class="link" href="https://github.com/frappe/frappe_docker?tab=readme-ov-file#to-run-on-arm64-architecture-follow-this-instructions" target="_blank" rel="noopener"
&gt;Frappe Docker&lt;/a&gt; for an ARM-based Docker setup.&lt;/p&gt;
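Since the containers take a couple of minutes to come up, a small helper can poll until the site responds. The &lt;code&gt;retry&lt;/code&gt; function below is a sketch of my own, not part of frappe_docker:

```shell
# Sketch: retry a command up to N times with a delay between attempts.
retry() {
  local attempts="$1" delay="$2"; shift 2
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0   # stop as soon as the command succeeds
    sleep "$delay"
  done
  return 1
}

# Poll the freshly started site, e.g. 12 tries, 10 seconds apart:
#   retry 12 10 curl -fso /dev/null http://localhost:8080 && echo "site is up"
```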
&lt;h2 id="development-setup"&gt;Development Setup
&lt;/h2&gt;&lt;h3 id="manual-install"&gt;Manual Install
&lt;/h3&gt;&lt;p&gt;The Easy Way: our install script for bench will install all dependencies (e.g. MariaDB). See &lt;a class="link" href="https://github.com/frappe/bench" target="_blank" rel="noopener"
&gt;https://github.com/frappe/bench&lt;/a&gt; for more details.&lt;/p&gt;
&lt;p&gt;New passwords will be created for the ERPNext &amp;ldquo;Administrator&amp;rdquo; user, the MariaDB root user, and the frappe user (the script displays the passwords and saves them to ~/frappe_passwords.txt).&lt;/p&gt;
&lt;h3 id="local"&gt;Local
&lt;/h3&gt;&lt;p&gt;To set up the repository locally, follow the steps below:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Setup bench by following the &lt;a class="link" href="https://frappeframework.com/docs/user/en/installation" target="_blank" rel="noopener"
&gt;Installation Steps&lt;/a&gt; and start the server.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bench start
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In a separate terminal window, run the following commands:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Create a new site
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bench new-site erpnext.localhost
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get the ERPNext app and install it&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Get the ERPNext app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bench get-app https://github.com/frappe/erpnext
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;# Install the app
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;bench --site erpnext.localhost install-app erpnext
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Open the URL &lt;code&gt;http://erpnext.localhost:8000/app&lt;/code&gt; in your browser; you should see the app running.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
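Once the app is installed, routine maintenance also goes through bench. A few commonly used commands, run from the bench directory (a sketch; exact flags may vary by bench version):

```shell
# Apply pending patches and schema changes to the site
bench --site erpnext.localhost migrate

# Pull the latest code for all apps, patch and rebuild
bench update

# Take a database + files backup of the site
bench --site erpnext.localhost backup
```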
&lt;h2 id="learning-and-community"&gt;Learning and community
&lt;/h2&gt;&lt;ol&gt;
&lt;li&gt;&lt;a class="link" href="https://school.frappe.io" target="_blank" rel="noopener"
&gt;Frappe School&lt;/a&gt; - Learn Frappe Framework and ERPNext from the various courses by the maintainers or from the community.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://docs.erpnext.com/" target="_blank" rel="noopener"
&gt;Official documentation&lt;/a&gt; - Extensive documentation for ERPNext.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://discuss.frappe.io/c/erpnext/6" target="_blank" rel="noopener"
&gt;Discussion Forum&lt;/a&gt; - Engage with the community of ERPNext users and service providers.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://erpnext_public.t.me" target="_blank" rel="noopener"
&gt;Telegram Group&lt;/a&gt; - Get instant help from a huge community of users.&lt;/li&gt;
&lt;/ol&gt;</description></item><item><title>500+ AI Agent Projects / UseCases</title><link>https://hanguangwu.github.io/blog/en/p/500-ai-agent-projects-/-usecases/</link><pubDate>Mon, 02 Feb 2026 17:34:25 -0800</pubDate><guid>https://hanguangwu.github.io/blog/en/p/500-ai-agent-projects-/-usecases/</guid><description>&lt;h1 id="-500-ai-agent-projects--usecases"&gt;🌟 500+ AI Agent Projects / UseCases
&lt;/h1&gt;&lt;p&gt;&lt;img src="https://cdn.jsdelivr.net/gh/Hanguangwu/MyImageBed01/img/20260202175639846.png"
loading="lazy"
&gt;&lt;/p&gt;
&lt;p&gt;A curated collection of AI agent use cases across industries, showcasing practical applications and linking to open-source projects for implementation. Explore how AI agents are transforming industries like healthcare, finance, education, and more! 🤖✨&lt;/p&gt;
&lt;p&gt;&lt;a class="link" href="https://github.com/ashishpatel26/500-AI-Agents-Projects" target="_blank" rel="noopener"
&gt;GitHub-Repo&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-introduction"&gt;🧠 Introduction
&lt;/h2&gt;&lt;p&gt;Artificial Intelligence (AI) agents are revolutionizing the way industries operate. From personalized learning to financial trading bots, AI agents bring efficiency, innovation, and scalability. This repository provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A categorized list of industries where AI agents are making an impact.&lt;/li&gt;
&lt;li&gt;Detailed use cases with links to open-source projects for implementation.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Whether you&amp;rsquo;re a developer, researcher, or business enthusiast, this repository is your go-to resource for AI agent inspiration and learning.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="selected-usecase-by-myself"&gt;Selected Usecase By Myself
&lt;/h2&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Industry&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Code Github&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI Health Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Healthcare&lt;/td&gt;
&lt;td&gt;Diagnoses and monitors diseases using patient data.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/ahmadvh/AI-Agents-for-Medical-Diagnostics.git" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Automated Trading Bot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Finance&lt;/td&gt;
&lt;td&gt;Automates stock trading with real-time market analysis.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/MingyuJ666/Stockagent.git" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Content Personalization Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Entertainment&lt;/td&gt;
&lt;td&gt;Recommends personalized media based on preferences.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crosleythomas/MirrorGPT" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Legal Document Review Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Legal&lt;/td&gt;
&lt;td&gt;Automates document review and highlights key clauses.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/firica/legalai" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Recruitment Recommendation Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Human Resources&lt;/td&gt;
&lt;td&gt;Suggests best-fit candidates for job openings.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/sentient-engineering/jobber" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Virtual Travel Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hospitality&lt;/td&gt;
&lt;td&gt;Plans travel itineraries based on preferences.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/nirbar1985/ai-travel-agent" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI Game Companion Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Gaming&lt;/td&gt;
&lt;td&gt;Enhances player experience with real-time assistance.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/onjas-buidl/LLM-agent-game" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/465962dd14abb8181b8d1a3dbaf186be171bc3c5338d347e03b863e17980be8b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f64652d4769744875622d626c61636b3f6c6f676f3d676974687562"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🌐 Landing Page Generator&lt;/td&gt;
&lt;td&gt;💻 Web Development&lt;/td&gt;
&lt;td&gt;Automates the creation of landing pages for websites, facilitating web development tasks.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎮 Game Builder Crew&lt;/td&gt;
&lt;td&gt;🎮 Game Development&lt;/td&gt;
&lt;td&gt;Assists in the development of games by automating certain aspects of game creation.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/game-builder-crew" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;💹 Stock Analysis Tool&lt;/td&gt;
&lt;td&gt;💰 Finance&lt;/td&gt;
&lt;td&gt;Provides tools for analyzing stock market data to assist in financial decision-making.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🗺️ Trip Planner&lt;/td&gt;
&lt;td&gt;✈️ Travel&lt;/td&gt;
&lt;td&gt;Assists in planning trips by organizing itineraries and managing travel details.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎁 Surprise Trip Planner&lt;/td&gt;
&lt;td&gt;✈️ Travel&lt;/td&gt;
&lt;td&gt;Plans surprise trips by selecting destinations and activities based on user preferences.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/surprise_trip" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📚 Write a Book with Flows&lt;/td&gt;
&lt;td&gt;✍️ Creative Writing&lt;/td&gt;
&lt;td&gt;Assists authors in writing books by providing structured workflows and writing assistance.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/write_a_book_with_flows" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎬 Screenplay Writer&lt;/td&gt;
&lt;td&gt;✍️ Creative Writing&lt;/td&gt;
&lt;td&gt;Aids in writing screenplays by offering templates and guidance for script development.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/screenplay_writer" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Markdown Validator&lt;/td&gt;
&lt;td&gt;📄 Documentation&lt;/td&gt;
&lt;td&gt;Validates Markdown files to ensure proper formatting and adherence to standards.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/markdown_validator" target="_blank" rel="noopener"
&gt;&lt;img src="https://camo.githubusercontent.com/78ef5623d7361e74e909b90ea5f4af9d939df5307c2896284062b70b0762bdbe/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4769744875622d5265706f7369746f72792d626c7565"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="-industry-usecase-mindmap"&gt;🏭 Industry UseCase MindMap
&lt;/h2&gt;&lt;p&gt;&lt;img src="https://cdn.jsdelivr.net/gh/Hanguangwu/MyImageBed01/img/20260202175656635.png"
loading="lazy"
&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-use-case-table"&gt;🧩 Use Case Table
&lt;/h2&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Industry&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Code (GitHub)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HIA (Health Insights Agent)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Healthcare&lt;/td&gt;
&lt;td&gt;Analyzes medical reports and provides health insights.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/harshhh28/hia.git" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI Health Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Healthcare&lt;/td&gt;
&lt;td&gt;Diagnoses and monitors diseases using patient data.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/ahmadvh/AI-Agents-for-Medical-Diagnostics.git" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Automated Trading Bot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Finance&lt;/td&gt;
&lt;td&gt;Automates stock trading with real-time market analysis.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/MingyuJ666/Stockagent.git" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Virtual AI Tutor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Education&lt;/td&gt;
&lt;td&gt;Provides personalized education tailored to users.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/hqanhh/EduGPT.git" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;24/7 AI Chatbot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Customer Service&lt;/td&gt;
&lt;td&gt;Handles customer queries around the clock.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/customer_support_agent_langgraph.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Product Recommendation Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Retail&lt;/td&gt;
&lt;td&gt;Suggests products based on user preferences and history.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/microsoft/RecAI" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self-Driving Delivery Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Transportation&lt;/td&gt;
&lt;td&gt;Optimizes routes and autonomously delivers packages.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/sled-group/driVLMe" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Factory Process Monitoring Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Manufacturing&lt;/td&gt;
&lt;td&gt;Monitors production lines and ensures quality control.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/yuchenxia/llm4ias" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Property Pricing Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Real Estate&lt;/td&gt;
&lt;td&gt;Analyzes market trends to determine property prices.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/AleksNeStu/ai-real-estate-assistant" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Smart Farming Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agriculture&lt;/td&gt;
&lt;td&gt;Provides insights on crop health and yield predictions.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/mohammed97ashraf/LLM_Agri_Bot" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Energy Demand Forecasting Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Energy&lt;/td&gt;
&lt;td&gt;Predicts energy usage to optimize grid management.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/yecchen/MIRAI" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Content Personalization Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Entertainment&lt;/td&gt;
&lt;td&gt;Recommends personalized media based on preferences.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crosleythomas/MirrorGPT" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Legal Document Review Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Legal&lt;/td&gt;
&lt;td&gt;Automates document review and highlights key clauses.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/firica/legalai" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Recruitment Recommendation Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Human Resources&lt;/td&gt;
&lt;td&gt;Suggests best-fit candidates for job openings.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/sentient-engineering/jobber" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Virtual Travel Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hospitality&lt;/td&gt;
&lt;td&gt;Plans travel itineraries based on preferences.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/nirbar1985/ai-travel-agent" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI Game Companion Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Gaming&lt;/td&gt;
&lt;td&gt;Enhances player experience with real-time assistance.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/onjas-buidl/LLM-agent-game" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Real-Time Threat Detection Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cybersecurity&lt;/td&gt;
&lt;td&gt;Identifies potential threats and mitigates attacks.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/NVISOsecurity/cyber-security-llm-agents" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;E-commerce Personal Shopper Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;E-commerce&lt;/td&gt;
&lt;td&gt;Helps customers find products they’ll love.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/Hoanganhvu123/ShoppingGPT" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Logistics Optimization Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supply Chain&lt;/td&gt;
&lt;td&gt;Plans efficient delivery routes and manages inventory.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/microsoft/OptiGuide" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Vibe Hacking Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cybersecurity&lt;/td&gt;
&lt;td&gt;Provides an autonomous, multi-agent red-team testing service.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/PurpleAILAB/Decepticon" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MediSuite-Ai-Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Health insurance&lt;/td&gt;
&lt;td&gt;A medical AI agent that helps automate hospital and insurance claims workflows.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/MahmoudRabea13/MediSuite-Ai-Agent" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lina-Egyptian-Medical-Chatbot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Health insurance&lt;/td&gt;
&lt;td&gt;A medical AI agent that helps automate hospital and insurance claims workflows.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/MahmoudRabea13/MediSuite-Ai-Agent" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/Code-GitHub-black?logo=github"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="framework-wise-usecases"&gt;Framework wise Usecases
&lt;/h2&gt;&lt;hr&gt;
&lt;h3 id="framework-name-crewai"&gt;&lt;strong&gt;Framework Name&lt;/strong&gt;: &lt;strong&gt;CrewAI&lt;/strong&gt;
&lt;/h3&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Industry&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;GitHub&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;📧 Email Auto Responder Flow&lt;/td&gt;
&lt;td&gt;🗣️ Communication&lt;/td&gt;
&lt;td&gt;Automates email responses based on predefined criteria to enhance communication efficiency.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/email_auto_responder_flow" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📝 Meeting Assistant Flow&lt;/td&gt;
&lt;td&gt;🛠️ Productivity&lt;/td&gt;
&lt;td&gt;Assists in organizing and managing meetings, including scheduling and agenda preparation.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/meeting_assistant_flow" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔄 Self Evaluation Loop Flow&lt;/td&gt;
&lt;td&gt;👥 Human Resources&lt;/td&gt;
&lt;td&gt;Facilitates self-assessment processes within an organization, aiding in performance reviews.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/self_evaluation_loop_flow" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📈 Lead Score Flow&lt;/td&gt;
&lt;td&gt;💼 Sales&lt;/td&gt;
&lt;td&gt;Evaluates and scores potential leads to prioritize outreach in sales strategies.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/lead-score-flow" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📊 Marketing Strategy Generator&lt;/td&gt;
&lt;td&gt;📢 Marketing&lt;/td&gt;
&lt;td&gt;Develops marketing strategies by analyzing market trends and audience data.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/marketing_strategy" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📝 Job Posting Generator&lt;/td&gt;
&lt;td&gt;🧑‍💼 Recruitment&lt;/td&gt;
&lt;td&gt;Creates job postings by analyzing job requirements, aiding in recruitment processes.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/job-posting" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔄 Recruitment Workflow&lt;/td&gt;
&lt;td&gt;🧑‍💼 Recruitment&lt;/td&gt;
&lt;td&gt;Streamlines the recruitment process by automating various tasks involved in hiring.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/recruitment" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔍 Match Profile to Positions&lt;/td&gt;
&lt;td&gt;🧑‍💼 Recruitment&lt;/td&gt;
&lt;td&gt;Matches candidate profiles to suitable job positions to enhance recruitment efficiency.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/match_profile_to_positions" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📸 Instagram Post Generator&lt;/td&gt;
&lt;td&gt;📱 Social Media&lt;/td&gt;
&lt;td&gt;Generates and schedules Instagram posts automatically, streamlining social media management.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/instagram_post" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🌐 Landing Page Generator&lt;/td&gt;
&lt;td&gt;💻 Web Development&lt;/td&gt;
&lt;td&gt;Automates the creation of landing pages for websites, facilitating web development tasks.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/landing_page_generator" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎮 Game Builder Crew&lt;/td&gt;
&lt;td&gt;🎮 Game Development&lt;/td&gt;
&lt;td&gt;Assists in the development of games by automating certain aspects of game creation.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/game-builder-crew" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;💹 Stock Analysis Tool&lt;/td&gt;
&lt;td&gt;💰 Finance&lt;/td&gt;
&lt;td&gt;Provides tools for analyzing stock market data to assist in financial decision-making.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/stock_analysis" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🗺️ Trip Planner&lt;/td&gt;
&lt;td&gt;✈️ Travel&lt;/td&gt;
&lt;td&gt;Assists in planning trips by organizing itineraries and managing travel details.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/trip_planner" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎁 Surprise Trip Planner&lt;/td&gt;
&lt;td&gt;✈️ Travel&lt;/td&gt;
&lt;td&gt;Plans surprise trips by selecting destinations and activities based on user preferences.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/surprise_trip" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📚 Write a Book with Flows&lt;/td&gt;
&lt;td&gt;✍️ Creative Writing&lt;/td&gt;
&lt;td&gt;Assists authors in writing books by providing structured workflows and writing assistance.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/flows/write_a_book_with_flows" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🎬 Screenplay Writer&lt;/td&gt;
&lt;td&gt;✍️ Creative Writing&lt;/td&gt;
&lt;td&gt;Aids in writing screenplays by offering templates and guidance for script development.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/screenplay_writer" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;✅ Markdown Validator&lt;/td&gt;
&lt;td&gt;📄 Documentation&lt;/td&gt;
&lt;td&gt;Validates Markdown files to ensure proper formatting and adherence to standards.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/markdown_validator" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🧠 Meta Quest Knowledge&lt;/td&gt;
&lt;td&gt;📚 Knowledge Management&lt;/td&gt;
&lt;td&gt;Manages and organizes knowledge related to Meta Quest, facilitating information retrieval.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/meta_quest_knowledge" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🤖 NVIDIA Models Integration&lt;/td&gt;
&lt;td&gt;🤖 AI Integration&lt;/td&gt;
&lt;td&gt;Integrates NVIDIA AI models into workflows to enhance computational capabilities.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/integrations/nvidia_models" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🗂️ Prep for a Meeting&lt;/td&gt;
&lt;td&gt;🛠️ Productivity&lt;/td&gt;
&lt;td&gt;Assists in preparing for meetings by organizing materials and setting agendas.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/prep-for-a-meeting" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🛠️ Starter Template&lt;/td&gt;
&lt;td&gt;🛠️ Development&lt;/td&gt;
&lt;td&gt;Provides a starter template for new projects to streamline the setup process.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/crews/starter_template" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔗 CrewAI + LangGraph Integration&lt;/td&gt;
&lt;td&gt;🤖 AI Integration&lt;/td&gt;
&lt;td&gt;Demonstrates integration between CrewAI and LangGraph for enhanced workflow automation.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://github.com/crewAIInc/crewAI-examples/tree/main/integrations/CrewAI-LangGraph" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/GitHub-Repository-blue"
loading="lazy"
alt="GitHub"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="framework-name-autogen"&gt;&lt;strong&gt;Framework Name&lt;/strong&gt;: &lt;strong&gt;Autogen&lt;/strong&gt;
&lt;/h3&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Code Generation, Execution, and Debugging&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;Industry&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;🤖 Automated Task Solving with Code Generation, Execution &amp;amp; Debugging&lt;/td&gt;
&lt;td&gt;💻 Software Development&lt;/td&gt;
&lt;td&gt;Demonstrates automated task-solving by generating, executing, and debugging code.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_auto_feedback_from_code_execution" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🧑‍💻 Automated Code Generation and Question Answering with Retrieval Augmented Agents&lt;/td&gt;
&lt;td&gt;💻 Software Development&lt;/td&gt;
&lt;td&gt;Generates code and answers questions using retrieval-augmented methods.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_RetrieveChat" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🧠 Automated Code Generation and Question Answering with Qdrant-based Retrieval&lt;/td&gt;
&lt;td&gt;💻 Software Development&lt;/td&gt;
&lt;td&gt;Utilizes Qdrant for enhanced retrieval-augmented agent performance.&lt;/td&gt;
&lt;td&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_RetrieveChat_qdrant" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Multi-Agent Collaboration (&amp;gt;3 Agents)&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤝 Automated Task Solving by Group Chat (3 members, 1 manager)&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates group task-solving via multi-agent collaboration.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_groupchat" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📊 Automated Data Visualization by Group Chat (3 members, 1 manager)&lt;/td&gt;
&lt;td style="text-align: left"&gt;📊 Data Analysis&lt;/td&gt;
&lt;td style="text-align: left"&gt;Uses multi-agent collaboration to create data visualizations.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_groupchat_vis" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧩 Automated Complex Task Solving by Group Chat (6 members, 1 manager)&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Solves complex tasks collaboratively with a larger group of agents.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_groupchat_research" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧑‍💻 Automated Task Solving with Coding &amp;amp; Planning Agents&lt;/td&gt;
&lt;td style="text-align: left"&gt;🛠️ Planning &amp;amp; Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Combines coding and planning agents for solving tasks effectively.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_planning.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📐 Automated Task Solving with Transition Paths Specified in a Graph&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Uses predefined transition paths in a graph for solving tasks.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Running a Group Chat as an Inner-Monologue via the SocietyOfMindAgent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Cognitive Sciences&lt;/td&gt;
&lt;td style="text-align: left"&gt;Simulates inner-monologue for problem-solving using group chats.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_society_of_mind" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔧 Running a Group Chat with Custom Speaker Selection Function&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Implements a custom function for speaker selection in group chats.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_groupchat_customized" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
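The custom speaker-selection row above can be sketched without any AutoGen dependency. The function below is a hypothetical stand-in for the callable that a group chat would consult between turns; the agent names and the routing heuristic are illustrative, not the real AutoGen API.

```python
# Sketch: pick the next speaker in a group chat. Questions are routed to the
# critic; otherwise speakers rotate round-robin. All names are illustrative.

def select_next_speaker(last_speaker, agents, last_message):
    """Return the name of the agent that should speak next."""
    if last_message.rstrip().endswith("?"):
        return "critic"                      # route open questions to the critic
    i = agents.index(last_speaker)
    return agents[(i + 1) % len(agents)]     # default: round-robin

agents = ["planner", "coder", "critic"]
print(select_next_speaker("planner", agents, "draft the plan"))  # prints: coder
```

In AutoGen 0.2 the analogous hook is a callable passed to the group chat's speaker-selection setting, as shown in the notebook linked in that row.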
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Sequential Multi-Agent Chats&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔄 Solving Multiple Tasks in a Sequence of Chats Initiated by a Single Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔄 Workflow Automation&lt;/td&gt;
&lt;td style="text-align: left"&gt;Automates sequential task-solving with a single initiating agent.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_multi_task_chats" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;⏳ Async-solving Multiple Tasks in a Sequence of Chats Initiated by a Single Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔄 Workflow Automation&lt;/td&gt;
&lt;td style="text-align: left"&gt;Handles asynchronous task-solving in a sequence of chats initiated by one agent.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_multi_task_async_chats" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤝 Solving Multiple Tasks in a Sequence of Chats Initiated by Different Agents&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔄 Workflow Automation&lt;/td&gt;
&lt;td style="text-align: left"&gt;Facilitates sequential task-solving with different agents initiating each chat.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchats_sequential_chats" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
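The sequential-chats pattern in this table can be sketched generically: each task is resolved in its own "chat", and a summary of every finished chat is carried over into the next one. Here `solve()` is a stand-in for an LLM-backed agent turn; everything below is illustrative, not the AutoGen API.

```python
# Sketch: sequential multi-agent chats with carryover. Each finished chat's
# result is appended to the carryover that later chats receive as context.

def solve(task, carryover):
    # Stand-in for an agent answering one task given prior-chat summaries.
    context = " | ".join(carryover)
    return f"answer({task}; context=[{context}])"

def run_sequential_chats(tasks):
    carryover, results = [], []
    for task in tasks:
        result = solve(task, carryover)
        results.append(result)
        carryover.append(result)   # this chat's summary feeds the next chats
    return results

out = run_sequential_chats(["collect data", "analyze", "write report"])
```

The design point is that carryover is accumulated, so the final chat sees summaries of all earlier ones rather than their full transcripts.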
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Nested Chats&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Solving Complex Tasks with Nested Chats&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Problem Solving&lt;/td&gt;
&lt;td style="text-align: left"&gt;Uses nested chats to solve hierarchical and complex problems.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_nestedchat" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔄 Solving Complex Tasks with A Sequence of Nested Chats&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Problem Solving&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates sequential task-solving using nested chats.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_nested_sequential_chats" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🏭 OptiGuide for Solving a Supply Chain Optimization Problem with Nested Chats&lt;/td&gt;
&lt;td style="text-align: left"&gt;🏭 Supply Chain Optimization&lt;/td&gt;
&lt;td style="text-align: left"&gt;Showcases how to solve supply chain optimization problems using nested chats, a coding agent, and a safeguard agent.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_nestedchat_optiguide" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;♟️ Conversational Chess with Nested Chats and Tool Use&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎮 Gaming&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explores the use of nested chats for playing conversational chess with integrated tools.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_nested_chats_chess" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
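The nested-chat idea running through this table can be sketched in a few lines: before producing its reply, the outer agent spawns an inner, self-contained chat on a sub-question and folds that result into its own answer. `inner_chat` and `outer_reply` are hypothetical stand-ins for agent turns, not AutoGen calls.

```python
# Sketch: nested chats. The inner chat runs to completion first, and its
# result is composed into the outer agent's final reply.

def inner_chat(sub_question):
    # A nested conversation that resolves one sub-problem in isolation.
    return f"inner-result({sub_question})"

def outer_reply(question):
    sub = f"clarify: {question}"
    nested = inner_chat(sub)      # the nested chat is opaque to the outer chat
    return f"final-answer({question}) using {nested}"

print(outer_reply("optimize the supply chain"))
```

This is the same shape the OptiGuide and chess rows use: the outer conversation only sees the nested chat's summary, not its full transcript.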
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Application&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔄 Automated Continual Learning from New Data&lt;/td&gt;
&lt;td style="text-align: left"&gt;📊 Machine Learning&lt;/td&gt;
&lt;td style="text-align: left"&gt;Continuously learns from new data inputs for adaptive AI.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_stream.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🏭 OptiGuide - Coding, Tool Using, Safeguarding &amp;amp; Question Answering for Supply Chain Optimization&lt;/td&gt;
&lt;td style="text-align: left"&gt;🏭 Supply Chain Optimization&lt;/td&gt;
&lt;td style="text-align: left"&gt;Highlights a solution combining coding, tool use, and safeguarding for supply chain optimization.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_nestedchat_optiguide" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 AutoAnny - A Discord bot built using AutoGen&lt;/td&gt;
&lt;td style="text-align: left"&gt;💬 Communication Tools&lt;/td&gt;
&lt;td style="text-align: left"&gt;Showcases the development of a Discord bot using AutoGen for enhanced interaction.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/tree/main/samples/apps/auto-anny" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tools&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🌐 Web Search: Solve Tasks Requiring Web Info&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔍 Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;Searches the web to gather information required for completing tasks.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_web_info.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔧 Use Provided Tools as Functions&lt;/td&gt;
&lt;td style="text-align: left"&gt;🛠️ Tool Integration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates how to use pre-provided tools as callable functions in AutoGen.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_function_call_currency_calculator" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔗 Use Tools via Sync and Async Function Calling&lt;/td&gt;
&lt;td style="text-align: left"&gt;🛠️ Tool Integration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Illustrates synchronous and asynchronous tool usage within AutoGen workflows.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_function_call_async" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧩 Task Solving with Langchain Provided Tools as Functions&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔍 Language Processing&lt;/td&gt;
&lt;td style="text-align: left"&gt;Leverages Langchain tools for task-solving within AutoGen.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_langchain.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📚 RAG: Group Chat with Retrieval Augmented Generation&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Enables group chat with Retrieval Augmented Generation (RAG) to support information sharing.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_groupchat_RAG" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;⚙️ Function Inception: Update/Remove Functions During Conversations&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔧 Development Tools&lt;/td&gt;
&lt;td style="text-align: left"&gt;Allows AutoGen agents to modify their functions dynamically during conversations.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_inception_function.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔊 Agent Chat with Whisper&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎙️ Audio Processing&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates AI agent capabilities for transcription and translation using Whisper.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_video_transcript_translate_with_whisper" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📏 Constrained Responses via Guidance&lt;/td&gt;
&lt;td style="text-align: left"&gt;💡 Natural Language Processing&lt;/td&gt;
&lt;td style="text-align: left"&gt;Shows how to use guidance to constrain responses generated by agents.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_guidance.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🌍 Browse the Web with Agents&lt;/td&gt;
&lt;td style="text-align: left"&gt;🌐 Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains how to configure agents to browse and retrieve information from the web.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_surfer.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📊 SQL: Natural Language Text to SQL Query Using Spider Benchmark&lt;/td&gt;
&lt;td style="text-align: left"&gt;💾 Database Management&lt;/td&gt;
&lt;td style="text-align: left"&gt;Converts natural language inputs into SQL queries using the Spider benchmark.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_sql_spider.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🕸️ Web Scraping with Apify&lt;/td&gt;
&lt;td style="text-align: left"&gt;🌐 Data Gathering&lt;/td&gt;
&lt;td style="text-align: left"&gt;Illustrates web scraping techniques with Apify using AutoGen.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_webscraping_with_apify" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🕷️ Web Crawling: Crawl Entire Domain with Spider API&lt;/td&gt;
&lt;td style="text-align: left"&gt;🌐 Data Gathering&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains how to crawl entire domains using the Spider API.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_webcrawling_with_spider" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;💻 Write a Software App Task by Task with Specially Designed Functions&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 Software Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Builds a software application step-by-step using designed functions.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_code_writing.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
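The "tools as functions" rows above all follow one flow: plain callables are registered under a name, and a model-issued call of the form `{"name": ..., "arguments": ...}` is dispatched to the matching function. A minimal registry sketch (the registry itself is illustrative, not the AutoGen API; the currency example echoes the currency-calculator notebook):

```python
# Sketch: a tool registry plus a dispatcher for function-calling-style tool use.

TOOLS = {}

def register_tool(name):
    """Decorator that records a callable under a tool name."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("currency_calculator")
def currency_calculator(amount, rate):
    return round(amount * rate, 2)

def dispatch(call):
    """Execute a model-issued tool call and return its result."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch({"name": "currency_calculator",
                   "arguments": {"amount": 100.0, "rate": 0.91}})  # result == 91.0
```

In AutoGen the registration and execution sides live on different agents (one suggests the call, one executes it), but the name-to-callable dispatch is the same idea.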
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Human Development&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;💬 Simple Example in ChatGPT Style&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Conversational AI&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates a simple conversational example in the style of ChatGPT.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/samples/simple_chat.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Example-blue?logo=openai"
loading="lazy"
alt="Example"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 Auto Code Generation, Execution, Debugging and Human Feedback&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 Software Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Showcases code generation, execution, debugging with human feedback integrated into the workflow.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_human_feedback.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;👥 Automated Task Solving with GPT-4 + Multiple Human Users&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Enables task solving with multiple human users collaborating with GPT-4.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_two_users.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔄 Agent Chat with Async Human Inputs&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Conversational AI&lt;/td&gt;
&lt;td style="text-align: left"&gt;Supports asynchronous human input during agent conversations.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/Async_human_input.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Agent Teaching and Learning&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📘 Teach Agents New Skills &amp;amp; Reuse via Automated Chat&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎓 Education &amp;amp; Training&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates teaching new skills to agents and enabling their reuse in automated chats.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_teaching" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Teach Agents New Facts, User Preferences and Skills Beyond Coding&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎓 Education &amp;amp; Training&lt;/td&gt;
&lt;td style="text-align: left"&gt;Shows how to teach agents new facts, user preferences, and non-coding skills.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_teachability" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 Teach OpenAI Assistants Through GPTAssistantAgent&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 AI Assistant Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Illustrates how to enhance OpenAI assistants&amp;rsquo; capabilities using GPTAssistantAgent.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachable_oai_assistants.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔄 Agent Optimizer: Train Agents in an Agentic Way&lt;/td&gt;
&lt;td style="text-align: left"&gt;🛠️ Optimization&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains how to train agents effectively in an agentic manner using the Agent Optimizer.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentoptimizer.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Multi-Agent Chat with OpenAI Assistants in the loop&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🌟 Hello-World Chat with OpenAI Assistant in AutoGen&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤖 Conversational AI&lt;/td&gt;
&lt;td style="text-align: left"&gt;A basic example of chatting with OpenAI Assistant using AutoGen.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_twoagents_basic.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔧 Chat with OpenAI Assistant using Function Call&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔧 Development Tools&lt;/td&gt;
&lt;td style="text-align: left"&gt;Illustrates how to use function calls with OpenAI Assistant in chats.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_function_call.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Chat with OpenAI Assistant with Code Interpreter&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 Software Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates the use of OpenAI Assistant as a code interpreter in chats.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_code_interpreter.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔍 Chat with OpenAI Assistant with Retrieval Augmentation&lt;/td&gt;
&lt;td style="text-align: left"&gt;📚 Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;Enables retrieval-augmented conversations with OpenAI Assistant.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤝 OpenAI Assistant in a Group Chat&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤝 Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Shows how OpenAI Assistant can collaborate with other agents in a group chat.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_groupchat.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🛠️ GPTAssistantAgent based Multi-Agent Tool Use&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔧 Development Tools&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains how to use GPTAssistantAgent for multi-agent tool usage.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/gpt_assistant_agent_function_call.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Non-OpenAI Models&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;♟️ Conversational Chess using Non-OpenAI Models&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎮 Gaming&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explores conversational chess implemented with non-OpenAI models.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_nested_chats_chess_altmodels" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Multimodal Agent&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🎨 Multimodal Agent Chat with DALLE and GPT-4V&lt;/td&gt;
&lt;td style="text-align: left"&gt;🖼️ Multimedia AI&lt;/td&gt;
&lt;td style="text-align: left"&gt;Combines DALLE and GPT-4V for multimodal agent communication.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_dalle_and_gpt4v.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🖌️ Multimodal Agent Chat with Llava&lt;/td&gt;
&lt;td style="text-align: left"&gt;📷 Image Processing&lt;/td&gt;
&lt;td style="text-align: left"&gt;Uses Llava for enabling multimodal agent conversations with image processing.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🖼️ Multimodal Agent Chat with GPT-4V&lt;/td&gt;
&lt;td style="text-align: left"&gt;🖼️ Multimedia AI&lt;/td&gt;
&lt;td style="text-align: left"&gt;Leverages GPT-4V for visual and conversational interactions in multimodal agents.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Long Context Handling&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📜 Long Context Handling as A Capability&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI Capability&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates techniques for handling long context effectively within AI workflows.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/notebooks/agentchat_transform_messages" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
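The long-context capability linked above works by transforming the message list before it reaches the model. One common transform is to keep the system message and only the most recent turns; the helper below is an illustrative sketch of that idea, not the AutoGen transform API.

```python
# Sketch: a message transform for long-context handling. Keep the system
# message(s) and only the last max_turns non-system messages.

def truncate_history(messages, max_turns=4):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "be concise"}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(10)]
short = truncate_history(history)   # system message plus the last 4 turns
```

Token-budget variants work the same way, trimming oldest-first until the history fits the model's context window.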
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Evaluation and Assessment&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📊 AgentEval: A Multi-Agent System for Assessing Utility of LLM-Powered Applications&lt;/td&gt;
&lt;td style="text-align: left"&gt;📈 Performance Evaluation&lt;/td&gt;
&lt;td style="text-align: left"&gt;Introduces AgentEval for evaluating and assessing the performance of LLM-based applications.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
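The core idea behind AgentEval, scoring an application's output against a set of weighted criteria, can be illustrated with a short sketch. The names here (Criterion, assess) are made up for illustration; the actual AgentEval notebook uses LLM agents to propose the criteria and quantify outcomes against them:

```python
# Sketch of criteria-based assessment in the spirit of AgentEval.
# Criterion and assess are hypothetical names; real AgentEval uses LLM
# agents to propose criteria and quantify task outcomes against them.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float

def assess(scores, criteria):
    """Weighted average of per-criterion scores, each in [0, 1]."""
    total = sum(c.weight for c in criteria)
    return sum(scores[c.name] * c.weight for c in criteria) / total

criteria = [Criterion("correctness", 0.6), Criterion("clarity", 0.4)]
print(round(assess({"correctness": 1.0, "clarity": 0.5}, criteria), 4))  # 0.8
```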
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Automatic Agent Building&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🏗️ Automatically Build Multi-agent System with AgentBuilder&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤖 AI Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains how to automatically build multi-agent systems using the AgentBuilder tool.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📚 Automatically Build Multi-agent System from Agent Library&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤖 AI Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Shows how to construct multi-agent systems by leveraging a pre-defined agent library.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_agent_library.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📊 Track LLM Calls, Tool Usage, Actions and Errors using AgentOps&lt;/td&gt;
&lt;td style="text-align: left"&gt;📈 Monitoring &amp;amp; Analytics&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates how to monitor LLM interactions, tool usage, and errors using AgentOps.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentops.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
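The observability row above boils down to recording every LLM or tool call together with its outcome. A minimal, library-free sketch of that pattern follows; the decorator and event log are hypothetical stand-ins, not the AgentOps API:

```python
# Minimal sketch of call tracking in the spirit of AgentOps: record
# each LLM/tool call and its outcome. Hypothetical names, not the
# AgentOps API.
import functools
import time

events = []  # in-memory event log; a real tracker would ship these out

def tracked(kind):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"kind": kind, "name": fn.__name__, "start": time.time()}
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = "error"
                record["error"] = repr(exc)
                raise
            finally:
                events.append(record)
        return wrapper
    return decorator

@tracked("tool")
def add_numbers(a, b):
    return a + b

add_numbers(2, 3)
print(events[0]["status"])  # prints ok
```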
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Enhanced Inferences&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔗 API Unification&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔧 API Management&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains how to unify API usage with documentation and code examples.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Documentation-blue?logo=readthedocs"
loading="lazy"
alt="Documentation"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;⚙️ Utility Functions for Managing API Configurations Effectively&lt;/td&gt;
&lt;td style="text-align: left"&gt;🔧 API Management&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates utility functions to manage API configurations more effectively.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://microsoft.github.io/autogen/0.2/docs/topics/llm_configuration" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;💰 Cost Calculation&lt;/td&gt;
&lt;td style="text-align: left"&gt;📈 Cost Management&lt;/td&gt;
&lt;td style="text-align: left"&gt;Introduces methods for tracking token usage and estimating costs for LLM interactions.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_cost_token_tracking.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;⚡ Optimize for Code Generation&lt;/td&gt;
&lt;td style="text-align: left"&gt;📊 Optimization&lt;/td&gt;
&lt;td style="text-align: left"&gt;Highlights cost-effective optimization techniques for improving code generation with LLMs.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/oai_completion.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📐 Optimize for Math&lt;/td&gt;
&lt;td style="text-align: left"&gt;📊 Optimization&lt;/td&gt;
&lt;td style="text-align: left"&gt;Explains techniques to optimize LLM performance for solving mathematical problems.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/microsoft/autogen/blob/0.2/notebook/oai_chatgpt_gpt4.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/badge/View-Notebook-blue?logo=jupyter"
loading="lazy"
alt="Notebook"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
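Cost calculation of the kind linked above usually amounts to multiplying token counts by per-token prices. A hedged sketch is below; the model name and per-1K-token rates are placeholders, not real pricing, and this is not AutoGen's built-in cost tracker:

```python
# Rough token-cost estimate for an LLM call. A sketch only: the model
# name and per-1K-token prices are placeholders, not real pricing, and
# this is not AutoGen's built-in cost tracker.

PRICES = {
    "example-model": (0.03, 0.06),  # (prompt, completion) USD per 1K tokens
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    price_in, price_out = PRICES[model]
    return (prompt_tokens * price_in + completion_tokens * price_out) / 1000

print(round(estimate_cost("example-model", 1000, 500), 4))  # prints 0.06
```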
&lt;h3 id="framework-name-agno"&gt;&lt;strong&gt;Framework Name&lt;/strong&gt;: &lt;strong&gt;Agno&lt;/strong&gt;
&lt;/h3&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 Support Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 Software Development / AI / Framework Support&lt;/td&gt;
&lt;td style="text-align: left"&gt;The Agno Support Agent helps developers with the Agno framework by providing real-time answers, explanations, and code examples.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/agno_support_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🎥 YouTube Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;📺 Media &amp;amp; Content&lt;/td&gt;
&lt;td style="text-align: left"&gt;An intelligent agent that analyzes YouTube videos by generating detailed summaries, timestamps, themes, and content breakdowns using AI tools.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/youtube_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📊 Finance Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;💼 Finance&lt;/td&gt;
&lt;td style="text-align: left"&gt;An advanced AI-powered market analyst that delivers real-time stock market insights, analyst recommendations, financial deep-dives, and sector-specific trends. Supports prompts for detailed analysis of companies like AAPL, TSLA, NVDA, etc.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/thinking_finance_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📚 Study Partner&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎓 Education&lt;/td&gt;
&lt;td style="text-align: left"&gt;Assists users in learning by finding resources, answering questions, and creating study plans.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/study_partner.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🛍️ Shopping Partner Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🏬 E-commerce&lt;/td&gt;
&lt;td style="text-align: left"&gt;A product recommender agent that helps users find matching products based on preferences from trusted platforms like Amazon, Flipkart, etc.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/shopping_partner.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🎓 Research Scholar Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Education / Research&lt;/td&gt;
&lt;td style="text-align: left"&gt;An AI-powered academic assistant that performs advanced academic searches, analyzes recent publications, synthesizes findings across disciplines, and writes well-structured academic reports with proper citations.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/research_agent_exa.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Research Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🗞️ Media &amp;amp; Journalism&lt;/td&gt;
&lt;td style="text-align: left"&gt;A research agent that combines web search and professional journalistic writing. It performs deep investigations and produces NYT-style reports.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/research_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🍳 Recipe Creator&lt;/td&gt;
&lt;td style="text-align: left"&gt;🍽️ Food &amp;amp; Culinary&lt;/td&gt;
&lt;td style="text-align: left"&gt;An AI-powered recipe recommendation agent that provides personalized recipes based on ingredients, preferences, and time constraints.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/recipe_creator.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🗞️ Finance Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;💼 Finance&lt;/td&gt;
&lt;td style="text-align: left"&gt;A powerful financial analyst agent combining real-time stock data, analyst insights, company fundamentals, and market news. Ideal for analyzing companies like Apple, Tesla, NVIDIA, and sectors like semiconductors or automotive.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/finance_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Financial Reasoning Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;📈 Finance&lt;/td&gt;
&lt;td style="text-align: left"&gt;Uses a Claude-3.5 Sonnet-based agent to analyze stocks like NVDA using tools for reasoning and Yahoo Finance data.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/reasoning_finance_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 Readme Generator Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 Software Dev&lt;/td&gt;
&lt;td style="text-align: left"&gt;Generates high-quality READMEs for GitHub repositories using repo metadata.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/readme_generator.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🎬 Movie Recommendation Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🎥 Entertainment&lt;/td&gt;
&lt;td style="text-align: left"&gt;An intelligent agent that gives personalized movie recommendations using Exa and GPT-4o, analyzing genres, themes, and latest ratings.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/movie_recommedation.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔍 Media Trend Analysis Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;📰 Media &amp;amp; News&lt;/td&gt;
&lt;td style="text-align: left"&gt;Analyzes emerging trends, patterns, and influencers from digital platforms using AI-powered agents and scraping.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/media_trend_analysis_agent.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;⚖️ Legal Document Analysis Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🏛️ Legal Tech&lt;/td&gt;
&lt;td style="text-align: left"&gt;An AI agent that analyzes legal documents from PDF URLs and provides legal insights based on a knowledge base using vector embeddings and GPT-4o.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/legal_consultant.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤔 DeepKnowledge&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Research&lt;/td&gt;
&lt;td style="text-align: left"&gt;This agent performs iterative searches through its knowledge base, breaking down complex queries into sub-questions and synthesizing comprehensive answers. It uses Agno docs for demonstration and is designed for deep reasoning and exploration.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/deep_knowledge.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;📚 Book Recommendation Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 Publishing &amp;amp; Media&lt;/td&gt;
&lt;td style="text-align: left"&gt;An intelligent agent that provides personalized book suggestions using literary data, reader preferences, reviews, and release info.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/book_recommendation.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🏠 MCP Airbnb Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🛎️ Hospitality&lt;/td&gt;
&lt;td style="text-align: left"&gt;Creates an AI agent using MCP and Llama 4 to search Airbnb listings with filters such as workspace &amp;amp; transport proximity.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/airbnb_mcp.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 Assist Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI Framework&lt;/td&gt;
&lt;td style="text-align: left"&gt;An AI agent using GPT-4o to answer questions about the Agno framework with hybrid search and embedded knowledge.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/agno-agi/agno/blob/main/cookbook/examples/agents/agno_assist.py" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
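Several of the Agno agents above (Agno Assist, DeepKnowledge) search a knowledge base before answering, with Agno Assist described as using hybrid search. As a rough illustration of hybrid ranking, the sketch below combines exact keyword overlap with character-trigram similarity as a cheap stand-in for embedding similarity; all names and the scoring scheme are assumptions, not Agno's implementation:

```python
# Toy hybrid ranking: exact keyword overlap plus character-trigram
# similarity (a cheap stand-in for embedding similarity). Illustrative
# only; not Agno's implementation.

def trigrams(text):
    text = text.lower()
    return {text[i:i + 3] for i in range(max(len(text) - 2, 1))}

def hybrid_rank(query, docs, alpha=0.5):
    query_words = set(query.lower().split())
    query_tris = trigrams(query)

    def score(doc):
        overlap = query_words.intersection(doc.lower().split())
        keyword = len(overlap) / max(len(query_words), 1)
        doc_tris = trigrams(doc)
        shared = len(query_tris.intersection(doc_tris))
        fuzzy = shared / max(len(query_tris.union(doc_tris)), 1)
        return alpha * keyword + (1 - alpha) * fuzzy

    return sorted(docs, key=score, reverse=True)

docs = ["how to create an agent", "billing and pricing", "agent tool calls"]
print(hybrid_rank("create an agent", docs)[0])  # prints how to create an agent
```

Production systems replace the trigram signal with dense embeddings and often fuse the two rankings with reciprocal rank fusion rather than a weighted sum.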
&lt;h3 id="framework-name-langgraph"&gt;&lt;strong&gt;Framework Name&lt;/strong&gt;: &lt;strong&gt;Langgraph&lt;/strong&gt;
&lt;/h3&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Use Case&lt;/th&gt;
&lt;th style="text-align: left"&gt;Industry&lt;/th&gt;
&lt;th style="text-align: left"&gt;Description&lt;/th&gt;
&lt;th style="text-align: left"&gt;Notebook&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 Chatbot Simulation Evaluation&lt;/td&gt;
&lt;td style="text-align: left"&gt;💬 AI / Quality Assurance&lt;/td&gt;
&lt;td style="text-align: left"&gt;Simulates user interactions to evaluate chatbot performance, ensuring robustness and reliability in real-world scenarios.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/chatbot-simulation-evaluation/agent-simulation-evaluation.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Information Gathering via Prompting&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Research &amp;amp; Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates how to design a LangGraph workflow that uses prompting techniques to gather information, showing how to structure prompts and manage information flow when building intelligent agents.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/chatbots/information-gather-prompting.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Code Assistant with LangGraph&lt;/td&gt;
&lt;td style="text-align: left"&gt;💻 Software Development&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates how to build a resilient code assistant with LangGraph: a graph-based agent that handles code generation, error checking, and iterative refinement for robust, accurate coding assistance.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/code_assistant/langgraph_code_assistant.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧑‍💼 Customer Support Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧑‍💼 Customer Service&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates how to build a graph-based customer support agent with LangGraph that handles customer inquiries, automating support and enhancing the user experience.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/customer-support/customer-support.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🔁 Extraction with Retries&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Data Extraction&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates retry mechanisms in LangGraph workflows so that data-extraction steps tolerate transient errors and run more reliably.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/extraction/retries.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Multi-Agent Workflow&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Workflow Orchestration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates a multi-agent system built with LangGraph&amp;rsquo;s agent supervisor: a supervisor agent orchestrates multiple specialized agents, managing task delegation and communication flow.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/multi_agent/agent_supervisor.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Hierarchical Agent Teams&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Workflow Orchestration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates a hierarchical agent system in LangGraph: a top-level supervisor delegates tasks to specialized sub-agents, enabling complex workflows with clear task delegation and communication.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/multi_agent/hierarchical_agent_teams.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤝 Multi-Agent Collaboration&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Workflow Orchestration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates multi-agent collaboration in LangGraph: multiple specialized agents work together to accomplish a complex task.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/multi_agent/multi-agent-collaboration.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Plan-and-Execute Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Workflow Orchestration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates a &amp;ldquo;Plan-and-Execute&amp;rdquo; style agent in LangGraph: the agent first generates a multi-step plan, then executes each step sequentially, revisiting and revising the plan as needed. Inspired by the Plan-and-Solve paper and the Baby-AGI project, the approach strengthens long-term planning and task execution.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/plan-and-execute/plan-and-execute.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 SQL Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Database Interaction&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates an agent that answers questions about a SQL database: it fetches the available tables, determines which are relevant, retrieves their schemas, generates a query, checks it for errors, executes it, and formulates a response.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/sql-agent.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Reflection Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Workflow Orchestration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates a reflection agent in LangGraph that critiques and revises its own outputs, improving the quality and reliability of generated content.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/reflection/reflection.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 Reflexion Agent&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Workflow Orchestration&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates a reflexion agent in LangGraph that reflects on its actions and outcomes, enabling iterative improvement and more accurate decision-making in complex workflows.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/reflexion/reflexion.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;strong&gt;LangGraph Agentic RAG&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 &lt;strong&gt;Adaptive RAG&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;Demonstrates an Adaptive RAG system in LangGraph whose retrieval process adapts to query complexity, improving the efficiency and accuracy of information retrieval.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_adaptive_rag.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 &lt;strong&gt;Adaptive RAG (Local)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;This tutorial focuses on implementing Adaptive RAG with local models, allowing for offline retrieval and generation, which is crucial for environments with limited internet access or privacy concerns.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_adaptive_rag_local.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 &lt;strong&gt;Agentic RAG&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤖 AI / Intelligent Agents&lt;/td&gt;
&lt;td style="text-align: left"&gt;Learn to build an Agentic RAG system where an agent determines the best retrieval strategy before generating a response, improving the relevance and accuracy of answers.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_agentic_rag.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🤖 &lt;strong&gt;Agentic RAG (Local)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🤖 AI / Intelligent Agents&lt;/td&gt;
&lt;td style="text-align: left"&gt;This tutorial extends Agentic RAG to local environments, enabling the use of local models and data sources for retrieval and generation tasks.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_agentic_rag_local.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 &lt;strong&gt;Corrective RAG (CRAG)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;Implement a Corrective RAG system that evaluates and refines retrieved documents before passing them to the generator, ensuring higher-quality outputs.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_crag.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 &lt;strong&gt;Corrective RAG (Local)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;This tutorial focuses on building a Corrective RAG system using local resources, allowing for offline document evaluation and refinement processes.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_crag_local.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 &lt;strong&gt;Self-RAG&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;Learn to implement Self-RAG, where the system reflects on its responses and retrieves additional information if necessary, enhancing the accuracy and relevance of generated content.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_self_rag.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;🧠 &lt;strong&gt;Self-RAG (Local)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;🧠 AI / Information Retrieval&lt;/td&gt;
&lt;td style="text-align: left"&gt;This tutorial demonstrates how to implement Self-RAG using local models and data sources, enabling offline reflection and retrieval processes.&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;a class="link" href="https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/rag/langgraph_self_rag_local.ipynb" target="_blank" rel="noopener"
&gt;&lt;img src="https://img.shields.io/static/v1?label=AI&amp;#43;Agent&amp;#43;Code&amp;amp;message=Python&amp;amp;color=%23244cd1"
loading="lazy"
alt="AI Agent Code - Python"
&gt;&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item><item><title>NeuroTechEDU's Awesome List of BCI-related Resources</title><link>https://hanguangwu.github.io/blog/en/p/neurotechedus-awesome-list-of-bci-related-resources/</link><pubDate>Mon, 02 Feb 2026 12:34:25 -0800</pubDate><guid>https://hanguangwu.github.io/blog/en/p/neurotechedus-awesome-list-of-bci-related-resources/</guid><description>&lt;h1 id="neurotechedus-awesome-list-of-bci-related-resources"&gt;NeuroTechEDU&amp;rsquo;s Awesome List of BCI-related Resources
&lt;/h1&gt;&lt;h2 id="introduction"&gt;Introduction
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/NeuroTechX/awesome-bci" target="_blank" rel="noopener"
&gt;Curated Collection of BCI resources&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is a list of tools, resources, and learning materials related to Brain-Computer Interfaces (BCI). The list is maintained by the &lt;a class="link" href="https://neurotechx.com/" target="_blank" rel="noopener"
&gt;NeuroTechX&lt;/a&gt; community.&lt;/p&gt;
&lt;h2 id="software"&gt;Software
&lt;/h2&gt;&lt;h3 id="bci-experiment-design-and-analysis"&gt;BCI Experiment Design and Analysis
&lt;/h3&gt;&lt;p&gt;These applications help you design BCI experiments, run them, collect data, and analyze the results.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/NeuroTechX/eeg-expy" target="_blank" rel="noopener"
&gt;EEG-ExPy&lt;/a&gt;: Free &amp;amp; Open-Source (FOSS) Python library for EEG &amp;amp; experiment design, recording, and analysis. Maintained by the EEG-ExPy team within NeuroTechX. &lt;a class="link" href="https://bit.ly/m/eeg-expy-cns" target="_blank" rel="noopener"
&gt;CNS2024 Poster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://openvibe.inria.fr/" target="_blank" rel="noopener"
&gt;OpenViBE&lt;/a&gt;: A software platform dedicated to designing, testing, and using Brain-Computer Interfaces, maintained by the OpenViBE Consortium.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.bci2000.org/mediawiki/index.php/Main_Page" target="_blank" rel="noopener"
&gt;BCI2000&lt;/a&gt;: Software suite with GUI based on C++ for data acquisition, stimulus presentation, and brain monitoring applications.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://neuroimage.usc.edu/brainstorm/" target="_blank" rel="noopener"
&gt;Brainstorm&lt;/a&gt;: Collaborative, open-source application dedicated to the analysis of brain recordings: MEG, EEG, fNIRS, ECoG, depth electrodes and multiunit electrophysiology.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.shifz.org/brainbay/" target="_blank" rel="noopener"
&gt;BrainBay&lt;/a&gt;: Bio- and neurofeedback application working with various hardware frameworks including OpenBCI/OpenEEG.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://okazolab.com/" target="_blank" rel="noopener"
&gt;EventIDE&lt;/a&gt;: A software platform for designing and running multimodal experiments, built around an IDE.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://neuropype.io" target="_blank" rel="noopener"
&gt;NeuroPype&lt;/a&gt;: A platform for real-time brain-computer interfacing (BCI), neuroimaging, and neural signal processing that supports a range of biosignal modalities, including EEG, fNIRS, and ExG.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://mne.tools/stable/install/mne_tools_suite.html" target="_blank" rel="noopener"
&gt;MNE&lt;/a&gt;: MNE-Python is an open-source Python module for processing, analysis, and visualization of functional neuroimaging data (EEG, MEG, sEEG, ECoG, and fNIRS). The tool suite includes interoperable packages in Python, MATLAB, C++, etc., which can be used via GUI, CLI, or API.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.psychopy.org/builder/" target="_blank" rel="noopener"
&gt;PsychoPy Builder&lt;/a&gt;: PsychoPy is an open-source application for creating experiments in neuroscience, psychology, and psychophysics.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://psychtoolbox.org/" target="_blank" rel="noopener"
&gt;PsychToolBox&lt;/a&gt;: Psychophysics Toolbox Version 3 (PTB-3) is a free set of Matlab and GNU Octave functions for vision and neuroscience research.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="matlab-toolboxes"&gt;Matlab Toolboxes
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://sccn.ucsd.edu/eeglab/" target="_blank" rel="noopener"
&gt;EEGLab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.fieldtriptoolbox.org/" target="_blank" rel="noopener"
&gt;FieldTrip&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://sccn.ucsd.edu/wiki/BCILAB" target="_blank" rel="noopener"
&gt;BCILab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/bbci/bbci_public" target="_blank" rel="noopener"
&gt;BBCI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://erpinfo.org/erplab" target="_blank" rel="noopener"
&gt;ERPLAB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://psychtoolbox.org" target="_blank" rel="noopener"
&gt;Psychtoolbox&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://chronux.org/" target="_blank" rel="noopener"
&gt;Chronux&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="python-toolboxes"&gt;Python Toolboxes
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/thunder-project/thunder" target="_blank" rel="noopener"
&gt;Thunder&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/bbci/pyff" target="_blank" rel="noopener"
&gt;Pyff&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/bbci/mushu" target="_blank" rel="noopener"
&gt;Mushu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/bbci/wyrm" target="_blank" rel="noopener"
&gt;Wyrm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/curiositry/EEGrunt" target="_blank" rel="noopener"
&gt;EEGrunt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://getcloudbrain.com/" target="_blank" rel="noopener"
&gt;Cloudbrain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/mne-tools/mne-python" target="_blank" rel="noopener"
&gt;MNE-Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/strfry/OpenNFB" target="_blank" rel="noopener"
&gt;OpenNFB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/octopicorn/bcikit" target="_blank" rel="noopener"
&gt;bcikit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.psychopy.org/" target="_blank" rel="noopener"
&gt;PsychoPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/PIA-Group/BioSPPy" target="_blank" rel="noopener"
&gt;BioSPPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://timeflux.io" target="_blank" rel="noopener"
&gt;Timeflux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/eegsynth/eegsynth" target="_blank" rel="noopener"
&gt;EEGsynth&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/pyRiemann/pyRiemann" target="_blank" rel="noopener"
&gt;pyRiemann&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/NeuroTechX/moabb" target="_blank" rel="noopener"
&gt;MOABB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/nmc-costa/neuroprime" target="_blank" rel="noopener"
&gt;NeuroPrime&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://braindecode.org/dev/index.html" target="_blank" rel="noopener"
&gt;Braindecode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://brainflow.org" target="_blank" rel="noopener"
&gt;Brainflow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/neurotechx/eeg-expy" target="_blank" rel="noopener"
&gt;EEG-ExPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/LMBooth/pybci" target="_blank" rel="noopener"
&gt;PyBCI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/arctop/mw75-streamer" target="_blank" rel="noopener"
&gt;mw75-streamer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
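Most of the Python toolboxes above share the same basic input: a sequence of voltage samples at a known sampling rate, often summarized as power in canonical EEG bands (alpha, beta, etc.). As a library-free sketch of that idea (a naive DFT; real pipelines would use MNE-Python, BioSPPy, or Brainflow, and the function and variable names here are illustrative only):

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    # Naive DFT restricted to the frequency bins that fall inside [f_lo, f_hi].
    n = len(samples)
    k_min = math.ceil(f_lo * n / fs)
    k_max = min(math.floor(f_hi * n / fs), n // 2)
    total = 0.0
    for k in range(k_min, k_max + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        total += (re * re + im * im) / (n * n)
    return total

fs = 250                                                            # samples per second
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(2 * fs)]  # 2 s of a 10 Hz "alpha" wave
alpha = band_power(sig, fs, 8.0, 12.0)   # band containing the signal: large
beta = band_power(sig, fs, 20.0, 24.0)   # empty band: near zero
```

On this synthetic signal the 8-12 Hz band dominates, which is the same contrast a neurofeedback or motor-imagery pipeline would act on, just computed by a real toolbox with windowing and artifact handling.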
&lt;h3 id="mobile-apps"&gt;Mobile Apps
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;MindMonitor: &lt;a class="link" href="https://apps.apple.com/ca/app/mind-monitor/id988527143" target="_blank" rel="noopener"
&gt;iOS App Store&lt;/a&gt;, &lt;a class="link" href="https://play.google.com/store/apps/details?id=com.sonicPenguins.museMonitor" target="_blank" rel="noopener"
&gt;Google Play Store&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;NeuroSky Android SDK: &lt;a class="link" href="https://github.com/pwittchen/neurosky-android-sdk" target="_blank" rel="noopener"
&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;EEG-101 (Now-deprecated): &lt;a class="link" href="https://github.com/NeuroTechX/eeg-101" target="_blank" rel="noopener"
&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="brain-visualizations"&gt;Brain Visualizations
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://brainbox.pasteur.fr/" target="_blank" rel="noopener"
&gt;BrainBox&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://brainbrowser.cbrain.mcgill.ca/" target="_blank" rel="noopener"
&gt;BrainBrowser&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://zzz.bwh.harvard.edu/luna/apps/moonlight/" target="_blank" rel="noopener"
&gt;Moonlight&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="raspberrypi-framework"&gt;RaspberryPi Framework
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://pieeg.com/" target="_blank" rel="noopener"
&gt;PiEEG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/BlinoTech/BlinoTech.github.io" target="_blank" rel="noopener"
&gt;Blino PiNaps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/AtlantsEmbedded/IntelliPi" target="_blank" rel="noopener"
&gt;IntelliPi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="communication-protocols"&gt;Communication Protocols
&lt;/h3&gt;&lt;p&gt;These are some of the commonly used communication protocols.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/sccn/labstreaminglayer" target="_blank" rel="noopener"
&gt;Lab Streaming Layer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.opensoundcontrol.org/" target="_blank" rel="noopener"
&gt;Open Sound Control&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.fieldtriptoolbox.org/development/realtime/buffer_protocol/" target="_blank" rel="noopener"
&gt;FieldTrip buffer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="hardware"&gt;Hardware
&lt;/h2&gt;&lt;p&gt;This section is organized by type of technology.&lt;/p&gt;
&lt;h3 id="eeg"&gt;EEG
&lt;/h3&gt;&lt;p&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Electroencephalography" target="_blank" rel="noopener"
&gt;Electroencephalography&lt;/a&gt; is the most commonly used form of neurotechnology. There are many options available, so you can easily find a device that matches your needs and budget.&lt;/p&gt;
&lt;h4 id="consumer-and-diy-devices"&gt;Consumer and DIY Devices
&lt;/h4&gt;&lt;p&gt;Some of these devices are still supported and actively developed by manufacturers, community members, or researchers. Others are no longer supported but may still have a community of users who can help you get access.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://choosemuse.com/" target="_blank" rel="noopener"
&gt;Muse 2016, Muse 2, Muse S&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://openbci.com" target="_blank" rel="noopener"
&gt;OpenBCI Ganglion, Cyton, Daisy, Galea&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://iduntechnologies.com/idun-guardian" target="_blank" rel="noopener"
&gt;IDUN Guardian&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://neurable.com/" target="_blank" rel="noopener"
&gt;Neurable MW75 Neuro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://neurosity.co/" target="_blank" rel="noopener"
&gt;Neurosity Crown&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://brainbit.com/" target="_blank" rel="noopener"
&gt;BrainBit Headband &amp;amp; Flex&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://emotiv.com" target="_blank" rel="noopener"
&gt;Emotiv EPOC, Flex, Insight&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://beacon.bio/dreem-headband/" target="_blank" rel="noopener"
&gt;Dreem by Beacon Biosignals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.cognionics.com/" target="_blank" rel="noopener"
&gt;Cognionics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://elemindtech.com/" target="_blank" rel="noopener"
&gt;Elemind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://neurosky.com/" target="_blank" rel="noopener"
&gt;Neurosky&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.crowdsupply.com/neuroidss/freeeeg32" target="_blank" rel="noopener"
&gt;FreeEEG32: an open source 32 channel eeg&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://bakerdh.wordpress.com/2013/01/31/a-first-look-at-the-olimex-eeg-smt/" target="_blank" rel="noopener"
&gt;EEG-SMT by Olimex&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.crowdsupply.com/starcat/hackeeg" target="_blank" rel="noopener"
&gt;HackEEG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://icibici.github.io/site/" target="_blank" rel="noopener"
&gt;icibici&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://openeeg.sourceforge.net/doc/" target="_blank" rel="noopener"
&gt;OpenEEG&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="research-devices-manufactures"&gt;Research Device Manufacturers
&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://www.wearablesensing.com" target="_blank" rel="noopener"
&gt;Wearable Sensing Dry Electrode EEG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.gtec.at" target="_blank" rel="noopener"
&gt;g.tec&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.egi.com/" target="_blank" rel="noopener"
&gt;EGI High Density EEG&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.biosemi.com/" target="_blank" rel="noopener"
&gt;BioSemi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.ant-neuro.com" target="_blank" rel="noopener"
&gt;ANT Neuro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.advancedbrainmonitoring.com" target="_blank" rel="noopener"
&gt;Advanced Brain Monitoring&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.brainproducts.com/" target="_blank" rel="noopener"
&gt;Brain Products&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://mentalab.com/" target="_blank" rel="noopener"
&gt;Mentalab Explore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.natus.com" target="_blank" rel="noopener"
&gt;Natus Neuro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.tmsi.com/products/" target="_blank" rel="noopener"
&gt;TMSi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="eeg-parts--supplies"&gt;EEG Parts &amp;amp; Supplies
&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://conscious-labs.com/3-eeg-devices" target="_blank" rel="noopener"
&gt;Conscious Labs - EEG Supra Headphones&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.emotiv.com/products/flex-gel" target="_blank" rel="noopener"
&gt;Emotiv Flex Gel&lt;/a&gt; &amp;amp; &lt;a class="link" href="https://www.emotiv.com/products/flex-saline" target="_blank" rel="noopener"
&gt;Emotiv Flex Saline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://fri-fl-shop.com/" target="_blank" rel="noopener"
&gt;Florida Research Instruments&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://oshpark.com/shared_projects/h2i1xBaW" target="_blank" rel="noopener"
&gt;DIY Electrode Design&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.ti.com/tool/ads1299eegfe-pdk" target="_blank" rel="noopener"
&gt;TI ADS1299EEG-FE&lt;/a&gt;: Analog front end for EEG solutions, used e.g. in the OpenBCI Cyton.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://intantech.com" target="_blank" rel="noopener"
&gt;Intan Technologies&lt;/a&gt;: Microchips and miniature recording &amp;amp; stimulation headstages.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://shop.openbci.com/products/idun-dryode-kit" target="_blank" rel="noopener"
&gt;IDUN Dryode&lt;/a&gt;: Adhesive dry electrodes for EEG.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://bio-medical.com/supplies/eeg-electrodes.html" target="_blank" rel="noopener"
&gt;Bio-Medical&lt;/a&gt;: For supplies and consumables&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.sciencedirect.com/science/article/pii/S1388245704003906" target="_blank" rel="noopener"
&gt;Comparison of different types of electrodes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="nirs"&gt;NIRS
&lt;/h3&gt;&lt;p&gt;Near-Infrared Spectroscopy (NIRS) measures changes in hemoglobin concentration across brain regions, which can be used to infer local energy expenditure and hence increased activity in a region.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://www.artinis.com/" target="_blank" rel="noopener"
&gt;Artinis Medical Systems&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.cortivision.com/" target="_blank" rel="noopener"
&gt;CortiVision&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.hitachi-hightech.com/global/product_list/?ld=iis1&amp;amp;md=iis1-6" target="_blank" rel="noopener"
&gt;Hitachi Hightech&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://nirx.net/" target="_blank" rel="noopener"
&gt;NIRx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.ssi.shimadzu.com/products/productgroup.cfm?subcatlink=tissueimaging" target="_blank" rel="noopener"
&gt;Shimadzu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://kernel.co" target="_blank" rel="noopener"
&gt;Kernel Flow&lt;/a&gt;: EEG + TD-fNIRS&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="multimodal-neurotech"&gt;Multimodal Neurotech
&lt;/h3&gt;&lt;p&gt;These devices combine different types of sensors to measure or influence brain activity.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://foc.us/focus-eeg-dev-kit-instructions-guide/" target="_blank" rel="noopener"
&gt;Foc.us Dev Kit: EEG, tDCS, fNIRS, tACS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.neuroelectrics.com/" target="_blank" rel="noopener"
&gt;Neuroelectrics: EEG, tDCS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.bitalino.com/" target="_blank" rel="noopener"
&gt;BITalino: EEG, EMG, ECG, EDA&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.emotibit.com/" target="_blank" rel="noopener"
&gt;Emotibit: EDA, PPG, temperature&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="brain-stimulation"&gt;Brain Stimulation
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.instructables.com/id/Transcranial-Magnetic-Stimulation-TMS-Device/" target="_blank" rel="noopener"
&gt;DIY TMS&lt;/a&gt;: Transcranial Magnetic Stimulation (TMS)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.bostonscientific.com/en-US/products/deep-brain-stimulation-systems.html" target="_blank" rel="noopener"
&gt;Boston Scientific&lt;/a&gt;: DBS, SCS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.medtronic.com/us-en/index.html" target="_blank" rel="noopener"
&gt;Medtronic&lt;/a&gt;: DBS, tES, SCS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.magstim.com/" target="_blank" rel="noopener"
&gt;Magstim&lt;/a&gt;: TMS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.soterixmedical.com/" target="_blank" rel="noopener"
&gt;Soterix Medical&lt;/a&gt;: tDCS, tACS, tRNS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://clarity-technologies.com/" target="_blank" rel="noopener"
&gt;Clarity&lt;/a&gt;: Light &amp;amp; Stimulation therapy for Alzheimer&amp;rsquo;s Disease&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://vielight.com/" target="_blank" rel="noopener"
&gt;Vielight&lt;/a&gt;: Transcranial Photobiomodulation&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.neuroelectrics.com/" target="_blank" rel="noopener"
&gt;Neuroelectrics&lt;/a&gt;: tDCS, tACS, tRNS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.neuropace.com/" target="_blank" rel="noopener"
&gt;NeuroPace&lt;/a&gt;: RNS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://nervexneurotech.com/" target="_blank" rel="noopener"
&gt;NerveX&lt;/a&gt;: VNS for canine epilepsy.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://neurosigma.com/" target="_blank" rel="noopener"
&gt;NeuroSigma&lt;/a&gt;: eTNS&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.brainsway.com/" target="_blank" rel="noopener"
&gt;Brainsway&lt;/a&gt;: Deep TMS&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="upcoming-neuroimaging-tech"&gt;Upcoming NeuroImaging Tech
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://fultrasound.eu/" target="_blank" rel="noopener"
&gt;Functional Ultrasound (FUS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Event-related_optical_signal" target="_blank" rel="noopener"
&gt;Event Related Optical Signal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.researchgate.net/publication/223360817_Shedding_light_on_brain_function_The_event-related_optical_signal" target="_blank" rel="noopener"
&gt;Event-Related Optical Signal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/pdf/cond-mat/9906188.pdf" target="_blank" rel="noopener"
&gt;Quasi-Ballistic Photons (the technology behind Facebook&amp;rsquo;s BCI effort)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/OpenEIT/EIT_PCB" target="_blank" rel="noopener"
&gt;Open Electrical Impedance Tomography&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805039/" target="_blank" rel="noopener"
&gt;Optically Pumped Magnetometers (OPM)&lt;/a&gt;, e.g., &lt;a class="link" href="https://quspin.com/" target="_blank" rel="noopener"
&gt;QuSpin&lt;/a&gt; and &lt;a class="link" href="https://www.cercamagnetics.com/cerca-opm-meg" target="_blank" rel="noopener"
&gt;Cerca&lt;/a&gt;:
&lt;ul&gt;
&lt;li&gt;Uses optical pumping to run highly sensitive magnetometers that measure the changes in magnetic fields produced by neural activity.&lt;/li&gt;
&lt;li&gt;Does not need helium cooling like conventional (SQUID) MEG, and hence is much smaller, lighter, and somewhat cheaper.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Diffuse Optical Imaging: Used, for instance, by Mary Lou Jepsen et al. at &lt;a class="link" href="https://www.openwater.health/" target="_blank" rel="noopener"
&gt;Openwater&lt;/a&gt;, who aim to build a portable MRI. More on the technology:
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Diffuse_optical_imaging" target="_blank" rel="noopener"
&gt;Diffuse optical imaging pt. 1 (wiki)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://drive.google.com/file/d/0B-G2rraXdWRlenk2U0QzbW9PdkU/view?usp=sharing" target="_blank" rel="noopener"
&gt;Diffuse Optical Imaging pt. 2&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="brain-databases"&gt;Brain Databases
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://sccn.ucsd.edu/~arno/fam2data/publicly_available_EEG_data.html" target="_blank" rel="noopener"
&gt;SCCN list of EEG/ERP data for free public download&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.studycatalog.org/" target="_blank" rel="noopener"
&gt;EEG studies with the raw data&lt;/a&gt; - &lt;a class="link" href="http://www.bigeeg.org/" target="_blank" rel="noopener"
&gt;(from BigEEG Consortium)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://bnci-horizon-2020.eu/database/data-sets" target="_blank" rel="noopener"
&gt;BNCI Horizon Data Sets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://fcon_1000.projects.nitrc.org/indi/cmi_eeg/" target="_blank" rel="noopener"
&gt;The Child Mind Institute MIPDB Dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://memory.psych.upenn.edu/RAM" target="_blank" rel="noopener"
&gt;RAM (DARPA) Invasive Recording Dataset from U. Penn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://mindbigdata.com/opendb/index.html" target="_blank" rel="noopener"
&gt;MindBigData MNIST of Brain Digits&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.mindbigdata.com/opendb/imagenet.html" target="_blank" rel="noopener"
&gt;MindBigData ImageNet of The Brain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/meagmohit/EEG-Datasets" target="_blank" rel="noopener"
&gt;meagmohit&amp;rsquo;s List of EEG Datasets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openneuro.org/" target="_blank" rel="noopener"
&gt;OpenNeuro&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://physionet.org/" target="_blank" rel="noopener"
&gt;PhysioNet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://sleepdata.org/" target="_blank" rel="noopener"
&gt;National Sleep Research Resource&lt;/a&gt;: A large collection of sleep data. Supported by the Sleep Research Society (SRS).&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://isip.piconepress.com/projects/" target="_blank" rel="noopener"
&gt;Temple University EEG Corpora&lt;/a&gt;: various datasets including health, epilepsy, artifactual, etc.&lt;/li&gt;
&lt;/ul&gt;
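Many of the recordings in these databases (PhysioNet and the sleep resource in particular) are distributed in the European Data Format (EDF), whose fixed 256-byte space-padded ASCII header can be read with the standard library alone. A minimal sketch, exercised here on a synthetic header rather than a downloaded file (field widths follow the EDF specification; the sample values are made up):

```python
# Field names and widths (in bytes) of the fixed 256-byte EDF header, in order.
EDF_FIELDS = [
    ("version", 8), ("patient_id", 80), ("recording_id", 80),
    ("start_date", 8), ("start_time", 8), ("header_bytes", 8),
    ("reserved", 44), ("n_records", 8), ("record_duration", 8),
    ("n_signals", 4),
]

def parse_edf_header(raw):
    # raw: the first 256 bytes of an EDF file; each field is space-padded ASCII.
    fields = {}
    offset = 0
    for name, width in EDF_FIELDS:
        fields[name] = raw[offset:offset + width].decode("ascii").strip()
        offset += width
    return fields

# Build a synthetic header (illustrative values only) and parse it back.
parts = ["0", "patient X", "rec 1", "02.08.51", "10.00.00", "256",
         "", "100", "1", "2"]
raw = b"".join(p.encode("ascii").ljust(w) for p, (_name, w) in zip(parts, EDF_FIELDS))
header = parse_edf_header(raw)
```

For real files, the per-signal header records (labels, physical ranges, samples per record) follow these 256 bytes; libraries such as MNE-Python or pyEDFlib handle those and the sample data.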
&lt;h2 id="tutorials-and-project-ideas"&gt;Tutorials and Project Ideas
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://eegedu.com" target="_blank" rel="noopener"
&gt;EEGEdu&lt;/a&gt;: Web-based live Tutorial on EEG and BCI, from basic to advanced. Maintained by the Mathewsons (&lt;a class="link" href="https://sites.psych.ualberta.ca/kylemathewson/" target="_blank" rel="noopener"
&gt;Ky&lt;/a&gt;&lt;a class="link" href="https://korymathewson.com/" target="_blank" rel="noopener"
&gt;Kor&lt;/a&gt;&lt;a class="link" href="https://www.linkedin.com/in/keyfer/" target="_blank" rel="noopener"
&gt;Key&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.frontiernerds.com/brain-hack" target="_blank" rel="noopener"
&gt;How to Hack Toy EEGs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/bcimontreal/bci_workshop/blob/master/INSTRUCTIONS.md" target="_blank" rel="noopener"
&gt;BCI Workshop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://sccn.ucsd.edu/wiki/Introduction_To_Modern_Brain-Computer_Interface_Design" target="_blank" rel="noopener"
&gt;Introduction to Modern BCI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://eeghacker.blogspot.com/2015/03/brain-controlled-shark-attack.html" target="_blank" rel="noopener"
&gt;Brain-Controlled Shark Attack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/neuralcubes/musephero" target="_blank" rel="noopener"
&gt;Controlling a Sphero with a Muse&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/jmanart/smartphone-bci" target="_blank" rel="noopener"
&gt;Building a 20 Euro EEG for your smartphone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://web.archive.org/web/20240930191736/https://openvibe.inria.fr/forum/viewtopic.php?f=3&amp;amp;t=9668" target="_blank" rel="noopener"
&gt;Muse File Reader for OpenVibe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/NeuroTechX/eeg-101" target="_blank" rel="noopener"
&gt;EEG 101: Interactive tutorial for Android and Muse&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/katie356/BrainwaveAnalyzer/tree/master/web-edition" target="_blank" rel="noopener"
&gt;Brainwave analyzer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://naplab.ee.columbia.edu/bcilab.html" target="_blank" rel="noopener"
&gt;BCI Course offered by Columbia University&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/NeurotechBerkeley/bci-course" target="_blank" rel="noopener"
&gt;BCI Course at Berkeley by Pierre of NeuroTechX&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.humanbrainmapping.org/m/pages.cfm?pageID=3814" target="_blank" rel="noopener"
&gt;EEG and MRI Course offered by OHBM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/inclusive-brains/prometheus-bci" target="_blank" rel="noopener"
&gt;Prometheus Multimodal BCI (Olympic Torch)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="communities-and-blogs"&gt;Communities and Blogs
&lt;/h2&gt;&lt;h3 id="forums"&gt;Forums
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://neurobb.com/" target="_blank" rel="noopener"
&gt;NeuroBB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://openbci.com/community/" target="_blank" rel="noopener"
&gt;OpenBCI Community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://forum.choosemuse.com/" target="_blank" rel="noopener"
&gt;Muse Community&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://support.neurosky.com/discussions" target="_blank" rel="noopener"
&gt;NeuroSky&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://forum.emotiv.com/" target="_blank" rel="noopener"
&gt;Emotiv&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="blogs"&gt;Blogs
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://neurotechx.medium.com/" target="_blank" rel="noopener"
&gt;NeuroTechX Content Lab&lt;/a&gt;: Articles, tutorials, and interviews on neurotechnology&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://eegnewsletter.substack.com/" target="_blank" rel="noopener"
&gt;The EEG Newsletter&lt;/a&gt;: News, events, and resources in EEG. By Raquel E. London&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://nschawor.github.io/posts/" target="_blank" rel="noopener"
&gt;Natalie Schaworonkow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.autodidacts.io/" target="_blank" rel="noopener"
&gt;Autodidact&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://strfry.org/blog/" target="_blank" rel="noopener"
&gt;Strfry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://sites.google.com/site/fabienlotte/research/code-and-softwares" target="_blank" rel="noopener"
&gt;Fabien Lotte&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://eeghacker.blogspot.ca/" target="_blank" rel="noopener"
&gt;Chip Audette EEG Hacker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://alexandre.barachant.org/" target="_blank" rel="noopener"
&gt;Alexandre Barachant&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://lambdaloop.com/" target="_blank" rel="noopener"
&gt;Pierre Karashchuk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://phd.jfrey.info/" target="_blank" rel="noopener"
&gt;Jeremy Frey&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.irenevigueguix.com" target="_blank" rel="noopener"
&gt;Irene Vigué Guix&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="competitions"&gt;Competitions
&lt;/h2&gt;&lt;h3 id="data-competitions"&gt;Data Competitions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.kaggle.com/c/grasp-and-lift-eeg-detection" target="_blank" rel="noopener"
&gt;Kaggle Grasp and Lift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.kaggle.com/c/inria-bci-challenge" target="_blank" rel="noopener"
&gt;Kaggle Error Detection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.kaggle.com/c/decoding-the-human-brain" target="_blank" rel="noopener"
&gt;Kaggle Decode the Human Brain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.kaggle.com/c/seizure-prediction" target="_blank" rel="noopener"
&gt;Kaggle Seizure Prediction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.kaggle.com/c/seizure-detection" target="_blank" rel="noopener"
&gt;Kaggle Seizure Detection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.bbci.de/competition/iv/" target="_blank" rel="noopener"
&gt;BCI Competition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.br41n.io/" target="_blank" rel="noopener"
&gt;BR41N.io BCI Competition&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
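Baseline entries in the EEG competitions above typically reduce each trial to simple spectral features; one common choice is band power taken from a Welch PSD. A small illustrative sketch (synthetic signal, not tied to any particular competition's data format):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    """Power of x in the fmin-fmax Hz band, integrated from Welch's PSD."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    i0, i1 = np.searchsorted(freqs, [fmin, fmax])
    return psd[i0 : i1 + 1].sum() * (freqs[1] - freqs[0])

fs = 250
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)  # a pure 10 Hz "alpha/mu" oscillation
alpha = band_power(x, fs, 8, 12)
beta = band_power(x, fs, 18, 30)
# essentially all of the power falls in the alpha band
```

Stacking such band powers per channel and per band gives a small feature matrix that a standard classifier can consume.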
&lt;h3 id="brain-controlled-competitions"&gt;Brain Controlled Competitions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://braindronerace.com/" target="_blank" rel="noopener"
&gt;Brain Drone Competition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.cybathlon.ethz.ch/" target="_blank" rel="noopener"
&gt;Cybathlon&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="conferences-and-events"&gt;Conferences and Events
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://bcisociety.org/events/" target="_blank" rel="noopener"
&gt;&lt;strong&gt;List&lt;/strong&gt;: Curated list of events (BCI Society)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://bcisociety.org/bci-thursdays-online-events/" target="_blank" rel="noopener"
&gt;BCI Thursdays (BCI Society)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://bcisociety.org/bci-meeting/" target="_blank" rel="noopener"
&gt;BCI Meeting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.neurogamingconf.com/" target="_blank" rel="noopener"
&gt;NeuroGaming / XTech&lt;/a&gt; &lt;a class="link" href="https://www.youtube.com/user/NeuroGamingCon/videos" target="_blank" rel="noopener"
&gt;(YouTube videos)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://chi2016.acm.org/wp/" target="_blank" rel="noopener"
&gt;CHI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://conference.israelbrain.org/" target="_blank" rel="noopener"
&gt;BrainTech&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://brainsummit.com/" target="_blank" rel="noopener"
&gt;Brain Summit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://nips.cc/" target="_blank" rel="noopener"
&gt;NeurIPS (formerly NIPS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.sfn.org/" target="_blank" rel="noopener"
&gt;SfN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.gtec.at/event/bci-neurotechnology-spring-school-2025/" target="_blank" rel="noopener"
&gt;g.tec SpringSchool on BCI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="reading-material"&gt;Reading Material
&lt;/h2&gt;&lt;h3 id="papers"&gt;Papers
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.researchgate.net/publication/51727880_Multiclass_Brain-Computer_Interface_Classification_by_Riemannian_Geometry" target="_blank" rel="noopener"
&gt;Multiclass Brain-Computer Interface Classification by Riemannian Geometry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.researchgate.net/publication/258144410_A_New_Generation_of_Brain-Computer_Interface_Based_on_Riemannian_Geometry" target="_blank" rel="noopener"
&gt;A New Generation of Brain-Computer Interface Based on Riemannian Geometry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130129" target="_blank" rel="noopener"
&gt;My Virtual Dream: Collective Neurofeedback in an Immersive Art Environment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3272647/" target="_blank" rel="noopener"
&gt;BCI Competition IV – Data Set I: Learning Discriminative Patterns for Self-Paced EEG-Based Motor Imagery Detection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://doc.ml.tu-berlin.de/bbci/publications/BlaLemTreHauMue10.pdf" target="_blank" rel="noopener"
&gt;Single-Trial Analysis and Classification of ERP Components – a Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.researchgate.net/publication/301817936_Interpretable_Deep_Neural_Networks_for_Single-Trial_EEG_Classification" target="_blank" rel="noopener"
&gt;Interpretable Deep Neural Networks for Single-Trial EEG Classification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0148886" target="_blank" rel="noopener"
&gt;Large-Scale Assessment of a Fully Automatic Co-Adaptive Motor Imagery-Based Brain Computer Interface&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.nature.com/articles/srep25803" target="_blank" rel="noopener"
&gt;Word pair classification during imagined speech using direct brain recording&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.ncbi.nlm.nih.gov/pubmed/28275048" target="_blank" rel="noopener"
&gt;Brain-Computer Interfaces Review, Nicolelis &amp;amp; Lebedev. 2017&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.pnas.org/content/112/44/E6058.abstract" target="_blank" rel="noopener"
&gt;High-speed spelling with a noninvasive brain–computer interface&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0172400" target="_blank" rel="noopener"
&gt;A high-speed brain-computer interface (BCI) using dry EEG electrodes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
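The two Riemannian-geometry papers above classify trials through their spatial covariance matrices, compared under the affine-invariant metric. As a hedged illustration of the core quantity only (not the authors' code), the distance can be computed from generalized eigenvalues:

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B.

    Equal to sqrt of the sum of squared logs of the generalized
    eigenvalues of the pencil (B, A); zero exactly when A equals B.
    """
    lam = eigvalsh(B, A)  # solves B v = lam * A v
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Toy trial covariances from two simulated (channels, samples) recordings
rng = np.random.default_rng(0)
X1 = rng.standard_normal((4, 500))
X2 = 2.0 * rng.standard_normal((4, 500))
C1 = X1 @ X1.T / 500
C2 = X2 @ X2.T / 500
print(riemannian_distance(C1, C1))  # 0 (up to round-off)
print(riemannian_distance(C1, C2))  # clearly positive
```

In a minimum-distance-to-mean classifier, each class is summarized by a geometric mean covariance and new trials are assigned to the nearest class under this distance.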
&lt;h3 id="introductory-books"&gt;Introductory Books
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Beyond-Boundaries-Neuroscience-Connecting-Machines/dp/1250002613" target="_blank" rel="noopener"
&gt;Beyond Boundaries (Nicolelis)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Rhythms-Brain-Gyorgy-Buzsaki/dp/0199828237" target="_blank" rel="noopener"
&gt;Rhythms of the Brain (Buzsáki)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Cycles-mind-rhythms-control-perception-ebook/dp/B013ZI5AIA" target="_blank" rel="noopener"
&gt;Cycles in mind (Cohen)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Principles-Neural-Science-Eric-Kandel/dp/0838577016" target="_blank" rel="noopener"
&gt;Principles of Neural Science (Kandel et al.)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/The-Future-Mind-Scientific-Understand/dp/038553082X" target="_blank" rel="noopener"
&gt;The Future of the Mind (Kaku)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="technical-books"&gt;Technical Books
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Brain-Computer-Interfacing-Introduction-Rajesh-Rao/dp/0521769418" target="_blank" rel="noopener"
&gt;Brain-Computer Interfacing: An Introduction (Rao)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Brain-Computer-Interfaces-Principles-Jonathan-Wolpaw/dp/0195388852" target="_blank" rel="noopener"
&gt;Brain Computer Interfaces (Wolpaw)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://mitpress.mit.edu/books/analyzing-neural-time-series-data" target="_blank" rel="noopener"
&gt;Analyzing Neural Time Series Data (Cohen)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.springer.com/us/book/9781461449836" target="_blank" rel="noopener"
&gt;Imaging Brain Function with EEG (Freeman &amp;amp; Quiroga)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/MATLAB-Neuroscientists-Introduction-Scientific-Computing/dp/0123745519" target="_blank" rel="noopener"
&gt;MATLAB for Neuroscientists&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://books.google.ca/books?id=EJeQ0hAB76gC&amp;amp;pg=PR3&amp;amp;redir_esc=y#v=onepage&amp;amp;q&amp;amp;f=false" target="_blank" rel="noopener"
&gt;Biomedical Optics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://imotions.com/blog/eeg-books/" target="_blank" rel="noopener"
&gt;iMotions Top 10 EEG Books&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="signal-processing"&gt;Signal Processing
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="http://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/" target="_blank" rel="noopener"
&gt;Signals &amp;amp; Systems MIT Class&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Berkeley DSP class &lt;a class="link" href="https://www.youtube.com/watch?v=6_-ljdxjwac&amp;amp;list=PL-XXv-cvA_iCUQkarn2fxB3NggnPF_dob" target="_blank" rel="noopener"
&gt;lectures&lt;/a&gt;, &lt;a class="link" href="https://inst.eecs.berkeley.edu/~ee123/sp15/" target="_blank" rel="noopener"
&gt;page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Signals-Systems-Edition-Alan-Oppenheim/dp/0138147574" target="_blank" rel="noopener"
&gt;Signals &amp;amp; Systems (Oppenheim, Willsky, Hamid)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="http://www.amazon.com/Discrete-Time-Signal-Processing-Edition-Prentice-Hall/dp/0137549202" target="_blank" rel="noopener"
&gt;Discrete-Time Signal Processing (2nd Edition) (Oppenheim, Schafer, Buck)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.youtube.com/@mikexcohen1" target="_blank" rel="noopener"
&gt;Data analysis lecturelets (Mike X Cohen)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
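The DSP courses and texts above cover the filtering that nearly every EEG pipeline begins with. As a small hedged sketch of one standard step, a zero-phase Butterworth band-pass isolating the 8-12 Hz alpha band from a noisy synthetic signal (scipy; the parameter choices are illustrative, not prescriptive):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass (forward-backward filtering)."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

fs = 250  # Hz
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)  # alpha + line noise
y = bandpass(x, fs, 8, 12)  # the 50 Hz component is removed, 10 Hz passes
```

Forward-backward filtering via filtfilt cancels the filter's phase delay, which matters when the timing of an oscillation relative to a stimulus is the quantity of interest.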
&lt;h2 id="schools--summer-courses"&gt;Schools &amp;amp; Summer Courses
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://neurotechmicrocreds.com/" target="_blank" rel="noopener"
&gt;NeuroTech MicroCredentials Course&lt;/a&gt;: An accredited series of theoretical and hands-on courses on Neurotechnology, offered by NeuroTechX and Queen&amp;rsquo;s University.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://neuromatch.io/courses/" target="_blank" rel="noopener"
&gt;Neuromatch Academy (NMA) Summer Schools&lt;/a&gt;: An online, community-driven set of summer schools in computational sciences&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://sincxpress.com/summerschool.html" target="_blank" rel="noopener"
&gt;Sincxpress summer schools&lt;/a&gt; by Mike X Cohen&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://brainhack.org/" target="_blank" rel="noopener"
&gt;Brainhack&lt;/a&gt;: A community-driven, online, and in-person school for neurotech enthusiasts, with events in many cities worldwide.&lt;/li&gt;
&lt;li&gt;Recurring summer schools or community-maintained lists of Neurotech-related summer schools
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://nschawor.github.io/posts/2024/neuro-summer-schools/" target="_blank" rel="noopener"
&gt;List maintained by N. Schaworonkow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://nayanika-biswas.notion.site/58f1530bd891475eb92f1e2e4984022f?v=83fc50c53b3a4191aa6f7cdf8d9b4e40" target="_blank" rel="noopener"
&gt;List maintained by N. Biswas&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="other-resources"&gt;Other Resources
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.coursera.org/learn/medical-neuroscience" target="_blank" rel="noopener"
&gt;Neuroscience Duke Course (Coursera)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://web.archive.org/web/20230610175248/https://www.nuffieldbioethics.org/wp-content/uploads/2013/06/Novel_neurotechnologies_report_PDF_web_0.pdf" target="_blank" rel="noopener"
&gt;Novel Neurotechnologies Intervening in the Brain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4072086/" target="_blank" rel="noopener"
&gt;Augment Human Cognition by optimizing cortical oscillations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://open-neuroscience.com/" target="_blank" rel="noopener"
&gt;Open Neuroscience&lt;/a&gt; - a user-driven database of open source and open science projects related to neuroscience&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/okbalefthanded/awesome-bci-reviews" target="_blank" rel="noopener"
&gt;Awesome-BCI-Reviews&lt;/a&gt; - Curated list of peer-reviewed brain-computer interface reviews and surveys, ordered by year of publication.&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>