Abstract
COIN++ is a variant of implicit neural representation (INR) that encodes signals as modulations applied to a base INR network, and it has become a promising approach to image compression. However, the effectiveness of INRs is hindered by their inability to capture high-frequency details in the image representation. We therefore propose a novel training framework for COIN++ inspired by Chebyshev approximation. The framework maps coordinate inputs to Chebyshev polynomial domains, which minimizes the global fitting error, enhances the learning of high-frequency signals, and improves COIN++'s capability on image compression tasks. In addition, we design an adaptive image partitioning technique and an integrated quantization method to further improve the image compression performance of COIN++ within the framework. Experimental results show that our framework yields a notable improvement in both representational capacity and compression rate compared with the existing COIN++ baseline. In particular, we observe a PSNR improvement of 2.3 dB on CIFAR-10 and a 0.6 dB increase on the Kodak dataset.
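To make the coordinate mapping concrete, the sketch below shows one way such a Chebyshev-polynomial coordinate encoding could look before the encoded coordinates are fed to a base INR. The function name `chebyshev_features`, the polynomial degree, and the PyTorch details are illustrative assumptions; the abstract does not specify the paper's exact mapping.

```python
import torch

def chebyshev_features(coords: torch.Tensor, degree: int = 8) -> torch.Tensor:
    """Map coordinates in [-1, 1] to Chebyshev basis values T_0..T_degree.

    coords: (..., d) tensor of normalized pixel coordinates.
    Returns: (..., d * (degree + 1)) tensor built with the recurrence
    T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2x * T_k(x) - T_{k-1}(x).
    """
    feats = [torch.ones_like(coords), coords]
    for _ in range(2, degree + 1):
        feats.append(2 * coords * feats[-1] - feats[-2])
    return torch.cat(feats, dim=-1)

# Example: encode a 32x32 pixel grid (hypothetical CIFAR-10-sized patch).
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1)          # (32, 32, 2)
encoded = chebyshev_features(coords, degree=8)  # (32, 32, 18)
```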
Original language | English |
---|---|
Title of host publication | The 7th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2024 |
Publisher | Springer |
Publication status | Accepted/In press - 25 Jun 2024 |
Event | 7th Chinese Conference on Pattern Recognition and Computer Vision, Urumqi, Xinjiang, China; Duration: 18 Oct 2024 → 20 Oct 2024 |
Conference
Conference | 7th Chinese Conference on Pattern Recognition and Computer Vision |
---|---|
Abbreviated title | PRCV 2024 |
Country/Territory | China |
City | Urumqi, Xinjiang |
Period | 18 Oct 2024 → 20 Oct 2024 |
Keywords
- implicit neural representation
- COIN++
- Chebyshev approximation