Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression
Affiliation
Kingston University London; University of Chester
Publication Date
2024-02-21
Abstract
Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages versus synchronous capturing (frame-based video) include low power consumption, high dynamic range, extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), consisting of the time aggregation of NVS events in the form of pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to another of our previous works, where time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in compression ratio is highest for low-event-rate, low-complexity scenes, whereas the improvement is minimal for high-event-rate, high-complexity scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, for time aggregation intervals of less than 5 ms, its compression gains are lower than those of state-of-the-art approaches.
Citation
Adhuran, J., Khan, N., & Martini, M. G. (2024). Lossless encoding of time-aggregated neuromorphic vision sensor data based on point-cloud compression. Sensors, 24(5), 1382. https://doi.org/10.3390/s24051382
Publisher
MDPI
Journal
Sensors
Additional Links
https://www.mdpi.com/1424-8220/24/5/1382
Type
Article
Description
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
EISSN
1424-8220
Sponsors
Funder: EPSRC; Grant(s): EP/P022715/1
DOI
10.3390/s24051382
Except where otherwise noted, this item's license is described as Licence for VoR version of this article starting on 2024-02-21: https://creativecommons.org/licenses/by/4.0/
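The pipeline described in the abstract starts by aggregating the sensor's asynchronous events over fixed time intervals into per-pixel event histograms, which are then represented and compressed (via point-cloud compression in TALEN-PCC). The sketch below illustrates only that first, time-aggregation step, assuming a simple (timestamp, x, y, polarity) event tuple and a two-channel (negative/positive) count layout; the function name and exact representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def aggregate_events(events, width, height, interval_us):
    """Bin asynchronous NVS events into per-pixel event-count histograms.

    Minimal sketch of a time-aggregation step (illustrative, not the
    authors' code). `events` holds (t, x, y, p) rows with t in
    microseconds and polarity p in {0, 1}. Returns one
    (height, width, 2) count array per aggregation interval.
    """
    events = np.asarray(events)
    if events.size == 0:
        return []
    t0 = events[:, 0].min()
    # Index of the aggregation interval each event falls into.
    bins = ((events[:, 0] - t0) // interval_us).astype(int)
    frames = []
    for b in range(bins.max() + 1):
        hist = np.zeros((height, width, 2), dtype=np.uint16)
        for t, x, y, p in events[bins == b]:
            hist[int(y), int(x), int(p)] += 1  # count per pixel, per polarity
        frames.append(hist)
    return frames

# Toy usage: four events spanning two 5 ms (5000 us) intervals on a 4x4 sensor.
evts = [(0, 1, 1, 1), (100, 1, 1, 0), (200, 2, 3, 1), (6000, 0, 0, 1)]
frames = aggregate_events(evts, width=4, height=4, interval_us=5000)
print(len(frames))         # 2 intervals
print(frames[0][1, 1, 1])  # 1 positive event at pixel (x=1, y=1)
```

The abstract's observation that gains grow with the aggregation interval is intuitive in this picture: longer intervals accumulate more events per pixel, making the aggregated representation denser and more compressible.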