
dc.contributor.author: Adhuran, Jayasingam
dc.contributor.author: Khan, Nabeel
dc.contributor.author: Martini, Maria
dc.date.accessioned: 2024-03-01T02:26:36Z
dc.date.available: 2024-03-01T02:26:36Z
dc.date.issued: 2024-02-21
dc.identifier: https://chesterrep.openrepository.com/bitstream/handle/10034/628509/sensors-24-01382-v2.pdf?sequence=2
dc.identifier.citation: Adhuran, J., Khan, N., & Martini, M. G. (2024). Lossless encoding of time-aggregated neuromorphic vision sensor data based on point-cloud compression. Sensors, 24(5), 1382. https://doi.org/10.3390/s24051382
dc.identifier.doi: 10.3390/s24051382
dc.identifier.uri: http://hdl.handle.net/10034/628509
dc.description: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.description.abstract: Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages over synchronous capture (frame-based video) include low power consumption, high dynamic range, extremely high temporal resolution, and lower data rates. Although this acquisition strategy already results in much lower data rates than conventional video, NVS data can be compressed further. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), which consists of the time aggregation of NVS events into pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we again leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to another of our previous works, where time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in compression ratio is highest for low-event-rate and low-complexity scenes, whereas the improvement is minimal for high-complexity and high-event-rate scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals longer than 5 ms; however, its compression gains are lower than those of state-of-the-art approaches for intervals shorter than 5 ms.
dc.description.sponsorship: Funder: EPSRC; Grant(s): EP/P022715/1
dc.publisher: MDPI
dc.relation.url: https://www.mdpi.com/1424-8220/24/5/1382
dc.rights: Licence for VoR version of this article starting on 2024-02-21: https://creativecommons.org/licenses/by/4.0/
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: eissn: 1424-8220
dc.subject: Electrical and Electronic Engineering
dc.subject: Biochemistry
dc.subject: Instrumentation
dc.subject: Atomic and Molecular Physics, and Optics
dc.subject: Analytical Chemistry
dc.title: Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression
dc.type: Article
dc.identifier.eissn: 1424-8220
dc.contributor.department: Kingston University London; University of Chester
dc.identifier.journal: Sensors
dc.date.updated: 2024-03-01T02:26:35Z
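
Note on the method described in the abstract above: the two core ideas are (1) aggregating asynchronous events into per-pixel event-count histograms over fixed time intervals and (2) representing the aggregated data as a point cloud so that a lossless point-cloud codec can be applied. The following is a minimal, hypothetical Python sketch of those two steps only; it is not the authors' TALEN-PCC implementation, does not handle event polarity, and does not include an actual point-cloud codec. All function names and parameters are illustrative assumptions.

import numpy as np

def aggregate_events(events, width, height, interval_ms):
    # Aggregate asynchronous NVS events into per-pixel event-count
    # histograms over fixed time intervals.
    # events: array of shape (N, 3) with columns (x, y, t), t in milliseconds.
    # Returns a dict mapping interval index -> (height, width) count histogram.
    histograms = {}
    interval_idx = (events[:, 2] // interval_ms).astype(int)
    for k in np.unique(interval_idx):
        sel = events[interval_idx == k]
        hist = np.zeros((height, width), dtype=np.uint32)
        np.add.at(hist, (sel[:, 1].astype(int), sel[:, 0].astype(int)), 1)
        histograms[k] = hist
    return histograms

def histograms_to_point_cloud(histograms):
    # Represent the time-aggregated histograms as a point cloud:
    # one point (x, y, interval index) per non-zero pixel, with the event
    # count kept as an attribute, ready for a lossless point-cloud codec.
    points, counts = [], []
    for k, hist in histograms.items():
        ys, xs = np.nonzero(hist)
        points.append(np.column_stack([xs, ys, np.full_like(xs, k)]))
        counts.append(hist[ys, xs])
    return np.concatenate(points), np.concatenate(counts)

# Illustrative usage: 10,000 random events on a hypothetical 346x260 sensor,
# aggregated over 5 ms intervals (the threshold discussed in the abstract).
rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 346, 10_000),
                      rng.integers(0, 260, 10_000),
                      rng.uniform(0, 100, 10_000)])
hists = aggregate_events(ev, width=346, height=260, interval_ms=5)
pts, cnts = histograms_to_point_cloud(hists)
print(pts.shape, cnts.shape)

In this sketch the interval index plays the role of the third geometric coordinate, which is what makes the aggregated data amenable to point-cloud geometry compression rather than frame-based video coding.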


Files in this item

Name: sensors-24-01382-v2.pdf
Size: 1.388 MB
Format: PDF
Request: Article - VoR

