The Affective Audio Dataset (AAD) for Non-musical, Non-vocalized, Audio Emotion Research
dc.contributor.author | Ridley, Harrison | |
dc.contributor.author | Cunningham, Stuart | |
dc.contributor.author | Darby, John | |
dc.contributor.author | Henry, John | |
dc.contributor.author | Stocker, Richard | |
dc.date.accessioned | 2024-08-13T08:51:22Z | |
dc.date.available | 2024-08-13T08:51:22Z | |
dc.date.issued | 2024-08-02 | |
dc.identifier | https://chesterrep.openrepository.com/bitstream/handle/10034/628948/AAD_Affective_Audio_Dataset_revision1.pdf?sequence=2 | |
dc.identifier.citation | Ridley, H., Cunningham, S., Darby, J., Henry, J., & Stocker, R. (2025). The Affective Audio Dataset (AAD) for non-musical, non-vocalized, audio emotion research. IEEE Transactions on Affective Computing, 16(1), 394-404. https://doi.org/10.1109/TAFFC.2024.3437153 | en_US |
dc.identifier.doi | 10.1109/TAFFC.2024.3437153 | en_US |
dc.identifier.uri | http://hdl.handle.net/10034/628948 | |
dc.description | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.description.abstract | The Affective Audio Dataset (AAD) is a novel dataset of non-musical, non-anthropomorphic sounds intended for use in affective research. Sounds are annotated for their affective qualities by sets of human participants. The dataset was created in response to a lack of suitable datasets within the domain of audio emotion recognition. A total of 780 sounds are selected from the BBC Sounds Library. Participants are recruited online and asked to rate a subset of sounds based on how the sounds make them feel. Each sound is rated for arousal and valence. While the ratings were broadly evenly distributed, a bias towards the low-valence, high-arousal quadrant was found, and this quadrant displayed a greater range of ratings than the others. The AAD is compared with existing datasets to check its consistency and validity, with differences in data collection methods and intended use-cases highlighted. Using a subset of the data, the online ratings were validated against an in-person data collection experiment, with findings strongly correlating. The AAD is used to train a basic affect-prediction model and the results are discussed. Uses of this dataset include human-emotion research, cultural studies, other affect-based research, and industry applications such as audio post-production, gaming, and user-interface design. | en_US |
dc.description.sponsorship | Unfunded | en_US |
dc.publisher | IEEE | en_US |
dc.relation.url | https://ieeexplore.ieee.org/document/10621594 | en_US |
dc.rights | Licence for VoR version of this article starting on 2024-01-01: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US |
dc.source | eissn: 1949-3045 | |
dc.source | eissn: 2371-9850 | |
dc.subject | Music | en_US |
dc.subject | Emotion recognition | en_US |
dc.subject | Data collection | en_US |
dc.subject | Affective computing | en_US |
dc.subject | Numerical models | en_US |
dc.title | The Affective Audio Dataset (AAD) for Non-musical, Non-vocalized, Audio Emotion Research | en_US |
dc.type | Article | en_US |
dc.identifier.eissn | 1949-3045 | en_US |
dc.contributor.department | University of Chester; Manchester Metropolitan University | en_US |
dc.identifier.journal | IEEE Transactions on Affective Computing | en_US |
dc.date.updated | 2024-08-13T08:51:22Z | |
dc.description.note | AAM added 13/08/2024. | |
dc.identifier.volume | 16 | |
dc.date.accepted | 2024-07-19 | |
rioxxterms.identifier.project | Unfunded | en_US |
rioxxterms.version | AM | en_US |
dc.source.issue | 1 | |
dc.source.beginpage | 394 | |
dc.date.deposited | 2024-08-13 | en_US |