Show simple item record

dc.contributor.author: Griffiths, Darryl
dc.contributor.author: Cunningham, Stuart
dc.contributor.author: Weinel, Jonathan
dc.contributor.author: Picking, Richard
dc.date.accessioned: 2023-02-06T10:24:23Z
dc.date.available: 2023-02-06T10:24:23Z
dc.date.issued: 2021-09-21
dc.identifier: https://chesterrep.openrepository.com/bitstream/handle/10034/627520/mer_jnmr_ver2.pdf?sequence=1
dc.identifier.citation: Griffiths, D., Cunningham, S., Weinel, J., & Picking, R. (2021). A multi-genre model for music emotion recognition using linear regressors. Journal of New Music Research, 50(4), 355-372. https://doi.org/10.1080/09298215.2021.1977336
dc.identifier.issn: 0929-8215
dc.identifier.doi: 10.1080/09298215.2021.1977336
dc.identifier.uri: http://hdl.handle.net/10034/627520
dc.description: This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of New Music Research on 21/09/2021, available online: https://doi.org/10.1080/09298215.2021.1977336
dc.description.abstract: Making the link between human emotion and music is challenging. Our aim was to produce an efficient system that emotionally rates songs from multiple genres. To achieve this, we employed a series of online self-report studies, utilising Russell's circumplex model. The first study (n = 44) identified audio features that map to arousal and valence for 20 songs. From this, we constructed a set of linear regressors. The second study (n = 158) measured the efficacy of our system, utilising 40 new songs to create a ground truth. Results show our approach may be effective at emotionally rating music, particularly in the prediction of valence.
dc.publisher: Taylor & Francis
dc.relation.url: https://www.tandfonline.com/doi/full/10.1080/09298215.2021.1977336
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Arousal
dc.subject: Emotion
dc.subject: MER
dc.subject: Music
dc.subject: Perception
dc.subject: Regression
dc.title: A multi-genre model for music emotion recognition using linear regressors
dc.title.alternative: Enhancing film sound design using audio features, regression models and artificial neural networks
dc.type: Article
dc.identifier.eissn: 1744-5027
dc.contributor.department: Wrexham Glyndwr University; Manchester Metropolitan University; University of Greenwich; University of Chester
dc.identifier.journal: Journal of New Music Research
or.grant.openaccess: Yes
rioxxterms.funder: unfunded
rioxxterms.identifier.project: unfunded
rioxxterms.version: AM
rioxxterms.versionofrecord: 10.1080/09298215.2021.1977336
rioxxterms.licenseref.startdate: 2023-03-21
dcterms.dateAccepted: 2021-09-01
rioxxterms.publicationdate: 2021-09-21
dc.date.deposited: 2023-02-06
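
The abstract above describes fitting linear regressors that map per-song audio features to the arousal and valence dimensions of Russell's circumplex model. As an illustrative aid only (not the authors' code or data), a minimal sketch of that general approach, assuming scikit-learn and hypothetical placeholder features, might look like this:

```python
# Illustrative sketch only: fit one linear regressor per affect dimension
# (arousal, valence) from per-song audio features, in the spirit of the
# approach described in the abstract. Feature values and ratings below are
# random placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per song; columns might correspond to
# audio features such as tempo, spectral centroid, or RMS energy.
X_train = rng.random((20, 3))          # 20 songs used to build the model
y_arousal = rng.uniform(-1, 1, 20)     # self-report arousal ratings
y_valence = rng.uniform(-1, 1, 20)     # self-report valence ratings

# One linear regressor per dimension of Russell's circumplex model.
arousal_model = LinearRegression().fit(X_train, y_arousal)
valence_model = LinearRegression().fit(X_train, y_valence)

# Predict arousal/valence for new, unseen songs (40 in the paper's second study).
X_new = rng.random((40, 3))
pred_arousal = arousal_model.predict(X_new)
pred_valence = valence_model.predict(X_new)
print(pred_arousal[:3], pred_valence[:3])
```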


Files in this item

Name: mer_jnmr_ver2.pdf
Size: 858.8 KB
Format: PDF

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/