Show simple item record

dc.contributor.author: Yap, Chuin Hong; orcid: 0000-0003-2251-9308; email: chuin.h.yap@stu.mmu.ac.uk
dc.contributor.author: Cunningham, Ryan; orcid: 0000-0001-6883-6515; email: Ryan.Cunningham@mmu.ac.uk
dc.contributor.author: Davison, Adrian K.; orcid: 0000-0002-6496-0209; email: adrian.davison@manchester.ac.uk
dc.contributor.author: Yap, Moi Hoon; orcid: 0000-0001-7681-4287; email: M.Yap@mmu.ac.uk
dc.date.accessioned: 2021-08-13T23:13:05Z
dc.date.available: 2021-08-13T23:13:05Z
dc.date.issued: 2021-08-11
dc.identifier: https://chesterrep.openrepository.com/bitstream/handle/10034/625573/jimaging-07-00142.pdf?sequence=2
dc.identifier: https://chesterrep.openrepository.com/bitstream/handle/10034/625573/additional-files.zip?sequence=3
dc.identifier: https://chesterrep.openrepository.com/bitstream/handle/10034/625573/jimaging-07-00142.xml?sequence=4
dc.identifier.citation: Journal of Imaging, volume 7, issue 8, page e142
dc.identifier.uri: http://hdl.handle.net/10034/625573
dc.description: From MDPI via Jisc Publications Router
dc.description: History: accepted 2021-08-06, pub-electronic 2021-08-11
dc.description: Publication status: Published
dc.description.abstract: Long video datasets of facial macro- and micro-expressions remain in strong demand with the current dominance of data-hungry deep learning methods. There are limited methods for generating long videos that contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by conducting an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation on two Action Units (AUs), i.e., AU12 and AU6, between the original and synthetic data, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method proposed by OpenFace on those AUs, which also yields high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original facial movements and the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool for micro-expression research, especially for the spotting task.
dc.language: en
dc.publisher: MDPI
dc.rights: Licence for this article: https://creativecommons.org/licenses/by/4.0/
dc.source: eissn: 2313-433X
dc.subject: micro-expressions
dc.subject: facial expressions
dc.subject: style transfer
dc.subject: generative adversarial network
dc.subject: facial action units
dc.title: Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer
dc.type: article
dc.date.updated: 2021-08-13T23:13:04Z
dc.date.accepted: 2021-08-06
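
The AU-based evaluation described in the abstract can be reproduced in outline as follows. This is a minimal sketch, not the authors' released code: it assumes OpenFace has already been run on an original SAMM long video and on its SAMM-SYNTH counterpart to produce per-frame CSV outputs, that the intensity columns follow OpenFace's regression naming (AU06_r, AU12_r), and that the file paths shown are hypothetical placeholders.

```python
# Sketch: compare OpenFace AU intensity traces of an original long video
# and its style-transferred counterpart using Pearson's correlation.
# Assumes OpenFace FeatureExtraction has already produced per-frame CSVs;
# the file names below are placeholders, not the published dataset layout.
import pandas as pd
from scipy.stats import pearsonr

original = pd.read_csv("openface/samm_original.csv")    # hypothetical path
synthetic = pd.read_csv("openface/samm_synth.csv")      # hypothetical path

# Some OpenFace versions prefix column names with a space; strip to be safe.
original.columns = original.columns.str.strip()
synthetic.columns = synthetic.columns.str.strip()

# Align the two traces frame-by-frame over their common length.
n = min(len(original), len(synthetic))

for au in ("AU06_r", "AU12_r"):  # intensity (regression) outputs for AU6 and AU12
    r, p = pearsonr(original[au][:n], synthetic[au][:n])
    print(f"{au}: Pearson r = {r:.2f} (p = {p:.3g})")
```

A per-frame alignment like this only makes sense when the synthetic video is generated frame-for-frame from the original, which is the setting the abstract describes for SAMM-SYNTH.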


Files in this item

Name: jimaging-07-00142.pdf
Size: 1.694 MB
Format: PDF

Name: additional-files.zip
Size: 593.9 KB
Format: Unknown

Name: jimaging-07-00142.xml
Size: 7.274 KB
Format: XML

This item appears in the following Collection(s)
