Context-Aware Mixed Reality: A Learning-based Framework for Semantic-level Interaction
Affiliation: Bournemouth University; University of Chester; University of Bradford
Abstract: Mixed Reality (MR) is a powerful interactive technology that enables new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level, context-aware interactions. Our key insight is that by building semantic understanding into MR, we can develop a system that not only greatly enhances the user experience through object-specific behaviours, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
Citation: Chen, L., Tang, W., John, N. W., Wan, T. R. & Zhang, J. J. (2019, forthcoming). Context-Aware Mixed Reality: A Learning-based Framework for Semantic-level Interaction. Computer Graphics Forum.
Publisher: Wiley Online Library
Journal: Computer Graphics Forum
Description: This is the peer-reviewed version of the following article: [FULL CITE], which has been published in final form at [Link to final article using the DOI]. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
License: Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/