• Appearance Modeling of Living Human Tissues

      Maciel, Anderson; Meyer, Gary W.; John, Nigel W.; Walter, Marcelo; Nunes, Augusto L. P.; Baranoski, Gladimir V. G.; Federal Institute of Paraná, Londrina; Universidade Federal do Rio Grande do Sul; University of Minnesota; University of Chester; University of Waterloo (Wiley, 2019-02-27)
      The visual fidelity of realistic renderings in Computer Graphics depends fundamentally upon how we model the appearance of objects, which results from the interaction between light and matter before the light reaches the eye. In this paper, we survey the research addressing appearance modeling of living human tissue. Among the many classes of natural materials already researched in Computer Graphics, living human tissues such as blood and skin have recently seen an increase in attention from graphics research. There is already an incipient but substantial body of literature on this topic, but we still lack a structured review such as the one presented here. We introduce a classification for the approaches, using the four types of human tissue as classifiers. We show a growing trend of solutions that use first principles from Physics and Biology as the fundamental knowledge upon which the models are built. The organic quality of the visual results provided by these biophysical approaches is mainly determined by the optical properties of the biophysical components interacting with light. Beyond picture making, these models can be used in predictive simulations, with potential for impact in many other areas.
    • Context-Aware Mixed Reality: A Learning-based Framework for Semantic-level Interaction

      Chen, Long; Tang, Wen; Zhang, Jian Jun; John, Nigel W.; Bournemouth University; University of Chester; University of Bradford (Wiley Online Library, 2019-11-14)
      Mixed Reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level, context-aware interactions. Our key insight is that by building semantic understanding into MR, we can develop a system that not only greatly enhances the user experience through object-specific behaviors, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time, semantic-level interactions.