• Appearance Modeling of Living Human Tissues

      Maciel, Anderson; Meyer, Gary W.; John, Nigel W.; Walter, Marcelo; Nunes, Augusto L. P.; Baranoski, Gladimir V. G.; Federal Institute of Paraná, Londrina; Universidade Federal do Rio Grande do Sul; University of Minnesota; University of Chester; University of Waterloo (Wiley, 2019-02-27)
      The visual fidelity of realistic renderings in Computer Graphics depends fundamentally upon how we model the appearance of objects, which results from the interaction between light and matter before reaching the eye. In this paper, we survey the research addressing appearance modeling of living human tissue. Among the many classes of natural materials already researched in Computer Graphics, living human tissues such as blood and skin have recently seen increased attention from graphics researchers. There is an incipient but already substantial body of literature on this topic, which until now has lacked a structured review such as the one presented here. We introduce a classification of the approaches, using the four types of human tissue as classifiers. We show a growing trend towards solutions that use first principles from Physics and Biology as the fundamental knowledge upon which the models are built. The organic quality of visual results provided by these biophysical approaches is mainly determined by the optical properties of the biophysical components interacting with light. Beyond picture making, these models can be used in predictive simulations, with the potential for impact in many other areas.
    • Assisting Serious Games Level Design with an Augmented Reality Application and Workflow

      Beever, Lee; John, Nigel W.; Pop, Serban R.; University of Chester (Eurographics Proceedings, 2019-09-13)
      With the rise in popularity of serious games there is an increasing demand for virtual environments based on real-world locations. Emergency evacuation or fire safety training are prime examples of serious games that would benefit from accurate location depiction, as would any application involving personal spaces. However, creating digital indoor models of real-world spaces is a difficult task, and the results obtained by applying current techniques are often not suitable for use in real-time virtual environments. To address this problem, we have developed an application called LevelEd AR that makes indoor modelling accessible by utilizing consumer-grade technology in the form of Apple’s ARKit and a smartphone. We compared our system to a tape measure and to a system based on an infrared depth sensor and its companion application. We evaluated the accuracy and efficiency of each system over four measuring tasks of increasing complexity. Our results suggest that our application is more accurate than the depth sensor system, and as accurate as and more time efficient than the tape measure over several tasks. Participants also preferred our LevelEd AR application to the depth sensor system regarding usability. Finally, we carried out a preliminary case study that demonstrates how LevelEd AR can be successfully used as part of current industry workflows for serious games level design.
    • An Augmented Reality Tool to aid Radiotherapy Set Up implemented on a Tablet Device

      Cosentino, Francesco; Vaarkamp, Jaap; John, Nigel W.; University of Chester; North Wales Cancer Treatment Centre (International Conference on the use of Computers in Radiation Therapy, 2016-06)
      The accurate daily set up of patients for radiotherapy treatment remains a challenge for which the development of new strategies and solutions continues to be an area of active research. We have developed an augmented reality tool to view the real-world scene (i.e. the patient on a treatment couch) combined with computer graphics content, such as planning image data and any defined outlines of organ structures. We have built this on widely available hand-held consumer tablet devices and describe here the implementation and initial experience. We suggest that, in contrast to other augmented reality tools explored for radiotherapy [1], the wide availability and low cost of the hardware platform give the application further potential as a tool for patients to visualize their treatment, and to demonstrate to patients, for example, the importance of compliance with instructions around bladder filling and rectal suppositories.
    • Context-Aware Mixed Reality: A Learning-based Framework for Semantic-level Interaction

      Chen, Long; Tang, Wen; Zhang, Jian Jun; John, Nigel W.; Bournemouth University; University of Chester; University of Bradford (Wiley Online Library, 2019-11-14)
      Mixed Reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level context-aware interactions. Our key insight is that by building semantic understanding in MR, we can develop a system that not only greatly enhances the user experience through object-specific behaviours, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
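      As an illustration of the deep image understanding component mentioned above, the sketch below runs an off-the-shelf semantic segmentation network over a camera frame. This is a generic Python sketch, not the paper's material-aware pipeline: the model choice, file name, and fusion step are our own assumptions.

      ```python
      import torch
      from torchvision import models, transforms
      from PIL import Image

      # Pre-trained semantic segmentation model; a generic stand-in for the
      # paper's deep image understanding scheme.
      model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

      preprocess = transforms.Compose([
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      frame = Image.open("mr_camera_frame.jpg").convert("RGB")  # placeholder file
      batch = preprocess(frame).unsqueeze(0)

      with torch.no_grad():
          out = model(batch)["out"]  # shape (1, num_classes, H, W)
      labels = out.argmax(dim=1)     # per-pixel semantic class labels

      # In a context-aware MR system, 'labels' would be fused with the dense
      # reconstruction so each surface carries a semantic class that can drive
      # object-specific physical interactions.
      ```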
    • A Cost-Effective Virtual Environment for Simulating and Training Powered Wheelchairs Manoeuvres

      Headleand, Christopher J.; Day, Thomas W.; Pop, Serban R.; Ritsos, Panagiotis D.; John, Nigel W.; Bangor University and University of Chester (IOS Press, 2016-04-07)
      Control of a powered wheelchair is often not intuitive, making training of new users a challenging and sometimes hazardous task. Collisions due to a lack of experience can result in injury for the user and for other individuals. By conducting training activities in virtual reality (VR), we can potentially improve driving skills whilst avoiding the risks inherent in the real world. However, until recently VR technology has been expensive, limiting the commercial feasibility of a general training solution. We describe Wheelchair-Rift, a cost-effective prototype simulator that makes use of the Oculus Rift head-mounted display and the Leap Motion hand-tracking device. It has been assessed for face validity by a panel of experts from a local Posture and Mobility Service. Initial results augur well for our cost-effective training solution.
    • De-smokeGCN: Generative Cooperative Networks for Joint Surgical Smoke Detection and Removal

      Chen, Long; Tang, Wen; John, Nigel W.; Wan, Tao Ruan; Zhang, Jian Jun; Bournemouth University; University of Chester; University of Bradford (IEEE XPlore, 2019-11-15)
      Surgical smoke removal algorithms can improve the quality of intra-operative imaging and reduce hazards in image-guided surgery, a highly desirable post-process for many clinical applications. These algorithms also enable effective computer vision tasks for future robotic surgery. In this paper, we present a new unsupervised learning framework for high-quality pixel-wise smoke detection and removal. One of the well-recognized grand challenges in using convolutional neural networks (CNNs) for medical image processing is obtaining intra-operative medical imaging datasets for network training and validation; such datasets are scarce and of variable quality. Our novel training framework does not require ground-truth image pairs. Instead, it learns purely from computer-generated simulation images. This approach opens up new avenues and bridges a substantial gap between conventional non-learning-based methods and learning-based methods that require prior knowledge gained from extensive training datasets. Inspired by the Generative Adversarial Network (GAN), we have developed a novel generative-cooperative learning scheme that decomposes the de-smoking process into two separate tasks: smoke detection and smoke removal. The detection network acts as prior knowledge, and also as a loss function that maximizes its support for training the smoke removal network. Quantitative and qualitative studies show that the proposed training framework outperforms state-of-the-art de-smoking approaches, including the latest GAN framework (such as PIX2PIX). Although trained on synthetic images, the proposed network proved effective at detecting and removing surgical smoke on both simulated and real-world laparoscopic images.
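      The generative-cooperative decomposition can be pictured with a short sketch. The PyTorch fragment below is a minimal illustration of the general idea only, not the published network: the tiny architecture, the loss weighting, and the training interface are hypothetical stand-ins; only the detection-as-loss coupling reflects the abstract.

      ```python
      import torch
      import torch.nn as nn

      class TinyCNN(nn.Module):
          """Hypothetical stand-in for the detection/removal networks."""
          def __init__(self, out_channels):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, out_channels, 3, padding=1))
          def forward(self, x):
              return self.net(x)

      detector = TinyCNN(out_channels=1)  # pixel-wise smoke probability
      remover = TinyCNN(out_channels=3)   # de-smoked RGB image

      def cooperative_loss(smoky, clean, synthetic_mask):
          """Train on computer-generated triples: smoky image, clean image, mask."""
          pred_mask = torch.sigmoid(detector(smoky))
          detect_loss = nn.functional.binary_cross_entropy(pred_mask, synthetic_mask)
          desmoked = remover(smoky)
          # The detected smoke density weights the removal loss, so the remover
          # concentrates on smoke-covered pixels (detector acting as a loss/prior).
          removal_loss = (pred_mask.detach() * (desmoked - clean).abs()).mean()
          return detect_loss + removal_loss
      ```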
    • Efficacy of a virtual environment for training ball passing skills in rugby

      Miles, Helen C.; Pop, Serban R.; Watt, Simon J.; Lawrence, Gavin P.; John, Nigel W.; Perrot, Vincent; Mallet, Pierre; Mestre, Daniel R.; Morgan, Kenton (Springer, 2014-07-14)
      We have designed a configurable virtual environment to train rugby ball passing skills. Seeking to validate the system’s ability to correctly aid training, two experiments were performed. Ten participants took part in ball passing activities, which were used to compare combinations of different user positions relative to the physical screen, the use of stereoscopic presentation, and the use of a floor screen to extend the field of view of the virtual scene. Contrary to what was expected, the results indicate that the participants did not respond well to simulated target distances, and only the user’s physical distance from the screen had an effect on the distance thrown.
    • An Endoscope Interface for Immersive Virtual Reality

      John, Nigel W.; Day, Thomas W.; Wardle, Terrence; University of Chester
      This is a work-in-progress paper that describes a novel endoscope interface designed for use in an immersive virtual reality surgical simulator. We use an affordable off-the-shelf head-mounted display to recreate the operating theatre environment. A hand-held controller has been adapted so that the trainee feels like they are holding an endoscope controller with the same functionality. The simulator allows the endoscope shaft to be inserted into a virtual patient and pushed forward to a target position. The paper describes how we have built this surgical simulator with the intention of carrying out a full clinical study in the near future.
    • Evaluating LevelEd AR: An Indoor Modelling Application for Serious Games Level Design

      Beever, Lee; Pop, Serban R.; John, Nigel W.; University of Chester (IEEE Conference Publications, 2019-09-06)
      We developed an application that makes indoor modelling accessible by utilizing consumer-grade technology in the form of Apple’s ARKit and a smartphone to assist with serious games level design. We compared our system to a tape measure and to a system based on an infrared depth sensor and its companion application. We evaluated the accuracy and efficiency of each system over four measuring tasks of increasing complexity. Our results suggest that our application is more accurate than the depth sensor system, and as accurate as and more time efficient than the tape measure over several tasks. Participants also preferred our LevelEd AR application to the depth sensor system regarding usability.
    • The Implementation and Validation of a Virtual Environment for Training Powered Wheelchair Manoeuvres

      John, Nigel W.; Pop, Serban R.; Day, Thomas W.; Ritsos, Panagiotis D.; Headleand, Christopher J.; University of Chester; Bangor University; University of Lincoln (IEEE, 2017-05-02)
      Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver, and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer-level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environments technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study, carried out using thirty-three able-bodied volunteers. The study results indicate that, at a significance level of 5%, driving skills improved through use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent in training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.
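      The abstract does not name the statistical test used, so the following is purely an illustrative Python sketch of how a pre/post improvement might be checked at the 5% significance level with a paired t-test; the scores shown are hypothetical.

      ```python
      import numpy as np
      from scipy import stats

      # Hypothetical pre- and post-training driving scores (higher is better);
      # the real study data are not reproduced here.
      pre = np.array([52.1, 48.3, 60.5, 55.0, 49.9, 58.2])
      post = np.array([57.4, 53.0, 63.8, 60.1, 55.5, 61.0])

      t_stat, p_value = stats.ttest_rel(post, pre)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
      if p_value < 0.05:  # 5% significance level
          print("Improvement is statistically significant at the 5% level.")
      ```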
    • An Information-Theoretic Approach to the Cost-benefit Analysis of Visualization in Virtual Environments

      Chen, Min; Gaither, Kelly; John, Nigel W.; McCann, Brian; University of Oxford; University of Texas at Austin; University of Chester (IEEE, 2018-08-20)
      Visualization and virtual environments (VEs) have been two interconnected parallel strands in visual computing for decades. Some VEs have been purposely developed for visualization applications, while many visualization applications are exemplary showcases in general-purpose VEs. Because of the development and operation costs of VEs, the majority of visualization applications in practice have yet to benefit from the capacity of VEs. In this paper, we examine this status quo from an information-theoretic perspective. Our objectives are to conduct a cost-benefit analysis of typical VE systems (including augmented and mixed reality, theatre-based systems, and large powerwalls), to explain why some visualization applications benefit more from VEs than others, and to sketch out pathways for the future development of visualization applications in VEs. We support our theoretical propositions and analysis using theories and discoveries from the cognitive sciences literature and the practical evidence reported in the visualization and VE literature.
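      For readers unfamiliar with the underlying quantities, cost-benefit analyses of this kind are built on standard information-theoretic measures; the definitions below are the textbook forms of Shannon entropy and Kullback-Leibler divergence (the paper's specific cost-benefit formula is not reproduced here).

      ```latex
      H(Z) = -\sum_{z \in \mathbb{Z}} p(z)\,\log_2 p(z)
      \qquad
      D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{z \in \mathbb{Z}} p(z)\,\log_2 \frac{p(z)}{q(z)}
      ```

      Entropy measures the information content of an alphabet Z, and the KL divergence measures the potential distortion introduced when a viewer reconstructs the original distribution P from a transformed representation Q.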
    • Interventional radiology virtual simulator for liver biopsy

      Villard, Pierre-Frédéric; Vidal, Franck P.; ap Cenydd, Llyr; Holbrey, Richard; Pisharody, S.; Johnson, Sheena; Bulpitt, Andy; John, Nigel W.; Bello, Fernando; Gould, Daniel (Springer, 2013-07-24)
      Training in Interventional Radiology currently uses the apprenticeship model, where clinical and technical skills of invasive procedures are learnt during practice on patients. This apprenticeship training method is increasingly limited by regulatory restrictions on working hours, concerns over patient risk through trainees' inexperience, and variable exposure to case mix and emergencies during training. To address this, we have developed a computer-based simulation of visceral needle puncture procedures. Methods: A real-time framework has been built that includes segmentation, physically based modelling, haptics rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between different universities, involving computer scientists, clinicians, clinical engineers and occupational psychologists. Results: The technical implementation of the framework is a robust, real-time simulation environment combining a physical platform and an immersive computerized virtual environment. Face, content and construct validation have been previously assessed, showing the reliability and effectiveness of this framework, as well as its potential for teaching visceral needle puncture. Conclusion: A simulator for ultrasound-guided liver biopsy has been developed. It includes functionalities and metrics extracted from cognitive task analysis. This framework can be useful during training, particularly given the known difficulties in gaining significant practice of core skills on patients.
    • LevelEd VR: A virtual reality level editor and workflow for virtual reality level design

      Beever, Lee; Pop, Serban R.; John, Nigel W.; University of Chester
      Virtual reality entertainment and serious games have continued to grow in popularity, but the process of level design for VR games has not been adequately researched. Our paper contributes LevelEd VR, a generic runtime virtual reality level editor that supports the level design workflow used by developers and can potentially support user-generated content. We evaluated our LevelEd VR application and compared it to an existing desktop workflow in Unity. Our current research indicates that users are accepting of such a system and that it has the potential to be preferred over existing workflows for VR level design. We found that the primary benefit of our system is an improved sense of scale and perspective when creating geometry and implementing gameplay. The paper also contributes some best practices and lessons learned from creating a complex virtual reality tool such as LevelEd VR.
    • LiTu - A Human-Computer Interface based on Frustrated Internal Reflection of Light

      Edwards, Marc R.; John, Nigel W.; University of Chester (IEEE Conference Publications, 2015-10)
      We have designed LiTu (Laɪ’Tu - Light Tube), a customisable and low-cost (ca. 30 Euros) human-computer interface. It is composed of an acrylic tube, a ball-bearing mirror, six LEDs and a webcam. Touching the tube causes frustrated internal reflection of light due to a change in the critical angle at the acrylic-skin boundary. Scattered light within the tube is reflected off the mirror into the camera at the opposite end for image processing. Illuminated contact regions in the video frames are segmented and processed to generate 2D information such as pitch and volume, or the x and y coordinates of a graphic. We demonstrate the functionality of LiTu both as a musical instrument and as an interactive computer graphics controller. For example, various musical notes can be generated by touching specific regions around the surface of the tube, volume can be controlled by sliding a finger down the tube, and pitch by sliding the finger radially. We demonstrate the adaptable nature of LiTu’s touch interface and discuss our plans to explore future physical modifications of the device.
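      The underlying optics is Snell's law at the tube wall: total internal reflection holds while light strikes the boundary beyond the critical angle, and a touching fingertip raises that angle by raising the refractive index at the boundary, letting light escape at the contact point. With indicative refractive indices (acrylic n1 ≈ 1.49, air n2 ≈ 1.00, skin n2 ≈ 1.4; typical published values, not figures from the paper):

      ```latex
      \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right), \quad
      \theta_c^{\mathrm{acrylic/air}} \approx \arcsin\!\left(\tfrac{1.00}{1.49}\right) \approx 42^\circ, \quad
      \theta_c^{\mathrm{acrylic/skin}} \approx \arcsin\!\left(\tfrac{1.4}{1.49}\right) \approx 70^\circ
      ```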
    • Real-time Geometry-Aware Augmented Reality in Minimally Invasive Surgery

      Chen, Long; Tang, Wen; John, Nigel W.; Bournemouth University; University of Chester (IET, 2017-10-27)
      The potential of Augmented Reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this paper, we present a novel real-time AR framework for MIS that achieves interactive geometry-aware augmented reality in endoscopic surgery with stereo views. Our framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve fast tracking performance, while the 3D mesh is incrementally built by a dense zero-mean normalised cross-correlation stereo matching method to improve the accuracy of the surface reconstruction. Our proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, our proposed AR framework is able to interactively add annotations, localisation of tumours and vessels, and measurement labelling with greater precision and accuracy than state-of-the-art approaches.
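      The re-projection error minimised for camera tracking is a standard quantity; in its usual form (notation ours, not taken from the paper), the camera pose (R, t) is chosen to minimise the image-space distance between each observed feature x_i and the projection of its corresponding 3D point X_i, where K is the intrinsic matrix and π the perspective division:

      ```latex
      (\mathbf{R}^*, \mathbf{t}^*) = \arg\min_{\mathbf{R},\,\mathbf{t}}
      \sum_i \left\| \mathbf{x}_i - \pi\!\left(\mathbf{K}(\mathbf{R}\mathbf{X}_i + \mathbf{t})\right) \right\|^2
      ```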
    • Real-Time Guidance and Anatomical Information by Image Projection onto Patients

      Edwards, Marc R.; Pop, Serban R.; John, Nigel W.; Ritsos, Panagiotis D.; Avis, Nick J.; University of Chester (Eurographics Association, 2016-09)
      The Image Projection onto Patients (IPoP) system is work in progress intended to assist medical practitioners in performing procedures such as biopsies, or to provide a novel anatomical education tool, by projecting anatomy and other relevant information from the operating room directly onto a patient’s skin. This approach is not currently in wide hospital use, but has the benefit of providing effective procedure guidance without the practitioner having to look away from the patient. Developmental work towards the alpha phase of IPoP is presented, including tracking methods for tools such as biopsy needles, patient tracking, image registration, and problems encountered with the multi-mirror effect.
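      The image registration step is, at its simplest, a planar-homography mapping between the anatomy image and projector space. The OpenCV sketch below shows only that generic mapping; the correspondence points, resolutions and file names are hypothetical, and the real IPoP system must additionally handle tool/patient tracking and non-planar skin surfaces.

      ```python
      import cv2
      import numpy as np

      # Four corresponding points: corners of the anatomy image and where the
      # tracking system reports they fall in projector space (values hypothetical).
      src = np.float32([[0, 0], [1023, 0], [1023, 767], [0, 767]])
      dst = np.float32([[112, 84], [1890, 60], [1860, 1010], [140, 1044]])

      H, _ = cv2.findHomography(src, dst)

      anatomy = cv2.imread("ct_slice.png")        # placeholder image
      warped = cv2.warpPerspective(anatomy, H, (1920, 1080))
      cv2.imwrite("projector_frame.png", warped)  # frame sent to the projector
      ```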
    • Recent Developments and Future Challenges in Medical Mixed Reality

      Chen, Long; Day, Thomas W.; Tang, Wen; John, Nigel W.; Bournemouth University and University of Chester (2017-11-23)
      Mixed Reality (MR) is of increasing interest within technology-driven modern medicine, but is not yet used in everyday practice. This situation is changing rapidly, however, and this paper explores the emergence of MR technology and the importance of its utility within medical applications. A classification of medical MR has been obtained by applying an unbiased text mining method to a database of 1,403 relevant research papers published over the last two decades. The classification results reveal a taxonomy for the development of medical MR research during this period, as well as suggesting future trends. We then use the classification to analyse the technology and applications developed in the last five years. Our objective is to help researchers focus on the areas where technological advancement in medical MR is most needed, as well as to provide medical practitioners with a useful source of reference.
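      The abstract does not specify the text mining method, but an unbiased classification of a paper corpus is commonly obtained with a bag-of-words model plus topic modelling; the scikit-learn sketch below is purely illustrative, with placeholder data and a hypothetical topic count.

      ```python
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # Placeholder corpus; in practice this would hold 1,403 paper abstracts.
      abstracts = [
          "augmented reality surgical navigation overlay registration",
          "virtual reality training simulator haptics skills transfer",
          "mixed reality rehabilitation therapy motion tracking",
      ]

      counts = CountVectorizer(stop_words="english", max_features=5000)
      X = counts.fit_transform(abstracts)

      # Fit a topic model; the number of topics is a tuning choice.
      lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

      terms = counts.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[-5:][::-1]]
          print(f"topic {k}: {top}")
      ```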
    • Self-supervised monocular image depth learning and confidence estimation

      Chen, Long; Tang, Wen; Wan, Tao Ruan; John, Nigel W.; Bournemouth University; University of Bradford; University of Chester
      We present a novel self-supervised framework for monocular image depth learning and confidence estimation. Our framework reduces the amount of ground-truth annotation data required for training Convolutional Neural Networks (CNNs), which is often a bottleneck for the fast deployment of CNNs in many computer vision tasks. Our DepthNet adopts a novel fully differentiable patch-based cost function, based on the Zero-Mean Normalized Cross-Correlation (ZNCC), that uses multi-scale patches as the matching and learning strategy. This approach greatly increases the accuracy and robustness of the depth learning. Because ZNCC is a normalized measure of similarity, the patch-based cost function naturally provides a 0-to-1 value that can be interpreted as the confidence of the depth estimation; this value is then used to self-supervise the training of a parallel network for confidence map learning and estimation. Evaluations on the KITTI depth prediction dataset and the Make3D dataset show that our method outperforms state-of-the-art results.
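      ZNCC itself is standard: for two patches P and Q of N pixels with means μ and standard deviations σ, it is defined as below and lies in [-1, 1], which is why it can be rescaled into a 0-to-1 confidence (a mapping such as (ZNCC + 1)/2 is our illustrative assumption, not a detail taken from the paper).

      ```latex
      \mathrm{ZNCC}(P, Q) = \frac{1}{N} \sum_{i=1}^{N}
      \frac{(P_i - \mu_P)(Q_i - \mu_Q)}{\sigma_P\,\sigma_Q}
      ```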
    • SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality

      Chen, Long; Tang, Wen; John, Nigel W.; Wan, Tao R.; Zhang, Jian Jun; Bournemouth University; University of Chester; University of Bradford (Elsevier, 2018-02-08)
      Background and Objective: While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomical structures at the target surgical locations. However, previous research attempts to use AR technology in monocular MIS surgical scenes have mainly focused on the information overlay without addressing correct spatial calibration, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. Methods: A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point cloud data. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration. Results: We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluations of the proposed framework consist of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of the surface vertices of the reconstructed mesh against those of the ground-truth 3D models. An error of 1.24 mm for the camera trajectories was obtained, and the RMSD for the surface reconstruction is 2.54 mm, which compares favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the potential promise of our geometry-aware AR technology for use in MIS surgical scenes. Conclusions: The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscopic camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.
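      The surface reconstruction stage described above can be approximated with off-the-shelf tooling. The Open3D sketch below rebuilds a dense mesh from an unorganized point cloud via normal estimation and Poisson reconstruction; it is a stand-in for the authors' pipeline (file names and parameters are hypothetical, and the statistical outlier filter is only a rough substitute for full MLS smoothing).

      ```python
      import open3d as o3d

      # Unorganized sparse points, e.g. salient map points exported from SLAM
      # (file name is a placeholder).
      pcd = o3d.io.read_point_cloud("slam_map_points.ply")

      # Light outlier removal as a rough stand-in for MLS-style smoothing.
      pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

      # Poisson reconstruction requires oriented normals.
      pcd.estimate_normals(
          search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

      # Build a dense watertight surface from the sparse points.
      mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
          pcd, depth=9)
      o3d.io.write_triangle_mesh("dense_surface.ply", mesh)
      ```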
    • A Tablet-based Virtual Environment for Neurosurgery Training

      John, Nigel W.; Phillips, Nicholas I.; ap Cenydd, Llyr; Coope, David; Carleton-Bland, Nick; Kamaly-Asl, Ian; Grey, William P.; University of Chester; Leeds General Infirmary; Bangor University; University of Manchester; Cardiff University (MIT Press, 2015-10-15)
      The requirement for training surgical procedures without exposing the patient to additional risk is well accepted and is part of a national drive in the UK and internationally. Computer-based simulations are important in this context, including neurosurgical resident training. The objective of this study is to evaluate the effectiveness of a custom-built virtual environment in assisting training of a ventriculostomy procedure. The training tool (called VCath) has been developed as an app for a tablet platform to provide easy access and availability to trainees. The study was conducted at the first boot camp organized for all year-one trainees in neurosurgery in the UK. The attendees were randomly distributed between the VCath training group and the control group. The efficacy of performing ventriculostomy in both groups was assessed at the beginning and end of the study using a simulated insertion task. Statistically significant improvements for the VCath group in selecting the burr-hole entry point, in trajectory length, and in duration, together with a good indication of improved normalized jerk (representing the speed and smoothness of arm motion), all suggest that there has been a higher-level cognitive benefit to using VCath. The app is successful because it focuses on the cognitive task of ventriculostomy, encouraging the trainee to rehearse the entry point and use anatomical landmarks to create a trajectory to the target. In straight-line trajectory procedures such as ventriculostomy, cognitive task-based education is a useful adjunct to traditional methods and may reduce the learning curve and ultimately improve patient safety.
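      Normalized jerk, cited above as the smoothness indicator, is a standard dimensionless measure from motor control research: for a movement of duration T, path length L and hand trajectory x(t), it integrates the squared third derivative of position, with lower values indicating smoother motion (given here in its commonly used form, not quoted from the paper).

      ```latex
      \mathrm{NJ} = \sqrt{\frac{T^5}{2L^2} \int_0^T \left\| \frac{d^3\mathbf{x}(t)}{dt^3} \right\|^2 dt}
      ```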