
    • ParaVR: A Virtual Reality Training Simulator for Paramedic Skills Maintenance

      Rees, Nigel; Dorrington, Keith; Rees, Lloyd; Day, Thomas W; Vaughan, Neil; John, Nigel W; Welsh Ambulance Services NHS Trust, University of Chester
      Background: Virtual Reality (VR) technology is emerging as a powerful educational tool; it is already used in medical training and has potential benefits for paramedic practice education. Aim: The aim of this paper is to report the development of ParaVR, which utilises VR to address skills maintenance for paramedics. Methods: Computer scientists at the University of Chester and the Welsh Ambulance Services NHS Trust (WAST) developed ParaVR in four stages: 1. identifying requirements and specifications; 2. alpha version development; 3. beta version development; 4. management: development of software, further funding and commercialisation. Results: Needle cricothyrotomy and needle thoracostomy emerged as candidates for the prototype ParaVR. The Oculus Rift head-mounted display (HMD) was combined with the Novint Falcon haptic device, and a virtual environment was crafted using 3D modelling software and ported (transferred from one system or platform to another) onto the Oculus Go and Google Cardboard VR platforms. Conclusion: VR is an emerging educational tool with the potential to enhance paramedic skills development and maintenance. The ParaVR program is the first step in our development, testing, and scaling up of this technology.
    • LevelEd VR: A virtual reality level editor and workflow for virtual reality level design

      Beever, Lee; Pop, Serban W.; John, Nigel W.; University of Chester
      The popularity of virtual reality entertainment and serious games has continued to rise, but the process of level design for VR games has not been adequately researched. Our paper contributes LevelEd VR: a generic runtime virtual reality level editor that supports the level design workflow used by developers and can potentially support user generated content. We evaluated our LevelEd VR application and compared it to an existing workflow of Unity on a desktop. Our current research indicates that users are accepting of such a system, and it has the potential to be preferred over existing workflows for VR level design. We found that the primary benefit of our system is an improved sense of scale and perspective when creating the geometry and implementing gameplay. The paper also contributes some best practices and lessons learned from creating a complex virtual reality tool, such as LevelEd VR.
    • Interactive Three-Dimensional Simulation and Visualisation of Real Time Blood Flow in Vascular Networks

      John, Nigel; Pop, Serban; Holland, Mark I. (University of Chester, 2020-05)
      One of the challenges in cardiovascular disease management is the clinical decision-making process. When a clinician is dealing with complex and uncertain situations, the decision on whether or how to intervene is made based upon distinct information from diverse sources. There are several variables that can affect how the vascular system responds to treatment. These include: the extent of the damage and scarring, the efficiency of blood flow remodelling, and any associated pathology. Moreover, the effect of an intervention may lead to further unforeseen complications (e.g. another stenosis may be “hidden” further along the vessel). Currently, there is no tool for predicting or exploring such scenarios. This thesis explores the development of a highly adaptive real-time simulation of blood flow that considers patient-specific data and clinician interaction. The simulation should model blood flow realistically and accurately through complex vascular networks in real time, developing robust flow scenarios that can be incorporated into the medical decision-making and planning tool set. The focus will be on specific regions of the anatomy, where accuracy is of the utmost importance and the flow can develop into specific patterns, with the aim of better understanding a patient's condition and predicting factors of its future evolution. Results from the validation of the simulation showed promising comparisons with the literature and demonstrated viability for clinical use.
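      As a simple illustration of the kind of physical relationship such haemodynamic simulations build upon (this is a textbook idealisation, not the thesis's actual model, which the abstract does not reproduce), Poiseuille's law relates the volumetric flow rate through a rigid cylindrical vessel to its radius and the pressure drop:

```latex
\[
Q = \frac{\pi r^4 \,\Delta p}{8 \mu L},
\]
```

      where \(Q\) is the flow rate, \(r\) the vessel radius, \(\Delta p\) the pressure drop along a segment of length \(L\), and \(\mu\) the dynamic viscosity of blood. The fourth-power dependence on radius is one reason a stenosis has such a disproportionate effect on downstream flow.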
    • Formal Verification of Astronaut-Rover Teams for Planetary Surface Operations

      Webster, Matt; Dennis, Louise A; Dixon, Clare; Fisher, Michael; Stocker, Richard; Sierhuis, Maarten; University of Liverpool; University of Chester; Ejenta, inc.
      This paper describes an approach to assuring the reliability of autonomous systems for Astronaut-Rover (ASRO) teams using the formal verification of models in the Brahms multi-agent modelling language. Planetary surface rovers have proven essential to several manned and unmanned missions to the Moon and Mars. The first rovers were tele-operated or manually operated, but autonomous systems are increasingly being used to increase the effectiveness and range of rover operations on missions such as the NASA Mars Science Laboratory. It is anticipated that future manned missions to the Moon and Mars will use autonomous rovers to assist astronauts during extravehicular activity (EVA), including science, technical and construction operations. These ASRO teams have the potential to significantly increase the safety and efficiency of surface operations. We describe a new Brahms model in which an autonomous rover may perform several different activities, including assisting an astronaut during EVA. These activities compete for the autonomous rover's “attention”, and therefore the rover must decide which activity is currently the most important and engage in that activity. The Brahms model also includes an astronaut agent, which models an astronaut's predicted behaviour during an EVA. The rover must also respond to the astronaut's activities. We show how this Brahms model can be simulated using the Brahms integrated development environment. The model can then also be formally verified with respect to system requirements using the SPIN model checker, through automatic translation from Brahms to PROMELA (the input language for SPIN). We show that such formal verification can be used to determine that mission- and safety-critical operations are conducted correctly, and therefore increase the reliability of autonomous systems for planetary rovers in ASRO teams.
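      To give a flavour of the requirements involved (the paper's actual properties are not listed in the abstract), SPIN checks Linear Temporal Logic (LTL) formulae against the PROMELA translation of the model; a hypothetical liveness-style requirement for an ASRO team might read:

```latex
\[
\Box\,\big(\mathit{astronaut\_requests\_help} \rightarrow \Diamond\,\mathit{rover\_attends\_astronaut}\big)
\]
```

      i.e. it is always the case (\(\Box\)) that if the astronaut requests help, the rover eventually (\(\Diamond\)) attends to the astronaut. The proposition names here are illustrative, not taken from the Brahms model.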
    • Towards Cyber-User Awareness: Design and Evaluation

      Oyinloye, Toyosi; Eze, Thaddeus; Speakman, Lee; University of Chester
      Human reliance on interconnected devices has given rise to a massive increase in cyber activities. There are about 17 billion interconnected devices in our world of about 8 billion people. Like the physical world, the cyber world is not void of entities whose activities, malicious or not, could be detrimental to other users, who remain vulnerable simply by existing within cyberspace. Developments such as the introduction of 5G networks, which advance communication speed among interconnected devices, undoubtedly proffer solutions for human living as well as adversely impacting systems. Vulnerabilities in applications embedded in devices, hardware deficiencies, and user errors are some of the loopholes that are exploited. Studies have revealed humans as the weakest links in the cyber-chain, submitting that consistent implementation of cyber awareness programs would largely impact cybersecurity. Cyber-active systems have goals that compete with the implementation of cyber awareness programs within limited resources. It is therefore desirable to have cyber awareness systems that can be tailored around specific needs and considerations for important factors. This paper presents a system that aims to promote user awareness through a flexible, accessible, and cost-effective design. The system implements steps in a user awareness cycle that considers human-factor (HF) and HF-related root causes of cyber-attacks. We introduce a new user testing tool, adaptable for administering cybersecurity test questions for varying levels and categories of users. The tool was implemented experimentally by engaging cyber users within the UK. Schemes and online documentation by UK cybersecurity organisations were harnessed for assessing and providing relevant recommendations to participants. Results provided us with values representing each participant's notional level of awareness, which were subjected to a paired t-test for comparison with values derived in an automated assessment. This pilot study provides valuable details for projecting the efficacy of the system towards improving human influence in cybersecurity.
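      The paired t-test mentioned above compares each participant's two scores on their per-participant differences rather than on the raw values. A minimal sketch of the statistic in Python, using hypothetical score data (the study's real data are not given in the abstract):

```python
import math

def paired_t_statistic(before, after):
    """Paired t-statistic: mean of per-participant differences divided by
    its standard error; compare against a t distribution with n-1 dof."""
    assert len(before) == len(after) and len(before) > 1
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical notional-awareness scores for four participants:
# manual assessment vs. the automated assessment of the same people.
manual = [50, 60, 55, 70]
automated = [55, 66, 60, 76]
print(round(paired_t_statistic(manual, automated), 2))
```

      A large absolute t value relative to the t distribution with n-1 degrees of freedom indicates a systematic difference between the two assessments; pairing removes the between-participant variation that an unpaired test would leave in.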
    • The Evolution of Ransomware Variants

      Wood, Ashley; Eze, Thaddeus
      This paper investigates how ransomware continues to evolve and adapt over time, becoming more damaging, resilient and sophisticated from one variant to another. This involves investigating how each ransomware sample, including Petya, WannaCry and CrySiS/Dharma, interacts with the underlying system to affect both the system's functionality and its underlying data, utilising several static and dynamic analysis tools. Our analysis shows that, whilst ransomware is undoubtedly becoming more sophisticated, fundamental problems exist with its underlying encryption processes, which made data recovery possible across all three samples studied, whilst varying aspects of system functionality could be preserved or restored in their entirety.
    • Modelling the effects of glucagon during glucose tolerance testing

      Kelly, Ross A; Fitches, Molly J; Webb, Steven D; Pop, Serban R; Chidlow, Stewart J; Liverpool John Moores University; University of Dundee; University of Chester
      Background: Glucose tolerance testing is a tool used to estimate glucose effectiveness and insulin sensitivity in diabetic patients. The importance of such tests has prompted the development and utilisation of mathematical models that describe glucose kinetics as a function of insulin activity. The hormone glucagon also plays a fundamental role in systemic plasma glucose regulation and is secreted reciprocally to insulin, stimulating catabolic glucose utilisation. However, regulation of glucagon secretion by α-cells is impaired in type-1 and type-2 diabetes through pancreatic islet dysfunction. Despite this, inclusion of glucagon activity when modelling glucose kinetics during glucose tolerance testing is often overlooked. This study presents two mathematical models of a glucose tolerance test that incorporate glucose-insulin-glucagon dynamics. The first model describes a non-linear relationship between glucagon and glucose, whereas the second model assumes a linear relationship. Results: Both models are validated against insulin-modified and glucose infusion intravenous glucose tolerance test (IVGTT) data, as well as insulin infusion data, and are capable of estimating patient glucose effectiveness (sG) and insulin sensitivity (sI). Inclusion of glucagon dynamics provides a more detailed representation of the metabolic portrait, enabling estimation of two new diagnostic parameters: glucagon effectiveness (sE) and glucagon sensitivity (δ). Conclusions: The models are used to investigate how different degrees of patient glucagon sensitivity and effectiveness affect the concentration of blood glucose and plasma glucagon during IVGTT and insulin infusion tests, providing a platform from which the role of glucagon dynamics during a glucose tolerance test may be investigated and predicted.
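      For context, models of this kind extend the family of "minimal models" of glucose kinetics. A common formulation in which the parameters sG and sI appear is Bergman's minimal model (shown here as background; the paper's own extended equations are not reproduced in the abstract):

```latex
\begin{aligned}
\frac{dG}{dt} &= -\big(s_G + X(t)\big)\,G(t) + s_G\,G_b,\\
\frac{dX}{dt} &= -p_2\,X(t) + p_2\,s_I\,\big(I(t) - I_b\big),
\end{aligned}
```

      where \(G\) is plasma glucose, \(I\) plasma insulin, \(X\) remote insulin action, \(G_b\) and \(I_b\) basal levels, \(s_G\) glucose effectiveness and \(s_I\) insulin sensitivity. The paper's contribution is to couple analogous glucagon terms (parameterised by \(s_E\) and \(\delta\)) into this kind of system.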
    • In vitro and Computational Modelling of Drug Delivery across the Outer Blood-Retinal Barrier

      Davies, Alys E; Williams, Rachel L.; Lugano, Gaia; Pop, Serban R.; Kearns, Victoria R.; University of Liverpool; University of Chester
      The ability to produce rapid, cost-effective and human-relevant data has the potential to accelerate development of new drug delivery systems. Intraocular drug delivery is an area undergoing rapid expansion due to the increase in sight-threatening diseases linked to increasing age and lifestyle factors. The outer blood-retinal barrier (OBRB) is important in this area of drug delivery, as it separates the eye from the systemic blood flow. This study reports the development of complementary in vitro and in silico models to study drug transport from silicone oil across the outer blood-retinal barrier. Monolayer cultures of a human retinal pigmented epithelium cell line, ARPE-19, were added to chambers and exposed to a controlled flow to simulate drug clearance across the OBRB. Movement of dextran molecules and release of ibuprofen from silicone oil in this model were measured. Corresponding simulations were developed using COMSOL Multiphysics computational fluid dynamics (CFD) software and validated using independent in vitro data sets. Computational simulations were able to predict dextran movement and ibuprofen release, with all of the features of the experimental release profiles being observed in the simulated data. Simulated values for peak concentrations of permeated dextran and ibuprofen released from silicone oil were within 18% of the in vitro results. This model could be used as a predictive tool of drug transport across this important tissue.
    • Self-supervised monocular image depth learning and confidence estimation

      Chen, Long; Tang, Wen; Wan, Tao Ruan; John, Nigel W.; Bournemouth University; University of Bradford; University of Chester
      We present a novel self-supervised framework for monocular image depth learning and confidence estimation. Our framework reduces the amount of ground truth annotation data required for training Convolutional Neural Networks (CNNs), which is often a challenging problem for the fast deployment of CNNs in many computer vision tasks. Our DepthNet adopts a novel fully differentiable patch-based cost function, through Zero-Mean Normalized Cross-Correlation (ZNCC), that takes multi-scale patches as its matching and learning strategy. This approach greatly increases the accuracy and robustness of the depth learning. The proposed patch-based cost function naturally provides a 0-to-1 confidence, which is then used to self-supervise the training of a parallel network for confidence map learning and estimation, exploiting the fact that ZNCC is a normalized measure of similarity that can be interpreted as the confidence of the depth estimation. The proposed confidence map learning and estimation therefore operates in a self-supervised manner, in a network parallel to the DepthNet. Evaluation on the KITTI depth prediction evaluation dataset and the Make3D dataset shows that our method outperforms the state-of-the-art results.
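      As an illustration of the similarity measure at the heart of this cost function (a generic ZNCC sketch, not the authors' multi-scale differentiable implementation), ZNCC subtracts each patch's mean and normalizes by the patch magnitudes, giving a score in [-1, 1] that is invariant to brightness offset and gain and can be remapped to a 0-to-1 confidence:

```python
import math

def zncc(patch_a, patch_b):
    """Zero-Mean Normalized Cross-Correlation of two equal-size patches
    (flattened to 1-D lists). Identical or gain-scaled patches score 1.0."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    den = math.sqrt(sum((a - mean_a) ** 2 for a in patch_a)
                    * sum((b - mean_b) ** 2 for b in patch_b))
    return num / den if den else 0.0

def confidence(score):
    # One simple choice for remapping [-1, 1] similarity to [0, 1] confidence.
    return (score + 1.0) / 2.0

# A patch matched against a version of itself with doubled gain still scores 1.0,
# because the zero-mean normalization removes offset and scale.
print(zncc([1, 2, 3, 4], [2, 4, 6, 8]))
```

      In the paper's setting, a high ZNCC between a patch and its reprojected counterpart signals a reliable depth estimate, which is what lets the confidence network train without ground-truth labels.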
    • VRIA: A Web-based Framework for Creating Immersive Analytics Experiences

      Butcher, Peter; John, Nigel W; Ritsos, Panagiotis D.; University of Chester and Bangor University
      We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React and D3.js, and offers a visualization creation workflow which enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries, through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, by using WebVR’s progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss scalability implications of our framework, and present a series of use case applications to demonstrate the various features of VRIA. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
    • Translational Medicine: Challenges and new orthopaedic vision (Mediouni-Model)

      Mediouni, Mohamed; Madiouni, Riadh; Gardner, Michael; Vaughan, Neil; University of Chester, UK
      Background: In North America and three European countries, Translational Medicine (TM) funding has taken center stage, as the National Institutes of Health (NIH), for example, has come to recognize that delays are commonplace in completing clinical trials based upon benchside advancements. Recently, there have been several illustrative examples whereby the translation of research had untoward outcomes requiring immediate action. Methods: A greater focus on three-dimensional (3D) simulation, biomarkers, and Artificial Intelligence may allow orthopaedic surgeons to predict the ideal practices before orthopaedic surgery. Using the best medical imaging techniques may improve the accuracy and precision of tumor resections. Results: This article is directed at the young surgeon scientist, and in particular orthopaedic residents and all other junior physicians in training, to help them better understand TM and position themselves in career paths and hospital systems that strive for optimal TM. It serves to hasten the movement of knowledge garnered from the benchside to the bedside. Conclusions: Communication is ongoing in a bidirectional format. It is anticipated that more and more medical centers and institutions will adopt TM models of healthcare delivery.
    • An overview of self-adaptive technologies within virtual reality training

      Vaughan, Neil; Gabrys, Bogdan; Dubey, Venketesh; University of Chester
      This overview presents the current state-of-the-art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR including haptic devices, stereo graphics, adaptive content, assessment and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robotic-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and virtual artefact tactile interaction from either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific individual adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels to match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One of the conclusions of this research is that an enhanced portable framework does not yet exist but is needed: it would be beneficial to combine automation of the core technologies to produce a reusable automation framework for VR training.
    • De-smokeGCN: Generative Cooperative Networks for Joint Surgical Smoke Detection and Removal

      Chen, Long; Tang, Wen; John, Nigel W.; Wan, Tao Ruan; Zhang, Jian Jun; Bournemouth University; University of Chester; University of Bradford (IEEE XPlore, 2019-11-15)
      Surgical smoke removal algorithms can improve the quality of intra-operative imaging and reduce hazards in image-guided surgery, a highly desirable post-process for many clinical applications. These algorithms also enable effective computer vision tasks for future robotic surgery. In this paper, we present a new unsupervised learning framework for high-quality pixel-wise smoke detection and removal. One of the well-recognized grand challenges in using convolutional neural networks (CNNs) for medical image processing is obtaining intra-operative medical imaging datasets for network training and validation, but the availability and quality of such datasets are limited. Our novel training framework does not require ground-truth image pairs. Instead, it learns purely from computer-generated simulation images. This approach opens up new avenues and bridges a substantial gap between conventional non-learning-based methods and those requiring prior knowledge gained from extensive training datasets. Inspired by the Generative Adversarial Network (GAN), we have developed a novel generative-collaborative learning scheme that decomposes the de-smoke process into two separate tasks: smoke detection and smoke removal. The detection network is used as prior knowledge, and also as a loss function, to maximize its support for training the smoke removal network. Quantitative and qualitative studies show that the proposed training framework outperforms state-of-the-art de-smoking approaches, including the latest GAN framework (such as PIX2PIX). Although trained on synthetic images, experimental results on clinical images have proved the effectiveness of the proposed network for detecting and removing surgical smoke on both simulated and real-world laparoscopic images.
    • Policing the Cyber Threat: Exploring the threat from Cyber Crime and the ability of local Law Enforcement to respond

      Eze, Thaddeus; Hull, Matthew; Speakman, Lee; University of Chester (Proceedings of the IEEE, 2019-07-01)
      The landscape in which UK policing operates today is a dynamic one, and growing threats such as the proliferation of cyber crime are increasing the demand on police resources. The response to cyber crime by national and regional law enforcement agencies has been robust, with significant investment in mitigating against and tackling cyber threats. However, at a local level, police forces have to deal with an unknown demand whilst trying to come to terms with new crime types, terminology and criminal techniques which are far from traditional. This paper looks to identify the demand from cyber crime in one police force in the United Kingdom, and whether there is consistency in the recording of crime. It also looks to understand whether the force can deal with cyber crime, from the point of view of the police officers and police staff in the organisation.
    • Assisting Serious Games Level Design with an Augmented Reality Application and Workflow

      Beever, Lee; John, Nigel W.; Pop, Serban R.; University of Chester (Eurographics Proceedings, 2019-09-13)
      With the rise in popularity of serious games there is an increasing demand for virtual environments based on real-world locations. Emergency evacuation or fire safety training are prime examples of serious games that would benefit from accurate location depiction, together with any application involving personal space. However, creating digital indoor models of real-world spaces is a difficult task, and the results obtained by applying current techniques are often not suitable for use in real-time virtual environments. To address this problem, we have developed an application called LevelEd AR that makes indoor modelling accessible by utilizing consumer grade technology in the form of Apple’s ARKit and a smartphone. We compared our system to that of a tape measure and a system based on an infra-red depth sensor and application. We evaluated the accuracy and efficiency of each system over four different measuring tasks of increasing complexity. Our results suggest that our application is more accurate than the depth sensor system, and as accurate as and more time efficient than the tape measure, over several tasks. Participants also showed a preference for our LevelEd AR application over the depth sensor system regarding usability. Finally, we carried out a preliminary case study that demonstrates how LevelEd AR can be successfully used as part of current industry workflows for serious games level design.
    • Context-Aware Mixed Reality: A Learning-based Framework for Semantic-level Interaction

      Chen, Long; Tang, Wen; Zhang, Jian Jun; John, Nigel W.; Bournemouth University; University of Chester; University of Bradford
      Mixed Reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level context-aware interactions. Our key insight is that by building semantic understanding in MR, we can develop a system that not only greatly enhances user experience through object-specific behaviors, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
    • Evaluating LevelEd AR: An Indoor Modelling Application for Serious Games Level Design

      Beever, Lee; Pop, Serban R.; John, Nigel W.; University of Chester (IEEE Conference Publications, 2019-09-06)
      We developed an application that makes indoor modelling accessible by utilizing consumer grade technology in the form of Apple’s ARKit and a smartphone to assist with serious games level design. We compared our system to that of a tape measure and a system based on an infra-red depth sensor and application. We evaluated the accuracy and efficiency of each system over four different measuring tasks of increasing complexity. Our results suggest that our application is more accurate than the depth sensor system, and as accurate as and more time efficient than the tape measure, over several tasks. Participants also showed a preference for our LevelEd AR application over the depth sensor system regarding usability.
    • Factors for successful Agile collaboration between UX designers and software developers in a complex organisation

      Avis, Nick; Kerins, John; Jones, Alexander J (University of Chester, 2019-07-23)
      User Centred Design (UCD) and Agile Software Development (ASD) processes have been two extremely successful methods for software development in recent years. However, both have been repeatedly described as frequently putting contradictory demands on people working with the respective processes. The current research addresses this point by focussing on the crucial relationship between a User Experience (UX) designer and a software developer. In-depth interviews, an online survey, a contextual inquiry and a diary study are described, from a sample of over 100 designers, developers and their stakeholders (managers) in a large media organisation, exploring factors for success in Agile development cycles. The findings from the survey show that organisational separation is a challenge for Agile collaboration between the two roles, and that while designers and developers have similar levels of (moderately positive) satisfaction with Agile processes, there are differences between the two roles. While developers are happier with the wider teamwork but want more access to and closer collaboration with designers, particularly in an environment set up for Agile practices, the designers’ concern was the quality of the wider teamwork. The respondents’ comments also identified that the two roles saw a close – and ideally co-located – cooperation as essential for improving communication, reducing inefficiencies, and avoiding bad products being released. These results reflected the findings from the in-depth interviews with stakeholders. In particular, it was perceived that co-located pairing helped the understanding of different role-dependent demands and skills, increased the efficiency of prototyping and implementing changes, and enabled localised decision-making. However, organisational processes, the setup of the work environment, and managerial traditions meant that this close collaboration and localised decision-making were often not possible to maintain over extended periods. 
Despite this, the studies conducted between pairs of designers and developers found that successful collaboration between designers and developers can be achieved in a complex organisational setting. From the analysis of the empirical studies, six contributing factors emerged that support this. These factors are 1) Close proximity, 2) Early and frequent communication, 3) Shared ideation and problem solving, 4) Crossover of knowledge and skills, 5) Co-creation and prototyping and 6) Making joint decisions. These factors are crucially determined and empowered by the support from the organisational setting and the teams where practitioners work. Specifically, key challenges must be overcome to enable integration between UCD and ASD and thus to encourage close collaboration between UX designers and software developers. These challenges are: 1) Organisational structure and team culture, 2) Location and environmental setup and 3) Decision-making. These challenges, along with the six factors that enable successful Agile collaboration between designers and developers, provide the main contributions of this research. These contributions can be applied within large complex organisations by adopting the suggested ‘Paired Collaboration Manifesto’ to improve the integration between UCD and ASD. Beyond this, more empirical studies can take place, further extending improvements to the collaborative practices between the design and development roles and their surrounding teams.
    • Virtual Reality Environment for the Cognitive Rehabilitation of Stroke Patients

      John, Nigel W.; Day, Thomas W.; Pop, Serban R.; Chatterjee, Kausik; Cottrell, Katy; Buchanan, Alastair; Roberts, Jonathan; University of Chester; Countess of Chester Hospital NHS Foundation Trust; Cadscan Ltd (IEEE, 2019-10-14)
      We present ongoing work to develop a virtual reality environment for the cognitive rehabilitation of patients as part of their recovery from a stroke. A stroke causes damage to the brain, and problem solving, memory and task sequencing are commonly affected. The brain can recover to some extent, however, and stroke patients have to relearn how to carry out activities of daily living. We have created an application called VIRTUE to enable such activities to be practiced using immersive virtual reality. Gamification techniques enhance the motivation of patients, for example by increasing the difficulty of a task over time. The design and implementation of VIRTUE is presented, together with the results of a small acceptability study.
    • Training Powered Wheelchair Manoeuvres in Mixed Reality

      Day, Thomas W.; John, Nigel W.; University of Chester (IEEE Xplore, 2019-09)
      We describe a mixed reality environment that has been designed as an aid for training driving skills for a powered wheelchair. Our motivation is to provide an improvement on a previous virtual reality wheelchair driving simulator, with a particular aim to remove any cybersickness effects. The results of a validation test are presented that involved 35 able bodied volunteers divided into three groups: mixed reality trained, virtual reality trained, and a control group. No significant differences in improvement was found between the groups but there is a notable trend that both the mixed reality and virtual reality groups improved more than the control group. Whereas the virtual reality group experienced discomfort (as measured using a simulator sickness questionnaire), the mixed reality group experienced no side effects.