
    • Medical 3D Graphics With eXtensible 3D

      Hamza-Lup, Felix G.; Polys, Nicholas F.; Malamos, Athanasios G.; John, Nigel W. (IGI Global, 2020)
      As the healthcare enterprise is adopting novel imaging and health-assessment technologies, we are facing unprecedented requirements in information sharing, patient empowerment, and care coordination within the system. Medical experts not only within the US, but around the world, should be empowered through collaboration capabilities on 3D data to enable solutions for complex medical problems that will save lives. The fast-growing number of 3D medical 'images' and their derivative information must be shared across the healthcare enterprise among stakeholders with vastly different perspectives and different needs. The demand for 3D data visualization is driving the need for increased accessibility and sharing of 3D medical image presentations, including their annotations and their animations. As patients have to make decisions about their health, empowering them with the right tools to understand a medical procedure is essential both in the decision-making process and for knowledge sharing.
    • Talos: a prototype Intrusion Detection and Prevention system for profiling ransomware behaviour

      Wood, Ashley; Eze, Thaddeus; Speakman, Lee; University of Chester (Academic Conferences International, 2021-06-24)
      Abstract: In this paper, we profile the behaviour and functionality of multiple recent variants of WannaCry and CrySiS/Dharma through static and dynamic malware analysis. We then analyse and detail the commonly occurring behavioural features of ransomware. These features are utilised to develop a prototype Intrusion Detection and Prevention System (IDPS) named Talos, which comprises several detection mechanisms/components. Benchmarking is later performed to test and validate the performance of the proposed Talos IDPS, and the results are discussed in detail. It is established that the Talos system can successfully detect all ransomware variants tested, in an average of 1.7 seconds, and instigate remedial action in a timely manner following first detection. The paper concludes with a summary of our main findings and a discussion of potential future work to allow the effective detection and prevention of ransomware on systems and networks.
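      A minimal sketch of the style of behavioural heuristic such an IDPS might use is given below. It is not the Talos implementation from the paper; the watched directory, thresholds and entropy test are illustrative assumptions showing how rapid rewrites of files with near-random (encrypted-looking) content can trigger an alert.

```python
# Illustrative sketch only -- NOT the Talos implementation described in the paper.
# It shows one commonly used behavioural heuristic (many files rewritten with
# high-entropy content in a short window), the style of feature derived from
# profiling ransomware such as WannaCry and CrySiS/Dharma.
import math
import os
import time

WATCH_DIR = "/tmp/watched"      # hypothetical directory to monitor
WINDOW_SECONDS = 2.0            # detection window
THRESHOLD_FILES = 10            # suspicious number of high-entropy rewrites
ENTROPY_LIMIT = 7.5             # bits/byte; encrypted data is close to 8.0

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of a buffer."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def scan_once(seen_mtimes: dict) -> int:
    """Return the number of files rewritten with near-random content."""
    suspicious = 0
    for name in os.listdir(WATCH_DIR):
        path = os.path.join(WATCH_DIR, name)
        if not os.path.isfile(path):
            continue
        mtime = os.path.getmtime(path)
        if seen_mtimes.get(path) != mtime:          # file changed since last scan
            seen_mtimes[path] = mtime
            with open(path, "rb") as fh:
                if shannon_entropy(fh.read(4096)) > ENTROPY_LIMIT:
                    suspicious += 1
    return suspicious

if __name__ == "__main__":
    mtimes: dict = {}
    scan_once(mtimes)                               # baseline pass
    while True:
        time.sleep(WINDOW_SECONDS)
        if scan_once(mtimes) >= THRESHOLD_FILES:
            print("ALERT: possible ransomware activity -- take remedial action")
            break
```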
    • An Endoscope Interface for Immersive Virtual Reality

      John, Nigel W.; Day, Thomas W.; Wardle, Terrence; University of Chester
      This is a work-in-progress paper that describes a novel endoscope interface designed for use in an immersive virtual reality surgical simulator. We use an affordable off-the-shelf head-mounted display to recreate the operating theatre environment. A hand-held controller has been adapted so that it feels like the trainee is holding an endoscope controller with the same functionality. The simulator allows the endoscope shaft to be inserted into a virtual patient and pushed forward to a target position. The paper describes how we have built this surgical simulator with the intention of carrying out a full clinical study in the near future.
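      As a rough illustration of the interaction described above, the hypothetical sketch below maps a controller's forward displacement onto the insertion depth of a virtual endoscope shaft, clamped at the target position. The real simulator is a game-engine application; all names and values here are assumptions, not the authors' code.

```python
# Hypothetical sketch: mapping the forward displacement of a hand-held controller
# onto the insertion depth of a virtual endoscope shaft. Values are illustrative.
from dataclasses import dataclass

@dataclass
class EndoscopeState:
    insertion_depth: float = 0.0      # metres of shaft inside the virtual patient
    max_depth: float = 0.35           # assumed distance to the target position

def update_insertion(state: EndoscopeState,
                     controller_forward_delta: float,
                     scale: float = 1.0) -> EndoscopeState:
    """Advance or withdraw the shaft, clamped to [0, max_depth]."""
    depth = state.insertion_depth + scale * controller_forward_delta
    state.insertion_depth = min(max(depth, 0.0), state.max_depth)
    return state

# Example: three frames of the trainee pushing the controller forward.
state = EndoscopeState()
for delta in (0.02, 0.05, 0.40):      # metres moved since the previous frame
    update_insertion(state, delta)
print(f"shaft inserted {state.insertion_depth:.2f} m of {state.max_depth:.2f} m")
```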
    • ParaVR: A Virtual Reality Training Simulator for Paramedic Skills maintenance

      Rees, Nigel; Dorrington, Keith; Rees, Lloyd; Day, Thomas W.; Vaughan, Neil; John, Nigel W.; Welsh Ambulance Services NHS Trust, University of Chester
      Background: Virtual Reality (VR) technology is emerging as a powerful educational tool which is used in medical training and has potential benefits for paramedic practice education. Aim: The aim of this paper is to report the development of ParaVR, which utilises VR to address skills maintenance for paramedics. Methods: Computer scientists at the University of Chester and the Welsh Ambulance Services NHS Trust (WAST) developed ParaVR in four stages: 1. identifying requirements and specifications; 2. alpha version development; 3. beta version development; 4. management: development of software, further funding and commercialisation. Results: Needle cricothyrotomy and needle thoracostomy emerged as candidates for the prototype ParaVR. The Oculus Rift head-mounted display (HMD) combined with the Novint Falcon haptic device was used, and a virtual environment was crafted using 3D modelling software and ported (i.e. transferred from one system or platform to another) onto the Oculus Go and Google Cardboard VR platforms. Conclusion: VR is an emerging educational tool with the potential to enhance paramedic skills development and maintenance. The ParaVR program is the first step in our development, testing, and scaling up of this technology.
    • LevelEd VR: A virtual reality level editor and workflow for virtual reality level design

      Beever, Lee; Pop, Serban W.; John, Nigel W.; University of Chester (IEEE, 2020-10-20)
      The popularity of virtual reality entertainment and serious games has continued to rise, but the process of level design for VR games has not been adequately researched. Our paper contributes LevelEd VR, a generic runtime virtual reality level editor that supports the level design workflow used by developers and can potentially support user-generated content. We evaluated our LevelEd VR application and compared it to an existing workflow of Unity on a desktop. Our current research indicates that users are accepting of such a system, and it has the potential to be preferred over existing workflows for VR level design. We found that the primary benefit of our system is an improved sense of scale and perspective when creating the geometry and implementing gameplay. The paper also contributes some best practices and lessons learned from creating a complex virtual reality tool, such as LevelEd VR.
    • Formal Verification of Astronaut-Rover Teams for Planetary Surface Operations

      Webster, Matt; Dennis, Louise A.; Dixon, Clare; Fisher, Michael; Stocker, Richard; Sierhuis, Maarten; University of Liverpool; University of Chester; Ejenta, inc. (IEEE, 2020-08-21)
      This paper describes an approach to assuring the reliability of autonomous systems for Astronaut-Rover (ASRO) teams using the formal verification of models in the Brahms multi-agent modelling language. Planetary surface rovers have proven essential to several manned and unmanned missions to the moon and Mars. The first rovers were tele- or manually operated, but autonomous systems are increasingly being used to increase the effectiveness and range of rover operations on missions such as the NASA Mars Science Laboratory. It is anticipated that future manned missions to the moon and Mars will use autonomous rovers to assist astronauts during extravehicular activity (EVA), including science, technical and construction operations. These ASRO teams have the potential to significantly increase the safety and efficiency of surface operations. We describe a new Brahms model in which an autonomous rover may perform several different activities, including assisting an astronaut during EVA. These activities compete for the autonomous rover's "attention" and therefore the rover must decide which activity is currently the most important and engage in that activity. The Brahms model also includes an astronaut agent, which models an astronaut's predicted behaviour during an EVA. The rover must also respond to the astronaut's activities. We show how this Brahms model can be simulated using the Brahms integrated development environment. The model can then also be formally verified with respect to system requirements using the SPIN model checker, through automatic translation from Brahms to PROMELA (the input language for SPIN). We show that such formal verification can be used to determine that mission- and safety-critical operations are conducted correctly, and therefore increase the reliability of autonomous systems for planetary rovers in ASRO teams.
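      The sketch below is a plain-Python analogue of the arbitration problem described above: several activities compete for the rover's attention and the rover engages in the most important one. The actual work models this in Brahms and verifies it with SPIN via PROMELA; the activity names and priorities here are invented for illustration.

```python
# Illustrative analogue of the rover's decision rule: always engage in the
# highest-priority activity currently requested. Names and priorities are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    name: str
    priority: int          # larger value = more important

def choose_activity(candidates: list[Activity]) -> Activity:
    """Pick the most important currently requested activity."""
    return max(candidates, key=lambda a: a.priority)

requested = [
    Activity("science_survey", priority=1),
    Activity("assist_astronaut_eva", priority=3),   # astronaut requests help
    Activity("battery_recharge", priority=2),
]
print(choose_activity(requested).name)   # -> assist_astronaut_eva
```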
    • Towards Cyber-User Awareness: Design and Evaluation

      Oyinloye, Toyosi; Eze, Thaddeus; Speakman, Lee; University of Chester (Academic Conferences and Publishing International, 2020-06-15)
      Human reliance on interconnected devices has given rise to a massive increase in cyber activities. There are about 17 billion interconnected devices in our world of about 8 billion people. Like the physical world, the cyber world is not void of entities whose activities, malicious or not, could be detrimental to other users, who remain vulnerable as a result of their existence within cyberspace. Developments such as the introduction of 5G networks, which advances communication speed among interconnected devices, undoubtedly proffer solutions for human living as well as adversely impacting systems. Vulnerabilities in applications embedded in devices, hardware deficiencies and user errors are some of the loopholes that are exploited. Studies have revealed humans as the weakest links in the cyber-chain, submitting that consistent implementation of cyber awareness programs would largely impact cybersecurity. Cyber-active systems have goals that compete with the implementation of cyber awareness programs within limited resources. It is desirable to have cyber awareness systems that can be tailored around specific needs and considerations for important factors. This paper presents a system that aims to promote user awareness through a flexible, accessible, and cost-effective design. The system implements steps in a user awareness cycle that considers human-factor (HF) and HF-related root causes of cyber-attacks. We introduce a new user testing tool, adaptable for administering cybersecurity test questions for varying levels and categories of users. The tool was implemented experimentally by engaging cyber users within the UK. Schemes and online documentation by UK cybersecurity organisations were harnessed for assessing and providing relevant recommendations to participants. Results provided us with values representing each participant's notional level of awareness, which were subjected to a paired t-test for comparison with values derived in an automated assessment. This pilot study provides valuable details for projecting the efficacy of the system towards improving human influence in cybersecurity.
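      The statistical comparison mentioned above can be illustrated with a short paired t-test, as sketched below. The scores are made-up placeholders, not the study's data.

```python
# Sketch of a paired t-test between each participant's notional awareness score
# and the score from the automated assessment. Numbers are hypothetical.
from scipy import stats

manual_scores    = [62, 70, 55, 80, 66, 74, 59, 68]   # hypothetical notional levels
automated_scores = [65, 72, 53, 78, 70, 71, 61, 66]   # hypothetical automated values

t_stat, p_value = stats.ttest_rel(manual_scores, automated_scores)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value would suggest no evidence of systematic disagreement between
# the two ways of scoring awareness; a small one would suggest the opposite.
```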
    • The Evolution of Ransomware Variants

      Wood, Ashley; Eze, Thaddeus
      Abstract: This paper investigates how ransomware is continuing to evolve and adapt as time progresses, becoming more damaging, resilient and sophisticated from one ransomware variant to another. This involves investigating how each ransomware sample, including Petya, WannaCry and CrySiS/Dharma, interacts with the underlying system and impacts both the system's functionality and its underlying data, by utilising several static and dynamic analysis tools. Our analysis shows that, whilst ransomware is undoubtedly becoming more sophisticated, fundamental problems exist with its underlying encryption processes: data recovery was possible across all three samples studied, and varying aspects of system functionality could be preserved or restored in their entirety.
    • Modelling the effects of glucagon during glucose tolerance testing

      Kelly, Ross A.; Fitches, Molly J.; Webb, Steven D.; Pop, Serban R.; Chidlow, Stewart J.; Liverpool John Moores University; University of Dundee; University of Chester
      Background: Glucose tolerance testing is a tool used to estimate glucose effectiveness and insulin sensitivity in diabetic patients. The importance of such tests has prompted the development and utilisation of mathematical models that describe glucose kinetics as a function of insulin activity. The hormone glucagon also plays a fundamental role in systemic plasma glucose regulation and is secreted reciprocally to insulin, stimulating catabolic glucose utilisation. However, regulation of glucagon secretion by α-cells is impaired in type-1 and type-2 diabetes through pancreatic islet dysfunction. Despite this, inclusion of glucagon activity when modelling the glucose kinetics during glucose tolerance testing is often overlooked. This study presents two mathematical models of a glucose tolerance test that incorporate glucose-insulin-glucagon dynamics. The first model describes a non-linear relationship between glucagon and glucose, whereas the second model assumes a linear relationship. Results: Both models are validated against insulin-modified and glucose infusion intravenous glucose tolerance test (IVGTT) data, as well as insulin infusion data, and are capable of estimating patient glucose effectiveness (sG) and insulin sensitivity (sI). Inclusion of glucagon dynamics proves to provide a more detailed representation of the metabolic portrait, enabling estimation of two new diagnostic parameters: glucagon effectiveness (sE) and glucagon sensitivity (δ). Conclusions: The models are used to investigate how different degrees of patient glucagon sensitivity and effectiveness affect the concentration of blood glucose and plasma glucagon during IVGTT and insulin infusion tests, providing a platform from which the role of glucagon dynamics during a glucose tolerance test may be investigated and predicted.
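      To make the modelling idea concrete, the sketch below integrates a toy minimal-model-style glucose-insulin-glucagon system. The equations and parameter values are generic illustrative assumptions, not the two models proposed in the paper.

```python
# Toy illustration of coupled glucose-insulin-glucagon dynamics during a tolerance
# test. These equations and parameters are invented for illustration only; they are
# NOT the models or parameter estimates from the paper.
from scipy.integrate import solve_ivp

# Basal levels and illustrative parameters (all values are assumptions).
G_b, E_b = 5.0, 40.0                 # basal glucose (mmol/L) and glucagon (ng/L)
s_G, p2, p3 = 0.03, 0.02, 1e-5       # glucose effectiveness, insulin action rates
s_E, k_E, k_GE = 0.005, 0.05, 0.1    # glucagon effectiveness, turnover, coupling

def insulin_input(t):
    """Hypothetical insulin infusion above basal during the test."""
    return 20.0 if 20.0 <= t <= 25.0 else 0.0

def rhs(t, y):
    G, X, E = y                      # glucose, remote insulin action, glucagon
    dG = -(s_G + X) * (G - G_b) + s_E * (E - E_b)
    dX = -p2 * X + p3 * insulin_input(t)
    dE = -k_E * (E - E_b) - k_GE * (G - G_b)   # secretion falls as glucose rises
    return [dG, dX, dE]

# IVGTT-like start: elevated glucose, no remote insulin action, basal glucagon.
sol = solve_ivp(rhs, (0.0, 180.0), [12.0, 0.0, E_b], max_step=1.0)
print(f"glucose settles near {sol.y[0, -1]:.2f} mmol/L after {sol.t[-1]:.0f} min")
```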
    • In vitro and Computational Modelling of Drug Delivery across the Outer Blood-Retinal Barrier

      Davies, Alys E.; Williams, Rachel; Lugano, Gaia; Pop, Serban R.; Kearns, Victoria R.; University of Liverpool; University of Chester
      The ability to produce rapid, cost-effective and human-relevant data has the potential to accelerate the development of new drug delivery systems. Intraocular drug delivery is an area undergoing rapid expansion due to the increase in sight-threatening diseases linked to increasing age and lifestyle factors. The outer blood-retinal barrier (OBRB) is important in this area of drug delivery, as it separates the eye from the systemic blood flow. This study reports the development of complementary in vitro and in silico models to study drug transport from silicone oil across the outer blood-retinal barrier. Monolayer cultures of a human retinal pigmented epithelium cell line, ARPE-19, were added to chambers and exposed to a controlled flow to simulate drug clearance across the OBRB. Movement of dextran molecules and release of ibuprofen from silicone oil in this model were measured. Corresponding simulations were developed using COMSOL Multiphysics computational fluid dynamics (CFD) software and validated using independent in vitro data sets. Computational simulations were able to predict dextran movement and ibuprofen release, with all of the features of the experimental release profiles being observed in the simulated data. Simulated values for peak concentrations of permeated dextran and ibuprofen released from silicone oil were within 18% of the in vitro results. This model could be used as a predictive tool of drug transport across this important tissue.
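      As a simple analogue of the transport problem above, the sketch below solves 1D diffusion of a drug from a source interface (the silicone oil) across a thin layer that is cleared by flow on the far side. The geometry, diffusivity and time scale are invented; the study's actual simulations use COMSOL Multiphysics CFD.

```python
# Back-of-the-envelope analogue only: explicit finite-difference 1D diffusion of a
# drug from a fixed source towards a flow-cleared boundary. All values are assumed.
import numpy as np

L, N = 1.0e-3, 101                  # 1 mm domain, number of grid points
dx = L / (N - 1)
D = 1.0e-10                         # m^2/s, assumed effective diffusivity
dt = 0.4 * dx**2 / D                # stable explicit time step
c = np.zeros(N)
c[0] = 1.0                          # normalised concentration at the oil interface

for _ in range(20000):              # march forward in time (~2 hours here)
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0          # source side fixed, far side cleared by the flow

print(f"normalised concentration just before the cleared side: {c[-2]:.3f}")
```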
    • Self-supervised monocular image depth learning and confidence estimation

      Chen, Long; Tang, Wen; Wan, Tao R.; John, Nigel W.; Bournemouth University; University of Bradford; University of Chester
      We present a novel self-supervised framework for monocular image depth learning and confidence estimation. Our framework reduces the amount of ground truth annotation data required for training Convolutional Neural Networks (CNNs), which is often a challenging problem for the fast deployment of CNNs in many computer vision tasks. Our DepthNet adopts a novel fully differentiable patch-based cost function, through Zero-Mean Normalized Cross Correlation (ZNCC), to use multi-scale patches as the matching and learning strategy. This approach greatly increases the accuracy and robustness of the depth learning. The proposed patch-based cost function naturally provides a 0-to-1 confidence, which is then used to self-supervise the training of a parallel network for confidence map learning and estimation, exploiting the fact that ZNCC is a normalized measure of similarity that can be approximated as the confidence of the depth estimation. Therefore, the proposed confidence map learning and estimation operate in a self-supervised manner in a network parallel to the DepthNet. Evaluation on the KITTI depth prediction evaluation dataset and the Make3D dataset shows that our method outperforms state-of-the-art results.
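      The ZNCC measure at the heart of the patch-based cost can be illustrated in a few lines of NumPy, as below. The full method applies it over multi-scale patches inside a differentiable network; this standalone function only shows why ZNCC gives a bounded score that can be rescaled into a 0-to-1 confidence.

```python
# Minimal NumPy sketch of Zero-Mean Normalized Cross Correlation (ZNCC) between two
# image patches. Not the paper's network code; just the similarity measure itself.
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray, eps: float = 1e-8) -> float:
    """ZNCC of two equally sized patches, in [-1, 1]; (zncc + 1) / 2 gives 0-to-1."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

rng = np.random.default_rng(0)
p = rng.random((7, 7))
print(zncc(p, p))                    # ~1.0: a perfect match, i.e. high confidence
print(zncc(p, rng.random((7, 7))))   # near 0: poor match, low confidence
```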
    • VRIA: A Web-based Framework for Creating Immersive Analytics Experiences

      Butcher, Peter; John, Nigel W.; Ritsos, Panagiotis D.; University of Chester and Bangor University (IEEE, 2020-01-09)
      We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React and D3.js, and offers a visualization creation workflow which enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries, through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, by using WebVR's progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss scalability implications of our framework, and present a series of use case applications to demonstrate the various features of VRIA. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
    • Translational Medicine: Challenges and new orthopaedic vision (Mediouni-Model)

      Mediouni, Mohamed; Madiouni, Riadh; Gardner, Michael; Vaughan, Neil; University of Chester, UK
      Background: In North America and three European countries, Translational Medicine (TM) funding has taken center stage, as the National Institutes of Health (NIH), for example, has come to recognize that delays in completing clinical trials based upon benchside advancements are commonplace. Recently, there have been several illustrative examples whereby the translation of research had untoward outcomes requiring immediate action. Methods: A greater focus on three-dimensional (3D) simulation, biomarkers, and artificial intelligence may allow orthopaedic surgeons to predict the ideal practices before orthopaedic surgery. Using the best medical imaging techniques may improve the accuracy and precision of tumor resections. Results: This article is directed at the young surgeon-scientist, in particular orthopaedic residents and all other junior physicians in training, to help them better understand TM and position themselves in career paths and hospital systems that strive for optimal TM. It serves to hasten the movement of knowledge garnered at the benchside and move it quickly to the bedside. Conclusions: Communication is ongoing in a bidirectional format. It is anticipated that more and more medical centers and institutions will adopt TM models of healthcare delivery.
    • An overview of self-adaptive technologies within virtual reality training

      Vaughan, Neil; Gabrys, Bogdan; Dubey, Venketesh; University of Chester
      This overview presents the current state-of-the-art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used for five key areas: medical, industrial & commercial training, serious games, rehabilitation and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR including haptic devices, stereo graphics, adaptive content, assessment and autonomous agents. Automation of VR training can contribute to automation of actual procedures including remote and robotic assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and virtual artefact tactile interaction from either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific individual adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels to match individual requirements. Self-adaptive technology has been developed previously within individual technologies of VR training. One of the conclusions of this research is that, although an enhanced portable framework does not yet exist, one is needed, and it would be beneficial to combine automation of the core technologies to produce a reusable automation framework for VR training.
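      The data-driven adaptation idea summarised above can be sketched as a simple controller that nudges difficulty towards a target success rate, as below. This is a generic illustration, not a framework taken from the review; the target and gain values are assumptions.

```python
# Sketch of adaptive difficulty driven by trainee assessment data: raise difficulty
# when the trainee over-performs, lower it when they struggle. Values are assumptions.
def adapt_difficulty(difficulty: float,
                     recent_scores: list[float],
                     target: float = 0.75,
                     gain: float = 0.5,
                     bounds: tuple[float, float] = (0.0, 1.0)) -> float:
    """Return the next difficulty level, clamped to the given bounds."""
    if not recent_scores:
        return difficulty
    performance = sum(recent_scores) / len(recent_scores)   # fraction of tasks passed
    difficulty += gain * (performance - target)
    return min(max(difficulty, bounds[0]), bounds[1])

level = 0.5
for session_scores in ([1, 1, 1, 0], [1, 1, 1, 1], [0, 0, 1, 0]):
    level = adapt_difficulty(level, session_scores)
    print(f"next session difficulty: {level:.2f}")
```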
    • De-smokeGCN: Generative Cooperative Networks for Joint Surgical Smoke Detection and Removal

      Chen, Long; Tang, Wen; John, Nigel W.; Wan, Tao R.; Zhang, Jian Jun; Bournemouth University; University of Chester; University of Bradford (IEEE, 2019-11-15)
      Surgical smoke removal algorithms can improve the quality of intra-operative imaging and reduce hazards in image-guided surgery, a highly desirable post-process for many clinical applications. These algorithms also enable effective computer vision tasks for future robotic surgery. In this paper, we present a new unsupervised learning framework for high-quality pixel-wise smoke detection and removal. One of the well-recognized grand challenges in using convolutional neural networks (CNNs) for medical image processing is obtaining intra-operative medical imaging datasets for network training and validation, but the availability and quality of such datasets are limited. Our novel training framework does not require ground-truth image pairs. Instead, it learns purely from computer-generated simulation images. This approach opens up new avenues and bridges a substantial gap between conventional non-learning-based methods and those requiring prior knowledge gained from extensive training datasets. Inspired by the Generative Adversarial Network (GAN), we have developed a novel generative-collaborative learning scheme that decomposes the de-smoke process into two separate tasks: smoke detection and smoke removal. The detection network is used as prior knowledge, and also as a loss function to maximize its support for the training of the smoke removal network. Quantitative and qualitative studies show that the proposed training framework outperforms the state-of-the-art de-smoking approaches, including the latest GAN framework (such as PIX2PIX). Although the network is trained on synthetic images, experimental results have proved its effectiveness in detecting and removing surgical smoke on both simulated and real-world laparoscopic images.
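      A simplified PyTorch sketch of the cooperative scheme described above is given below: a detection network predicts a smoke mask that is reused as a weighting term in the loss for the removal network. The tiny architectures and the exact weighting are stand-ins, not the paper's De-smokeGCN design; training would use computer-generated smoke/clean pairs.

```python
# Simplified sketch of a detection-guided de-smoking loss. Architectures and the
# weighting scheme are illustrative stand-ins, not the paper's exact method.
import torch
import torch.nn as nn

def tiny_cnn(out_channels: int) -> nn.Sequential:
    """Toy stand-in for the detection/removal networks."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_channels, 3, padding=1),
    )

detector = tiny_cnn(out_channels=1)     # predicts a per-pixel smoke mask
remover  = tiny_cnn(out_channels=3)     # predicts a de-smoked RGB image

def removal_loss(smoky, clean):
    mask = torch.sigmoid(detector(smoky))            # 0..1 smoke confidence per pixel
    desmoked = torch.sigmoid(remover(smoky))
    per_pixel = torch.abs(desmoked - clean)          # L1 reconstruction error
    # Weight the error more heavily where smoke was detected, so the detector
    # "supports" the training of the removal network.
    return ((1.0 + mask) * per_pixel).mean()

smoky = torch.rand(2, 3, 64, 64)        # placeholder synthetic smoky frames
clean = torch.rand(2, 3, 64, 64)        # placeholder corresponding clean renders
loss = removal_loss(smoky, clean)
loss.backward()                         # gradients flow to both networks
print(float(loss))
```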
    • Policing the Cyber Threat: Exploring the threat from Cyber Crime and the ability of local Law Enforcement to respond

      Eze, Thaddeus; Hull, Matthew; Speakman, Lee; University of Chester (IEEE, 2019-07-01)
      The landscape in which UK policing operates today is a dynamic one, and growing threats such as the proliferation of cyber crime are increasing the demand on police resources. The response to cyber crime by national and regional law enforcement agencies has been robust, with significant investment in mitigating against, and tackling, cyber threats. However, at a local level, police forces have to deal with an unknown demand, whilst trying to come to terms with new crime types, terminology and criminal techniques which are far from traditional. This paper seeks to identify the demand from cyber crime on one police force in the United Kingdom, and whether there is consistency in the recording of crime. It also seeks to understand, from the point of view of the Police Officers and Police Staff in the organisation, whether the force can deal with cyber crime.
    • Assisting Serious Games Level Design with an Augmented Reality Application and Workflow

      Beever, Lee; John, Nigel W.; Pop, Serban R.; University of Chester (Eurographics Proceedings, 2019-09-13)
      With the rise in popularity of serious games there is an increasing demand for virtual environments based on real-world locations. Emergency evacuation or fire safety training are prime examples of serious games that would benefit from accurate location depiction, together with any application involving personal space. However, creating digital indoor models of real-world spaces is a difficult task, and the results obtained by applying current techniques are often not suitable for use in real-time virtual environments. To address this problem, we have developed an application called LevelEd AR that makes indoor modelling accessible by utilizing consumer grade technology in the form of Apple's ARKit and a smartphone. We compared our system to that of a tape measure and a system based on an infra-red depth sensor and application. We evaluated the accuracy and efficiency of each system over four different measuring tasks of increasing complexity. Our results suggest that our application is more accurate than the depth sensor system and as accurate as, and more time efficient than, the tape measure over several tasks. Participants also showed a preference for our LevelEd AR application over the depth sensor system regarding usability. Finally, we carried out a preliminary case study that demonstrates how LevelEd AR can be successfully used as part of current industry workflows for serious games level design.
    • Context-Aware Mixed Reality: A Learning-based Framework for Semantic-level Interaction

      Chen, Long; Tang, Wen; Zhang, Jian Jun; John, Nigel W.; Bournemouth University; University of Chester; University of Bradford (Wiley, 2019-11-14)
      Mixed Reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level context-aware interactions. Our key insight is that by building semantic understanding in MR, we can develop a system that not only greatly enhances the user experience through object-specific behaviors, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between the real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
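      The semantic-to-physics step implied above can be sketched as a lookup from a predicted material label to physical interaction parameters, as below. The label set and coefficients are illustrative assumptions, not values from the paper.

```python
# Sketch of material-aware interaction: a per-surface semantic label is mapped to
# physics parameters so virtual objects react differently on different real surfaces.
# Labels and coefficients are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class MaterialPhysics:
    restitution: float     # how bouncy a collision with this surface is
    friction: float

MATERIAL_TABLE = {
    "metal":  MaterialPhysics(restitution=0.8, friction=0.2),
    "wood":   MaterialPhysics(restitution=0.5, friction=0.4),
    "fabric": MaterialPhysics(restitution=0.1, friction=0.8),
}

def bounce_speed(incoming_speed: float, surface_label: str) -> float:
    """Speed of a virtual ball after hitting a real surface with this label."""
    props = MATERIAL_TABLE.get(surface_label, MaterialPhysics(0.3, 0.5))  # default
    return incoming_speed * props.restitution

for label in ("metal", "fabric", "unknown_label"):
    print(label, bounce_speed(2.0, label))
```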
    • Evaluating LevelEd AR: An Indoor Modelling Application for Serious Games Level Design

      Beever, Lee; Pop, Serban R.; John, Nigel W.; University of Chester (IEEE, 2019-09-06)
      We developed an application that makes indoor modelling accessible by utilizing consumer grade technology in the form of Apple's ARKit and a smartphone to assist with serious games level design. We compared our system to that of a tape measure and a system based on an infra-red depth sensor and application. We evaluated the accuracy and efficiency of each system over four different measuring tasks of increasing complexity. Our results suggest that our application is more accurate than the depth sensor system and as accurate as, and more time efficient than, the tape measure over several tasks. Participants also showed a preference for our LevelEd AR application over the depth sensor system regarding usability.
    • Virtual Reality Environment for the Cognitive Rehabilitation of Stroke Patients

      John, Nigel W.; Day, Thomas W.; Pop, Serban R.; Chatterjee, Kausik; Cottrell, Katy; Buchanan, Alastair; Roberts, Jonathan; University of Chester; Countess of Chester Hospital NHS Foundation Trust; Cadscan Ltd (IEEE, 2019-10-14)
      We present ongoing work to develop a virtual reality environment for the cognitive rehabilitation of patients as part of their recovery from a stroke. A stroke causes damage to the brain, and problem solving, memory and task sequencing are commonly affected. The brain can recover to some extent, however, and stroke patients have to relearn how to carry out activities of daily living. We have created an application called VIRTUE to enable such activities to be practiced using immersive virtual reality. Gamification techniques enhance the motivation of patients, for example by making the level of difficulty of a task increase over time. The design and implementation of VIRTUE is presented together with the results of a small acceptability study.