Staff within the Department of Computer Science have research interests in Visualization, Interaction & Computer Graphics (with a particular focus on Medical Graphics), Cyber Security and Discrete Optimisation.

Recent Submissions

  • Dynamic pricing-driven load optimization in islanded microgrid for home energy management systems

    Ahmad, Nabila; Sultan, Kiran; Khalid, Hassan Abdullah; Abbasi, Ayesha; Hossain, Jakir; National University of Sciences and Technology (NUST), Islamabad; University of Chester; International Islamic University Islamabad (Taylor & Francis, 2025-11-30)
    Home energy management systems (HEMS) are crucial for maximizing residential energy use and reducing electricity costs. This paper presents a distinct dynamic pricing-driven load optimization technique for HEMS running in islanded mode, where grid access is either limited or nonexistent. Incorporating important distributed energy resources such as photovoltaic (PV) systems, electric vehicles (EVs), battery energy storage systems (BESS), and limited grid access, the optimal schedule is also used for real-time dynamic pricing analysis. Two different outage scenarios are assessed: a planned short-duration outage and a full 24-hour islanding scenario. According to a thorough seasonal analysis conducted in spring, summer, and winter, grid dependency in short-duration outages decreases to roughly 10% in spring and approximately 50% in summer and winter. In the 24-hour outage scenario, the suggested approach guarantees total self-sufficiency, with local resources satisfying load demands in full.
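The core idea of price-driven load scheduling can be sketched in a few lines. This is a minimal illustration of shifting a deferrable load into the cheapest hours of a dynamic tariff, not the paper's optimization model; the price signal and the two-hour appliance run are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): scheduling a deferrable
# appliance into the cheapest hours of a dynamic price signal.

def schedule_deferrable(prices, run_hours):
    """Return the hour indices with the lowest prices for a
    deferrable load that must run for `run_hours` hours in total."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:run_hours])

# Hypothetical hourly tariff (currency units per kWh)
hourly_prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.40]
slots = schedule_deferrable(hourly_prices, run_hours=2)
print(slots)  # the two cheapest hours
```

A full HEMS scheduler would add constraints for PV availability, BESS state of charge, and EV charging windows on top of this price-ranking core.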
  • Banal deception and human-AI ecosystems: A study of people’s perceptions of LLM-generated deceptive behaviour

    Zhan, Xiao; Xu, Yifan; Abdi, Noura; Collenette, Joe; Sarkadi, Stefan; King’s College London; University of Manchester; Liverpool John Moores University; University of Chester (AI Access Foundation, 2025-10-09)
    Large language models (LLMs) can provide users with false, inaccurate, or misleading information, and we consider the output of this type of information as what Natale calls ‘banal’ deceptive behaviour [53]. Here, we investigate people’s perceptions of ChatGPT-generated deceptive behaviour and how this affects people’s behaviour and trust. To do this, we use a mixed-methods approach comprising (i) an online survey with 220 participants and (ii) semi-structured interviews with 12 participants. Our results show that (i) the most common types of deceptive information encountered were over-simplifications and outdated information; (ii) humans’ perceptions of trust and chat-worthiness of ChatGPT are impacted by ‘banal’ deceptive behaviour; (iii) the perceived responsibility for deception is influenced by education level and the perceived frequency of deceptive information; and (iv) users become more cautious after encountering deceptive information, but they come to trust the technology more when they identify advantages of using it. Our findings contribute to understanding human-AI interaction dynamics in the context of Deceptive AI Ecosystems and highlight the importance of user-centric approaches to mitigating the potential harms of deceptive AI technologies.
  • Correction to: Visualization for epidemiological modelling: challenges, solutions, reflections and recommendations (2022) by Dykes et al.

    Dykes, Jason; Abdul-Rahman, Alfie; Archambault, Daniel; Bach, Benjamin; Borgo, Rita; Chen, Min; Enright, Jessica; Fang, Hui; Firat, Elif E.; Freeman, Euan; et al. (The Royal Society, 2022-09-12)
    In the original version of this article, references 113–120, 123–140 and 143 were incorrectly numbered. This has been corrected on the publisher’s website.
  • Same structures, different settings: exploring computing capital and participation across cultural contexts

    Kunkeler, Thom; Barr, Matthew; Kallia, Maria; Andrei, Oona; Li, Xiaohan; Muncey, Andrew; Nylén, Aletta; Venn-Wycherley, Megan; Uppsala University; University of Glasgow; University of Southampton; University of Chester; Swansea University (Association for Computing Machinery, 2025-11)
    The number of people choosing to study computing in higher education remains low. Previous research has developed a research instrument to identify factors underlying student participation grounded in Bourdieu’s sociocultural theory. This study replicates and extends the original study, which identified key social, cultural, and psychological factors linked to computing education participation in Sweden. Using the validated research instrument, we distributed a survey across 11 UK universities, gathering responses from 131 students. Through Confirmatory Factor Analysis, we assessed the robustness of the original study’s constructs — career interest, subject-specific interest, influence from family and friends, confidence, and sense of belonging — and their relationship to subject choice in computing. After model refinements, the replication confirmed and validated the factor structure, supporting the stability of these constructs and their relationship to computing subject choice across cultural contexts. In addition, the current study adds open-ended questions to the research instrument to help explain the quantitative results. A thematic analysis further explains the correlation between previous experience, social influence, confidence, and gender, and how that relates to participation in the field. By replicating and extending the original study’s methodology, this research evaluates the reliability and generalisability of its conclusions, contributing to the evidence base needed to design interventions that broaden participation in computing education.
  • The effect of multiplayer game modes on inter-player data for player experience modelling

    Brooke, Alexander; Crossley, Matthew; Lloyd, Huw; Cunningham, Stuart; Manchester Metropolitan University; University of Chester (IEEE, 2025-09-25)
    Research into social compliance, emotional contagion and behavioural synchronicity shows promise for various avenues of work concerning human-computer interaction, and a wider understanding of emotion. Despite their relevance, few studies have applied findings from these domains to player experience modelling in a multiplayer game, in itself having applications in entertainment, education and healthcare. Further to this, of the little work making use of inter-player data to model aspects of player experience, none considers the differences that may be found across common multiplayer game modes. This work therefore makes use of data collected across players in a series of common multiplayer game modes, considering the utility of inter-player data for predictive modelling using artificial neural networks in each. Results suggest that approaches modelling measures of players' experiences in terms of discrete emotion intensities are best made using their own facial expressions in nearly all circumstances, but past this, facial expression data from team-based and competitive game modes shows the greatest promise. Considering the additional data separations available to team-based gameplay, we find that data collected from players on an opposing team shows greater utility for prediction of target player experience than data collected from a player on the same team. Regarding this, we make suggestions for the most applicable avenues for future research into the utilisation of inter-player data for emotional modelling.
  • Designing Value-Aligned Traffic Agents through Conflict Sensitivity

    Rakow, Astrid; Collenette, Joe; Schwammberger, Maike; Slavkovik, Marija; Vaz Alves, Gleifer; German Aerospace Center, Institute of Systems Engineering for Future Mobility; University of Chester; Karlsruhe Institute of Technology (KIT); Universidade Tecnologica Federal do Paraná (ArXiv, 2025-07-25)
    Autonomous traffic agents (ATAs) are expected to act in ways that are not only safe, but also aligned with stakeholder values across legal, social, and moral dimensions. In this paper, we adopt an established formal model of conflict from epistemic game theory to support the development of such agents. We focus on value conflicts, situations in which agents face competing goals rooted in value-laden contexts, and show how conflict analysis can inform key phases of the design process. This includes value elicitation, capability specification, explanation, and adaptive system refinement. We elaborate and apply the concept of Value-Aligned Operational Design Domains (VODDs) to structure autonomy in accordance with contextual value priorities. Our approach shifts the emphasis from solving moral dilemmas at runtime to anticipating and structuring value-sensitive behaviour during development.
  • Designing Value-Aligned Traffic Agents through Conflict Sensitivity

    Rakow, Astrid; Collenette, Joe; Schwammberger, Maike; Slavkovik, Marija; Vaz Alves, Gleifer; German Aerospace Center, Institute of Systems Engineering for Future Mobility; University of Chester; Karlsruhe Institute of Technology (KIT); Universidade Tecnologica Federal do Paraná (Springer Nature, 2025)
    Autonomous traffic agents (ATAs), automated systems with a high level of autonomy in traffic environments, must not only guarantee safety but also act in accordance with legal, social, and moral values. In this short version, we adopt the epistemic game-theoretic conflict model of Damm et al. to characterise value conflicts, situations where competing, value-laden goals cannot all be satisfied. As a means to align the decision making of an ATA with stakeholder preferences, we introduce Value-Aligned Operational Design Domains (VODDs). They represent autonomous decision-making scopes that guide an agent's conflict resolution and specify handover rules.
  • From Data-Compliance to Model-Introspection: Challenges in AV Rule Compliance Monitoring

    Rakow, Astrid; Gil Gasiola, Gustavo; Collenette, Joe; Grundt, Dominik; Möhlmann, Eike; Schwammberger, Maike; German Aerospace Center, Institute of Systems Engineering for Future Mobility; Karlsruhe Institute of Technology; University of Chester (IEEE, 2025)
    Autonomous vehicles (AVs) are expected to comply with traffic laws, ensure safety, and provide transparent explanations of their decisions. Achieving these goals requires monitoring architectures that process large volumes of sensor, control, and contextual data. While real-time perception and decision-making are functionally indispensable, storing and using this data for auditing or improvement raises unresolved legal and technical challenges. Data protection regulations—such as the GDPR—mandate that personal data processing be limited to what is strictly necessary for specified purposes (Art. 5(1)(b), (c), and (e)). Yet, in practice, what counts as “necessary” remains ambiguous. This tension gives rise to the data-justification gap: the lack of systematic methods to determine which logged data is both sufficient to support compliance assessments and minimal under data protection constraints. At the same time, aligning formalized rules with their legal intent poses a separate but interrelated challenge—the alignment problem. Legal norms are often ambiguous or context-dependent, and existing monitoring frameworks rarely guarantee that formal specifications faithfully reflect legal meaning. This paper outlines a research agenda for bridging these gaps. We propose an integrated approach combining formal methods, legal reasoning, and runtime monitoring to develop data-justification frameworks. Such frameworks would enable developers to generate interpretable rule formalizations, synthesize minimally sufficient monitors, and justify data collection in a transparent and legally defensible manner.
  • Efficient Spectrum Sharing in Cognitive Radio Networks With NOMA Using Computational Intelligence

    Sultan, Kiran; University of Chester (Wiley, 2025-09-09)
    The integration of Cognitive Radio Networks (CRNs) with Non-Orthogonal Multiple Access (NOMA) offers great potential for improving spectral efficiency in 5G and Beyond-5G (B5G) networks. This study proposes an efficient spectrum-sharing technique for dual-hop CRNs using NOMA, optimized by an Improved Artificial Bee Colony (IABC) algorithm and guided by a Single Input Single Output Fuzzy Rule-Based (SISO-FRBS) System. In this setup, a distant primary transmitter communicates with the primary receiver via a secondary NOMA relay. The objective is to maximize the sum data rate of secondary users (SUs) while minimizing total transmission power. SISO-FRBS enhances the IABC search process by dynamically guiding the search agents, improving both optimization quality and convergence. Simulation results show that the proposed scheme achieves the primary data rate benchmark of 5 bit/s/Hz at a transmit power of 19 mW, compared to 23 mW with traditional ABC, achieving a 19.04% improvement in power efficiency.
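For readers unfamiliar with bee colony optimisation, the baseline search loop that IABC improves upon can be sketched as follows. This is the generic artificial bee colony (ABC) skeleton for continuous minimisation, not the paper's improved, fuzzy-guided variant; the sphere objective and bounds are illustrative assumptions.

```python
import random

# Generic artificial bee colony (ABC) skeleton -- a sketch of the
# baseline the paper improves on, not the IABC/SISO-FRBS scheme.

def abc_minimise(f, dim, bounds, n_bees=10, n_iters=100, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    # employed bees start at random food sources
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bees)]
    costs = [f(x) for x in foods]
    for _ in range(n_iters):
        for i in range(n_bees):
            # neighbour solution: perturb one dimension toward a random peer
            k = rng.randrange(n_bees)
            d = rng.randrange(dim)
            cand = list(foods[i])
            cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
            cand[d] = min(max(cand[d], lo), hi)  # clamp to bounds
            c = f(cand)
            if c < costs[i]:  # greedy selection keeps the better source
                foods[i], costs[i] = cand, c
    best = min(range(n_bees), key=lambda i: costs[i])
    return foods[best], costs[best]

# Toy problem: minimise the 2D sphere function
x, fx = abc_minimise(lambda v: sum(t * t for t in v), dim=2, bounds=(-5, 5))
```

In the paper's setting the objective would encode the SU sum rate and power terms, and the fuzzy rule base would adapt the perturbation step; here the step size is simply proportional to the distance between bees.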
  • MSAF: A cardiac 3D image segmentation network based on Multiscale Collaborative Attention and Multiscale Feature Fusion

    Zhang, Guodong; Li, He; Xie, Wanying; Yang, Bin; Gong, Zhaoxuan; Guo, Wei; Ju, Ronghui; Shenyang Aerospace University; University of Chester; The People's Hospital of Liaoning Province (Wiley, 2025-08-21)
    Accurate segmentation of cardiac structures is essential for clinical diagnosis and treatment of cardiovascular diseases. Existing Transformer‐based cardiac segmentation methods mostly rely on single‐scale token‐wise attention mechanisms that emphasize global feature modeling, but they lack sufficient sensitivity to local spatial structures, such as myocardial boundaries in cardiac 3D images, resulting in ineffective multiscale feature capturing and a loss of local spatial details, thereby negatively impacting the accuracy of cardiac anatomical segmentation. To address the above issues, this paper proposes a cardiac 3D image segmentation network named MSAF, which integrates Multiscale Collaborative Attention (MSCA) and Multiscale Feature Fusion (MSFF) modules to enhance the multiscale feature perception capability at both microscopic and macroscopic levels, thereby improving segmentation accuracy for complex cardiac structures. Within the MSCA module, a Collaborative Attention (CoA) module combined with hierarchical residual‐like connections is designed, enabling the model to effectively capture interactive information across spatial and channel dimensions at various receptive fields and facilitating finer‐grained feature extraction. In the MSFF module, a gradient‐based feature importance weighting mechanism dynamically adjusts feature contributions from different hierarchical levels, effectively fusing high‐level abstract semantic information with low‐level spatial details, thereby enhancing cross‐scale feature representation and optimizing both global completeness and local boundary precision in segmentation results. Experimental validation of MSAF was conducted on four publicly available medical image segmentation datasets, including ACDC, FLARE21, and MM‐WHS (MRI and CT modalities), yielding average Dice values of 93.27%, 88.16%, 92.23%, and 91.22%, respectively. These experimental results demonstrate the effectiveness of MSAF in segmenting detailed cardiac structures.
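The Dice values reported above are instances of the standard Dice similarity coefficient. A generic sketch for flat binary masks (not the paper's evaluation code) is:

```python
# Dice similarity coefficient, 2|A ∩ B| / (|A| + |B|), for binary masks.
# A generic metric sketch, not MSAF's evaluation pipeline.

def dice(pred, truth):
    """pred, truth: equal-length sequences of 0/1 labels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # convention: two empty masks agree perfectly
    return 2.0 * inter / total if total else 1.0

print(dice([1, 1, 0, 1], [1, 0, 0, 1]))  # 2*2 / (3+2) = 0.8
```

For 3D volumes the same formula is applied to the flattened voxel masks, usually per anatomical structure and then averaged, which is how per-dataset average Dice figures like those above are produced.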
  • Inter-player data for the prediction of emotional intensity in a multiplayer game

    Brooke, Alexander; Crossley, Matthew; Lloyd, Huw; Cunningham, Stuart; Manchester Metropolitan University; University of Chester (IEEE, 2025-08-19)
    This work assesses the feasibility of predicting emotional intensities for a given player in a testbed multiplayer game, using facial expression data collected from other players in the multiplayer group. Whilst there is significant literature on the utilisation of affect detection to build models of player experience, little research considers the additional data provided from other players in a multiplayer setting, despite the inherently shared experiences that they provide. A dataset describing 24 participants is collected, detailing ten levels of a testbed game, Colour Rush, with data collected describing facial expression activity and responses to the Discrete Emotions Questionnaire. The viability of modelling uncaptured player experiences is tested using artificial neural networks trained on facial expression data from target players, non-target players and a combination of both. Findings indicate that multiplayer data can be beneficial in the prediction of a target player’s emotional responses, although this holds true only in a minority of cases, and for specific groups of players.
  • FireLite: Leveraging Transfer Learning for Efficient Fire Detection in Resource-Constrained Environments

    Hasan, Mahamudul; Al Hossain Prince, Md Maruf; Ansari, Mohammad Samar; Jahan, Sabrina; Musa Miah, Abu Saleh; Shin, Jungpil (arXiv (Cornell University), 2024-12-20)
    Fire hazards are extremely dangerous, particularly in sectors such as the transportation industry, where political unrest increases the likelihood of their occurrence. By employing IP cameras to facilitate the setup of fire detection systems on transport vehicles, losses from fire events may be prevented proactively. However, the computational constraints of the embedded systems within these cameras require the development of lightweight fire detection models. In answer to this difficulty, we introduce “FireLite,” a low-parameter convolutional neural network (CNN) designed for quick fire detection in contexts with limited resources. With an accuracy of 98.77%, our model—which has just 34,978 trainable parameters—achieves remarkable performance numbers. It also shows a validation loss of 8.74 and peaks at 98.77% for precision, recall, and F1-score measures. Because of its precision and efficiency, FireLite is a promising candidate for fire detection in resource-constrained environments.
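To give a sense of scale for the 34,978-parameter figure, the arithmetic below counts trainable parameters for a hypothetical small CNN. The layer shapes are illustrative assumptions, not FireLite's actual architecture; the point is only that a handful of narrow convolutions plus a tiny dense head stays in the tens of thousands of parameters.

```python
# Parameter-count arithmetic for a small CNN. The layer shapes are
# hypothetical -- not FireLite's architecture -- and only illustrate
# why narrow models land in the tens-of-thousands range.

def conv2d_params(in_ch, out_ch, k):
    # each filter has k*k*in_ch weights plus one bias
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_feats, out_feats):
    # fully connected layer: weights plus one bias per output
    return (in_feats + 1) * out_feats

layers = [
    conv2d_params(3, 8, 3),    # 3x3 conv, 3 -> 8 channels
    conv2d_params(8, 16, 3),   # 3x3 conv, 8 -> 16 channels
    conv2d_params(16, 32, 3),  # 3x3 conv, 16 -> 32 channels
    dense_params(32, 16),      # global pooling -> small dense head
    dense_params(16, 2),       # fire / no-fire output
]
print(sum(layers))  # 6594 trainable parameters for this toy stack
```

Swapping in wider layers or extra blocks quickly multiplies this total, which is why embedded deployments push toward architectures of FireLite's size rather than multi-million-parameter backbones.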
  • Strong convergence for efficient full discretization of the stochastic Allen-Cahn equation with multiplicative noise

    Qi, Xiao; Wang, Lihua; Yan, Yubin; Jianghan University; University of Chester (Elsevier, 2025-04-25)
    In this paper, we study the strong convergence of the full discretization based on a semi-implicit tamed approach in time and the finite element method with truncated noise in space for the stochastic Allen-Cahn equation driven by multiplicative noise. The proposed fully discrete scheme is efficient thanks to its low computational complexity and mean-square unconditional stability. The low regularity of the solution due to the multiplicative infinite-dimensional driving noise and the non-global Lipschitz difficulty introduced by the cubic nonlinear drift term make the strong convergence analysis of the fully discrete solution considerably complicated. By constructing an appropriate auxiliary procedure, the full discretization error can be cleverly decomposed, and the spatio-temporal strong convergence order is successfully derived under certain weak assumptions. Numerical experiments are finally reported to validate the theoretical result.
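For orientation, a semi-implicit tamed time step for the Allen-Cahn drift $f(u)=u-u^3$ commonly takes the following shape; this is a sketch of the general taming idea, and the paper's exact scheme, noise truncation, and finite element discretisation may differ:

```latex
u^{n+1} = u^n + \tau\,\Delta u^{n+1}
  + \frac{\tau\,\bigl(u^n - (u^n)^3\bigr)}{1 + \tau\,\bigl|u^n - (u^n)^3\bigr|}
  + g(u^n)\,\Delta W^n
```

The stiff Laplacian term is treated implicitly, while the taming denominator keeps the explicit cubic drift bounded even where $u^n$ is large; this combination is what makes mean-square unconditional stability possible without a global Lipschitz assumption on the drift.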
  • EffUnet-SpaGen: An efficient and spatial generative approach to glaucoma detection

    Adithya, Venkatesh Krishna; Williams, Bryan M.; Czanner, Silvester; Kavitha, Srinivasan; Friedman, David S.; Willoughby, Colin E.; Venkatesh, Rengaraj; Czanner, Gabriela; Aravind Eye Care System; Lancaster University; Liverpool John Moores University; Harvard Medical School; Ulster University (MDPI, 2021-05-30)
    Current research in automated disease detection focuses on making algorithms "slimmer": reducing the need for large training datasets and accelerating recalibration for new data while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, "EffUnet", with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed "SpaGen". We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, "EffUnet-SpaGen", is optimized to significantly reduce the computational burden while at the same time surpassing the current state of the art in glaucoma detection algorithms with AUROC 0.997 and 0.969 in the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
  • Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review

    Coan, Lauren J.; Williams, Bryan M.; Krishna Adithya, Venkatesh; Upadhyaya, Swati; Alkafri, Ala; Czanner, Silvester; Venkatesh, Rengaraj; Willoughby, Colin E.; Kavitha, Srinivasan; Czanner, Gabriela; et al. (Elsevier, 2022-08-17)
    Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review on artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature and pointed at areas for future research.
  • Evaluating the severity of trust to Identity-Management-as-a-Service

    Mpofu, Nkosinathi; Van Staden, Wynand J. C.; University of South Africa; University of Chester (IEEE, 2017-08)
    The benefits of a cloud service have been well documented as reduced cost in both staff and computing infrastructure, rapid deployment, scalability, and location independence. Despite all these benefits, Identity-Management-as-a-Service (IdMaaS) is struggling to gain a market presence due to an array of factors, one of which is trust. In IdMaaS, trust may either be borne within the relationships amongst the actors (relying party, identity manager, identity owner, or end user) or may be actor specific. This paper will focus on trust between the identity owner and the identity manager within the context of third-party identity management. A great effort in identifying trust issues by other researchers is acknowledged; however, they did not go to the extent of measuring the severity of trust specifically related to IdMaaS. Our research shows that availability of the identity management system and security of identities are more critical concerns when compared to the cost of managing identities and fear of vendor lock-in. Above all, the research revealed that trust in IdMaaS is less than 40% at a 95% level of confidence. Establishing the severity of trust and its trusting factors is a valuable input to the refinement of the IdMaaS approach. The success of IdMaaS will add to the domain of anything-as-a-service (XaaS), at the same time opening up an additional entrepreneurial avenue.
  • IoT embedded software manipulation

    Underhill, Paul; University of Chester (2023-03)
    The Internet of Things (IoT) has raised cybersecurity and privacy issues, notably about altering embedded software. This poster investigates the feasibility of using Read-Only Memory (ROM) at a low level to modify IoT devices while remaining undetectable to users and security systems. The study explores the vulnerabilities in embedded code and firmware, which are frequently proprietary and inaccessible, making them challenging to safeguard efficiently. The methodology uses a black-box forensic technique to acquire software, identify functions, and create test cases to assess potential alterations. The findings aim to contribute to a better understanding of IoT security concerns, emphasising the importance of upgraded firmware protection methods. This research highlights the challenges of detecting low-level attacks on IoT devices and provides insights into improving embedded system security.
  • Usability testing of VR reconstructions for museums and heritage sites: A case study from 14th century Chester (UK)

    Southall, Helen; University of Chester (2025-10-22)
    This paper reports research on the usability of a 3D Virtual Reality (VR) model of the interior of St. John’s Church, Chester, as it probably appeared in the 14th Century. A VR visualization was created in Unity, based on archive data and historical records. This was adapted for use with Oculus Quest 2 VR headsets. Participants took part in usability tests of the experience, providing both qualitative and quantitative usability data. Although created with modest time and financial resources, the experience received a good overall usability rating, and numerous positive comments, including from novice VR users. Negative comments mainly related to the experience of wearing a VR headset. This paper concludes by suggesting further work, with thoughts on highly immersive VR in heritage contexts, especially combined with recent developments in generative artificial intelligence.
  • Unlocking trust: Advancing activity recognition in video processing – Say no to bans!

    Yousuf, Muhammad Jehanzaib; Lee, Brian; Asghar, Mamoona Naveed; Ansari, Mohammad Samar; Kanwal, Nadia; Technological University of the Shannon; University of Galway; University of Chester; Keele University (IEEE, 2024-11-20)
    Anonymous activity recognition is pivotal in addressing privacy concerns amidst the widespread use of facial recognition technologies (FRTs). While FRTs enhance security and efficiency, they raise significant privacy issues. Anonymous activity recognition circumvents these concerns by focusing on identifying and analysing activities without individual identification. It preserves privacy while extracting valuable insights and patterns. This approach ensures a balance between security and privacy in surveillance-heavy environments such as public spaces and workplaces. It detects anomalies and suspicious behaviours without compromising individual identities. Moreover, it promotes fairness by avoiding biases inherent in FRTs, thus mitigating discriminatory outcomes. Here we propose a privacy-preserved activity recognition framework to augment facial recognition technologies. The goal of this framework is to provide activity recognition of individuals without violating their privacy. Our approach is based on extracting Regions of Interest (ROI) using YOLOv7-based instance segmentation and selective encryption of ROIs using the AES encryption algorithm. Furthermore, we investigate training deep learning models on privacy-preserved video datasets, utilising the previously mentioned privacy protection scheme. We developed and trained a CNN-LSTM based activity recognition model, achieving a classification accuracy of 94%. The outcomes from training and testing deep learning algorithms on encrypted data illustrate significant classification and detection accuracy, even when dealing with privacy-protected data. Furthermore, we establish the trustworthiness and explainability of our activity recognition model by using Grad-CAM analysis and assessing it against the Assessment List for Trustworthy Artificial Intelligence (ALTAI).
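The selective-encryption idea can be sketched independently of any detector: only pixels inside detected boxes are enciphered, the rest of the frame stays readable. The paper uses AES on YOLOv7-detected regions; in the sketch below a SHA-256 counter-mode keystream stands in for the cipher so the example needs only the standard library, and the frame and boxes are toy data.

```python
import hashlib

# Sketch of selective ROI encryption. The paper encrypts ROIs with
# AES; here a SHA-256 counter-mode keystream is a stand-in cipher so
# the example stays self-contained. Frame and ROI boxes are toy data.

def keystream(key, n):
    """Deterministic byte stream of length n derived from key."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_rois(frame, rois, key):
    """frame: list of rows of 0-255 ints; rois: (top, left, h, w) boxes.
    XORs the keystream over each box; applying it twice decrypts."""
    enc = [row[:] for row in frame]
    for top, left, h, w in rois:
        ks = keystream(key, h * w)
        for i in range(h):
            for j in range(w):
                enc[top + i][left + j] ^= ks[i * w + j]
    return enc

frame = [[10 * r + c for c in range(4)] for r in range(4)]
enc = encrypt_rois(frame, [(1, 1, 2, 2)], b"demo-key")
# XOR with the same keystream restores the original ROI
dec = encrypt_rois(enc, [(1, 1, 2, 2)], b"demo-key")
```

Because XOR with the same keystream is its own inverse, authorised holders of the key can recover the faces while downstream activity-recognition models see only the scrambled regions, mirroring the framework's privacy/utility split.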
  • Immersive haptic simulation for training nurses in emergency medical procedures

    Gutiérrez-Fernández, Alexis; Fernández-Llamas, Camino; Vázquez-Casares, Ana M.; Mauriz, Elba; Riego-del-Castillo, Virginia; John, Nigel W.; University of León; University of Chester (Springer Nature, 2024-01-24)
    The use of haptic simulation for emergency procedures in nursing training presents a viable, versatile and affordable alternative to traditional mannequin environments. In this paper, an evaluation is performed in a virtual environment with a head-mounted display and haptic devices, and also with a mannequin. We focus on a chest decompression, a life-saving invasive procedure used for trauma-associated cardiopulmonary resuscitation (and other causes) that every emergency physician and/or nurse needs to master. Participants’ heart rate and blood pressure were monitored to measure their stress level. In addition, the NASA Task Load Index questionnaire was used. The results confirm the usability of the VR environment and show that it provides a higher level of immersion compared to the mannequin, with no statistically significant difference in terms of cognitive load, although the use of VR is perceived as a more difficult task. We can conclude that the use of haptic-enabled virtual reality simulators has the potential to provide an experience as stressful as the real one while training in a safe and controlled environment.
