Staff within the Department of Computer Science have research interests in Visualization, Interaction & Computer Graphics (with a particular focus on Medical Graphics), Cyber Security and Discrete Optimisation. This collection is licensed under a Creative Commons licence. The collection may be reproduced for non-commercial use and without modification, provided that copyright is acknowledged.

Recent Submissions

  • Correction to: Visualization for epidemiological modelling: challenges, solutions, reflections and recommendations (2022) by Dykes et al.

    Dykes, Jason; Abdul-Rahman, Alfie; Archambault, Daniel; Bach, Benjamin; Borgo, Rita; Chen, Min; Enright, Jessica; Fang, Hui; Firat, Elif E.; Freeman, Euan; et al. (The Royal Society, 2022-09-12)
    In the original version of this article, references 113–120, 123–140 and 143 were incorrectly numbered. This has been corrected on the publisher’s website.
  • Same structures, different settings: exploring computing capital and participation across cultural contexts

    Kunkeler, Thom; Barr, Matthew; Kallia, Maria; Andrei, Oona; Li, Xiaohan; Muncey, Andrew; Nylén, Aletta; Venn-Wycherley, Megan; Uppsala University; University of Glasgow; University of Southampton; University of Chester; Swansea University (Association for Computing Machinery, 2025-11)
    The number of people choosing to study computing in higher education remains low. Previous research developed a research instrument, grounded in Bourdieu’s sociocultural theory, to identify factors underlying student participation. This study replicates and extends the original study, which identified key social, cultural, and psychological factors linked to computing education participation in Sweden. Using the validated research instrument, we distributed a survey across 11 UK universities, gathering responses from 131 students. Through Confirmatory Factor Analysis, we assessed the robustness of the original study’s constructs (career interest, subject-specific interest, influence from family and friends, confidence, and sense of belonging) and their relationship to subject choice in computing. After model refinements, the replication confirmed the factor structure, supporting the stability of these constructs and their relationship to computing subject choice across cultural contexts. In addition, the current study adds open-ended questions to the research instrument to help explain the quantitative results. A thematic analysis further explains the correlation between previous experience, social influence, confidence, and gender, and how that relates to participation in the field. By replicating and extending the original study’s methodology, this research evaluates the reliability and generalisability of its conclusions, contributing to the evidence base needed to design interventions that broaden participation in computing education.
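    The abstract does not reproduce the survey items or model syntax, so the sketch below is only an illustration of how a confirmatory factor analysis of the five named constructs could be specified in Python with the semopy library (one plausible tool, not necessarily the one used); the item names and the file survey_responses.csv are invented placeholders.

      # Hypothetical sketch: CFA of the five constructs named in the abstract,
      # using semopy's lavaan-style syntax. Item names are invented placeholders.
      import pandas as pd
      import semopy

      model_desc = """
      career      =~ career_1 + career_2 + career_3
      interest    =~ interest_1 + interest_2 + interest_3
      influence   =~ influence_1 + influence_2 + influence_3
      confidence  =~ confidence_1 + confidence_2 + confidence_3
      belonging   =~ belonging_1 + belonging_2 + belonging_3
      """

      data = pd.read_csv("survey_responses.csv")   # hypothetical file of Likert-scale items
      model = semopy.Model(model_desc)
      model.fit(data)
      print(semopy.calc_stats(model).T)            # fit indices such as CFI and RMSEA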
  • The effect of multiplayer game modes on inter-player data for player experience modelling

    Brooke, Alexander; Crossley, Matthew; Lloyd, Huw; Cunningham, Stuart; Manchester Metropolitan University; University of Chester (IEEE, 2025-09-25)
    Research into social compliance, emotional contagion and behavioural synchronicity shows promise for various avenues of work concerning human-computer interaction, and a wider understanding of emotion. Despite their relevance, few studies have applied findings from these domains to player experience modelling in a multiplayer game, itself an area with applications in entertainment, education and healthcare. Further, of the little work making use of inter-player data to model aspects of player experience, none considers the differences that may be found across common multiplayer game modes. This work therefore makes use of data collected across players in a series of common multiplayer game modes, considering the utility of inter-player data for predictive modelling using artificial neural networks in each. Results suggest that models of players' experiences, expressed as discrete emotion intensities, are best built from the players' own facial expressions in nearly all circumstances; beyond this, facial expression data from team-based and competitive game modes shows the greatest promise. Considering the additional data separations available to team-based gameplay, we find that data collected from players on an opposing team shows greater utility for prediction of target player experience than data collected from a player on the same team. On this basis, we make suggestions for the most applicable avenues for future research into the utilisation of inter-player data for emotional modelling.
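    As an illustration of the kind of predictive modelling described (not the authors' actual architecture or features), the sketch below maps facial-expression features from a target player and one other player to a single discrete-emotion intensity with a small feed-forward network in PyTorch; the feature dimension and data are invented.

      # Hypothetical sketch: a small feed-forward regressor mapping facial-expression
      # features (e.g. action-unit activations) from a target player and one other
      # player to an emotion-intensity rating. Feature sizes are invented.
      import torch
      import torch.nn as nn

      N_FEATURES_PER_PLAYER = 17          # assumption: one value per facial action unit
      model = nn.Sequential(
          nn.Linear(2 * N_FEATURES_PER_PLAYER, 64),
          nn.ReLU(),
          nn.Linear(64, 32),
          nn.ReLU(),
          nn.Linear(32, 1),               # predicted intensity of one discrete emotion
      )

      target_feats = torch.rand(8, N_FEATURES_PER_PLAYER)   # batch of 8 samples
      other_feats = torch.rand(8, N_FEATURES_PER_PLAYER)    # same-team or opposing player
      pred = model(torch.cat([target_feats, other_feats], dim=1))
      loss = nn.functional.mse_loss(pred, torch.rand(8, 1)) # questionnaire ratings as targets
      loss.backward()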
  • Study on modelling and optimising controllers for heating systems in buildings

    Counsell, John; Yang, Bin; Downing, Cameron P. D. (University of Chester, 2025-08-18)
    The UK, like the rest of the world, is working toward net-zero carbon emissions by 2050. A significant contributor to national emissions (17%) is domestic space heating, making efficient building design and retrofitting crucial. This thesis presents methodologies for simulating, controlling, and analysing domestic heating systems to reduce energy use while maintaining thermal comfort. The core framework, Inverse Dynamics-based Energy Assessment and Simulation (IDEAS), was enhanced into IDEAS+ to better align with the UK’s Standard Assessment Procedure (SAP) for building regulations. Key improvements include:
      • A new thermal comfort algorithm for more accurate modelling of human heat perception and support for niche heating systems.
      • Dynamic free heat gain calculations, improving precision and SAP compliance.
      • An updated optimum start method to optimize heating schedules based on system capacity.
    IDEAS+ was first calibrated using direct electric heating, then applied to Gas Condensing Boilers (GCBs) and Air Source Heat Pumps (ASHPs). To further reduce emissions, optimizing control systems for dynamic energy markets was explored. Traditional optimization methods are slow, requiring iterative simulations. Instead, this thesis introduces a new method, OPTimal Inverse Control (OPTIC), which embeds cost-function optimization directly into IDEAS+’s inverse dynamics control. This allows real-time optimization alongside system operation, improving performance. OPTIC was tested with battery storage, dynamically adjusting to fluctuating energy prices and carbon intensity while ensuring thermal comfort. Two case studies demonstrated simulation-based and practice-based applications. One was a block of flats in Eastbourne, modelled in IDEAS+ for retrofit analysis and then simulated with a heat pump network and OPTIC-controlled storage in a simulation-only study. The other was a commercial property with PV arrays, heat pumps, and battery storage, controlled via an industrial PC running OPTIC-based C++/C# code at the property in a practical deployment. These methods provide scalable solutions for reducing emissions in residential and commercial heating systems, supporting the UK’s net-zero targets.
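    The thesis's IDEAS+/OPTIC formulation is not given in the abstract; the toy sketch below only illustrates the underlying idea of cost-aware battery dispatch, trading off a hypothetical tariff and carbon-intensity signal against a fixed heating demand. All numbers and the simple median-threshold rule are invented for illustration.

      # Illustrative sketch only (not the thesis's OPTIC formulation): choosing a
      # battery charge/discharge action each period to minimise a weighted cost of
      # electricity price and carbon intensity while serving a fixed heating demand.
      import numpy as np

      price = np.array([0.12, 0.30, 0.30, 0.10])       # £/kWh, hypothetical tariff
      carbon = np.array([150, 300, 280, 120])           # gCO2/kWh, hypothetical intensity
      demand = np.array([1.0, 2.0, 2.0, 1.0])           # kWh of heating electricity needed
      w_price, w_carbon = 1.0, 0.001                    # weighting of the two objectives

      capacity, soc = 4.0, 2.0                          # battery size and state of charge (kWh)
      threshold = np.median(w_price * price + w_carbon * carbon)
      for t in range(len(demand)):
          cost = w_price * price[t] + w_carbon * carbon[t]
          # discharge when the instantaneous cost is high, recharge when it is low
          if cost > threshold:
              use = min(soc, demand[t])
              soc -= use
              grid = demand[t] - use
          else:
              charge = min(capacity - soc, 1.0)
              soc += charge
              grid = demand[t] + charge
          print(f"t={t}: grid import {grid:.2f} kWh, state of charge {soc:.2f} kWh")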
  • Designing Value-Aligned Traffic Agents through Conflict Sensitivity

    Rakow, Astrid; Collenette, Joe; Schwammberger, Maike; Slavkovik, Marija; Vaz Alves, Gleifer; German Aerospace Center, Institute of Systems Engineering for Future Mobility; University of Chester; Karlsruhe Institute of Technology (KIT); Universidade Tecnologica Federal do Paraná (ArXiv, 2025-07-25)
    Autonomous traffic agents (ATAs) are expected to act in ways that are not only safe, but also aligned with stakeholder values across legal, social, and moral dimensions. In this paper, we adopt an established formal model of conflict from epistemic game theory to support the development of such agents. We focus on value conflicts, situations in which agents face competing goals rooted in value-laden contexts, and show how conflict analysis can inform key phases of the design process. This includes value elicitation, capability specification, explanation, and adaptive system refinement. We elaborate and apply the concept of Value-Aligned Operational Design Domains (VODDs) to structure autonomy in accordance with contextual value priorities. Our approach shifts the emphasis from solving moral dilemmas at runtime to anticipating and structuring value-sensitive behaviour during development.
  • Designing Value-Aligned Traffic Agents through Conflict Sensitivity

    Rakow, Astrid; Collenette, Joe; Schwammberger, Maike; Slavkovik, Marija; Vaz Alves, Gleifer; German Aerospace Center, Institute of Systems Engineering for Future Mobility; University of Chester; Karlsruhe Institute of Technology (KIT); Universidade Tecnologica Federal do Paraná (Springer Nature, 2025)
    Autonomous traffic agents (ATAs), automated systems with a high level of autonomy in traffic environments, must not only guarantee safety but also act in accordance with legal, social, and moral values. In this short version, we adopt the epistemic game-theoretic conflict model of Damm et al. to characterise value conflicts: situations where competing, value-laden goals cannot all be satisfied. As a means of aligning the decision-making of an ATA with stakeholder preferences, we introduce Value-Aligned Operational Design Domains (VODDs). They represent autonomous decision-making scopes that guide an agent's conflict resolution and specify handover rules.
  • From Data-Compliance to Model-Introspection: Challenges in AV Rule Compliance Monitoring

    Rakow, Astrid; Gil Gasiola, Gustavo; Collenette, Joe; Grundt, Dominik; Möhlmann, Eike; Schwammberger, Maike; German Aerospace Center, Institute of Systems Engineering for Future Mobility; Karlsruhe Institute of Technology; University of Chester (IEEE, 2025)
    Autonomous vehicles (AVs) are expected to comply with traffic laws, ensure safety, and provide transparent explanations of their decisions. Achieving these goals requires monitoring architectures that process large volumes of sensor, control, and contextual data. While real-time perception and decision-making are functionally indispensable, storing and using this data for auditing or improvement raises unresolved legal and technical challenges. Data protection regulations such as the GDPR mandate that personal data processing be limited to what is strictly necessary for specified purposes (Art. 5(1)(b), (c), and (e)). Yet, in practice, what counts as “necessary” remains ambiguous. This tension gives rise to the data-justification gap: the lack of systematic methods to determine which logged data is both sufficient to support compliance assessments and minimal under data protection constraints. At the same time, aligning formalized rules with their legal intent poses a separate but interrelated challenge: the alignment problem. Legal norms are often ambiguous or context-dependent, and existing monitoring frameworks rarely guarantee that formal specifications faithfully reflect legal meaning. This paper outlines a research agenda for bridging these gaps. We propose an integrated approach combining formal methods, legal reasoning, and runtime monitoring to develop data-justification frameworks. Such frameworks would enable developers to generate interpretable rule formalizations, synthesize minimally sufficient monitors, and justify data collection in a transparent and legally defensible manner.
  • Efficient Spectrum Sharing in Cognitive Radio Networks With NOMA Using Computational Intelligence

    Sultan, Kiran; University of Chester (Wiley, 2025-09-09)
    The integration of Cognitive Radio Networks (CRNs) with Non-Orthogonal Multiple Access (NOMA) offers great potential for improving spectral efficiency in 5G and Beyond-5G (B5G) networks. This study proposes an efficient spectrum-sharing technique for dual-hop CRNs using NOMA, optimized by an Improved Artificial Bee Colony (IABC) algorithm and guided by a Single Input Single Output Fuzzy Rule-Based System (SISO-FRBS). In this setup, a distant primary transmitter communicates with the primary receiver via a secondary NOMA relay. The objective is to maximize the sum data rate of secondary users (SUs) while minimizing total transmission power. The SISO-FRBS enhances the IABC search process by dynamically guiding the search agents, improving both optimization quality and convergence. Simulation results show that the proposed scheme achieves the primary data rate benchmark of 5 bit/s/Hz at a transmit power of 19 mW, compared to 23 mW with traditional ABC, a 19.04% improvement in power efficiency.
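    As background for readers unfamiliar with the optimiser, the sketch below shows a stripped-down Artificial Bee Colony loop (employed-bee phase only) maximising a toy objective in Python; the paper's NOMA sum-rate objective, the SISO-FRBS guidance and the IABC improvements are not reproduced, and all parameters are invented.

      # Hypothetical sketch of a basic Artificial Bee Colony (ABC) loop maximising a
      # toy objective; the real objective would be a sum rate with a power penalty.
      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):
          # stand-in objective with a known optimum at x = 0.3; invented for illustration
          return -np.sum((x - 0.3) ** 2)

      n_sources, dim, iters = 10, 4, 200
      foods = rng.uniform(0, 1, size=(n_sources, dim))   # candidate power allocations
      fitness = np.array([objective(f) for f in foods])

      for _ in range(iters):
          for i in range(n_sources):                     # employed-bee phase
              k = rng.integers(n_sources)                # random partner food source
              j = rng.integers(dim)                      # random dimension to perturb
              candidate = foods[i].copy()
              candidate[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
              candidate = np.clip(candidate, 0, 1)
              if objective(candidate) > fitness[i]:      # greedy replacement
                  foods[i], fitness[i] = candidate, objective(candidate)

      print("best solution:", foods[np.argmax(fitness)], "fitness:", fitness.max())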
  • MSAF: A cardiac 3D image segmentation network based on Multiscale Collaborative Attention and Multiscale Feature Fusion

    Zhang, Guodong; Li, He; Xie, Wanying; Yang, Bin; Gong, Zhaoxuan; Guo, Wei; Ju, Ronghui; Shenyang Aerospace University; University of Chester; The People's Hospital of Liaoning Province (Wiley, 2025-08-21)
    Accurate segmentation of cardiac structures is essential for clinical diagnosis and treatment of cardiovascular diseases. Existing Transformer‐based cardiac segmentation methods mostly rely on single‐scale token‐wise attention mechanisms that emphasize global feature modeling but lack sufficient sensitivity to local spatial structures, such as myocardial boundaries in cardiac 3D images. The result is ineffective multiscale feature capture and a loss of local spatial detail, which degrades the accuracy of cardiac anatomical segmentation. To address these issues, this paper proposes a cardiac 3D image segmentation network named MSAF, which integrates Multiscale Collaborative Attention (MSCA) and Multiscale Feature Fusion (MSFF) modules to enhance multiscale feature perception at both microscopic and macroscopic levels, thereby improving segmentation accuracy for complex cardiac structures. Within the MSCA module, a Collaborative Attention (CoA) module combined with hierarchical residual‐like connections enables the model to capture interactive information across spatial and channel dimensions at various receptive fields, facilitating finer‐grained feature extraction. In the MSFF module, a gradient‐based feature importance weighting mechanism dynamically adjusts feature contributions from different hierarchical levels, fusing high‐level abstract semantic information with low‐level spatial details; this enhances cross‐scale feature representation and improves both global completeness and local boundary precision in the segmentation results. MSAF was validated on four publicly available medical image segmentation datasets: ACDC, FLARE21, and MM‐WHS (MRI and CT modalities), yielding average Dice values of 93.27%, 88.16%, 92.23%, and 91.22%, respectively. These results demonstrate the effectiveness of MSAF in segmenting detailed cardiac structures.
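    The MSCA and MSFF modules are not specified in the abstract; the PyTorch sketch below illustrates only the generic multiscale idea they build on: parallel 3D convolutions at several receptive fields, fused through a learned channel gate with a residual connection. Channel counts and the gating design are assumptions for illustration.

      # Hypothetical sketch of the general multiscale idea (not the paper's MSCA/MSFF
      # modules): parallel 3D convolutions at three dilation rates, fused with a
      # squeeze-and-excitation style channel gate and a residual connection.
      import torch
      import torch.nn as nn

      class MultiScaleBlock3D(nn.Module):
          def __init__(self, channels):
              super().__init__()
              self.branches = nn.ModuleList([
                  nn.Conv3d(channels, channels, kernel_size=3, padding=d, dilation=d)
                  for d in (1, 2, 4)                      # three receptive-field scales
              ])
              self.attn = nn.Sequential(                  # learned per-channel gate
                  nn.AdaptiveAvgPool3d(1),
                  nn.Conv3d(3 * channels, 3 * channels, kernel_size=1),
                  nn.Sigmoid(),
              )
              self.fuse = nn.Conv3d(3 * channels, channels, kernel_size=1)

          def forward(self, x):
              multi = torch.cat([b(x) for b in self.branches], dim=1)
              return self.fuse(multi * self.attn(multi)) + x   # residual connection

      feats = torch.rand(1, 16, 8, 32, 32)                # (batch, channels, D, H, W)
      print(MultiScaleBlock3D(16)(feats).shape)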
  • Inter-player data for the prediction of emotional intensity in a multiplayer game

    Brooke, Alexander; Crossley, Matthew; Lloyd, Huw; Cunningham, Stuart; Manchester Metropolitan University; University of Chester (IEEE, 2025-08-19)
    This work assesses the feasibility of predicting emotional intensities for a given player in a testbed multiplayer game, using facial expression data collected from other players in the multiplayer group. Whilst there is significant literature on the utilisation of affect detection to build models of player experience, little research considers the additional data provided from other players in a multiplayer setting, despite the inherently shared experiences that they provide. A dataset describing 24 participants is collected, detailing ten levels of a testbed game, Colour Rush, with data collected describing facial expression activity and responses to the Discrete Emotions Questionnaire. The viability of modelling uncaptured player experiences is tested using artificial neural networks trained on facial expression data from target players, non-target players and a combination of both. Findings indicate that multiplayer data can be beneficial in the prediction of a target player’s emotional responses, although this holds true only in a minority of cases, and for specific groups of players.
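    A minimal sketch of the comparison structure described (target-only, other-players-only, and combined inputs) is given below using scikit-learn; the random placeholder arrays stand in for the Colour Rush facial-expression and questionnaire data, and the network size is an arbitrary choice.

      # Hypothetical sketch: train the same small network on three input
      # configurations and compare held-out error. Data is random placeholder.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(1)
      target_x = rng.random((240, 17))                 # facial features of the target player
      others_x = rng.random((240, 17))                 # features from other players
      y = rng.random(240)                              # questionnaire emotion intensity

      for name, X in [("target only", target_x),
                      ("others only", others_x),
                      ("combined", np.hstack([target_x, others_x]))]:
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
          model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
          model.fit(X_tr, y_tr)
          print(name, mean_absolute_error(y_te, model.predict(X_te)))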
  • Sensorimotor Synchronisation and Entrainment in Musical Timekeeping: Metronome Configurations and Preliminary Implications for Music Education

    Woolley, Jason; Cunningham, Stuart; Owens, Steffan Rhys (University of Chester, 2024-01)
    Timekeeping whilst playing music is a skill all musicians, especially drummers, require. Following a review of the literature on this subject, this thesis explores methods of measuring timekeeping accuracy of individuals and groups, and offers recommendations for approaches for future training. Using equipment and techniques accessible to musicians and non-musicians alike, the researcher has investigated how individual timekeeping, specifically the measure of Inter-tap Interval (ITI), is influenced by the presence, absence, and reintroduction of metronomes of various designs. The thesis also investigates how the influence of these differing metronome states interact with tempo and with the type of metronome (audio and visual). Similarly, the dynamics of group timekeeping and the interaction (or entrainment) between individuals in the group is also investigated. Participants were asked to report their perception of their individual performances under the different conditions of the experiments. The results show that tempo influenced the accuracy of timekeeping and the presence, absence and reintroduction of the metronome also had effects on accuracy. Individuals thought their timekeeping to be more accurate when the metronome was present and that they performed better as individuals as opposed to being part of a group. Detailed analysis of the results showed that the reintroduction of the metronome proved to have a significant effect on average ITI produced by participants, as did tempo. Metronome type had no significant influence on ITI in an individual or group setting. In the conclusion of the thesis, the author provides recommendations for future assessment and training of musicians in the skill of timekeeping, with respect to the measure of ITI.
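    For readers unfamiliar with the measure, the short sketch below shows how Inter-tap Interval (ITI) and its deviation from the nominal metronome interval can be computed from tap timestamps; the timestamps and tempo are invented, not taken from the thesis's experiments.

      # Illustrative sketch: computing Inter-tap Intervals (ITIs) from tap timestamps
      # and comparing them with the nominal metronome interval.
      import numpy as np

      tempo_bpm = 120
      nominal_interval = 60.0 / tempo_bpm               # 0.5 s between beats
      tap_times = np.array([0.00, 0.49, 1.01, 1.52, 1.98, 2.51])  # seconds (invented)

      itis = np.diff(tap_times)                         # interval between successive taps
      print("mean ITI:", itis.mean())
      print("mean absolute deviation from the beat:", np.abs(itis - nominal_interval).mean())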
  • Sustainable manufacturing of a Conformal Load-bearing Antenna Structure (CLAS) using advanced printing technologies and fibre-reinforced composites for aerospace applications

    Powell-Turner, Julieanna; Hu, Yanting; Xie, PengHeng (University of Chester, 2025-01)
    Conformal load-bearing antenna structures (CLAS) offer significant advantages in aerospace by reducing drag and weight through highly integrated designs. However, challenges remain in manufacturing, as traditional PCB methods create discontinuous arrays, while directly printed antennas on flexible substrates often lack mechanical strength. Additionally, neither approach integrates well with fibre-reinforced composites, which are widely used in modern aircraft. To address this, the next generation of CLAS must employ continuous surface substrates to maintain aerodynamic profiles and embed antenna systems within composite structures. This research introduces an innovative CLAS manufacturing method that integrates inkjet-printed silver nanoparticle antennas with composite fabrication. The antenna is printed onto Kapton film, which is then co-cured with woven glass fibre composites to ensure mechanical robustness and compatibility with aerospace materials. Flat and 100 mm curvature samples were fabricated to investigate electromagnetic performance, with curvature effects analysed. Results confirm that the proposed method achieves both reliability and sustainability, producing smoothly curved CLAS with embedded antenna elements. However, frequency shifts and impedance mismatches were observed, attributed to discrepancies in dielectric constants and substrate volume variations. The conformality study revealed that curvature lowers resonant frequencies due to extended effective electric fields. This research establishes a promising CLAS fabrication approach, integrating sustainable printing with composites. The findings provide a benchmark for future conformal antenna studies and support industry-level advancements in high-integration aerospace antenna systems.
  • FireLite: Leveraging Transfer Learning for Efficient Fire Detection in Resource-Constrained Environments

    Hasan, Mahamudul; Al Hossain Prince, Md Maruf; Ansari, Mohammad Samar; Jahan, Sabrina; Musa Miah, Abu Saleh; Shin, Jungpil (arXiv (Cornell University), 2024-12-20)
    Fire hazards are extremely dangerous, particularly in sectors such as the transportation industry, where political unrest increases the likelihood of their occurrence. By employing IP cameras to facilitate the setup of fire detection systems on transport vehicles, losses from fire events may be prevented proactively. However, the computational constraints of the embedded systems within these cameras require the development of lightweight fire detection models. In answer to this difficulty, we introduce "FireLite", a low-parameter convolutional neural network (CNN) designed for quick fire detection in contexts with limited resources. With an accuracy of 98.77%, our model, which has just 34,978 trainable parameters, achieves remarkable performance numbers. It also shows a validation loss of 8.74 and peaks at 98.77 for the precision, recall, and F1-score measures. Because of its precision and efficiency, FireLite is a promising candidate for fire detection in resource-constrained environments.
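    The published FireLite architecture is not reproduced here; the sketch below merely illustrates a transfer-learning setup of the same general kind, with a frozen ImageNet backbone (MobileNetV2, an assumed choice) and a small trainable head, so the trainable-parameter count will differ from the 34,978 reported.

      # Hypothetical sketch of a transfer-learning fire/no-fire classifier with a
      # frozen ImageNet backbone and a small trainable head; not FireLite itself.
      import tensorflow as tf

      backbone = tf.keras.applications.MobileNetV2(
          include_top=False, weights="imagenet",
          input_shape=(224, 224, 3), pooling="avg")
      backbone.trainable = False                         # only the head is trained

      model = tf.keras.Sequential([
          backbone,
          tf.keras.layers.Dense(26, activation="relu"),  # small head keeps trainable params low
          tf.keras.layers.Dropout(0.2),
          tf.keras.layers.Dense(1, activation="sigmoid"),# fire vs. no fire
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.summary()                                    # reports the trainable-parameter count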
  • Strong convergence for efficient full discretization of the stochastic Allen-Cahn equation with multiplicative noise

    Qi, Xiao; Wang, Lihua; Yan, Yubin; Jianghan University; University of Chester (Elsevier, 2025-04-25)
    In this paper, we study the strong convergence of the full discretization based on a semi-implicit tamed approach in time and the finite element method with truncated noise in space for the stochastic Allen-Cahn equation driven by multiplicative noise. The proposed fully discrete scheme is efficient thanks to its low computational complexity and mean-square unconditional stability. The low regularity of the solution due to the multiplicative infinite-dimensional driving noise and the non-global Lipschitz difficulty introduced by the cubic nonlinear drift term make the strong convergence analysis of the fully discrete solution considerably complicated. By constructing an appropriate auxiliary procedure, the full discretization error can be cleverly decomposed, and the spatio-temporal strong convergence order is successfully derived under certain weak assumptions. Numerical experiments are finally reported to validate the theoretical result.
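    The abstract does not state the scheme explicitly; as a rough orientation only, a generic tamed semi-implicit time step for the Allen-Cahn drift f(u) = u - u^3 on a finite element space takes a form along the following lines, where A_h is the discrete Laplacian, P_h the L^2-projection onto the finite element space, \tau the time step and \Delta W^n the truncated noise increment (all of this is illustration notation; the paper's actual scheme and assumptions may differ):

      \[
        X_h^{n+1} = X_h^n
          + \tau \, A_h X_h^{n+1}
          + \frac{\tau \, P_h f(X_h^n)}{1 + \tau \, \lVert f(X_h^n) \rVert}
          + P_h \, g(X_h^n) \, \Delta W^n ,
        \qquad f(u) = u - u^3 .
      \]

    The taming factor 1/(1 + \tau \lVert f(X_h^n) \rVert) keeps the explicit cubic contribution bounded over a single step, which is the usual device for drifts that are not globally Lipschitz, while the linear part is treated implicitly for stability.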
  • EffUnet-SpaGen: An efficient and spatial generative approach to glaucoma detection

    Adithya, Venkatesh Krishna; Williams, Bryan M.; Czanner, Silvester; Kavitha, Srinivasan; Friedman, David S.; Willoughby, Colin E.; Venkatesh, Rengaraj; Czanner, Gabriela; Aravind Eye Care System; Lancaster University; Liverpool John Moores University; Harvard Medical School; Ulster University (MDPI, 2021-05-30)
    Current research in automated disease detection focuses on making algorithms "slimmer", reducing the need for large training datasets and accelerating recalibration for new data, while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, "EffUnet", with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed "SpaGen". We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, "EffUnet-SpaGen", is optimized to significantly reduce the computational burden while surpassing the current state of the art in glaucoma detection algorithms, with AUROC of 0.997 and 0.969 on the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
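    EffUnet and SpaGen themselves are not described in enough detail in the abstract to reproduce; the sketch below only illustrates the two-phase principle (segment the cup and disc, then classify from the segmented geometry) using synthetic circular masks and a vertical cup-to-disc ratio with an arbitrary 0.6 decision threshold, all of which are invented for illustration.

      # Illustrative sketch of the two-phase idea only (segment, then classify from
      # the segmented geometry); it does not reproduce EffUnet or SpaGen.
      import numpy as np

      def disc_and_cup_masks(size=128, disc_r=40, cup_r=22):
          # synthetic circular "segmentations" standing in for real model output
          yy, xx = np.mgrid[:size, :size]
          dist = np.hypot(yy - size // 2, xx - size // 2)
          return dist < disc_r, dist < cup_r

      def vertical_extent(mask):
          rows = np.where(mask.any(axis=1))[0]
          return rows.max() - rows.min() + 1

      disc, cup = disc_and_cup_masks()
      vcdr = vertical_extent(cup) / vertical_extent(disc)   # vertical cup-to-disc ratio
      print("vCDR:", round(vcdr, 2),
            "-> flag for review" if vcdr > 0.6 else "-> within typical range")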
  • Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review

    Coan, Lauren J.; Williams, Bryan M.; Krishna Adithya, Venkatesh; Upadhyaya, Swati; Alkafri, Ala; Czanner, Silvester; Venkatesh, Rengaraj; Willoughby, Colin E.; Kavitha, Srinivasan; Czanner, Gabriela; et al. (Elsevier, 2022-08-17)
    Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
  • Evaluating the severity of trust to Identity-Management-as-a-Service

    Mpofu, Nkosinathi; Van Staden, Wynand J. C.; University of South Africa; University of Chester (IEEE, 2017-08)
    The benefits of a cloud service have been well documented: reduced cost in both staff and computing infrastructure, rapid deployment, scalability, and location independence. Despite all these benefits, Identity-Management-as-a-Service (IdMaaS) is struggling to gain a market presence due to an array of factors, one of which is trust. In IdMaaS, trust may either be borne within the relationships amongst the actors (relying party, identity manager, identity owner, or end user) or may be actor specific. This paper will focus on trust between the identity owner and the identity manager within the context of third-party identity management. The great effort by other researchers in identifying trust issues is acknowledged; however, they did not go to the extent of measuring the severity of trust specifically related to IdMaaS. Our research shows that availability of the identity management system and security of identities are more critical concerns when compared to the cost of managing identities and fear of vendor lock-in. Above all, the research revealed that trust in IdMaaS is less than 40% at a 95% level of confidence. Establishing the severity of trust and its trusting factors is a more valuable input to the refinement of the IdMaaS approach. The success of IdMaaS will add to the domain of anything-as-a-service (XaaS), at the same time opening up an additional entrepreneurial avenue.
  • IoT embedded software manipulation

    Underhill, Paul; University of Chester (2023-03)
    The Internet of Things (IoT) has raised cybersecurity and privacy issues, notably about altering embedded software. This poster investigates the feasibility of using Read-Only Memory (ROM) at a low level to modify IoT devices while remaining undetectable to users and security systems. The study explores the vulnerabilities in embedded code and firmware, which are frequently proprietary and inaccessible, making them challenging to safeguard efficiently. The methodology uses a black-box forensic technique to acquire software, identify functions, and create test cases to assess potential alterations. The findings aim to contribute to a better understanding of IoT security concerns, emphasising the importance of upgraded firmware protection methods. This research highlights the challenges of detecting low-level attacks on IoT devices and provides insights into improving embedded system security.
  • Usability testing of VR reconstructions for museums and heritage sites: A case study from 14th century Chester (UK)

    Southall, Helen; University of Chester (2025-10-22)
    This paper reports research on the usability of a 3D Virtual Reality (VR) model of the interior of St. John’s Church, Chester, as it probably appeared in the 14th Century. A VR visualization was created in Unity, based on archive data and historical records. This was adapted for use with Oculus Quest 2 VR headsets. Participants took part in usability tests of the experience, providing both qualitative and quantitative usability data. Although created with modest time and financial resources, the experience received a good overall usability rating, and numerous positive comments, including from novice VR users. Negative comments mainly related to the experience of wearing a VR headset. This paper concludes by suggesting further work, with thoughts on highly immersive VR in heritage contexts, especially combined with recent developments in generative artificial intelligence.
  • Building Decompanion: A step towards standardisation and the enhancement of inter- and trans-disciplinary research in forensic taphonomy

    Tynan, Verity Paige (University of Chester; Wrexham University, 2025-01)
    This thesis introduces Decompanion, an innovative online platform designed to standardise and enhance inter- and trans-disciplinary research within the field of forensic taphonomy. Forensic taphonomy, a subfield of forensic science, focuses on understanding postmortem processes to aid legal investigations. Despite its importance, the field faces significant challenges, including a lack of standardised methodologies and terminologies, limited interdisciplinary collaboration, and insufficient data sharing. This research addresses these challenges by developing a tool that standardises forensic taphonomy practices, integrates emerging technologies, and fosters global collaboration. The study employs a mixed-methods approach, combining empirical research with analytical techniques to assess the need for and impact of Decompanion. Key findings demonstrate the tool's potential to significantly improve the consistency and reliability of forensic taphonomy data by standardising methodologies and terminologies across the field. Additionally, the integration of advanced technologies such as 3D scanning and Forward Looking Infrared imaging within Decompanion has the potential to enhance the accuracy and efficiency of data collection and analysis, offering new insights into decomposition processes. A contribution of this thesis is the focus on decomposition in a humid temperate climate, specifically within the context of the United Kingdom. The research documents and analyses decomposition using pig carcasses as human analogues, capturing high-resolution data through advanced imaging technologies. This regional focus fills a critical gap in the literature, providing essential baseline data for forensic investigations in similar climatic regions. Moreover, the thesis underscores the importance of interdisciplinary collaboration in advancing forensic taphonomy. Decompanion facilitates the sharing of research designs, protocols, and data, promoting a more cohesive and integrated approach to forensic investigations. The platform's user base, which reached six of the seven continents within just four weeks of its launch, demonstrates its global relevance and the widespread need for such a tool. Despite its significant contributions, the study acknowledges certain limitations, including the geographical specificity of the research and the challenges associated with using pig carcasses as human analogues. Future work is recommended to expand on the study by comparing different climates, incorporating human cadavers, and integrating more advanced technological tools such as machine learning algorithms. This thesis fills critical gaps in forensic taphonomy, offering practical solutions to longstanding challenges in the field. Decompanion not only sets a new standard for data standardisation and interdisciplinary collaboration but also serves as a valuable resource for forensic researchers and practitioners worldwide. The research has far-reaching implications for both the academic community and policy within forensic investigations.
