Computer Science
Staff within the Department of Computer Science have research interests in Visualization, Interaction & Computer Graphics (with a particular focus on Medical Graphics), Cyber Security and Discrete Optimisation. This collection is licensed under a Creative Commons licence. The collection may be reproduced for non-commercial use and without modification, provided that copyright is acknowledged.
Recent Submissions
-
Sustainable manufacturing of a Conformal Load-bearing Antenna Structure (CLAS) using advanced printing technologies and fibre-reinforced composites for aerospace applications

Conformal load-bearing antenna structures (CLAS) offer significant advantages in aerospace by reducing drag and weight through highly integrated designs. However, challenges remain in manufacturing, as traditional PCB methods create discontinuous arrays, while directly printed antennas on flexible substrates often lack mechanical strength. Additionally, neither approach integrates well with fibre-reinforced composites, which are widely used in modern aircraft. To address this, the next generation of CLAS must employ continuous surface substrates to maintain aerodynamic profiles and embed antenna systems within composite structures. This research introduces an innovative CLAS manufacturing method that integrates inkjet-printed silver nanoparticle antennas with composite fabrication. The antenna is printed onto Kapton film, which is then co-cured with woven glass fibre composites to ensure mechanical robustness and compatibility with aerospace materials. Flat and 100 mm curvature samples were fabricated to investigate electromagnetic performance, with curvature effects analysed. Results confirm that the proposed method achieves both reliability and sustainability, producing smoothly curved CLAS with embedded antenna elements. However, frequency shifts and impedance mismatches were observed, attributed to discrepancies in dielectric constants and substrate volume variations. The conformality study revealed that curvature lowers resonant frequencies due to extended effective electric fields. This research establishes a promising CLAS fabrication approach, integrating sustainable printing with composites. The findings provide a benchmark for future conformal antenna studies and support industry-level advancements in high-integration aerospace antenna systems.
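As background to the frequency-shift findings, the resonant frequency of a patch-type radiator follows a standard textbook relation (not taken from the paper itself) in which lengthening the effective electrical path or raising the effective dielectric constant lowers the resonance:

```latex
% Textbook resonance relation for a rectangular patch element (not from the
% paper): increasing the effective length L_eff or the effective dielectric
% constant eps_eff lowers the resonant frequency f_r, which is why curvature
% (longer effective field paths) and dielectric mismatch both shift it down.
\[
  f_r \approx \frac{c}{2 L_{\mathrm{eff}} \sqrt{\varepsilon_{\mathrm{eff}}}}
\]
```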
-
FireLite: Leveraging Transfer Learning for Efficient Fire Detection in Resource-Constrained Environments

Fire hazards are extremely dangerous, particularly in sectors such as the transportation industry, where political unrest increases the likelihood of their occurrence. By employing IP cameras to facilitate the setup of fire detection systems on transport vehicles, losses from fire events may be prevented proactively. However, the computational constraints of the embedded systems within these cameras require the development of lightweight fire detection models. In response to this challenge, we introduce "FireLite", a low-parameter convolutional neural network (CNN) designed for quick fire detection in contexts with limited resources. With an accuracy of 98.77%, our model, which has just 34,978 trainable parameters, achieves remarkable performance. It also shows a validation loss of 8.74 and reaches 98.77% for precision, recall, and F1-score. Owing to its precision and efficiency, FireLite is a promising candidate for fire detection in resource-constrained environments.
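The abstract does not give FireLite's layer-by-layer design, but the scale it describes can be illustrated with a short, hedged PyTorch sketch of a transfer-learning classifier with a frozen backbone and a small trainable head; the backbone choice (MobileNetV2), head sizes, and two-class output here are assumptions, not the published architecture.

```python
# Hedged sketch of a low-parameter transfer-learning fire detector in the
# spirit of FireLite. Backbone (MobileNetV2), head sizes, and the two-class
# output are illustrative assumptions, not the published architecture.
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.parameters():              # freeze all pretrained weights
    p.requires_grad = False

model.classifier = nn.Sequential(         # small trainable head: fire / no-fire
    nn.Dropout(0.2),
    nn.Linear(model.last_channel, 27),
    nn.ReLU(),
    nn.Linear(27, 2),
)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")  # a few tens of thousands
```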
-
Strong convergence for efficient full discretization of the stochastic Allen-Cahn equation with multiplicative noise

In this paper, we study the strong convergence of the full discretization based on a semi-implicit tamed approach in time and the finite element method with truncated noise in space for the stochastic Allen-Cahn equation driven by multiplicative noise. The proposed fully discrete scheme is efficient thanks to its low computational complexity and mean-square unconditional stability. The low regularity of the solution due to the multiplicative infinite-dimensional driving noise and the non-global Lipschitz difficulty introduced by the cubic nonlinear drift term make the strong convergence analysis of the fully discrete solution considerably complicated. By constructing an appropriate auxiliary procedure, the full discretization error can be cleverly decomposed, and the spatio-temporal strong convergence order is successfully derived under certain weak assumptions. Numerical experiments are finally reported to validate the theoretical result.
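For readers outside the field, the equation and the time-stepping idea named in the abstract have the following schematic form (standard formulations assumed here, not excerpts from the paper):

```latex
% Schematic form of the stochastic Allen-Cahn equation with multiplicative
% noise (a standard formulation, assumed rather than quoted from the paper):
\[
  \mathrm{d}u = \bigl(\Delta u + u - u^{3}\bigr)\,\mathrm{d}t + g(u)\,\mathrm{d}W(t)
\]
% With drift f(u) = u - u^3, step size \tau, and noise increment \Delta W^n,
% a tamed semi-implicit step treats the linear part implicitly and "tames"
% the cubic drift so that a single step cannot blow up:
\[
  u^{n+1} = u^{n} + \tau\,\Delta_h u^{n+1}
          + \frac{\tau\, f(u^{n})}{1 + \tau\,\lVert f(u^{n})\rVert}
          + g(u^{n})\,\Delta W^{n}
\]
```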
-
EffUnet-SpaGen: An efficient and spatial generative approach to glaucoma detection

Current research in automated disease detection focuses on making algorithms "slimmer": reducing the need for large training datasets and accelerating recalibration for new data while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, "EffUnet", with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed "SpaGen". We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, "EffUnet-SpaGen", is optimized to significantly reduce the computational burden while surpassing the current state of the art in glaucoma detection, with AUROC of 0.997 and 0.969 on the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
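The abstract does not define the "efficient convolution block"; a common parameter-saving choice in U-Net-style segmenters is a depthwise separable convolution, sketched below in PyTorch purely as an illustrative stand-in rather than the actual EffUnet block.

```python
# Illustrative "efficient convolution block": a depthwise separable
# convolution, a common parameter-saving substitute for a standard 3x3
# convolution in U-Net-style segmenters. Generic sketch, not EffUnet itself.
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                      groups=in_ch, bias=False),   # depthwise: one filter per channel
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # pointwise mixing
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

block = SeparableConvBlock(32, 64)  # usage: drop-in for a 3x3 conv layer
```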
-
Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review

Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature and pointed to areas for future research.
-
Evaluating the severity of trust to Identity-Management-as-a-Service

The benefits of a cloud service are well documented: reduced cost in both staff and computing infrastructure, rapid deployment, scalability, and location independence. Despite all these benefits, Identity-Management-as-a-Service (IdMaaS) is struggling to gain a market presence due to an array of factors, one of which is trust. In IdMaaS, trust may either be borne within the relationships amongst the actors (relying party, identity manager, identity owner, or end user) or may be actor specific. This paper focuses on trust between the identity owner and the identity manager within the context of third-party identity management. The great effort by other researchers in identifying trust issues is acknowledged; however, they did not go to the extent of measuring the severity of trust specifically related to IdMaaS. Our research shows that availability of the identity management system and security of identities are more critical concerns than the cost of managing identities and fear of vendor lock-in. Above all, the research revealed that trust in IdMaaS is less than 40% at a 95% level of confidence. Establishing the severity of trust and its trusting factors is a valuable input to the refinement of the IdMaaS approach. The success of IdMaaS will add to the domain of anything-as-a-service (XaaS) while opening up an additional entrepreneurial avenue.
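As a hedged illustration of the statistical claim, a statement of the form "trust is below 40% at a 95% level of confidence" can be checked with a one-sided Wilson upper bound for a proportion; the respondent counts below are placeholders, not the paper's data.

```python
# Hedged illustration: checking a claim of the form "trust is below 40% at
# 95% confidence" with a one-sided Wilson upper bound for a proportion.
# The counts below are placeholders, NOT the paper's data.
from math import sqrt

def wilson_upper_bound(successes: int, n: int, z: float = 1.645) -> float:
    """One-sided 95% Wilson upper bound (z = 1.645) for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre + margin) / denom

# Placeholder example: 25 of 100 respondents expressing trust.
print(f"{wilson_upper_bound(25, 100):.3f}")  # ~0.327, below 0.40
```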
-
IoT embedded software manipulation

The Internet of Things (IoT) has raised cybersecurity and privacy issues, notably about altering embedded software. This poster investigates the feasibility of using Read-Only Memory (ROM) at a low level to modify IoT devices while remaining undetectable to users and security systems. The study explores the vulnerabilities in embedded code and firmware, which are frequently proprietary and inaccessible, making them challenging to safeguard efficiently. The methodology uses a black-box forensic technique to acquire software, identify functions, and create test cases to assess potential alterations. The findings aim to contribute to a better understanding of IoT security concerns, emphasising the importance of upgraded firmware protection methods. This research highlights the challenges of detecting low-level attacks on IoT devices and provides insights into improving embedded system security.
-
Usability testing of VR reconstructions for museums and heritage sites: A case study from 14th century Chester (UK)

This paper reports research on the usability of a 3D Virtual Reality (VR) model of the interior of St. John’s Church, Chester, as it probably appeared in the 14th Century. A VR visualization was created in Unity, based on archive data and historical records. This was adapted for use with Oculus Quest 2 VR headsets. Participants took part in usability tests of the experience, providing both qualitative and quantitative usability data. Although created with modest time and financial resources, the experience received a good overall usability rating, and numerous positive comments, including from novice VR users. Negative comments mainly related to the experience of wearing a VR headset. This paper concludes by suggesting further work, with thoughts on highly immersive VR in heritage contexts, especially combined with recent developments in generative artificial intelligence.
-
Building Decompanion: A step towards standardisation and the enhancement of inter- and trans-disciplinary research in forensic taphonomy

This thesis introduces Decompanion, an innovative online platform designed to standardise and enhance inter- and trans-disciplinary research within the field of forensic taphonomy. Forensic taphonomy, a subfield of forensic science, focuses on understanding postmortem processes to aid legal investigations. Despite its importance, the field faces significant challenges, including a lack of standardised methodologies and terminologies, limited interdisciplinary collaboration, and insufficient data sharing. This research addresses these challenges by developing a tool that standardises forensic taphonomy practices, integrates emerging technologies, and fosters global collaboration. The study employs a mixed-methods approach, combining empirical research with analytical techniques to assess the need for and impact of Decompanion. Key findings demonstrate the tool's potential to significantly improve the consistency and reliability of forensic taphonomy data by standardising methodologies and terminologies across the field. Additionally, the integration of advanced technologies such as 3D scanning and Forward Looking Infrared imaging within Decompanion has the potential to enhance the accuracy and efficiency of data collection and analysis, offering new insights into decomposition processes. A contribution of this thesis is the focus on decomposition in a humid temperate climate, specifically within the context of the United Kingdom. The research documents and analyses decomposition using pig carcasses as human analogues, capturing high-resolution data through advanced imaging technologies. This regional focus fills a critical gap in the literature, providing essential baseline data for forensic investigations in similar climatic regions. Moreover, the thesis underscores the importance of interdisciplinary collaboration in advancing forensic taphonomy. Decompanion facilitates the sharing of research designs, protocols, and data, promoting a more cohesive and integrated approach to forensic investigations. The platform's user base, which reached six of the seven continents within just four weeks of its launch, demonstrates its global relevance and the widespread need for such a tool. Despite its significant contributions, the study acknowledges certain limitations, including the geographical specificity of the research and the challenges associated with using pig carcasses as human analogues. Future work is recommended to expand on the study by comparing different climates, incorporating human cadavers, and integrating more advanced technological tools such as machine learning algorithms. This thesis fills critical gaps in forensic taphonomy, offering practical solutions to longstanding challenges in the field. Decompanion not only sets a new standard for data standardisation and interdisciplinary collaboration but also serves as a valuable resource for forensic researchers and practitioners worldwide. The research has far-reaching implications for both the academic community and policy within forensic investigations.
-
Unlocking trust: Advancing activity recognition in video processing – Say no to bans!

Anonymous activity recognition is pivotal in addressing privacy concerns amidst the widespread use of facial recognition technologies (FRTs). While FRTs enhance security and efficiency, they raise significant privacy issues. Anonymous activity recognition circumvents these concerns by focusing on identifying and analysing activities without individual identification. It preserves privacy while extracting valuable insights and patterns. This approach ensures a balance between security and privacy in surveillance-heavy environments such as public spaces and workplaces. It detects anomalies and suspicious behaviours without compromising individual identities. Moreover, it promotes fairness by avoiding biases inherent in FRTs, thus mitigating discriminatory outcomes. Here we propose a privacy-preserved activity recognition framework to augment facial recognition technologies. The goal of this framework is to provide activity recognition of individuals without violating their privacy. Our approach is based on extracting Regions of Interest (ROI) using YOLOv7-based instance segmentation and selective encryption of ROIs using the AES encryption algorithm. Furthermore, we investigate training deep learning models on privacy-preserved video datasets, utilising the previously mentioned privacy protection scheme. We developed and trained a CNN-LSTM based activity recognition model, achieving a classification accuracy of 94%. The outcomes from training and testing deep learning algorithms on encrypted data illustrate significant classification and detection accuracy, even when dealing with privacy-protected data. Furthermore, we establish the trustworthiness and explainability of our activity recognition model by using Grad-CAM analysis and assessing it against the Assessment List for Trustworthy Artificial Intelligence (ALTAI).
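A minimal sketch of the selective-encryption step is given below, assuming ROI bounding boxes have already been produced by an instance-segmentation model such as YOLOv7; the pycryptodome-based AES handling and the coordinates are illustrative, not the authors' implementation.

```python
# Sketch of selective ROI encryption with AES, assuming detections from an
# instance-segmentation model (e.g. YOLOv7) are available as bounding boxes.
# Uses pycryptodome (Crypto.Cipher.AES) and numpy; key handling and the box
# coordinates below are illustrative only.
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)  # AES-128 key; a real system needs key management

def encrypt_roi(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Encrypt the pixels inside one region of interest in place."""
    x1, y1, x2, y2 = box
    roi = frame[y1:y2, x1:x2]
    cipher = AES.new(key, AES.MODE_CTR)   # CTR mode: no padding needed;
    data = cipher.encrypt(roi.tobytes())  # store cipher.nonce to decrypt later
    frame[y1:y2, x1:x2] = np.frombuffer(data, dtype=roi.dtype).reshape(roi.shape)
    return frame

# Usage: frame = encrypt_roi(frame, (40, 60, 200, 320))  # one detected person
```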
-
Immersive haptic simulation for training nurses in emergency medical procedures

The use of haptic simulation for emergency procedures in nursing training presents a viable, versatile and affordable alternative to traditional mannequin environments. In this paper, an evaluation is performed in a virtual environment with a head-mounted display and haptic devices, and also with a mannequin. We focus on chest decompression, a life-saving invasive procedure used for trauma-associated cardiopulmonary resuscitation (and other causes) that every emergency physician and/or nurse needs to master. Participants’ heart rate and blood pressure were monitored to measure their stress level. In addition, the NASA Task Load Index questionnaire was used. The results confirm the usability of the VR environment and show that it provides a higher level of immersion compared to the mannequin, with no statistically significant difference in terms of cognitive load, although the use of VR is perceived as a more difficult task. We can conclude that the use of haptic-enabled virtual reality simulators has the potential to provide an experience as stressful as the real one while training in a safe and controlled environment.
-
Designing a Quantum-Dot Cellular Automata-Based Half-Adder Circuit Using Partially Reversible Majority Gates

Developing quantum-dot cellular automata (QCA) digital circuits reversibly leads to substantial reductions in energy dissipation. However, this is usually accompanied by time delays and increases in the circuit cost metric. In this study, an innovative, partially reversible design method is presented to address the latency and circuit cost limitations of reversible design methods. The proposed partially reversible design method serves as a middle ground between fully reversible and conventional irreversible design methodologies. Compared with irreversible design methods, the partially reversible design method still optimises energy efficiency. Moreover, the partially reversible design method improves the speed and decreases the circuit cost in comparison with fully reversible design techniques. The key ingredient of the proposed partially reversible design methodology is the introduction of a partially reversible majority gate as an elemental building block. To validate the effectiveness of the proposed partially reversible design approach, a novel partially reversible half-adder circuit is designed and simulated using the QCADesigner-E 2.2 simulation tool. This tool provides numerical results for the circuit input/output response and heat dissipation at the physical level, within a microscopic quantum mechanical model.
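As background for readers new to QCA, the majority gate M(a, b, c) is the technology's logic primitive, and a half adder can be expressed in terms of it; the Python sketch below verifies the textbook majority-gate decomposition (not the paper's specific partially reversible circuit).

```python
# Background sketch: a half adder expressed with majority gates M(a, b, c),
# the primitive of QCA logic. This is the textbook decomposition
# (AND = M(a, b, 0), OR = M(a, b, 1)), not the paper's circuit layout.

def M(a: int, b: int, c: int) -> int:
    """Three-input majority gate."""
    return 1 if a + b + c >= 2 else 0

def half_adder(a: int, b: int):
    carry = M(a, b, 0)                      # AND via majority
    inv_carry = 1 - carry                   # inverter
    total = M(M(a, b, 1), inv_carry, 0)     # XOR = (a OR b) AND NOT(a AND b)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))  # (sum, carry): 0+1 -> (1,0), 1+1 -> (0,1)
```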
-
Development and Validation of Embedded Device for Electrocardiogram Arrhythmia Empowered with Transfer Learning

With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and make patient records accessible throughout the world. In this setting, the electrocardiogram (ECG) is used to diagnose heart diseases and abnormalities. Machine learning techniques have been used previously, but they are feature-based and not as accurate as transfer learning; we therefore propose the development and validation of an embedded device for ECG arrhythmia empowered with transfer learning (the DVEEA-TL model). This model combines hardware, software, and two datasets, which are augmented and fused, and achieves higher accuracy than previous work and research. In the proposed model, a new dataset is created by combining a Kaggle dataset with real-time recordings from healthy and unhealthy subjects, and the AlexNet transfer learning approach is then applied to obtain more accurate readings of ECG signals. The DVEEA-TL model diagnoses heart abnormality with accuracies of 99.9% and 99.8% during the training and validation stages, respectively, a more reliable result than previous research in this field.
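A hedged sketch of the AlexNet transfer-learning step is shown below, assuming ECG segments are rendered as images so that an ImageNet backbone can be reused; the class count and freezing policy are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of AlexNet-based transfer learning for ECG beat classification,
# assuming ECG segments have been rendered as images (a common approach when
# reusing ImageNet backbones). Class count and freezing policy are
# illustrative assumptions, not the paper's exact configuration.
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in model.features.parameters():    # keep the pretrained convolutional filters
    p.requires_grad = False

num_classes = 2                          # e.g. normal vs. arrhythmic beat
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
```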
-
Assessing Type Agreeability in the Unified Model of Personality and Play Styles

Classifying players into well-defined groups can be useful when designing games and gamified systems, with many models relating to player or personality ‘type’. The Unified Model of Personality and Play Styles groups together many player and personality taxonomies, but whilst similarities have been noted in previous work, the overlap between models has not been analysed ahead of its use. This study provides evidence both for and against aspects of the Unified Model, with model agreeability assessed through comparison of participant classifications. Results show that representations of types related by the Unified Model do correlate significantly more strongly than types unrelated by the model, but only with weak-to-moderate correlation coefficients. Ranking classifications leads to results that better map to the Unified Model, but also reduces the overall strength of correlations between types. The Unified Model is therefore considered fit for purpose as an explanatory tool, but without additional study it should be used with caution in further use cases.
-
Comparing the performance of statistical, machine learning, and deep learning algorithms to predict time-to-event: A simulation study for conversion to mild cognitive impairment

Mild Cognitive Impairment (MCI) is a condition characterized by a decline in cognitive abilities, specifically in memory, language, and attention, that is beyond what is expected due to normal aging. Detection of MCI is crucial for providing appropriate interventions and slowing down the progression of dementia. There are several automated predictive algorithms for prediction using time-to-event data, but it is not clear which is best for predicting the time to conversion to MCI. There is also confusion as to whether algorithms with fewer training weights are less accurate. We compared three algorithms, from smaller to larger numbers of training weights: a statistical predictive model (Cox proportional hazards model, CoxPH), a machine learning model (Random Survival Forest, RSF), and a deep learning model (DeepSurv). To compare the algorithms under different scenarios, we created a simulated dataset based on the Alzheimer NACC dataset. We found that the CoxPH model was among the best-performing models in all simulated scenarios. In a larger sample size (n = 6,000), the deep learning algorithm (DeepSurv) exhibited comparable accuracy (73.1%) to the CoxPH model (73%). In the past, ignoring heterogeneity in the CoxPH model led to the conclusion that deep learning methods are superior. We found that when using the CoxPH model with heterogeneity, its accuracy is comparable to that of DeepSurv and RSF. Furthermore, when unobserved heterogeneity is present, such as missing features in the training, all three models showed a similar drop in accuracy. This simulation study suggests that in some applications an algorithm with a smaller number of training weights is not disadvantaged in terms of accuracy. Since algorithms with fewer weights are inherently easier to explain, this study can help artificial intelligence research develop a principled approach to comparing statistical, machine learning, and deep learning algorithms for time-to-event predictions.
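Of the three model families compared, the Cox proportional hazards fit is the simplest to reproduce; the sketch below uses the lifelines library, with the file and column names as placeholders for a simulated dataset of the kind described.

```python
# Minimal sketch of the statistical baseline from the comparison: a Cox
# proportional hazards fit via the `lifelines` library. The file and column
# names are placeholders, not the paper's actual simulated dataset.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("simulated_mci.csv")    # placeholder: time, event, features
cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_mci", event_col="converted")
cph.print_summary()                      # hazard ratios per feature
```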
-
Developing a framework for enhancing security testing of android applications

Mobile applications have advanced considerably and now offer several features that help make our lives easier. Android is currently the most popular mobile operating system, and it is susceptible to exploitation attempts by malicious entities. This has led to an increased focus on the security of Android applications. This dissertation proposed the development of a framework which provides a systematic approach to testing the security of Android applications. The framework was developed based on a comprehensive review of existing security testing methodologies and tools. In achieving the study objectives, a test application was run on an emulator, Burp Suite was used as a proxy tool to capture HTTP and HTTPS traffic for analysis, reverse engineering was carried out, static and dynamic analyses were executed, network traffic was captured and analysed with tcpdump and Wireshark, intent sniffing was carried out, fuzz testing was discussed, and a proof-of-concept tool (an automation script) was developed. This work covers various aspects of Android application security testing, and the proposed framework provides developers with a practical and effective approach to testing the security of their Android applications, thereby improving the overall security of the Android application ecosystem.
-
The Affective Audio Dataset (AAD) for Non-musical, Non-vocalized, Audio Emotion Research

The Affective Audio Dataset (AAD) is a novel dataset of non-musical, non-anthropomorphic sounds intended for use in affective research. Sounds are annotated for their affective qualities by sets of human participants. The dataset was created in response to a lack of suitable datasets within the domain of audio emotion recognition. A total of 780 sounds were selected from the BBC Sounds Library. Participants were recruited online and asked to rate a subset of sounds based on how the sounds made them feel, with each sound rated for arousal and valence. While the ratings were fairly evenly distributed, there was a bias towards the low-valence, high-arousal quadrant, which also displayed a greater range of ratings than the others. The AAD is compared with existing datasets to check its consistency and validity, with differences in data collection methods and intended use-cases highlighted. Using a subset of the data, the online ratings were validated against an in-person data collection experiment, with the findings strongly correlating. The AAD is used to train a basic affect-prediction model and the results are discussed. Uses of this dataset include human-emotion research, cultural studies, other affect-based research, and industry applications such as audio post-production, gaming, and user-interface design.
-
Meaningful automated feedback on Object-Oriented program development tasks in Java

Automation has been used to assess student programming tasks for over 60 years. As well as assessing work, it can also be used in the provision of feedback, commonly through the utilisation of unit tests or evaluation of program output. This typically requires a structure to be provided, for example the provision of a method stub or programming to an interface. This scaffolded approach is required in statically typed, object-oriented languages such as Java because, if tests rely on non-existent code, compilation will fail. Previous studies identified that for many tools, feedback is limited to a comparison of the student’s solution with a reference, the results of unit tests, or how actual output compares with that which is expected. This paper discusses a tool that provides automated textual feedback on programming tasks. This tool, the “Java Object-Oriented Feedback Tool” (JOOFT), allows the instructor to write unit tests for as-yet-unwritten code, with their own feedback, almost as easily as writing a standard unit test. JOOFT also provides additional, customisable feedback for student errors that might occur in the process of writing code, such as specifying an incorrect parameter type for a method. A randomised trial of the tool was carried out with novice student programmers (n=109), who completed a lab task on the design of a class, 52 of them with assistance from the tool. Whilst students provided positive feedback on tool usage, performance in a later assessment of class creation suggests student outcomes were not affected.
-
Hybrid Quantum-Dot Cellular Automata Nanocomputing Circuits

Quantum-dot cellular automata (QCA) is an emerging transistor-less field-coupled nanocomputing (FCN) approach to ultra-scale ‘nanochip’ integration. In QCA, digital circuitry is represented using electrostatic repulsion between electrons and the mechanism of electron tunnelling in quantum dots. QCA technology can surpass conventional complementary metal oxide semiconductor (CMOS) technology in terms of clock speed, occupied chip area, and energy efficiency. To develop QCA circuits, irreversible majority gates are typically used as the primary components. Recently, some studies have introduced reversible design techniques, using reversible majority gates as the main building block, to develop ultra-energy-efficient QCA circuits. However, this approach resulted in time delays, an increase in the number of QCA cells used, and an increase in the chip area occupied. This work introduces a novel hybrid design strategy employing irreversible, reversible, and partially reversible QCA gates to establish an optimal balance between power consumption, delay time, and occupied area. This hybrid technique allows the designer to have more control over the circuit characteristics to meet different system needs. A combination of reversible, irreversible, and innovative partially reversible majority gates is used in the proposed hybrid design method. We evaluated the hybrid design method by examining the half-adder circuit as a case study. We developed four hybrid QCA half-adder circuits, each of which simultaneously incorporates various types of majority gates. The QCADesigner-E 2.2 simulation tool was used to simulate the performance and energy efficiency of the half-adders. This tool provides numerical results for the circuit input/output response and heat dissipation at the physical level within a microscopic quantum mechanical model.
-
Using Voice Input to Control and Interact With a Narrative Video Game

With the advancement of artificial intelligence (AI) over recent years, especially the breakthrough OpenAI achieved with the natural-language generative model ChatGPT, virtual assistants and voice-interactive devices, such as Amazon’s Alexa or Apple’s Siri, have become popular with the general public. This is due to their ease of use, accessibility, and ability to be used without physical interaction. In the video games industry, there have been attempts to implement voice input as a core mechanic, with varying levels of success. Ultimately, voice input has mostly been used as a separate mechanic or as an alternative to traditional input methods. This project will investigate different methods of using voice input to control and interact with a narrative video game. The research will analyse which method is most effective in facilitating player control of the game and identify challenges related to implementation. This paper also includes a work-in-progress demonstration of a voice-activated game made in Unreal Engine.