Computer Science
Staff within the Department of Computer Science have research interests in Visualization, Interaction & Computer Graphics (with a particular focus on Medical Graphics), Cyber Security and Discrete Optimisation. This collection is licensed under a Creative Commons licence. The collection may be reproduced for non-commercial use and without modification, provided that copyright is acknowledged.
Collections in this community
Recent Submissions
-
IoT embedded software manipulation
The Internet of Things (IoT) has raised cybersecurity and privacy issues, notably about altering embedded software. This poster investigates the feasibility of using Read-Only Memory (ROM) at a low level to modify IoT devices while remaining undetectable to users and security systems. The study explores the vulnerabilities in embedded code and firmware, which are frequently proprietary and inaccessible, making them challenging to safeguard efficiently. The methodology uses a black-box forensic technique to acquire software, identify functions, and create test cases to assess potential alterations. The findings aim to contribute to a better understanding of IoT security concerns, emphasising the importance of upgraded firmware protection methods. This research highlights the challenges of detecting low-level attacks on IoT devices and provides insights into improving embedded system security.
-
Usability testing of VR reconstructions for museums and heritage sites: A case study from 14th century Chester (UK)
This paper reports research on the usability of a 3D Virtual Reality (VR) model of the interior of St. John’s Church, Chester, as it probably appeared in the 14th century. A VR visualization was created in Unity, based on archive data and historical records. This was adapted for use with Oculus Quest 2 VR headsets. Participants took part in usability tests of the experience, providing both qualitative and quantitative usability data. Although created with modest time and financial resources, the experience received a good overall usability rating, and numerous positive comments, including from novice VR users. Negative comments mainly related to the experience of wearing a VR headset. This paper concludes by suggesting further work, with thoughts on highly immersive VR in heritage contexts, especially combined with recent developments in generative artificial intelligence.
-
Building Decompanion: A step towards standardisation and the enhancement of inter- and trans-disciplinary research in forensic taphonomy
This thesis introduces Decompanion, an innovative online platform designed to standardise and enhance inter- and trans-disciplinary research within the field of forensic taphonomy. Forensic taphonomy, a subfield of forensic science, focuses on understanding postmortem processes to aid legal investigations. Despite its importance, the field faces significant challenges, including a lack of standardised methodologies and terminologies, limited interdisciplinary collaboration, and insufficient data sharing. This research addresses these challenges by developing a tool that standardises forensic taphonomy practices, integrates emerging technologies, and fosters global collaboration. The study employs a mixed-methods approach, combining empirical research with analytical techniques to assess the need for and impact of Decompanion. Key findings demonstrate the tool's potential to significantly improve the consistency and reliability of forensic taphonomy data by standardising methodologies and terminologies across the field. Additionally, the integration of advanced technologies such as 3D scanning and Forward Looking Infrared imaging within Decompanion has the potential to enhance the accuracy and efficiency of data collection and analysis, offering new insights into decomposition processes. A contribution of this thesis is the focus on decomposition in a humid temperate climate, specifically within the context of the United Kingdom. The research documents and analyses decomposition using pig carcasses as human analogues, capturing high-resolution data through advanced imaging technologies. This regional focus fills a critical gap in the literature, providing essential baseline data for forensic investigations in similar climatic regions. Moreover, the thesis underscores the importance of interdisciplinary collaboration in advancing forensic taphonomy. Decompanion facilitates the sharing of research designs, protocols, and data, promoting a more cohesive and integrated approach to forensic investigations. The platform's user base, which reached six of the seven continents within just four weeks of its launch, demonstrates its global relevance and the widespread need for such a tool. Despite its significant contributions, the study acknowledges certain limitations, including the geographical specificity of the research and the challenges associated with using pig carcasses as human analogues. Future work is recommended to expand on the study by comparing different climates, incorporating human cadavers, and integrating more advanced technological tools such as machine learning algorithms. This thesis fills critical gaps in forensic taphonomy, offering practical solutions to longstanding challenges in the field. Decompanion not only sets a new standard for data standardisation and interdisciplinary collaboration but also serves as a valuable resource for forensic researchers and practitioners worldwide. The research has far-reaching implications for both the academic community and policy within forensic investigations.
-
Unlocking trust: Advancing activity recognition in video processing – Say no to bans!
Anonymous activity recognition is pivotal in addressing privacy concerns amidst the widespread use of facial recognition technologies (FRTs). While FRTs enhance security and efficiency, they raise significant privacy issues. Anonymous activity recognition circumvents these concerns by focusing on identifying and analysing activities without individual identification. It preserves privacy while extracting valuable insights and patterns. This approach ensures a balance between security and privacy in surveillance-heavy environments such as public spaces and workplaces. It detects anomalies and suspicious behaviours without compromising individual identities. Moreover, it promotes fairness by avoiding biases inherent in FRTs, thus mitigating discriminatory outcomes. Here we propose a privacy-preserving activity recognition framework to augment facial recognition technologies. The goal of this framework is to provide activity recognition of individuals without violating their privacy. Our approach is based on extracting Regions of Interest (ROIs) using YOLOv7-based instance segmentation and selectively encrypting the ROIs with the AES encryption algorithm. Furthermore, we investigate training deep learning models on privacy-preserved video datasets, utilising the aforementioned privacy protection scheme. We developed and trained a CNN-LSTM-based activity recognition model, achieving a classification accuracy of 94%. The outcomes from training and testing deep learning algorithms on encrypted data illustrate significant classification and detection accuracy, even when dealing with privacy-protected data. Furthermore, we establish the trustworthiness and explainability of our activity recognition model using Grad-CAM analysis and by assessing it against the Assessment List for Trustworthy Artificial Intelligence (ALTAI).
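To illustrate the selective-encryption idea this abstract describes, the sketch below encrypts only detected ROI pixels with AES-CTR. The detector is stubbed out in place of the YOLOv7 segmentation model the paper uses, and the function names, dummy frame, and box format are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: selective AES encryption of detected ROIs in a frame.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def detect_person_rois(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Placeholder for a YOLOv7-style instance-segmentation step.
    Returns bounding boxes (x0, y0, x1, y1) of detected people."""
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, w // 2, h // 2)]  # dummy ROI for illustration

def encrypt_roi(frame: np.ndarray, box, key: bytes, nonce: bytes) -> None:
    """Encrypt the pixel bytes inside one ROI in place with AES-CTR."""
    x0, y0, x1, y1 = box
    roi = frame[y0:y1, x0:x1]
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
    ct = cipher.encryptor().update(roi.tobytes())
    frame[y0:y1, x0:x1] = np.frombuffer(ct, dtype=roi.dtype).reshape(roi.shape)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # fake frame
key, nonce = os.urandom(32), os.urandom(16)
for box in detect_person_rois(frame):
    encrypt_roi(frame, box, key, nonce)
```

A stream mode such as CTR keeps the ciphertext the same length as the ROI, so the scrambled region can be written straight back into the frame buffer.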
-
Immersive haptic simulation for training nurses in emergency medical procedures
The use of haptic simulation for emergency procedures in nursing training presents a viable, versatile and affordable alternative to traditional mannequin environments. In this paper, an evaluation is performed in a virtual environment with a head-mounted display and haptic devices, and also with a mannequin. We focus on chest decompression, a life-saving invasive procedure used for trauma-associated cardiopulmonary resuscitation (and other causes) that every emergency physician and/or nurse needs to master. Participants’ heart rate and blood pressure were monitored to measure their stress level. In addition, the NASA Task Load Index questionnaire was used. The results confirm the usability of the VR environment and show that it provides a higher level of immersion compared to the mannequin, with no statistically significant difference in terms of cognitive load, although the use of VR is perceived as a more difficult task. We can conclude that the use of haptic-enabled virtual reality simulators has the potential to provide an experience as stressful as the real one while training in a safe and controlled environment.
-
Designing a Quantum-Dot Cellular Automata-Based Half-Adder Circuit Using Partially Reversible Majority Gates
Developing quantum-dot cellular automata (QCA) digital circuits reversibly leads to substantial reductions in energy dissipation. However, this is usually accompanied by time delays and accompanying increases in the circuit cost metric. In this study, an innovative, partially reversible design method is presented to address the latency and circuit cost limitations of reversible design methods. The proposed partially reversible design method serves as a middle ground between fully reversible and conventional irreversible design methodologies. Compared with irreversible design methods, the partially reversible design method still optimises energy efficiency. Moreover, the partially reversible design method improves the speed and decreases the circuit cost in comparison with fully reversible design techniques. The key ingredient of the proposed partially reversible design methodology is the introduction of a partially reversible majority gate element building block. To validate the effectiveness of the proposed partially reversible design approach, a novel partially reversible half-adder circuit is designed and simulated using the QCADesigner-E 2.2 simulation tool. This tool provides numerical results for the circuit input/output response and heat dissipation at the physical level, within a microscopic quantum mechanical model.
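The paper's contribution is the QCA layout and its energy behaviour, which software cannot capture; the sketch below only illustrates the Boolean behaviour of a majority-gate-based half adder, the building block the abstract refers to. The gate decomposition shown is the standard one, not necessarily the paper's exact arrangement.

```python
# Boolean model of a half adder built entirely from majority gates.
def maj(a: int, b: int, c: int) -> int:
    """Three-input majority gate: output is 1 iff at least two inputs are 1."""
    return int(a + b + c >= 2)

def half_adder(a: int, b: int) -> tuple[int, int]:
    carry = maj(a, b, 0)                             # M(a, b, 0) = a AND b
    s = maj(maj(a, 1 - b, 0), maj(1 - a, b, 0), 1)   # XOR from three majorities
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))  # matches the half-adder truth table
```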
-
Development and Validation of Embedded Device for Electrocardiogram Arrhythmia Empowered with Transfer Learning
With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and make patient records accessible throughout the world. In this context, the electrocardiogram (ECG) is used to diagnose heart diseases and abnormalities. Machine learning techniques have been used previously but are feature-based and not as accurate as transfer learning; the proposed development and validation of an embedded device for ECG arrhythmia empowered with transfer learning (DVEEA-TL) model addresses this. The model combines hardware, software, and two datasets that are augmented and fused, and achieves higher accuracy than previous work and research. In the proposed model, a new dataset is created by combining a Kaggle dataset with real-time healthy and unhealthy recordings, and the AlexNet transfer learning approach is then applied to obtain more accurate readings of ECG signals. In this research, the DVEEA-TL model diagnoses heart abnormalities with accuracies of 99.9% and 99.8% during the training and validation stages, respectively, making it a more reliable approach than previous research in this field.
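A minimal PyTorch sketch of the transfer-learning step the abstract describes, assuming the ECG signals have been rendered as images (AlexNet implies image input) and a two-class healthy/arrhythmia head; this is an illustrative reconstruction under those assumptions, not the authors' DVEEA-TL code.

```python
# Transfer learning: reuse pretrained AlexNet features, retrain a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                 # freeze the pretrained extractor
model.classifier[6] = nn.Linear(4096, 2)    # assumed healthy/arrhythmia head

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)             # stand-in batch of ECG images
y = torch.randint(0, 2, (8,))               # stand-in labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```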
-
Assessing Type Agreeability in the Unified Model of Personality and Play Styles
Classifying players into well-defined groups can be useful when designing games and gamified systems, with many models relating to player or personality ‘type’. The Unified Model of Personality and Play Styles groups together many player and personality taxonomies, but whilst similarities have been noted in previous work, the overlap between models has not been analysed ahead of its use. This study provides evidence both for and against aspects of the Unified Model, with model agreeability assessed through comparison of participant classifications. Results show that representations of types related by the Unified Model correlate significantly more strongly than types unrelated by the model, but do so with only weak-to-moderate correlation coefficients. Ranking classifications leads to results that better map to the Unified Model, but also reduces the overall strength of correlations between types. The Unified Model is therefore considered fit for purpose as an explanatory tool, but without additional study it should be used with caution in further use cases.
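For readers unfamiliar with this kind of agreeability check, the sketch below shows the general shape of the comparison: correlating participants' scores on putatively related and unrelated type scales. The data here is entirely synthetic; the Unified Model's actual scales and the study's classification instruments are not reproduced.

```python
# Illustrative correlation check between type scales (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
achiever = rng.normal(size=80)                  # scores on one player type
related = 0.4 * achiever + rng.normal(size=80)  # a putatively related type
unrelated = rng.normal(size=80)                 # a type unrelated by the model

print(spearmanr(achiever, related))    # expect weak-to-moderate correlation
print(spearmanr(achiever, unrelated))  # expect near-zero correlation
```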
-
Comparing the performance of statistical, machine learning, and deep learning algorithms to predict time-to-event: A simulation study for conversion to mild cognitive impairment
Mild Cognitive Impairment (MCI) is a condition characterized by a decline in cognitive abilities, specifically in memory, language, and attention, that is beyond what is expected due to normal aging. Detection of MCI is crucial for providing appropriate interventions and slowing down the progression of dementia. There are several automated predictive algorithms for prediction using time-to-event data, but it is not clear which is best for predicting the time to conversion to MCI. It is also unclear whether algorithms with fewer training weights are less accurate. We compared three algorithms, from smaller to larger numbers of training weights: a statistical predictive model (Cox proportional hazards model, CoxPH), a machine learning model (Random Survival Forest, RSF), and a deep learning model (DeepSurv). To compare the algorithms under different scenarios, we created a simulated dataset based on the Alzheimer NACC dataset. We found that the CoxPH model was among the best-performing models in all simulated scenarios. In a larger sample size (n = 6,000), the deep learning algorithm (DeepSurv) exhibited comparable accuracy (73.1%) to the CoxPH model (73%). In the past, ignoring heterogeneity in the CoxPH model led to the conclusion that deep learning methods are superior. We found that when using the CoxPH model with heterogeneity, its accuracy is comparable to that of DeepSurv and RSF. Furthermore, when unobserved heterogeneity is present, such as missing features in the training data, all three models showed a similar drop in accuracy. This simulation study suggests that in some applications an algorithm with a smaller number of training weights is not disadvantaged in terms of accuracy. Since algorithms with fewer weights are inherently easier to explain, this study can help artificial intelligence research develop a principled approach to comparing statistical, machine learning, and deep learning algorithms for time-to-event predictions.
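A minimal sketch of the statistical baseline in this comparison: fitting a Cox proportional hazards model with the lifelines library on synthetic time-to-event data. The covariate names and data-generating process below are illustrative assumptions, not the study's NACC-based simulation design.

```python
# CoxPH baseline on a synthetic time-to-event dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(70, 8, n),
    "memory_score": rng.normal(0, 1, n),
})
# Synthetic time to MCI conversion, depending on both covariates.
hazard = np.exp(0.03 * (df["age"] - 70) - 0.5 * df["memory_score"])
df["duration"] = rng.exponential(10 / hazard)
df["event"] = rng.random(n) < 0.8      # roughly 20% right-censored

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.concordance_index_)          # discrimination metric for comparison
```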
-
Developing a framework for enhancing security testing of Android applications
Mobile applications have advanced considerably and now offer several features that help make our lives easier. Android is currently the most popular mobile operating system, and it is susceptible to exploitation attempts by malicious entities. This has led to an increased focus on the security of Android applications. This dissertation proposes a framework which provides a systematic approach to testing the security of Android applications. The framework was developed based on a comprehensive review of existing security testing methodologies and tools. In achieving the study objectives, a test application was run on an emulator, Burp Suite was used as a proxy tool to capture HTTP and HTTPS traffic for analysis, reverse engineering was carried out, static and dynamic analysis were executed, network traffic was captured and analysed with tcpdump and Wireshark, intent sniffing was carried out, fuzz testing was discussed, and a proof-of-concept tool (automation script) was developed. This work covers various aspects of Android application security testing, and the proposed framework provides developers with a practical and effective approach to testing the security of their Android applications, thereby improving the overall security of the Android application ecosystem.
-
The Affective Audio Dataset (AAD) for Non-musical, Non-vocalized, Audio Emotion Research
The Affective Audio Dataset (AAD) is a novel dataset of non-musical, non-anthropomorphic sounds intended for use in affective research. Sounds are annotated for their affective qualities by sets of human participants. The dataset was created in response to a lack of suitable datasets within the domain of audio emotion recognition. A total of 780 sounds are selected from the BBC Sounds Library. Participants are recruited online and asked to rate a subset of sounds based on how they make them feel. Each sound is rated for arousal and valence. While the ratings were broadly evenly distributed, there was a bias towards the low-valence, high-arousal quadrant, which also displayed a greater range of ratings in comparison to the others. The AAD is compared with existing datasets to check its consistency and validity, with differences in data collection methods and intended use-cases highlighted. Using a subset of the data, the online ratings were validated against an in-person data collection experiment, with findings strongly correlating. The AAD is used to train a basic affect-prediction model and the results are discussed. Uses of this dataset include human-emotion research, cultural studies, other affect-based research, and industry applications such as audio post-production, gaming, and user-interface design.
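To show the kind of aggregation such a dataset supports, the sketch below averages per-participant arousal/valence ratings into per-sound coordinates and assigns circumplex quadrants. The column names and toy values are assumptions for illustration, not the AAD's actual schema.

```python
# Aggregating per-participant affect ratings into per-sound coordinates.
import pandas as pd

ratings = pd.DataFrame({          # stand-in for AAD-style rating records
    "sound_id": ["s1", "s1", "s2", "s2"],
    "arousal":  [0.7, 0.9, -0.2, -0.4],
    "valence":  [-0.6, -0.8, 0.3, 0.5],
})

per_sound = ratings.groupby("sound_id")[["arousal", "valence"]].mean()

def quadrant(row) -> str:
    a = "high-arousal" if row["arousal"] >= 0 else "low-arousal"
    v = "high-valence" if row["valence"] >= 0 else "low-valence"
    return f"{v}, {a}"

per_sound["quadrant"] = per_sound.apply(quadrant, axis=1)
print(per_sound)
```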
-
Meaningful automated feedback on Object-Oriented program development tasks in Java
Automation has been used to assess student programming tasks for over 60 years. As well as assessing work, it can also be used in the provision of feedback, commonly through the use of unit tests or evaluation of program output. This typically requires a structure to be provided, for example the provision of a method stub or programming to an interface. This scaffolded approach is required in statically typed, object-oriented languages such as Java, since if tests rely on non-existent code, compilation will fail. Previous studies identified that for many tools, feedback is limited to a comparison of the student’s solution with a reference, the results of unit tests, or how actual output compares with that which is expected. This paper discusses a tool that provides automated textual feedback on programming tasks. This tool, the “Java Object-Oriented Feedback Tool” (JOOFT), allows the instructor to write unit tests for as-yet-unwritten code, with their own feedback, almost as easily as writing a standard unit test. JOOFT also provides additional, customisable feedback for student errors that might occur in the process of writing code, such as specifying an incorrect parameter type for a method. A randomised trial of the tool was carried out with novice student programmers (n=109), who completed a lab task on the design of a class, 52 of them having assistance from the tool. Whilst students provided positive feedback on tool usage, performance in a later assessment of class creation suggests student outcomes are not affected.
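JOOFT itself is a Java tool, but the core idea — tests written against code that may not yet exist, returning instructor-authored feedback instead of raw compilation or assertion errors — can be sketched with Python reflection. The task, class name, and feedback messages below are invented for illustration.

```python
# Reflection-based check of possibly-unwritten student code with
# instructor-authored feedback in place of raw errors.
import importlib

def check_student_class(module_name: str = "student_solution") -> str:
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return "No submission found: did you name your file correctly?"
    cls = getattr(mod, "BankAccount", None)   # expected class (assumed task)
    if cls is None:
        return "Feedback: define a class called BankAccount."
    if not hasattr(cls, "deposit"):
        return "Feedback: BankAccount needs a deposit method."
    try:
        acct = cls()
        acct.deposit(10)
        if acct.balance != 10:
            return "Feedback: deposit(10) should leave balance equal to 10."
    except (TypeError, AttributeError) as e:
        return f"Feedback: check your constructor and attributes ({e})."
    return "All checks passed."

print(check_student_class())
```

In statically typed Java the same effect requires the reflection API (or deferred compilation), which is precisely the gap the abstract says JOOFT bridges for instructors.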
-
Hybrid Quantum-Dot Cellular Automata Nanocomputing Circuits
Quantum-dot cellular automata (QCA) is an emerging transistor-less field-coupled nanocomputing (FCN) approach to ultra-scale ‘nanochip’ integration. In QCA, digital circuitry is represented using electrostatic repulsion between electrons and the mechanism of electron tunnelling in quantum dots. QCA technology can surpass conventional complementary metal-oxide semiconductor (CMOS) technology in terms of clock speed, reduced occupied chip area, and energy efficiency. To develop QCA circuits, irreversible majority gates are typically used as the primary components. Recently, some studies have introduced reversible design techniques, using reversible majority gates as the main building block, to develop ultra-energy-efficient QCA circuits. However, this approach has resulted in time delays, an increase in the number of QCA cells used, and an increase in the chip area occupied. This work introduces a novel hybrid design strategy employing irreversible, reversible, and partially reversible QCA gates to establish an optimal balance between power consumption, delay time, and occupied area. This hybrid technique allows the designer more control over the circuit characteristics to meet different system needs. A combination of reversible, irreversible, and innovative partially reversible majority gates is used in the proposed hybrid design method. We evaluated the hybrid design method by examining the half-adder circuit as a case study. We developed four hybrid QCA half-adder circuits, each of which simultaneously incorporates various types of majority gates. The QCADesigner-E 2.2 simulation tool was used to simulate the performance and energy efficiency of the half-adders. This tool provides numerical results for the circuit input/output response and heat dissipation at the physical level within a microscopic quantum mechanical model.
-
Using Voice Input to Control and Interact With a Narrative Video Game
With the advancement of artificial intelligence (AI) over recent years, especially the breakthrough that OpenAI achieved with the natural language generative model ChatGPT, virtual assistants and voice-interactive devices such as Amazon’s Alexa or Apple’s Siri have become popular with the general public. This is due to their ease of use, accessibility, and ability to be used without physical interaction. When it comes to the video games industry, there have been attempts to implement voice input as a core mechanic, with varying levels of success. Ultimately, voice input has mostly been used as a separate mechanic or as an alternative to traditional input methods. This project will investigate different methods of using voice input to control and interact with a narrative video game. The research will analyse which method is most effective in facilitating player control of the game and identify challenges related to implementation. This paper also includes a work-in-progress demonstration of a voice-activated game made in Unreal Engine.
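One voice-input method a project like this could evaluate is mapping recognised phrases to game commands; the sketch below does so with the SpeechRecognition package. The command table and send_to_game hook are assumptions for illustration, and the actual demonstration described above is built in Unreal Engine, not Python.

```python
# Phrase-to-command mapping over a generic speech recogniser.
import speech_recognition as sr

COMMANDS = {"open door": "OPEN_DOOR", "look left": "LOOK_LEFT",
            "pick up": "PICK_UP"}

def send_to_game(action: str) -> None:
    print(f"game action -> {action}")   # stand-in for an engine call

recognizer = sr.Recognizer()
with sr.Microphone() as source:         # requires a microphone and PyAudio
    audio = recognizer.listen(source)
try:
    text = recognizer.recognize_google(audio).lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            send_to_game(action)
except sr.UnknownValueError:
    pass  # speech was unintelligible; ignore and keep listening
```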
-
Audience perceptions of Foley footsteps and 3D realism designed to convey walker characteristics
Foley artistry is an essential part of the audio post-production process for film, television, games, and animation. By extension, it is just as crucial in emergent media such as virtual, mixed, and augmented reality. Footsteps are a core activity that a Foley artist must undertake, conveying information about the characters and environment presented on-screen. This study sought to identify whether characteristics of age, gender, weight, health, and confidence could be conveyed, using sounds created by a professional Foley artist, in three different 3D humanoid models following a single walk cycle. An experiment was conducted with human participants (n=100) and found that Foley manipulations could convey all the intended characteristics with varying degrees of contextual success. It was shown that the abstract 3D models were capable of communicating characteristics of age, gender, and weight. A discussion of the literature and inspection of related audio features within the Foley clips suggest that signal parameters of frequency, envelope, and novelty may be a subset of markers of those perceived characteristics. The findings are relevant to researchers and practitioners in linear and interactive media and demonstrate mechanisms by which Foley can contribute useful information and concepts about on-screen characters.
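The signal parameters named above (frequency, envelope, novelty) can be extracted with standard audio tooling; the sketch below uses librosa as one plausible illustration, with a placeholder file path. This is not the authors' analysis pipeline.

```python
# Extracting frequency, envelope, and novelty descriptors from a Foley clip.
import librosa

y, sr_ = librosa.load("footsteps_foley.wav", sr=None)  # placeholder clip

centroid = librosa.feature.spectral_centroid(y=y, sr=sr_)  # frequency marker
envelope = librosa.feature.rms(y=y)                        # amplitude envelope
novelty = librosa.onset.onset_strength(y=y, sr=sr_)        # novelty curve

print(centroid.mean(), envelope.mean(), novelty.mean())
```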
-
Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression
Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages versus synchronous capturing (frame-based video) include low power consumption, high dynamic range, extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), consisting of the time aggregation of NVS events in the form of pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to another one of our previous works, where time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in terms of the compression ratio is the highest for low-event rate and low-complexity scenes, whereas the improvement is minimal for high-complexity and high-event rate scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, the compression gains are lower when compared to state-of-the-art approaches for time aggregation intervals of less than 5 ms.
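A sketch of the time-aggregation step the abstract describes: events (x, y, t) are binned into per-pixel histograms over a time window, and each non-zero pixel becomes an (x, y, count) point ready to be handed to a point-cloud coder. The function and toy events below are illustrative, and the actual point-cloud compression stage is omitted.

```python
# Time aggregation of NVS events into a histogram-valued point set.
import numpy as np

def aggregate_events(events: np.ndarray, width: int, height: int,
                     t_start: float, window_ms: float = 5.0) -> np.ndarray:
    """events: array of (x, y, t) rows; returns (x, y, count) points."""
    end = t_start + window_ms / 1e3
    in_window = (events[:, 2] >= t_start) & (events[:, 2] < end)
    hist = np.zeros((height, width), dtype=np.uint32)
    xs = events[in_window, 0].astype(int)
    ys = events[in_window, 1].astype(int)
    np.add.at(hist, (ys, xs), 1)             # pixel-based event histogram
    ys_nz, xs_nz = np.nonzero(hist)
    return np.column_stack([xs_nz, ys_nz, hist[ys_nz, xs_nz]])

events = np.array([[10, 5, 0.001], [10, 5, 0.002], [3, 7, 0.004]])
print(aggregate_events(events, width=640, height=480, t_start=0.0))
```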
-
Software Exploitation and Software Protection Measures: Enhancing Software Protection via Inter-Process Control Flow Integrity
Computer technologies hinge on the effective functionality of the software component. Unfortunately, software code may have flaws that cause it to be vulnerable and exploitable by attackers. Software exploitation could involve a hijack of the application and deviation of the flow of its execution. Whenever this occurs, the integrity of the software and the underlying system could be compromised. For this reason, there is a need to continually develop resilient software protection tools and techniques. This report details an in-depth study of software exploitation and software protection measures. Efforts in the research were geared towards finding new protection tools for vulnerable software. The main focus of the study is on the problem of Control Flow Hijacks (CFH) against vulnerable software, particularly for software that was built and executed on the RISC-V architecture. Threat models that were addressed are buffer overflow, stack overflow, return-to-libc, and Return Oriented Programming (ROP). Whilst the primary focus for developing the new protection was on RISC-V-based binaries, programs built on the more widespread x86 architecture were also explored comparatively in the course of this study. The concept of Control Flow Integrity (CFI) was explored in the study and a proof-of-concept for mitigating ROP attacks that result in Denial of Service (DoS) is presented. The concept of CFI involves the enforcement of the intended flow of execution of a vulnerable program. The novel protection is based on the CFI concept combined with inter-process signalling, and is named Inter-Process Control Flow Integrity (IP-CFI). This technique is orthogonal to well-practised software maintenance such as patching/updates and is complementary to it, providing integrity regardless of the exploitation path/vector. In evaluating the tool, it was applied to vulnerable programs and found to promptly identify deviations when ROP attacks lead to DoS, with an average runtime overhead of 0.95%. The system on which the software is embedded is also protected as a result of the watchdog in IP-CFI, where this kind of attack would otherwise have progressed unnoticed. Unlike previous CFI models, IP-CFI extends protection outside the vulnerable program by setting up a mutual collaboration between the protected program and a newly written monitoring program. Products derived in this study are software tools in the form of various Linux scripts that can be used to automate several functionalities, two RISC-V ROP gadget finders (RETGadgets & JALRGadget), and the software protection tool IP-CFI. In this report, software is also referred to as a binary, executable, application, program or process.
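IP-CFI targets compiled RISC-V and x86 binaries, so the following is only a conceptual Python sketch of the inter-process collaboration described above: a monitoring process raises an alarm when the protected process stops signalling at expected checkpoints. All names, timings, and the heartbeat protocol are invented for illustration.

```python
# Conceptual watchdog: a monitor flags a protected process that falls silent.
import os
import signal
import time

TIMEOUT = 2.0            # silence longer than this is treated as a hijack/DoS
last_beat = time.monotonic()

def on_heartbeat(signum, frame):
    global last_beat
    last_beat = time.monotonic()

signal.signal(signal.SIGUSR1, on_heartbeat)   # install before forking

pid = os.fork()
if pid == 0:
    # Child: stand-in for the protected program, signalling at checkpoints.
    ppid = os.getppid()
    for _ in range(5):
        os.kill(ppid, signal.SIGUSR1)   # "control flow reached a checkpoint"
        time.sleep(0.5)
    os._exit(0)                          # then fall silent, as a hijacked run would
else:
    # Parent: the monitoring process (the watchdog role IP-CFI describes).
    while True:
        time.sleep(0.1)
        if time.monotonic() - last_beat > TIMEOUT:
            print("monitor: protected process silent -- possible CFI violation")
            os.waitpid(pid, 0)
            break
```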
-
An Ultra-Energy-Efficient Reversible Quantum-Dot Cellular Automata 8:1 Multiplexer Circuit
Energy efficiency considerations in terms of reduced power dissipation are a significant issue in the design of digital circuits for very large-scale integration (VLSI) systems. Quantum-dot cellular automata (QCA) is an emerging ultralow power dissipation approach, distinct from traditional, complementary metal-oxide semiconductor (CMOS) technology, for building digital computing circuits. Developing fully reversible QCA circuits has the potential to significantly reduce energy dissipation. Multiplexers are fundamental elements in the construction of useful digital circuits. In this paper, a novel, multilayer, fully reversible QCA 8:1 multiplexer circuit with ultralow energy dissipation is introduced. The power dissipation of the proposed multiplexer is simulated using the QCADesigner-E version 2.2 tool, describing the microscopic physical mechanisms underlying the QCA operation. The results show that the proposed reversible QCA 8:1 multiplexer consumes 89% less energy than the most energy-efficient 8:1 multiplexer circuit previously presented in the literature.
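The reversible layout itself cannot be conveyed in software, but the Boolean decomposition commonly used in QCA multiplexer design can: a 2:1 multiplexer built from three majority gates, cascaded in a tree of seven to give 8:1 behaviour. The sketch below is that generic decomposition, not the paper's circuit.

```python
# Boolean model: 8:1 multiplexer as a tree of majority-gate 2:1 multiplexers.
def maj(a: int, b: int, c: int) -> int:
    return int(a + b + c >= 2)

def mux2(a: int, b: int, s: int) -> int:
    # M(M(a, ~s, 0), M(b, s, 0), 1) = a AND (NOT s) OR b AND s
    return maj(maj(a, 1 - s, 0), maj(b, s, 0), 1)

def mux8(inputs: list[int], s2: int, s1: int, s0: int) -> int:
    level1 = [mux2(inputs[i], inputs[i + 1], s0) for i in range(0, 8, 2)]
    level2 = [mux2(level1[i], level1[i + 1], s1) for i in range(0, 4, 2)]
    return mux2(level2[0], level2[1], s2)

data = [0, 1, 1, 0, 1, 0, 0, 1]
assert all(mux8(data, (i >> 2) & 1, (i >> 1) & 1, i & 1) == data[i]
           for i in range(8))   # selects input i for select value i
```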
-
Exploring Mixed Reality Level Design Workflows
The past decade has seen a continual increase in the quality and capability of augmented reality (AR) and virtual reality (VR) devices. Due to this greater capability, there has been an influx of entertainment and serious games developed for these systems. Yet the current workflows for developing VR game levels for entertainment or serious games have remained the same, with developers using a game engine presented on a 2D screen with a traditional mouse and keyboard for input. This thesis explores the use of AR and VR technologies as part of level design workflows used to develop both entertainment and serious VR game levels. Two existing workflows were identified as areas that could be improved by integrating AR and VR technologies as part of the workflow, whilst a third, new workflow was developed which focused on enabling new experiences for players.
Workflow 1 explored using AR to help create a digital map of an existing space to help improve the realism and presence of a VR serious game environment. The initial focus was on improving the workflow for developers of serious game levels.
Workflow 2 focused on improving entertainment VR game level creation through the development of a VR level editor. The focus was on improving the entertainment VR level design process for professional level designers.
Workflow 3 enables new experiences by supporting substitutional reality (SR) level design for players through a mix of both AR and VR technologies. It enables players to develop their own entertainment game levels that support SR using consumer technology.
Each of the three workflows is presented in this thesis along with results from multiple studies. Results from the studies show positive outcomes supporting each of the workflows.
-
Reversible Quantum-dot Cellular Automata-based arithmetic logic unit
Quantum-dot cellular automata (QCA) are a promising nanoscale computing technology that exploits the quantum mechanical tunneling of electrons between quantum dots in a cell and the electrostatic interaction between dots in neighboring cells. QCA can achieve higher speed, lower power, and smaller areas than conventional, complementary metal-oxide semiconductor (CMOS) technology. Developing QCA circuits in a logically and physically reversible manner can provide exceptional reductions in energy dissipation. The main challenge is to maintain reversibility down to the physical level. A crucial component of a computer’s central processing unit (CPU) is the arithmetic logic unit (ALU), which executes multiple logical and arithmetic functions on the data processed by the CPU. Current QCA ALU designs are either irreversible or logically reversible; however, they lack physical reversibility, a crucial requirement for increasing energy efficiency. This paper presents a new multilayer design for a QCA ALU that can carry out 16 different operations and is both logically and physically reversible. The design is based on reversible majority gates, which are its key building blocks. We use the QCADesigner-E software to simulate and evaluate energy dissipation. The proposed logically and physically reversible QCA ALU offers an improvement of 88.8% in energy efficiency. Compared to the next most efficient 16-operation QCA ALU, this ALU uses 51% fewer QCA cells and 47% less area.