Staff within the Department of Computer Science have research interests in Visualization, Interaction & Computer Graphics (with a particular focus on Medical Graphics), Cyber Security and Discrete Optimisation.

Recent Submissions

  • Strong convergence for efficient full discretization of the stochastic Allen-Cahn equation with multiplicative noise

    Qi, Xiao; Wang, Lihua; Yan, Yubin; Jianghan University; University of Chester (Elsevier, 2025-04-25)
    In this paper, we study the strong convergence of the full discretization based on a semi-implicit tamed approach in time and the finite element method with truncated noise in space for the stochastic Allen-Cahn equation driven by multiplicative noise. The proposed fully discrete scheme is efficient thanks to its low computational complexity and mean-square unconditional stability. The low regularity of the solution caused by the multiplicative infinite-dimensional driving noise, together with the non-global Lipschitz difficulty introduced by the cubic nonlinear drift term, makes the strong convergence analysis of the fully discrete solution considerably more involved. By constructing an appropriate auxiliary procedure, the full discretization error can be cleverly decomposed, and the spatio-temporal strong convergence order is successfully derived under certain weak assumptions. Finally, numerical experiments are reported to validate the theoretical results.
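
    For orientation, a standard form of the equation studied is sketched below; the paper's precise coefficients, noise assumptions, and boundary conditions may differ.

    ```latex
    \[
      \mathrm{d}u = \bigl(\Delta u + u - u^{3}\bigr)\,\mathrm{d}t
                  + g(u)\,\mathrm{d}W(t), \qquad u(0) = u_{0},
    \]
    % The cubic drift term $u - u^{3}$ is only locally Lipschitz (the
    % non-global Lipschitz difficulty), and $W$ is an infinite-dimensional
    % Wiener process driving the multiplicative noise $g(u)\,\mathrm{d}W$.
    ```
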
  • EffUnet-SpaGen: An efficient and spatial generative approach to glaucoma detection

    Adithya, Venkatesh Krishna; Williams, Bryan M.; Czanner, Silvester; Kavitha, Srinivasan; Friedman, David S.; Willoughby, Colin E.; Venkatesh, Rengaraj; Czanner, Gabriela; Aravind Eye Care System; Lancaster University; Liverpool John Moores University; Harvard Medical School; Ulster University (MDPI, 2021-05-30)
    Current research in automated disease detection focuses on making algorithms "slimmer": reducing the need for large training datasets and accelerating recalibration for new data, while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, "EffUnet", with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed "SpaGen". We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, "EffUnet-SpaGen", is optimized to significantly reduce the computational burden while surpassing the current state of the art in glaucoma detection, with AUROC values of 0.997 and 0.969 on the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
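
    As a rough illustration of the geometric information a cup/disc segmentation exposes, the sketch below computes the vertical cup-to-disc ratio from binary masks. This is a standard fundus feature, not the paper's SpaGen model, and the function names are hypothetical.

    ```python
    import numpy as np

    def vertical_extent(mask: np.ndarray) -> int:
        """Height in pixels of the region marked True in a binary mask."""
        rows = np.flatnonzero(mask.any(axis=1))
        return int(rows.max() - rows.min() + 1) if rows.size else 0

    def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
        """Vertical cup-to-disc ratio, a common glaucoma indicator."""
        disc_h = vertical_extent(disc_mask)
        return vertical_extent(cup_mask) / disc_h if disc_h else 0.0

    # toy masks: disc spans rows 10-49, cup spans rows 20-39 -> CDR = 0.5
    disc = np.zeros((64, 64), dtype=bool); disc[10:50, 10:50] = True
    cup = np.zeros((64, 64), dtype=bool); cup[20:40, 20:40] = True
    print(vertical_cdr(cup, disc))  # 0.5
    ```
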
  • Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review

    Coan, Lauren J.; Williams, Bryan M.; Krishna Adithya, Venkatesh; Upadhyaya, Swati; Alkafri, Ala; Czanner, Silvester; Venkatesh, Rengaraj; Willoughby, Colin E.; Kavitha, Srinivasan; Czanner, Gabriela; et al. (Elsevier, 2022-08-17)
    Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, built on predefined sets of rules; and 2) machine learning/statistical modelling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
  • Evaluating the severity of trust to Identity-Management-as-a-Service

    Mpofu, Nkosinathi; Van Staden, Wynand J. C.; University of South Africa; University of Chester (IEEE, 2017-08)
    The benefits of a cloud service have been well documented: reduced cost in both staff and computing infrastructure, rapid deployment, scalability, and location independence. Despite all these benefits, Identity-Management-as-a-Service (IdMaaS) is struggling to gain a market presence due to an array of factors, one of which is trust. In IdMaaS, trust may either be borne within the relationships amongst the actors (relying party, identity manager, identity owner, or end user) or may be actor specific. This paper focuses on trust between the identity owner and the identity manager within the context of third-party identity management. The efforts of other researchers in identifying trust issues are acknowledged; however, they did not go as far as measuring the severity of trust specifically related to IdMaaS. Our research shows that availability of the identity management system and security of identities are more critical concerns than the cost of managing identities and fear of vendor lock-in. Above all, the research revealed that trust in IdMaaS is less than 40% at a 95% level of confidence. Establishing the severity of trust and its contributing factors provides valuable input to the refinement of the IdMaaS approach. The success of IdMaaS will add to the domain of anything-as-a-service (XaaS) while opening up an additional entrepreneurial avenue.
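
    As a worked illustration of the kind of statistical claim above (mean trust below 40% at 95% confidence), the sketch below runs a one-sided test on synthetic trust scores; the paper's actual survey instrument and data are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical trust scores (% of full trust) from surveyed identity owners
    scores = np.array([28, 35, 41, 22, 38, 30, 45, 26, 33, 37])

    # one-sided test of H1: mean trust < 40%, at the 5% significance level
    t, p = stats.ttest_1samp(scores, popmean=40, alternative="less")

    # 95% one-sided upper confidence bound on the mean trust level
    upper = scores.mean() + stats.t.ppf(0.95, len(scores) - 1) * stats.sem(scores)
    print(f"p = {p:.3f}, upper bound on mean trust = {upper:.1f}%")
    ```
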
  • IoT embedded software manipulation

    Underhill, Paul; University of Chester (2023-03)
    The Internet of Things (IoT) has raised cybersecurity and privacy issues, notably about altering embedded software. This poster investigates the feasibility of using Read-Only Memory (ROM) at a low level to modify IoT devices while remaining undetectable to users and security systems. The study explores the vulnerabilities in embedded code and firmware, which are frequently proprietary and inaccessible, making them challenging to safeguard efficiently. The methodology uses a black-box forensic technique to acquire software, identify functions, and create test cases to assess potential alterations. The findings aim to contribute to a better understanding of IoT security concerns, emphasising the importance of upgraded firmware protection methods. This research highlights the challenges of detecting low-level attacks on IoT devices and provides insights into improving embedded system security.
  • Usability testing of VR reconstructions for museums and heritage sites: A case study from 14th century Chester (UK)

    Southall, Helen; University of Chester (tbc, 2025)
    This paper reports research on the usability of a 3D Virtual Reality (VR) model of the interior of St. John’s Church, Chester, as it probably appeared in the 14th Century. A VR visualization was created in Unity, based on archive data and historical records. This was adapted for use with Oculus Quest 2 VR headsets. Participants took part in usability tests of the experience, providing both qualitative and quantitative usability data. Although created with modest time and financial resources, the experience received a good overall usability rating, and numerous positive comments, including from novice VR users. Negative comments mainly related to the experience of wearing a VR headset. This paper concludes by suggesting further work, with thoughts on highly immersive VR in heritage contexts, especially combined with recent developments in generative artificial intelligence.
  • Unlocking trust: Advancing activity recognition in video processing – Say no to bans!

    Yousuf, Muhammad Jehanzaib; Lee, Brian; Asghar, Mamoona Naveed; Ansari, Mohammad Samar; Kanwal, Nadia; Technological University of the Shannon; University of Galway; University of Chester; Keele University (IEEE, 2024-11-20)
    Anonymous activity recognition is pivotal in addressing privacy concerns amidst the widespread use of facial recognition technologies (FRTs). While FRTs enhance security and efficiency, they raise significant privacy issues. Anonymous activity recognition circumvents these concerns by focusing on identifying and analysing activities without individual identification. It preserves privacy while extracting valuable insights and patterns. This approach ensures a balance between security and privacy in surveillance-heavy environments such as public spaces and workplaces. It detects anomalies and suspicious behaviours without compromising individual identities. Moreover, it promotes fairness by avoiding biases inherent in FRTs, thus mitigating discriminatory outcomes. Here we propose a privacy-preserving activity recognition framework to augment facial recognition technologies. The goal of this framework is to provide activity recognition of individuals without violating their privacy. Our approach is based on extracting Regions of Interest (ROIs) using YOLOv7-based instance segmentation and selectively encrypting the ROIs with the AES encryption algorithm. Furthermore, we investigate training deep learning models on privacy-preserved video datasets, utilising the aforementioned privacy protection scheme. We developed and trained a CNN-LSTM-based activity recognition model, achieving a classification accuracy of 94%. The outcomes from training and testing deep learning algorithms on encrypted data illustrate significant classification and detection accuracy, even when dealing with privacy-protected data. Furthermore, we establish the trustworthiness and explainability of our activity recognition model by using Grad-CAM analysis and assessing it against the Assessment List for Trustworthy Artificial Intelligence (ALTAI).
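
    A minimal sketch of the selective-encryption step, assuming AES in CTR mode (the abstract names AES but not the mode) and a hand-supplied bounding box standing in for a YOLOv7-derived ROI:

    ```python
    import os
    import numpy as np
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_roi(frame: np.ndarray, box, key: bytes, nonce: bytes) -> np.ndarray:
        """Encrypt the pixels inside one region of interest with AES-CTR."""
        x0, y0, x1, y1 = box
        roi = frame[y0:y1, x0:x1].copy()
        cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
        ct = cipher.encryptor().update(roi.tobytes())
        out = frame.copy()
        out[y0:y1, x0:x1] = np.frombuffer(ct, dtype=roi.dtype).reshape(roi.shape)
        return out

    key, nonce = os.urandom(16), os.urandom(16)
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    protected = encrypt_roi(frame, (100, 50, 220, 260), key, nonce)
    ```

    With CTR, applying the same key/nonce pair again recovers the region, so an authorised party could restore identities while the stored video stays anonymised.
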
  • Immersive haptic simulation for training nurses in emergency medical procedures

    Gutiérrez-Fernández, Alexis; Fernández-Llamas, Camino; Vázquez-Casares, Ana M.; Mauriz, Elba; Riego-del-Castillo, Virginia; John, Nigel W.; University of León; University of Chester (Springer Nature, 2024-01-24)
    The use of haptic simulation for emergency procedures in nursing training presents a viable, versatile and affordable alternative to traditional mannequin environments. In this paper, an evaluation is performed in a virtual environment with a head-mounted display and haptic devices, and also with a mannequin. We focus on chest decompression, a life-saving invasive procedure used for trauma-associated cardiopulmonary resuscitation (and other causes) that every emergency physician and/or nurse needs to master. Participants’ heart rate and blood pressure were monitored to measure their stress level. In addition, the NASA Task Load Index questionnaire was used. The results confirm the usability of the VR environment and show that it provides a higher level of immersion than the mannequin, with no statistically significant difference in cognitive load, although the VR task is perceived as more difficult. We can conclude that the use of haptic-enabled virtual reality simulators has the potential to provide an experience as stressful as the real one while training in a safe and controlled environment.
  • Designing a Quantum-Dot Cellular Automata-Based Half-Adder Circuit Using Partially Reversible Majority Gates

    Alharbi, Mohammed; Edwards, Gerard; Stocker, Richard; Liverpool John Moores University; University of Chester (IEEE, 2024-09-16)
    Developing quantum-dot cellular automata (QCA) digital circuits reversibly leads to substantial reductions in energy dissipation. However, this is usually accompanied by time delays and accompanying increases in the circuit cost metric. In this study, an innovative, partially reversible design method is presented to address the latency and circuit cost limitations of reversible design methods. The proposed partially reversible design method serves as a middle ground between fully reversible and conventional irreversible design methodologies. Compared with irreversible design methods, the partially reversible design method still optimises energy efficiency. Moreover, the partially reversible design method improves the speed and decreases the circuit cost in comparison with fully reversible design techniques. The key ingredient of the proposed partially reversible design methodology is the introduction of a partially reversible majority gate element building block. To validate the effectiveness of the proposed partially reversible design approach, a novel partially reversible half-adder circuit is designed and simulated using the QCADesigner-E 2.2 simulation tool. This tool provides numerical results for the circuit input/output response and heat dissipation at the physical level, within a microscopic quantum mechanical model.
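
    The Boolean behaviour of a majority-gate half-adder can be sketched in a few lines. This shows only the logic-level view, using a textbook decomposition; the paper's partially reversible layout and its energy characteristics are properties of the QCA design, not of the Boolean function.

    ```python
    def NOT(a: int) -> int:
        return 1 - a

    def M(a: int, b: int, c: int) -> int:
        """Three-input majority gate, the primary QCA building block."""
        return (a & b) | (b & c) | (a & c)

    def half_adder(a: int, b: int):
        carry = M(a, b, 0)                          # M(a, b, 0) = a AND b
        s = M(M(a, NOT(b), 0), M(NOT(a), b, 0), 1)  # XOR from majority gates + inverters
        return s, carry

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"a={a} b={b} -> sum={s} carry={c}")
    ```
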
  • Development and Validation of Embedded Device for Electrocardiogram Arrhythmia Empowered with Transfer Learning

    Asif, Rizwana Naz; Abbas, Sagheer; Khan, Muhammad Adnan; Rahman, Atta-ur; Sultan, Kiran; Mahmud, Maqsood; Mosavi, Amir; Rehman, Ateeq Ur; National College of Business Administration and Economics (Pakistan); Gachon University; University of Chester; University of Bahrain; Slovak University of Technology in Bratislava (Hindawi, 2022-10-07)
    With the emergence of the Internet of Things (IoT), the investigation of different diseases in healthcare has improved, and cloud computing has helped to centralize data and make patient records accessible throughout the world. The electrocardiogram (ECG) is used in this setting to diagnose heart diseases and abnormalities. Machine learning techniques have been applied previously, but they are feature-based and less accurate than transfer learning; we therefore propose the development and validation of an embedded device for ECG arrhythmia empowered with transfer learning (DVEEA-TL). This model combines hardware, software, and two datasets that are augmented and fused, and it achieves higher accuracy than previous work in this area. In the proposed model, a new dataset is created by combining a Kaggle dataset with real-time recordings from healthy and unhealthy subjects, after which the AlexNet transfer learning approach is applied to obtain more accurate readings of ECG signals. The DVEEA-TL model diagnoses heart abnormalities with accuracies of 99.9% and 99.8% in the training and validation stages, respectively, making it a more reliable approach than previous research in this field.
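
    A hedged sketch of AlexNet-style transfer learning in PyTorch: pretrained convolutional features are frozen and only a replaced classifier head is trained. The two-class setup and optimiser settings are illustrative assumptions, and the paper's preprocessing of ECG signals into images is not shown.

    ```python
    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # load ImageNet-pretrained AlexNet and freeze its feature extractor
    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    for p in model.features.parameters():
        p.requires_grad = False

    # replace the final layer for the new task (e.g. normal vs. arrhythmic)
    num_classes = 2
    model.classifier[6] = nn.Linear(4096, num_classes)

    # only the new head's weights are updated during training
    optimizer = optim.Adam(model.classifier[6].parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    ```

    Updating only the head is what makes recalibration on a new or fused ECG dataset fast relative to training from scratch.
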
  • Assessing Type Agreeability in the Unified Model of Personality and Play Styles

    Brooke, Alexander; Crossley, Matthew; Lloyd, Huw; Cunningham, Stuart; Manchester Metropolitan University; University of Chester (IEEE, 2024-08-08)
    Classifying players into well-defined groups can be useful when designing games and gamified systems, with many models relating to player or personality ‘type’. The Unified Model of Personality and Play Styles groups together many player and personality taxonomies, but whilst similarities have been noted in previous work, the overlap between models has not been analysed ahead of its use. This study provides evidence both for and against aspects of the Unified Model, with model agreeability assessed through comparison of participant classifications. Results show that representations of types related by the Unified Model correlate significantly more strongly than types unrelated by the model, but only with weak-to-moderate correlation coefficients. Ranking classifications yields results that map better to the Unified Model, but also reduces the overall strength of correlations between types. The Unified Model is therefore considered fit for purpose as an explanatory tool, but without additional study it should be used with caution in further use cases.
  • Comparing the performance of statistical, machine learning, and deep learning algorithms to predict time-to-event: A simulation study for conversion to mild cognitive impairment

    Billichová, Martina; Coan, Lauren Joyce; Czanner, Silvester; Kováčová, Monika; Sharifian, Fariba; Czanner, Gabriela; Slovak University of Technology in Bratislava; Liverpool John Moores University; University of Chester (Public Library of Science, 2024-01-22)
    Mild Cognitive Impairment (MCI) is a condition characterized by a decline in cognitive abilities, specifically in memory, language, and attention, that is beyond what is expected due to normal aging. Detection of MCI is crucial for providing appropriate interventions and slowing down the progression of dementia. Several automated predictive algorithms exist for time-to-event data, but it is not clear which is best for predicting the time to conversion to MCI. It is also unclear whether algorithms with fewer training weights are less accurate. We compared three algorithms, from smaller to larger numbers of training weights: a statistical predictive model (Cox proportional hazards model, CoxPH), a machine learning model (Random Survival Forest, RSF), and a deep learning model (DeepSurv). To compare the algorithms under different scenarios, we created a simulated dataset based on the Alzheimer NACC dataset. We found that the CoxPH model was among the best-performing models in all simulated scenarios. At a larger sample size (n = 6,000), the deep learning algorithm (DeepSurv) exhibited accuracy (73.1%) comparable to the CoxPH model (73%). In the past, ignoring heterogeneity in the CoxPH model led to the conclusion that deep learning methods are superior. We found that when the CoxPH model accounts for heterogeneity, its accuracy is comparable to that of DeepSurv and RSF. Furthermore, when unobserved heterogeneity is present, such as missing features in the training, all three models showed a similar drop in accuracy. This simulation study suggests that in some applications an algorithm with a smaller number of training weights is not disadvantaged in terms of accuracy. Since algorithms with fewer weights are inherently easier to explain, this study can help artificial intelligence research develop a principled approach to comparing statistical, machine learning, and deep learning algorithms for time-to-event predictions.
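
    A minimal sketch of the smallest of the three models, a Cox proportional hazards fit, using the lifelines library and toy data; the study itself used simulated data based on the NACC dataset, and its heterogeneity modelling is not reproduced here.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # toy time-to-event frame: duration, event indicator, and two covariates
    df = pd.DataFrame({
        "years_to_mci": [2.1, 5.3, 3.7, 8.0, 1.2, 6.4, 4.5, 7.2],
        "converted":    [1,   0,   1,   0,   1,   1,   0,   1],
        "age":          [71,  65,  74,  62,  78,  64,  66,  73],
        "memory_score": [0.6, 0.9, 0.5, 1.0, 0.4, 0.8, 0.7, 0.5],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years_to_mci", event_col="converted")
    cph.print_summary()  # hazard ratios and confidence intervals per covariate
    ```
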
  • Developing a framework for enhancing security testing of android applications

    Lamina, Adedeji Olaniyi; Yussuf, Moshood Folawiyo; Oyinloye, Toyosi; Oladokun, Pelumi; Brown, Victor Kamalu; University of Chester; Western Illinois University; Southeast Missouri State University; University of East London (GSC Online Press, 2024-09)
    Mobile applications have advanced considerably and now offer several features that help make our lives easier. Android is currently the most popular mobile operating system, and it is susceptible to exploitation attempts by malicious entities, which has led to an increased focus on the security of Android applications. This dissertation proposes a framework that provides a systematic approach to testing the security of Android applications, developed on the basis of a comprehensive review of existing security testing methodologies and tools. In achieving the study objectives, a test application was run on an emulator; Burp Suite was used as a proxy tool to capture HTTP and HTTPS traffic for analysis; reverse engineering was carried out; static and dynamic analysis were executed; network traffic was captured and analysed with tcpdump and Wireshark; intent sniffing was performed; fuzz testing was discussed; and a proof-of-concept automation script was developed. This work covers various aspects of Android application security testing, and the proposed framework gives developers a practical and effective approach to testing the security of their Android applications, thereby improving the overall security of the Android application ecosystem.
  • The Affective Audio Dataset (AAD) for Non-musical, Non-vocalized, Audio Emotion Research

    Ridley, Harrison; Cunningham, Stuart; Darby, John; Henry, John; Stocker, Richard; University of Chester; Manchester Metropolitan University (IEEE, 2024-08-02)
    The Affective Audio Dataset (AAD) is a new and novel dataset of non-musical, non-anthropomorphic sounds intended for use in affective research. Sounds are annotated for their affective qualities by sets of human participants. The dataset was created in response to a lack of suitable datasets within the domain of audio emotion recognition. A total of 780 sounds were selected from the BBC Sounds Library. Participants were recruited online and asked to rate a subset of sounds based on how they made them feel, with each sound rated for arousal and valence. While the ratings were broadly evenly distributed, there was a bias towards the low-valence, high-arousal quadrant, which also displayed a greater range of ratings than the others. The AAD is compared with existing datasets to check its consistency and validity, with differences in data collection methods and intended use cases highlighted. Using a subset of the data, the online ratings were validated against an in-person data collection experiment, with findings strongly correlating. The AAD is used to train a basic affect-prediction model and the results are discussed. Uses of this dataset include human-emotion research, cultural studies, other affect-based research, and industry applications such as audio post-production, gaming, and user-interface design.
  • Meaningful automated feedback on Object-Oriented program development tasks in Java

    Muncey, Andrew; Morgan, Mike; Cunningham, Stuart; University of Chester (Association for Computing Machinery (ACM), 2024-11-06)
    Automation has been used to assess student programming tasks for over 60 years. As well as assessing work, it can also be used to provide feedback, commonly through the use of unit tests or evaluation of program output. This typically requires a structure to be provided, for example a method stub or programming to an interface. This scaffolded approach is required in statically typed, object-oriented languages such as Java because, if tests rely on non-existent code, compilation will fail. Previous studies identified that for many tools, feedback is limited to a comparison of the student’s solution with a reference, the results of unit tests, or how actual output compares with what is expected. This paper discusses a tool that provides automated textual feedback on programming tasks. This tool, the “Java Object-Oriented Feedback Tool” (JOOFT), allows the instructor to write unit tests for as-yet-unwritten code, with their own feedback, almost as easily as writing a standard unit test. JOOFT also provides additional, customisable feedback for student errors that might occur in the process of writing code, such as specifying an incorrect parameter type for a method. A randomised trial of the tool was carried out with novice student programmers (n=109), who completed a lab task on the design of a class, 52 of them with assistance from the tool. Whilst students provided positive feedback on tool usage, performance in a later assessment of class creation suggests student outcomes were not affected.
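
    JOOFT itself targets Java, where tests against missing code will not even compile; the Python sketch below only illustrates the underlying pattern of tests written before the student code exists, with each failure carrying instructor-authored feedback. The module and class names are hypothetical.

    ```python
    import unittest

    class RectangleFeedbackTest(unittest.TestCase):
        """A test authored before the student code exists."""

        def test_area(self):
            try:
                from shapes import Rectangle  # hypothetical student module
            except ImportError:
                self.fail("Feedback: create a module 'shapes' containing a Rectangle class.")
            r = Rectangle(3, 4)
            if not callable(getattr(r, "area", None)):
                self.fail("Feedback: Rectangle needs an area() method with no parameters.")
            self.assertEqual(r.area(), 12, "Feedback: area() should return width * height.")

    if __name__ == "__main__":
        unittest.main()
    ```
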
  • Hybrid Quantum-Dot Cellular Automata Nanocomputing Circuits

    Alharbi, Mohammed; Edwards, Gerard; Stocker, Richard; Liverpool John Moores University; University of Chester (MDPI, 2024-07-13)
    Quantum-dot cellular automata (QCA) is an emerging transistor-less field-coupled nanocomputing (FCN) approach to ultra-scale ‘nanochip’ integration. In QCA, electrostatic repulsion between electrons and electron tunnelling in quantum dots are used to represent digital circuitry. QCA technology can surpass conventional complementary metal oxide semiconductor (CMOS) technology in terms of clock speed, occupied chip area, and energy efficiency. To develop QCA circuits, irreversible majority gates are typically used as the primary components. Recently, some studies have introduced reversible design techniques, using reversible majority gates as the main building block, to develop ultra-energy-efficient QCA circuits. However, this approach results in time delays, an increase in the number of QCA cells used, and an increase in the chip area occupied. This work introduces a novel hybrid design strategy employing irreversible, reversible, and partially reversible QCA gates to establish an optimal balance between power consumption, delay time, and occupied area. This hybrid technique gives the designer more control over the circuit characteristics to meet different system needs. A combination of reversible, irreversible, and innovative partially reversible majority gates is used in the proposed hybrid design method. We evaluated the hybrid design method by examining the half-adder circuit as a case study. We developed four hybrid QCA half-adder circuits, each of which simultaneously incorporates various types of majority gates. The QCADesigner-E 2.2 simulation tool was used to simulate the performance and energy efficiency of the half-adders. This tool provides numerical results for the circuit input/output response and heat dissipation at the physical level within a microscopic quantum mechanical model.
  • Using Voice Input to Control and Interact With a Narrative Video Game

    Copaceanu, Andrei; Weinel, Jonathan; Cunningham, Stuart; University of Greenwich; University of Chester (BCS Learning and Development, 2024-07)
    With the advancement of artificial intelligence (AI) over recent years, especially the breakthrough that OpenAI achieved with the natural language generative model ChatGPT, virtual assistants and voice-interactive devices such as Amazon’s Alexa or Apple’s Siri have become popular with the general public. This is due to their ease of use, accessibility, and ability to be used without physical interaction. In the video games industry, there have been attempts to implement voice input as a core mechanic, with varying levels of success. Ultimately, voice input has mostly been used as a separate mechanic or as an alternative to traditional input methods. This project investigates different methods of using voice input to control and interact with a narrative video game. The research analyses which method is most effective in facilitating player control of the game and identifies challenges related to implementation. This paper also includes a work-in-progress demonstration of a voice-activated game made in Unreal Engine.
  • Audience perceptions of Foley footsteps and 3D realism designed to convey walker characteristics

    Cunningham, Stuart; McGregor, Iain; University of Chester; Edinburgh Napier University (Springer, 2024-06-11)
    Foley artistry is an essential part of the audio post-production process for film, television, games, and animation. By extension, it is as crucial in emergent media such as virtual, mixed, and augmented reality. Footsteps are a core activity that a Foley artist must undertake and convey information about the characters and environment presented on-screen. This study sought to identify if characteristics of age, gender, weight, health, and confidence could be conveyed, using sounds created by a professional Foley artist, in three different 3D humanoid models, following a single walk cycle. An experiment was conducted with human participants (n=100) and found that Foley manipulations could convey all the intended characteristics with varying degrees of contextual success. It was shown that the abstract 3D models were capable of communicating characteristics of age, gender, and weight. A discussion of the literature and inspection of related audio features with the Foley clips suggest signal parameters of frequency, envelope, and novelty may be a subset of markers of those perceived characteristics. The findings are relevant to researchers and practitioners in linear and interactive media and demonstrate mechanisms by which Foley can contribute useful information and concepts about on-screen characters.
  • Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression

    Adhuran, Jayasingam; Khan, Nabeel; Martini, Maria; Kingston University London; University of Chester (MDPI, 2024-02-21)
    Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages over synchronous capture (frame-based video) include low power consumption, a high dynamic range, an extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), which consists of the time aggregation of NVS events in the form of pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to another of our previous works, where time aggregation was not used). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in compression ratio is highest for low-event-rate and low-complexity scenes, whereas the improvement is minimal for high-complexity and high-event-rate scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, the compression gains are lower than those of state-of-the-art approaches for time aggregation intervals of less than 5 ms.
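
    A minimal sketch of the time-aggregation step shared by TALVEN and TALEN-PCC: events are binned into per-pixel count histograms over fixed intervals (5 ms below, matching the threshold discussed in the abstract). Polarity channels and the subsequent point-cloud conversion are omitted.

    ```python
    import numpy as np

    def aggregate_events(events: np.ndarray, width: int, height: int,
                         interval_us: int) -> np.ndarray:
        """Bin (x, y, t) events into per-pixel count histograms per time window."""
        xs = events[:, 0].astype(int)
        ys = events[:, 1].astype(int)
        ts = events[:, 2]
        n_windows = int(ts.max() // interval_us) + 1
        hist = np.zeros((n_windows, height, width), dtype=np.uint16)
        w = (ts // interval_us).astype(int)       # window index of each event
        np.add.at(hist, (w, ys, xs), 1)           # accumulate event counts
        return hist

    # toy events: columns are x, y, timestamp (microseconds)
    events = np.array([[10, 5, 100], [10, 5, 900], [3, 2, 6000]])
    frames = aggregate_events(events, width=640, height=480, interval_us=5000)
    ```
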
  • An Ultra-Energy-Efficient Reversible Quantum-Dot Cellular Automata 8:1 Multiplexer Circuit

    Alharbi, Mohammed; Edwards, Gerard; Stocker, Richard; Liverpool John Moores University; University of Chester (MDPI, 2024-01-16)
    Energy efficiency considerations in terms of reduced power dissipation are a significant issue in the design of digital circuits for very large-scale integration (VLSI) systems. Quantum-dot cellular automata (QCA) is an emerging ultralow power dissipation approach, distinct from traditional complementary metal-oxide semiconductor (CMOS) technology, for building digital computing circuits. Developing fully reversible QCA circuits has the potential to significantly reduce energy dissipation. Multiplexers are fundamental elements in the construction of useful digital circuits. In this paper, a novel, multilayer, fully reversible QCA 8:1 multiplexer circuit with ultralow energy dissipation is introduced. The power dissipation of the proposed multiplexer is simulated using the QCADesigner-E version 2.2 tool, describing the microscopic physical mechanisms underlying the QCA operation. The results show that the proposed reversible QCA 8:1 multiplexer consumes 89% less energy than the most energy-efficient 8:1 multiplexer circuit previously presented in the literature.
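
    For the logic-level view only (reversibility and energy figures are properties of the QCA layout, not of the Boolean function), an 8:1 multiplexer can be built as a tree of seven 2:1 multiplexers, each realised with three majority gates and one inverter:

    ```python
    def NOT(a: int) -> int:
        return 1 - a

    def M(a: int, b: int, c: int) -> int:
        """Three-input majority gate."""
        return (a & b) | (b & c) | (a & c)

    def mux2(a: int, b: int, s: int) -> int:
        """2:1 multiplexer from three majority gates and one inverter."""
        return M(M(a, NOT(s), 0), M(b, s, 0), 1)

    def mux8(d, s0, s1, s2):
        """8:1 multiplexer as a tree of seven 2:1 multiplexers."""
        level1 = [mux2(d[i], d[i + 1], s0) for i in range(0, 8, 2)]
        level2 = [mux2(level1[0], level1[1], s1), mux2(level1[2], level1[3], s1)]
        return mux2(level2[0], level2[1], s2)

    data = [0, 1, 1, 0, 1, 0, 0, 1]
    assert all(mux8(data, i & 1, (i >> 1) & 1, (i >> 2) & 1) == data[i]
               for i in range(8))
    ```
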

View more