https://publikasi.dinus.ac.id/jcta/issue/feedJournal of Computing Theories and Applications2025-11-30T00:00:00+00:00JCTA Editorialeditorial.jcta@dinus.idOpen Journal Systems<div style="border: 3px #086338 dashed; padding: 10px; background-color: #ffffff; text-align: left;"> <ol> <li><strong>Journal Title </strong>: Journal of Computing Theories and Applications</li> <li><strong>Online ISSN </strong>: <a href="https://portal.issn.org/resource/ISSN/3024-9104">3024-9104</a> </li> <li><strong>Frequency </strong>: Quarterly (February, May, August, and November) </li> <li><strong>DOI Prefix</strong>: 10.62411/jcta</li> <li><strong>Publisher </strong>: Universitas Dian Nuswantoro</li> </ol> </div> <div id="focusAndScope"> <p><strong>Journal of Computing Theories and Applications (JCTA)</strong> is a peer-reviewed international journal that covers all aspects of foundations, theories, and practical applications in computer science. All accepted articles are published online, assigned a <strong>DOI via Crossref</strong>, and made <strong>freely accessible (Open Access)</strong>. The journal follows a <strong>rapid peer-review</strong> process, with the first decision typically provided within two to four weeks.
JCTA welcomes original research papers in areas including, but not limited to:</p> <p>Artificial Intelligence<br />Big Data<br />Bioinformatics<br />Biometrics<br />Cloud Computing<br />Computer Graphics<br />Computer Vision<br />Cryptography<br />Data Mining<br />Fuzzy Systems<br />Game Technology<br />Image Processing<br />Information Security<br />Internet of Things<br />Intelligent Systems<br />Machine Learning<br />Mobile Computing<br />Multimedia Technology<br />Natural Language Processing<br />Network Security<br />Pattern Recognition<br />Quantum Informatics<br />Signal Processing<br />Soft Computing<br />Speech Processing</p> <p><br />Special emphasis is given to recent trends related to cutting-edge research within the domain.</p> </div> <div id="peerReviewProcess"> </div> <div id="sponsors"> <p> </p> </div>https://publikasi.dinus.ac.id/jcta/article/view/14235Fake News Detection Using Bi-LSTM Architecture: A Deep Learning Approach on the ISOT Dataset2025-08-17T15:05:20+00:00Maaruf M. Lawalmmlawal80@gmail.comAbdulrashid Abdulraufabdulrashid.abdulrauf2022@nda.edu.ng<p>The proliferation of fake news across digital platforms has raised critical concerns about information reliability. A notable example is the viral rumour falsely claiming that the Nigerian Minister of the Federal Capital Territory, Nyesom Wike, had collapsed at an event and was rushed to an undisclosed hospital, an entirely fabricated claim that caused public confusion. While both traditional machine learning and deep learning approaches have been explored for automated fake news detection, many existing models have been limited to topic-specific datasets and often suffer from overfitting, especially on smaller datasets like ISOT. This study addresses these challenges by proposing a standalone Bidirectional Long Short-Term Memory (BiLSTM) model for fake news classification using the ISOT dataset.
Unlike multi-modal frameworks such as the state-of-the-art MM-FND model, which achieved 96.3% accuracy, the proposed BiLSTM model achieved superior results with 98.98% accuracy, 98.22% precision, 99.65% recall, and a 98.93% F1-score. The model demonstrated balanced classification across both fake and real news and exhibited strong generalization capabilities. However, training and validation performance plots revealed signs of overfitting after epoch 2, suggesting the need for regularization in future work. This study contributes to the growing body of research on fake news detection by showcasing the efficacy of a focused, sequential deep learning model over more complex architectures, offering a practical, scalable, and robust solution to misinformation detection.</p>2025-09-03T00:00:00+00:00Copyright (c) 2025 Maaruf M. Lawal, Abdulrashid Abdulraufhttps://publikasi.dinus.ac.id/jcta/article/view/14455Hybrid Dynamic Programming Healthcare Cloud-Based Quality of Service Optimization2025-08-25T14:53:04+00:00Nengak I. Sitlongiliya_sitlong@yahoo.comAbraham E. Evwiekpaefeaeevwiekpaefe@nda.edu.ngMartins E. Irhebhudemirhebhude@nda.edu.ng<p>The integration of Internet of Things (IoT) with cloud computing has revolutionized healthcare systems, offering scalable and real-time patient monitoring. However, optimizing response times and energy consumption remains crucial for efficient healthcare delivery. This research evaluates various algorithmic approaches for workload migration and resource management within IoT cloud-based healthcare systems.
The performance of the implemented algorithm in this research, Hybrid Dynamic Programming and Long Short-Term Memory (Hybrid DP+LSTM), was analyzed against six other key algorithms, namely Gradient Optimization with Back Propagation to Input (GOBI), Deep Reinforcement Learning (DRL), improved GOBI (GOBI2), Predictive Offloading for Network Devices (POND), Mixed Integer Linear Programming (MILP), and Genetic Algorithm (GA), based on their average response time and energy consumption. Hybrid DP+LSTM achieves the lowest response time (82.91ms) with an energy consumption of 2,835,048 joules per container. The outcome of the analysis showed that Hybrid DP+LSTM achieves significant response-time improvements of 89.3%, 79.0%, 83.8%, 97.0%, 99.8%, and 99.94% over GOBI, GOBI2, DRL, POND, MILP, and GA, respectively. In terms of energy consumption, Hybrid DP+LSTM outperforms other approaches, with GOBI2 (3,664,337 joules) consuming 29.3% more energy, DRL (2,973,238 joules) consuming 4.9% more, GOBI (4,463,010 joules) consuming 57.4% more, POND (3,310,966 joules) consuming 16.8% more, MILP (3,005,498 joules) consuming 6.0% more, and the GA (3,959,935 joules) consuming 39.7% more. An ablation study shows that the Hybrid DP+LSTM model achieves a 47.05% improvement over DP-only (156.57ms) and a 70.64% improvement over LSTM-only (282.41ms) in response time. On the energy efficiency side, Hybrid DP+LSTM shows a 22.80% improvement over LSTM-only (3,671,51 joules), but a 7.34% underperformance compared to DP-only (2,640,93 joules). These research findings indicate that the Hybrid DP+LSTM technique provides the best trade-off between response time and energy efficiency. Future research should further explore hybrid approaches to optimize these metrics in IoT cloud-based healthcare systems.</p>2025-09-26T00:00:00+00:00Copyright (c) 2025 Nengak I. Sitlong, Abraham E. Evwiekpaefe, Martins E.
Irhebhudehttps://publikasi.dinus.ac.id/jcta/article/view/14043Predicting First-Year Student Performance with SMOTE-Enhanced Stacking Ensemble and Association Rule Mining for University Success Profiling2025-08-08T00:40:19+00:00Philippe Boribo Kikundakikunda.boribo@ucbukavu.ac.cdIssa Tasho Kasongotasho.issa@gmail.comThierry Nsabimanathierry.nsabimana@ub.edu.biJérémie Ndikumagengejeremie.ndikumagenge@ub.edu.biLongin Ndayisabalongin.ndayisaba@ub.edu.biElie Zihindula Mushengezieliezihindula@yahoo.frJules Raymond Kalaraymondkala1@gmail.com<p>This study examines the application of Educational Data Mining (EDM) to predict the academic performance of first-year students at the Catholic University of Bukavu and the Higher Institute of Education (ISP) in the Democratic Republic of Congo. The primary objective is to develop a model that can identify at-risk students early, providing the university with a tool to enhance student support and academic guidance. To address the challenges posed by data imbalance (where successful cases outnumber failures), the study adopts a hybrid methodological approach. First, the SMOTE algorithm was applied to balance the dataset. Then, a stacking classification model was developed to combine the predictive power of multiple algorithms. The variables used for prediction include the National Exam score (PEx), the secondary school track (Humanities), and the type of prior institution (public, private, or religious-affiliated schools), as well as age and sex. The results demonstrate that this approach is highly effective. The model is not only capable of predicting success or failure but also of forecasting students' performance levels (e.g., honors or distinctions). Moreover, the use of the Apriori association rule mining algorithm allowed the identification of faculty-specific success profiles, transforming prediction into an interpretable decision-support tool. This research makes several significant contributions.
Practically, it provides the University of Bukavu with a tool for student orientation and early risk detection. Methodologically, it illustrates the effectiveness of a combined approach to EDM in an African context. However, the study acknowledges certain limitations, including the non-public nature of the data and the geographical specificity of the sample. It therefore proposes avenues for future research, such as the integration of Explainable AI (XAI) techniques for more refined and transparent analysis of the results.</p>2025-09-30T00:00:00+00:00Copyright (c) 2025 Philippe Boribo Kikunda, Issa Tasho Kasongo, Thierry Nsabimana, Jérémie Ndikumagenge, Longin Ndayisaba, Elie Zihindula Mushengezi, Jules Raymond Kalahttps://publikasi.dinus.ac.id/jcta/article/view/14472Investigating a SMOTE-Tomek Boosted Stacked Learning Scheme for Phishing Website Detection: A Pilot Study2025-09-25T00:13:18+00:00Eferhire Valentine Ugbotueferhire.ugbotu@gmail.comFrances Uchechukwu Emordiemordi.frances@dou.edu.ngEmeke Ugbohugboh1972@gmail.comKizito Eluemunor Anaziaanaziake@dsust.edu.ngChristopher Chukwufunaya Odiakaoseosegalaxy@gmail.comPaul Avwerosuoghene Onomakenbridge14@gmail.comRebecca Okeoghene Idamaidamaro@dsust.edu.ngArnold Adimabua Ojugoojugo.arnold@fupre.edu.ngVictor Ochuko Getelomageteloma.victor@fupre.edu.ngAmanda Enaodona Oweimieotuoweimieotuamanda@edwinclarkuniversity.edu.ngTabitha Chukwudi Aghaunortabitha.aghaunor@gmail.comAmaka Patience Binitieamaka.binitie@fcetasaba.edu.ngAnne Odohaodoh@pau.edu.ngChris Chukwudi Onochiextoline2@gmail.comPeace Oguguo Ezzehpeace.ezzeh@fcetasaba.edu.ngAndrew Okonji Ebokaebokaandrew@gmail.comJoy Agboiagboijoy0@gmail.comPatrick Ogholuwarami Ejehpatrick.ejeh@dou.edu.ng<p>The daily exchange of information over the Internet has eased the widespread distribution of resources, improving the accessibility, availability, and interoperability of the accompanying devices.
In addition, the recent widespread proliferation of smartphones alongside other computing devices has continued to advance features such as miniaturization, portability, ease of data access, and mobility. It has also birthed adversarial attacks that target network infrastructures and aim to exploit interconnected and shared resources. These exploits seek to compromise unsuspecting user devices. The increased susceptibility to and success rate of these attacks have been traced to users' personality traits and behaviours, which render them repeatedly vulnerable to such exploits, especially those spread across spoofed websites as malicious content. Our study posits a stacked, transfer-learning approach that seeks to classify malicious content deployed by adversaries on spoofed phishing websites. Our stacked approach uses three base classifiers, namely a Cultural Genetic Algorithm, Random Forest, and Kohonen Modular Neural Network, whose outputs are utilized as input for an XGBoost meta-learner. Major challenges with learning schemes are the selection of appropriate features for estimation and the imbalanced nature of the explored dataset, in which the target class is often underrepresented. Our study resolved the dataset imbalance using the SMOTE-Tomek method, while predictor selection was handled using relief rank feature selection. Results show that our hybrid yields an F1 of 0.995, Accuracy of 0.997, Recall of 0.998, Precision of 1.000, AUC-ROC of 0.997, and Specificity of 1.000, accurately classifying all 2,764 cases of its held-out test dataset. Results affirm that it outperformed benchmark ensembles.
Results show that the proposed model, using the UCI Phishing Websites dataset, effectively classifies phishing content (cues and lures) on websites.</p>2025-10-01T00:00:00+00:00Copyright (c) 2025 Eferhire Valentine Ugbotu, Frances Uchechukwu Emordi, Emeke Ugboh, Kizito Eluemunor Anazia, Christopher Chukwufunaya Odiakaose, Paul Avwerosuoghene Onoma, Rebecca Okeoghene Idama, Arnold Adimabua Ojugo, Victor Ochuko Geteloma, Amanda Enaodona Oweimieotu, Tabitha Chukwudi Aghaunor, Amaka Patience Binitie, Anne Odoh, Chris Chukwudi Onochie, Peace Oguguo Ezzeh, Andrew Okonji Eboka, Joy Agboi, Patrick Ogholuwarami Ejehhttps://publikasi.dinus.ac.id/jcta/article/view/14620EDANet: A Novel Architecture Combining Depthwise Separable Convolutions and Hybrid Attention for Efficient Tomato Disease Recognition2025-09-03T17:21:29+00:00Yusuf Ibrahimyibrahim@abu.edu.ngMuyideen O. Momohmomuyadeen@gmail.comKafayat O. Shobowalekshobowale@afit.edu.ngZainab Mukhtar Abubakarzmabubakar@abu.edu.ngBasira Yahayabyahaya@abu.edu.ng<p>Tomato crop yields face significant threats from plant diseases, with existing deep learning solutions often computationally prohibitive for resource-constrained agricultural settings; to address this gap, we propose Efficient Disease Attention Network (EDANet), a novel lightweight architecture combining depthwise separable convolutions with hybrid attention mechanisms for efficient tomato disease recognition. Our approach integrates channel and spatial attention within hierarchical blocks to prioritize symptomatic regions while utilizing depthwise decomposition to reduce parameters to only 104,043 (multiple times smaller than MobileNet and EfficientNet). Evaluated on ten tomato disease classes from PlantVillage, EDANet achieves 97.32% accuracy and exceptional (~1.00) micro-AUC, with perfect recognition of Mosaic virus (100% F1-score) and robust performance on challenging cases like Early blight (93.2% F1) and Target Spot (93.6% F1).
The architecture processes 128×128 RGB images in ~23ms on standard CPUs, enabling real-time field diagnostics without GPU dependencies. This work bridges laboratory AI and practical farm deployment by optimizing the accuracy-efficiency tradeoff, providing farmers with an accessible tool for early disease intervention in resource-limited environments.</p>2025-10-02T00:00:00+00:00Copyright (c) 2025 Yusuf Ibrahim, Muyideen O. Momoh, Kafayat O. Shobowale, Zainab Mukhtar Abubakar, Basira Yahayahttps://publikasi.dinus.ac.id/jcta/article/view/14873Integrating Quantum, Deep, and Classic Features with Attention-Guided AdaBoost for Medical Risk Prediction2025-10-08T01:59:57+00:00Muh Galuh Surya Putra Kusumamuhgaluhspk@gmail.comDe Rosal Ignatius Moses Setiadimoses@dsn.dinus.ac.idWise Herowatiwise.herowati@dsn.dinus.ac.idT. Sutojosutojo@dsn.dinus.ac.idPrajanto Wahyu Adiprajanto@live.undip.ac.idPushan Kumar Duttapkdutta@kol.amity.eduMinh T. Nguyennguyentuanminh@tnut.edu.vn<p>Chronic diseases such as chronic kidney disease (CKD), diabetes, and heart disease remain major causes of mortality worldwide, highlighting the need for accurate and interpretable diagnostic models. However, conventional machine learning methods often face challenges of limited generalization, feature redundancy, and class imbalance in medical datasets. This study proposes an integrated classification framework that unifies three complementary feature paradigms: classical tabular attributes, deep latent features extracted through an unsupervised Long Short-Term Memory (LSTM) encoder, and quantum-inspired features derived from a five-qubit circuit implemented in PennyLane. These heterogeneous features are fused using a feature-wise attention mechanism combined with an AdaBoost classifier to dynamically weight feature contributions and enhance decision boundaries. 
Experiments were conducted on three benchmark medical datasets—CKD, early-stage diabetes, and heart disease—under both balanced and imbalanced configurations using stratified five-fold cross-validation. All preprocessing and feature extraction steps were carefully isolated within each fold to ensure fair evaluation. The proposed hybrid model consistently outperformed conventional and ensemble baselines, achieving peak accuracies of 99.75% (CKD), 96.73% (diabetes), and 91.40% (heart disease) with corresponding ROC AUCs up to 1.00. Ablation analyses confirmed that attention-based fusion substantially improved both accuracy and recall, particularly under imbalanced conditions, while SMOTE contributed minimally once feature-level optimization was applied. Overall, the attention-guided AdaBoost framework provides a robust and interpretable approach for clinical risk prediction, demonstrating that integrating diverse quantum, deep, and classical representations can significantly enhance feature discriminability and model reliability in structured medical data.</p>2025-10-11T00:00:00+00:00Copyright (c) 2025 Muh Galuh Surya Putra Kusuma, De Rosal Ignatius Moses Setiadi, Wise Herowati, T. Sutojo, Prajanto Wahyu Adi, Minh T. 
Nguyen, Pushan Kumar Duttahttps://publikasi.dinus.ac.id/jcta/article/view/14661Transformer-Augmented Deep Learning Ensemble for Multi-Modal Neuroimaging-Based Diagnosis of Amyotrophic Lateral Sclerosis2025-09-12T03:24:07+00:00Clive Asuaiasuaiebomagune@gmail.comMayor Andrewmayor.andrew@ogharapoly.edu.ngAyigbe Prince Arinomorayigbe.prince@ogharapoly.edu.ngDaniel Ezekiel Ogheneochukodaniel.ezekiel@ogharapoly.edu.ngAghoghovia Agajere Joseph-Brownagajere.brown@ogharapoly.edu.ngIghere Meritighere.merit@ogharapoly.edu.ngAtumah Collinsatumah.collins@ogharapoly.edu.ng<p>Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder that presents significant diagnostic challenges due to its heterogeneous clinical manifestations and symptom overlap with other neurological conditions. Early and accurate diagnosis is critical for initiating timely interventions and improving patient outcomes. Traditional diagnostic approaches rely heavily on clinical expertise and manual interpretation of neuroimaging data, such as structural MRI, Diffusion Tensor Imaging (DTI), and functional MRI (fMRI), which are inherently time-consuming and prone to interobserver variability. Recent advances in Artificial Intelligence (AI) and Deep Learning (DL) have demonstrated potential for automating neuroimaging analysis, yet existing models often suffer from limited generalizability across modalities and datasets. To address these limitations, we propose a Transformer-augmented deep learning ensemble framework for automated ALS diagnosis using multi-modal neuroimaging data. The proposed architecture integrates Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Vision Transformers (ViTs) to leverage the complementary strengths of spatial, temporal, and global contextual feature representations. An adaptive weighting-based fusion mechanism dynamically integrates modality-specific outputs, enhancing the robustness and reliability of the final diagnosis. 
Comprehensive preprocessing steps, including intensity normalization, motion correction, and modality-specific data augmentation, are employed to ensure cross-modality consistency. Evaluation using 5-fold cross-validation on a curated multi-modal ALS neuroimaging dataset demonstrates the superior performance of the proposed model, achieving a mean classification accuracy of 94.5% ± 0.7%, precision of 93.9% ± 0.8%, recall of 92.9% ± 0.9%, F1-score of 93.4% ± 0.7%, specificity of 97.4% ± 0.6%, and AUC-ROC of 0.968 ± 0.004. These results significantly outperform baseline CNN models and highlight the potential of transformer-augmented ensembles in complex neurodiagnostic applications. This framework offers a promising tool for clinicians, supporting early and precise ALS detection and enabling more personalized and effective patient management strategies.</p>2025-10-13T00:00:00+00:00Copyright (c) 2025 Clive Asuai, Mayor Andrew, Ayigbe Prince Arinomor, Daniel Ezekiel Ogheneochuko, Aghoghovia Agajere Joseph-Brown, Ighere Merit, Atumah Collinshttps://publikasi.dinus.ac.id/jcta/article/view/14793Evaluating Open-Source Machine Learning Project Quality Using SMOTE-Enhanced and Explainable ML/DL Models2025-10-01T00:31:21+00:00Ali Hamzaalihamza.369.2@gmail.comWahid Hussainwahidhussaiin@gmail.comHassan Iftikharhassan.iftikhar@skoltech.ruAziz Ahmadaakhmad@edu.hse.ruAlamgir Md Shamimmalamgir@edu.hse.ru<p style="font-weight: 400;">The rapid growth of open-source software (OSS) in machine learning (ML) has intensified the need for reliable, automated methods to assess project quality, particularly as OSS increasingly underpins critical applications in science, industry, and public infrastructure. This study evaluates the effectiveness of a diverse set of machine learning and deep learning (ML/DL) algorithms for classifying GitHub OSS ML projects as engineered or non-engineered using a SMOTE-enhanced and explainable modeling pipeline.
The dataset used in this research includes both numerical and categorical attributes representing documentation, testing, architecture, community engagement, popularity, and repository activity. After handling missing values, standardizing numerical features, encoding categorical variables, and addressing the inherent class imbalance using the Synthetic Minority Oversampling Technique (SMOTE), seven different classifiers—K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), XGBoost (XGB), Logistic Regression (LR), Support Vector Machine (SVM), and a Deep Neural Network (DNN)—were trained and evaluated. Results show that LR (84%) and DNN (85%) outperform all other models, indicating that both linear and moderately deep non-linear architectures can effectively capture key quality indicators in OSS ML projects. Additional explainability analysis using SHAP reveals consistent feature importance across models, with documentation quality, unit testing practices, architectural clarity, and repository dynamics emerging as the strongest predictors. These findings demonstrate that automated, explainable ML/DL-based quality assessment is both feasible and effective, offering a practical pathway for improving OSS sustainability, guiding contributor decisions, and enhancing trust in ML-based systems that depend on open-source components.</p>2025-11-16T00:00:00+00:00Copyright (c) 2025 Ali Hamza, Wahid Hussain, Hassan Iftikhar, Aziz Ahmad, Alamgir Md Shamimhttps://publikasi.dinus.ac.id/jcta/article/view/14990ArchEvolve: A Collaborative and Interactive Search-Based Framework with Preference Learning for Optimizing Software Architectures2025-10-28T15:37:35+00:00Ayobami E. Mesioyemesioyeae@mcu.edu.ngAdesola M. Faladefaladeam@mcu.edu.ngKayode E. Akinolaakinolake@mcu.edu.ng<p>The use of Search-Based Software Engineering (SBSE) for optimizing software architecture has evolved from fully automated to interactive approaches, integrating human expertise. 
However, current interactive tools face limitations: they typically support only single decision-makers, confine architects to passive roles, and induce significant cognitive fatigue from repetitive evaluations. These issues disconnect them from modern, team-based software development, where collaboration and consensus are crucial. To address these shortcomings, we propose "ArchEvolve," a novel framework designed to facilitate collaborative, multi-architect decision-making. ArchEvolve employs a cooperative coevolutionary model that concurrently evolves a population of candidate architectures and distinct populations representing each architect's unique preferences. This structure guides the search towards high-quality consensus solutions that accommodate diverse, often conflicting, stakeholder viewpoints. An integrated Artificial Neural Network (ANN) serves as a preference learning module, trained on explicit team feedback to act as a surrogate evaluator. This active learning cycle substantially reduces the number of required human interactions and alleviates user fatigue. Empirical evaluation on two industrial case studies (E-Commerce System and Healthcare Management System) compared ArchEvolve to a state-of-the-art interactive baseline. Results indicate that ArchEvolve achieves statistically significant improvements in both solution quality and consensus-building. The preference learning module demonstrated over 90% accuracy in predicting team ratings and reduced human evaluations by up to 46% without compromising final solution quality. ArchEvolve provides a practical, scalable framework supporting collaborative, consensus-driven architectural design, making interactive optimization a more viable and efficient tool for real-world software engineering teams by intelligently integrating cooperative coevolutionary search with a preference learning surrogate.</p>2025-11-20T00:00:00+00:00Copyright (c) 2025 Ayobami E. Mesioye, Adesola M. Falade, Kayode E. 
Akinolahttps://publikasi.dinus.ac.id/jcta/article/view/14901A Solar-Powered Multimodal IoT Framework for Real-Time Transformer Theft Detection2025-10-11T03:41:31+00:00Promise Ojokohpromiseojokoh@gmail.comOlaide Agboladeoaagbolade@futa.edu.ng<p>Power transformer theft, a pervasive issue disrupting critical infrastructure, necessitates the development of cost-effective and energy-autonomous security solutions. This paper presents the design and implementation of a detection-focused anti-theft framework that integrates a Raspberry Pi Zero W, camera module, and passive infrared (PIR) motion sensors powered by a solar system for continuous monitoring. The system is designed for remote, off-grid deployment, utilizing a headless Raspberry Pi powered by a 5V solar panel and power bank to ensure energy autonomy. Upon motion detection, captured images are processed locally on the edge device using OpenCV’s Haar Cascade classifier, optimized for upper-body detection to minimize false positives and confirm human presence before an alert is sent to the mobile application, emphasizing real-time operation and low latency. Once an intrusion is confirmed, the images are saved locally and uploaded via the Secure File Transfer Protocol to a custom-developed Android application. The app provides a dedicated remote monitoring interface, enabling secure file transfer and system access, while providing users with immediate notifications and image management capabilities. The system emphasizes low power consumption, real-time operation, and low deployment cost. Tests over 200 triggered events under varied environmental conditions achieved 90% detection accuracy with an average latency of 4.5 s. Solar autonomy was maintained for approximately 24 h under normal operation.
It is concluded that the integration of solar power, edge-based image processing, and mobile monitoring provides a feasible, scalable, and financially viable framework for securing transformers, especially in resource-constrained environments.</p>2025-11-27T00:00:00+00:00Copyright (c) 2025 Promise Ojokoh, Olaide Agbolade