https://publikasi.dinus.ac.id/jcta/issue/feed Journal of Computing Theories and Applications 2026-05-31T00:00:00+00:00 JCTA Editorial editorial.jcta@dinus.id Open Journal Systems <div style="border: 3px #086338 Dashed; padding: 10px; background-color: #ffffff; text-align: left;"> <ol> <li><strong>Journal Title </strong>: Journal of Computing Theories and Applications</li> <li><strong>Online ISSN </strong>: <a href="https://portal.issn.org/resource/ISSN/3024-9104">3024-9104</a> </li> <li><strong>Frequency </strong>: Quarterly (February, May, August, and November) </li> <li><strong>DOI Prefix</strong>: 10.62411/jcta</li> <li><strong>Publisher </strong>: Universitas Dian Nuswantoro</li> </ol> </div> <div id="focusAndScope"> <p><strong>Journal of Computing Theories and Applications (JCTA)</strong> is a peer-reviewed international journal that covers all aspects of foundations, theories, and applications in computer science. All accepted articles are published online, assigned a <strong>DOI via Crossref</strong>, and made <strong>freely accessible (Open Access)</strong>. The journal follows a <strong>rapid peer-review</strong> process, with the first decision typically provided within two to four weeks. 
JCTA welcomes original research papers in areas including, but not limited to:</p> <p>Artificial Intelligence<br />Big Data<br />Bioinformatics<br />Biometrics<br />Cloud Computing<br />Computer Graphics<br />Computer Vision<br />Cryptography<br />Data Mining<br />Fuzzy Systems<br />Game Technology<br />Image Processing<br />Information Security<br />Internet of Things<br />Intelligent Systems<br />Machine Learning<br />Mobile Computing<br />Multimedia Technology<br />Natural Language Processing<br />Network Security<br />Pattern Recognition<br />Quantum Informatics<br />Signal Processing<br />Soft Computing<br />Speech Processing</p> <p>Special emphasis is given to recent trends related to cutting-edge research within the domain.</p> </div> https://publikasi.dinus.ac.id/jcta/article/view/15508 Investigating Security Enhancement in Hybrid Clouds via a Blockchain-Fused Privacy Preservation Strategy: Pilot Study 2026-01-11T14:43:54+00:00 Tabitha Chukwudi Aghaunor tabitha.aghaunor@gmail.com Eferhire Valentine Ugbotu eferhire.ugbotu@gmail.com Emeke Ugboh ugboh1972@gmail.com Paul Avwerosuoghene Onoma kenbridge14@gmail.com Frances Uchechukwu Emordi emordi.frances@dou.edu.ng Arnold Adimabua Ojugo ojugo.arnold@fupre.edu.ng Victor Ochuko Geteloma geteloma.victor@fupre.edu.ng Rebecca Okeoghene Idama idama-ro@dsust.edu.ng Peace Oguguo Ezzeh peace.ezzeh@fcetasaba.edu.ng <p>The proliferation of cloud infrastructures has intensified concerns regarding data security, integrity, identity and access management, and user privacy. Despite recent advances, existing solutions often lack comprehensive integration of privacy-preserving mechanisms, dynamic trust management, and cross-provider interoperability. This study proposes an AI-enabled, zero-trust, blockchain-fused identity management framework for secure, privacy-preserving multi-cloud environments. 
The framework integrates homomorphic encryption with differential privacy for aggregate-level protection and secure multi-party computation for collaborative data processing. The proposed system was validated in a simulated multi-cloud environment using CloudSim, Ethereum blockchain, and AWS EC2. Experimental results indicate homomorphic encryption latency of approximately 450 ms per operation and statistically significant improvements in security (t(128) = 12.47, p &lt; 0.001), privacy (t(95) = 8.93, p &lt; 0.001), and throughput (t(156) = 15.21, p &lt; 0.001). The framework achieved differential privacy with ε = 0.1 while retaining 99.2% data utility, and demonstrated a 34% improvement in processing speed over conventional differential privacy approaches. In addition, the implementation was observed to be 2.3× faster than BGV-based configurations, with 45% lower memory consumption than CKKS and a 67% reduction in ciphertext size relative to baseline implementations. From an operational perspective, the framework shows a 23% reduction in security management costs, a 31% improvement in resource utilization efficiency, and an 18% decrease in compliance audit expenses. 
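The ε = 0.1 differential-privacy guarantee reported above can be illustrated, for aggregate queries, with a minimal Laplace-mechanism sketch; the function names, clipping bounds, and sample values below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lower, upper, epsilon):
    """Release a differentially private mean of values clipped to [lower, upper].

    The sensitivity of the clipped mean over n values is (upper - lower) / n,
    so the Laplace noise scale is sensitivity / epsilon.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

# Smaller epsilon means a stricter privacy budget and more injected noise;
# epsilon = 0.1, as in the abstract, is a strict setting.
noisy_mean = private_mean([12.0, 15.0, 9.0, 14.0], lower=0.0, upper=20.0, epsilon=0.1)
```

The 99.2% data-utility figure in the abstract corresponds to how little such calibrated noise distorts the released aggregates.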
The model further indicates a 27% reduction in total cost of ownership (TCO) compared with multi-vendor security solutions, a projected return on investment (ROI) within 14 months, and an 89% reduction in security incident response costs under the evaluated conditions.</p> 2026-02-24T00:00:00+00:00 Copyright (c) 2026 Tabitha Chukwudi Aghaunor, Eferhire Valentine Ugbotu, Emeke Ugboh, Paul Avwerosuoghene Onoma, Frances Uchechukwu Emordi, Arnold Adimabua Ojugo, Victor Ochuko Geteloma, Rebecca Okeoghene Idama, Peace Oguguo Ezzeh https://publikasi.dinus.ac.id/jcta/article/view/15811 Behavioral Malware Detection via API Call Sequences: A Comparative Study of LSTM and Transformer Architectures Using NLP-Inspired Representations 2026-02-27T05:49:55+00:00 Anusree K J anunair0603@gmail.com Narottam Das Patel narottamdaspatel@vitbhopal.ac.in Saravanan D saravanan.d@vitbhopal.ac.in Adarsh Patel adarsh.patel@vitbhopal.ac.in <p>The increasing sophistication of malware has rendered traditional signature-based detection methods insufficient, necessitating behavior-driven and adaptive analytical frameworks. This study presents a sequential deep learning framework that models system-level API call sequences as structured linguistic representations for behavioral malware detection. Unlike conventional comparative studies, this work systematically evaluates recurrent and attention-based architectures under controlled experimental conditions, with a particular focus on generalization performance and overfitting mitigation. Two neural architectures, a Long Short-Term Memory (LSTM) network and a Transformer-based attention model, are trained on publicly available API call sequence data for binary classification of malicious and benign executables. Beyond standard accuracy metrics, the study further examines model stability, convergence behavior, and the impact of long-range dependency modeling on detection robustness. 
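Modeling API call sequences as "structured linguistic representations," as described above, amounts to building a vocabulary over call names and mapping each trace to a fixed-length integer sequence before it reaches the LSTM or Transformer. A minimal sketch, in which the reserved ids, padding scheme, and example call names are illustrative assumptions rather than the paper's pipeline:

```python
from typing import Dict, List

PAD, UNK = 0, 1  # reserved ids for padding and out-of-vocabulary calls

def build_vocab(traces: List[List[str]]) -> Dict[str, int]:
    """Assign an integer id to every distinct API call name."""
    vocab: Dict[str, int] = {}
    for trace in traces:
        for call in trace:
            if call not in vocab:
                vocab[call] = len(vocab) + 2  # skip the two reserved ids
    return vocab

def encode(trace: List[str], vocab: Dict[str, int], max_len: int) -> List[int]:
    """Map a trace to a fixed-length integer sequence (truncate, then pad)."""
    ids = [vocab.get(call, UNK) for call in trace[:max_len]]
    return ids + [PAD] * (max_len - len(ids))

traces = [["NtCreateFile", "NtWriteFile", "NtClose"],
          ["NtOpenProcess", "NtWriteVirtualMemory"]]
vocab = build_vocab(traces)
encoded = [encode(t, vocab, max_len=4) for t in traces]
# encoded[0] → [2, 3, 4, 0]
```

The resulting integer sequences play the same role as token ids in NLP, which is what lets standard sequence architectures be reused for behavioral detection.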
Experimental results demonstrate that the Transformer architecture achieves superior performance, attaining 95.54% classification accuracy and consistent improvements in precision, recall, and F1-score, indicating a stronger ability to capture complex behavioral dependencies. These findings highlight the effectiveness of attention mechanisms in behavioral malware modeling and provide empirical evidence that NLP-inspired architectures offer a robust and scalable approach for real-world cybersecurity applications.</p> 2026-04-03T00:00:00+00:00 Copyright (c) 2026 Anusree K J, Narottam Das Patel, Saravanan D, Adarsh Patel https://publikasi.dinus.ac.id/jcta/article/view/15863 Attention-Augmented GRU for Stock Forecasting: A Trade-Off Between Directional Accuracy and Price Prediction Error 2026-03-15T10:44:21+00:00 R. Daniel Hartanto daniel_hartanto@semarangkota.go.id Guruh Fajar Shidik guruh.fajar@research.dinus.ac.id Farrikh Alzami alzami@dsn.dinus.ac.id Ahmad Zainul Fanani a.zainul.fanani@dsn.dinus.ac.id Aris Marjuni aris.marjuni@dsn.dinus.ac.id Abdul Syukur abah.syukur01@dsn.dinus.ac.id <p>Attention mechanisms have been widely incorporated into recurrent neural network architectures for financial time series forecasting, with most prior work reporting improvements in price-level error metrics. This study revisits that claim through a controlled empirical comparison of four deep learning architectures on nearly two decades of Telkom Indonesia (TLKM) closing price data from the Indonesia Stock Exchange (IDX). The models evaluated are a three-layer Gated Recurrent Unit (GRU) baseline, a comparable Long Short-Term Memory (LSTM) network, a Bahdanau end-attention GRU (Attn-GRU-V2), and a multi-head self-attention GRU hybrid (Attn-GRU-V3). Each architecture is trained over 30 independent runs with distinct random seeds, and performance is reported as 95% confidence intervals derived from the t-distribution. 
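The 95% confidence intervals over 30 seeded runs described above follow the standard t-interval on the run-level scores. A minimal stdlib sketch, where the critical value 2.045 for 29 degrees of freedom comes from standard t-tables and the RMSE values are illustrative, not the paper's results:

```python
import math
from statistics import mean, stdev

def t_confidence_interval(scores, t_crit):
    """95% CI for the mean: mean ± t_crit * s / sqrt(n)."""
    n = len(scores)
    m, s = mean(scores), stdev(scores)
    half_width = t_crit * s / math.sqrt(n)
    return m - half_width, m + half_width

# 30 RMSE values from independently seeded runs (illustrative numbers).
rmse_runs = [94.0 + 0.1 * (i % 5) for i in range(30)]
lo, hi = t_confidence_interval(rmse_runs, t_crit=2.045)  # df = n - 1 = 29
```

Reporting the interval rather than a single best run is what makes the cross-architecture comparisons in the study statistically meaningful.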
Statistical comparisons employ the Wilcoxon signed-rank test, a nonparametric paired test appropriate given the confirmed non-normality of residuals. The main finding is a consistent trade-off: the plain GRU achieves the lowest RMSE (94.02 ± 1.22 IDR) across all 30 runs, while Attn-GRU-V2 achieves the highest directional accuracy (45.91 ± 0.09%), surpassing GRU in every independent run. Bahdanau attention weights are nearly uniform across the 30-day lookback window (coefficient of variation: 3.21%), indicating that the mechanism cannot identify selectively informative timesteps in this univariate price series. This finding is consistent with the weak-form Efficient Market Hypothesis for the Indonesian market. An ablation study reveals that a 20-day lookback window maximizes directional accuracy (47.72 ± 0.21%) for the Attn-GRU-V2 model. These results suggest that Bahdanau end-attention consistently and significantly improves directional accuracy relative to a plain GRU baseline, providing an architecturally attributable advantage for direction-based applications, even when absolute price-level error is not reduced. The directional accuracy values remaining below 50% across all models are consistent with a weak-form efficiency characterization of the Indonesian market.</p> 2026-04-06T00:00:00+00:00 Copyright (c) 2026 R. Daniel Hartanto, Guruh Fajar Shidik, Farrikh Alzami, Ahmad Zainul Fanani, Aris Marjuni, Abdul Syukur https://publikasi.dinus.ac.id/jcta/article/view/15870 Understanding Customer Churn in Retail Banking through Explainable Predictive Analytics: Evidence of a Product Paradox 2026-03-17T02:53:10+00:00 Patrick Ndabarishye patrick.ndabarishye@gmail.com Ajay Kumar Singh ajay41274@gmail.com <p>The retention of customers in the retail banking sector is a critical economic imperative; however, predictive modeling is frequently hindered by severe class imbalance and the “Black Box” nature of complex algorithms. 
This study proposes a Heterogeneous Stacking Ensemble framework integrating XGBoost, CatBoost, and Random Forest base learners with a Logistic Regression meta-learner to forecast customer attrition. To overcome the pervasive “Majority Class Bias,” we introduce a “Dual-Imbalance Defense” that synergizes the Synthetic Minority Over-sampling Technique (SMOTE) with algorithmic cost-sensitive penalization. Furthermore, moving beyond standard accuracy metrics, the framework mathematically derives a dynamic classification threshold to guarantee a strict 0.90 recall rate, actively optimizing the capture of at-risk capital. Model opacity is addressed through the integration of a SHapley Additive exPlanations (SHAP) TreeExplainer. This cooperative game theory approach provides localized, customer-level “Reason Codes” for regulatory compliance and reveals global systemic vulnerabilities, including non-linear drivers such as the “Product Paradox.” Achieving a 0.90 recall rate and an AUC of 0.8654, this framework provides a statistically robust and operationally transparent tool for targeted customer retention.</p> 2026-04-10T00:00:00+00:00 Copyright (c) 2026 Patrick Ndabarishye, Ajay Kumar Singh https://publikasi.dinus.ac.id/jcta/article/view/15866 Log-Transformed Regime-Based Prediction of Cloud Job Length Using Machine Learning 2026-03-16T15:11:22+00:00 Ardi Pujiyanta ardipujiyanta@tif.uad.ac.id Bambang Robiin bambang.robiin@tif.uad.ac.id Faisal Fajri Rahani faisal.fajri@tif.uad.ac.id <p>Cloud job-length prediction remains challenging when the target distribution is highly skewed and contains rare extreme values. This study proposes a log-transformed, regime-based machine learning framework for robust prediction of cloud job length, represented in million instructions (MI). The approach integrates sequential feature engineering, logarithmic target transformation, weighted learning, and regime-aware modeling to distinguish between normal and extreme job-length behavior. 
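The logarithmic target transformation named above is the standard recipe for heavy-tailed regression targets: train on log(1 + y) and invert predictions with exp(·) − 1, optionally tagging each job with a length regime. A minimal sketch in plain Python; the 525,000 MI regime cutoff below reuses the 95th percentile quoted later in this abstract, and the helper names are illustrative, not the paper's code:

```python
import math

def to_log(y_mi):
    """Compress a heavy-tailed job-length target (in MI) with log1p."""
    return [math.log1p(y) for y in y_mi]

def from_log(y_log):
    """Invert the transformation after prediction with expm1."""
    return [math.expm1(y) for y in y_log]

def regime_labels(y_mi, threshold_mi):
    """Split jobs into 'normal' vs 'extreme' regimes by a length cutoff."""
    return ["extreme" if y >= threshold_mi else "normal" for y in y_mi]

jobs = [93_000, 129_662, 525_000, 900_000]   # MI values quoted in the abstract
log_target = to_log(jobs)                    # a regressor is fit on this scale
recovered = from_log(log_target)             # predictions map back to MI
regimes = regime_labels(jobs, threshold_mi=525_000)
```

Fitting on the log scale stops the rare 900,000 MI jobs from dominating the squared-error loss, while the regime labels let a separate model (or weighting) handle the extreme tail.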
Using an ordered GoCJ-derived cloud job-length sequence of 1000 jobs, the dataset exhibits a heavy-tailed distribution, with a mean of 129,662 MI, a median of 93,000 MI, a 95th percentile of 525,000 MI, a 99th percentile of 900,000 MI, and a skewness of 3.695. The proposed model is evaluated against sequential baselines and stronger machine learning baselines, including Naive_Last, RollingMean_5, Global_Log_ExtraTrees, RandomForest, GradientBoosting, and MLP_Log. On the main test split, the proposed Regime_Log_ExtraTrees achieved the best RMSE of 206,255.66 and the least negative R² of −0.01062, while Global_Log_ExtraTrees remained competitive in terms of MAE, MedAE, and RMSLE. Additional walk-forward validation confirms that the regime-aware model consistently achieves the best mean RMSE and mean R² across temporal folds. Ablation results further show that regime-aware learning is the primary contributor to robustness, although accurate prediction of extreme jobs remains challenging. These findings indicate that log-transformed, regime-based learning provides a practical and more robust strategy for cloud job-length prediction under heavy-tailed workload conditions.</p> 2026-04-21T00:00:00+00:00 Copyright (c) 2026 Ardi Pujiyanta, Bambang Robiin, Faisal Fajri Rahani https://publikasi.dinus.ac.id/jcta/article/view/15875 Dual-Domain Temporal–Spatial Denoising Approach for Autism Spectrum Disorder EEG Signals Based on Stationary Wavelet Transform and SPHARA 2026-03-18T00:46:51+00:00 Cut Siti Azola Syiva csitia@mhs.usk.ac.id Melinda Melinda melinda@usk.ac.id Syahrial Syahrial syahrial@usk.ac.id Imam Fathur Rahman imamfth@mhs.usk.ac.id Souvik Das rndas9@gmail.com M. Ary Heryanto m.aryheryanto@dsn.dinus.ac.id <p>Electroencephalography (EEG) signals are highly susceptible to noise and artifacts, which can degrade analysis accuracy, particularly in Autism Spectrum Disorder (ASD) studies. 
Therefore, effective preprocessing is required to improve signal quality prior to further analysis. This study proposes an integrated EEG preprocessing pipeline that combines a Finite Impulse Response (FIR) band-pass filter (0.5–70 Hz) with notch filtering and detrending, followed by temporal denoising using the Stationary Wavelet Transform (SWT) with the Daubechies 4 mother wavelet and spatial filtering based on SPHARA. This dual-domain approach is designed to address both temporal and spatial noise in multichannel EEG signals. Experimental results demonstrate that the proposed pipeline combining FIR filtering with SWT and SPHARA consistently outperforms single-domain preprocessing methods, achieving a maximum Signal-to-Noise Ratio (SNR) of 31.93 dB. The proposed method also produces the lowest Mean Absolute Error (MAE) (16.81 µV) and Standard Deviation (SD) (0.75 µV), indicating high signal stability with minimal amplitude distortion. Root Mean Square Error (RMSE) values remain stable within the range of 29.5–592.3 µV, with a minimum RMSE of 29.5 µV, demonstrating effective noise suppression while preserving signal energy. These results confirm that integrating temporal and spatial preprocessing significantly improves EEG signal quality and supports more reliable EEG analysis for ASD-related studies.</p> 2026-04-23T00:00:00+00:00 Copyright (c) 2026 Cut Siti Azola Syiva, Melinda Melinda, Syahrial Syahrial, Imam Fathur Rahman, Souvik Das, M. 
Ary Heryanto https://publikasi.dinus.ac.id/jcta/article/view/15943 Enhancing Software Defect Prediction through Hybrid Multi-Filter Feature Selection and Imbalance Handling 2026-04-04T09:03:54+00:00 Muhammad Khalid Maulana khaaliddd055@gmail.com Setyo Wahyu Saputro setyo.saputro@ulm.ac.id Mohammad Reza Faisal reza.faisal@ulm.ac.id Radityo Adi Nugroho radityo.adi@ulm.ac.id As’ary Ramadhan as’ary.ramadhan@ulm.ac.id <p>Software Defect Prediction (SDP) aims to identify defective modules early in the software development lifecycle to improve software quality and reduce maintenance costs. However, SDP datasets commonly suffer from high dimensionality, feature redundancy, and class imbalance, which can degrade model performance and stability. This study proposes a hybrid feature selection framework to address these challenges and enhance prediction performance. The proposed approach integrates Combined Correlation and Mutual Information (CONMI), which combines the Pearson Correlation Coefficient (PCC) and Mutual Information (MI) to capture both linear and nonlinear feature relevance. The selected features are further refined through Top-K selection, correlation-based filtering to reduce multicollinearity, and Backward Elimination (BE) to obtain an optimal feature subset. To address class imbalance, SMOTE-Tomek is applied by combining over-sampling and data cleaning techniques. Experiments are conducted on twelve NASA MDP datasets using Logistic Regression (LR) and Naïve Bayes (NB) classifiers. The results show that the proposed framework consistently achieves the best performance, with Logistic Regression combined with SMOTE-Tomek obtaining the highest average AUC of 0.7923 ± 0.0714, while NB achieves 0.7554 ± 0.0580. Statistical analysis using a paired t-test indicates that the proposed method significantly outperforms MI+SMOTE-Tomek and BE+SMOTE-Tomek for Logistic Regression, whereas no significant differences are observed for NB. 
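The paired t-test used above for per-dataset AUC comparisons reduces to a one-sample t-test on the per-dataset differences. A minimal stdlib sketch; the AUC values are illustrative placeholders, not the paper's results:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    """t = mean(d) / (s_d / sqrt(n)) for paired differences d = a - b."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Per-dataset AUC pairs for two pipelines (illustrative values).
auc_proposed = [0.81, 0.78, 0.84, 0.76, 0.80, 0.79]
auc_baseline = [0.78, 0.75, 0.80, 0.75, 0.77, 0.78]
t_stat = paired_t_statistic(auc_proposed, auc_baseline)
# Compare |t_stat| against the t critical value for n - 1 degrees of freedom.
```

Pairing by dataset removes between-dataset variance from the comparison, which is why the test can detect a consistent per-dataset advantage even when absolute AUC levels differ widely across the twelve NASA MDP datasets.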
In addition to improving overall classification performance (AUC), the proposed approach also enhances minority class detection, as reflected in improved Recall and F1-score. Overall, the proposed hybrid framework provides an effective and reliable solution for software defect prediction, particularly for high-dimensional and imbalanced datasets.</p> 2026-04-24T00:00:00+00:00 Copyright (c) 2026 Muhammad Khalid Maulana, Setyo Wahyu Saputro, Mohammad Reza Faisal, Radityo Adi Nugroho, As’ary Ramadhan