https://publikasi.dinus.ac.id/jcta/issue/feedJournal of Computing Theories and Applications2026-05-31T00:00:00+00:00JTCA Editorialeditorial.jcta@dinus.idOpen Journal Systems<div style="border: 3px #086338 Dashed; padding: 10px; background-color: #ffffff; text-align: left;"> <ol> <li><strong>Journal Title </strong>: Journal of Computing Theories and Applications</li> <li><strong>Online ISSN </strong>: <a href="https://portal.issn.org/resource/ISSN/3024-9104">3024-9104</a> </li> <li><strong>Frequency </strong>: Quarterly (February, May, August, and November) </li> <li><strong>DOI Prefix</strong>: 10.62411/jcta</li> <li><strong>Publisher </strong>: Universitas Dian Nuswantoro</li> </ol> </div> <div id="focusAndScope"> <p><strong data-start="133" data-end="190">Journal of Computing Theories and Applications (JCTA)</strong> is a peer-reviewed international journal that covers all aspects of foundations, theories, and applications in computer science. All accepted articles are published online, assigned a <strong data-start="527" data-end="547">DOI via Crossref</strong>, and made <strong data-start="558" data-end="593" data-is-only-node="">freely accessible (Open Access)</strong>. The journal follows a <strong>rapid peer-review</strong> process, with the first decision typically provided within two to four weeks. 
JCTA welcomes original research papers in areas including, but not limited to:</p> <p>Artificial Intelligence<br />Big Data<br />Bioinformatics<br />Biometrics<br />Cloud Computing<br />Computer Graphics<br />Computer Vision<br />Cryptography<br />Data Mining<br />Fuzzy Systems<br />Game Technology<br />Image Processing<br />Information Security<br />Internet of Things<br />Intelligent Systems<br />Machine Learning<br />Mobile Computing<br />Multimedia Technology<br />Natural Language Processing<br />Network Security<br />Pattern Recognition<br />Quantum Informatics<br />Signal Processing<br />Soft Computing<br />Speech Processing</p> <p><br />Special emphasis is given to recent trends related to cutting-edge research within the domain.</p> </div>https://publikasi.dinus.ac.id/jcta/article/view/15508Investigating Security Enhancement in Hybrid Clouds via a Blockchain-Fused Privacy Preservation Strategy: Pilot Study2026-01-11T14:43:54+00:00Tabitha Chukwudi Aghaunortabitha.aghaunor@gmail.comEferhire Valentine Ugbotueferhire.ugbotu@gmail.comEmeke Ugbohugboh1972@gmail.comPaul Avwerosuoghene Onomakenbridge14@gmail.comFrances Uchechukwu Emordiemordi.frances@dou.edu.ngArnold Adimabua Ojugoojugo.arnold@fupre.edu.ngVictor Ochuko Getelomageteloma.victor@fupre.edu.ngRebecca Okeoghene Idamaidama-ro@dsust.edu.ngPeace Oguguo Ezzehpeace.ezzeh@fcetasaba.edu.ng<p>The proliferation of cloud infrastructures has intensified concerns regarding data security, integrity, identity and access management, and user privacy. Despite recent advances, existing solutions often lack comprehensive integration of privacy-preserving mechanisms, dynamic trust management, and cross-provider interoperability. This study proposes an AI-enabled, zero-trust, blockchain-fused identity management framework for secure, privacy-preserving multi-cloud environments. 
The framework integrates homomorphic encryption with differential privacy for aggregate-level protection and secure multi-party computation for collaborative data processing. The proposed system was validated in a simulated multi-cloud environment using CloudSim, Ethereum blockchain, and AWS EC2. Experimental results indicate homomorphic encryption latency of approximately 450 ms per operation and statistically significant improvements in security (t(128) = 12.47, p < 0.001), privacy (t(95) = 8.93, p < 0.001), and throughput (t(156) = 15.21, p < 0.001). The framework achieved differential privacy with ε = 0.1 while retaining 99.2% data utility, and demonstrated a 34% improvement in processing speed over conventional differential privacy approaches. In addition, the implementation was observed to be 2.3× faster than BGV-based configurations, with 45% lower memory consumption than CKKS and a 67% reduction in ciphertext size relative to baseline implementations. From an operational perspective, the framework shows a 23% reduction in security management costs, a 31% improvement in resource utilization efficiency, and an 18% decrease in compliance audit expenses. 
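As an illustrative aside on the privacy mechanism: an ε-differential-privacy guarantee such as the ε = 0.1 setting reported above is classically realized with the Laplace mechanism. The sketch below is a minimal illustration under assumed values (a count query with sensitivity 1), not the authors' implementation:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise with scale sensitivity/epsilon, giving epsilon-DP."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution: u in [-0.5, 0.5).
    u = random.random() - 0.5
    return true_value - scale * math.copysign(math.log(1 - 2 * abs(u)), u)

# Example: protect a count query (sensitivity 1) at the strict budget epsilon = 0.1.
random.seed(42)
noisy = laplace_mechanism(1000.0, sensitivity=1.0, epsilon=0.1)
```

Smaller ε means a larger noise scale, which is why retaining 99.2% data utility at ε = 0.1 is a strong result.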
The model further indicates a 27% reduction in total cost of ownership (TCO) compared with multi-vendor security solutions, a projected return on investment (ROI) within 14 months, and an 89% reduction in security incident response costs under the evaluated conditions.</p>2026-02-24T00:00:00+00:00Copyright (c) 2026 Tabitha Chukwudi Aghaunor, Eferhire Valentine Ugbotu, Emeke Ugboh, Paul Avwerosuoghene Onoma, Frances Uchechukwu Emordi, Arnold Adimabua Ojugo, Victor Ochuko Geteloma, Rebecca Okeoghene Idama, Peace Oguguo Ezzehhttps://publikasi.dinus.ac.id/jcta/article/view/15811Behavioral Malware Detection via API Call Sequences: A Comparative Study of LSTM and Transformer Architectures Using NLP-Inspired Representations2026-02-27T05:49:55+00:00Anusree K Janunair0603@gmail.comNarottam Das Patelnarottamdaspatel@vitbhopal.ac.inSaravanan Dsaravanan.d@vitbhopal.ac.inAdarsh Pateladarsh.patel@vitbhopal.ac.in<p>The increasing sophistication of malware has rendered traditional signature-based detection methods insufficient, necessitating behavior-driven and adaptive analytical frameworks. This study presents a sequential deep learning framework that models system-level API call sequences as structured linguistic representations for behavioral malware detection. Unlike conventional comparative studies, this work systematically evaluates recurrent and attention-based architectures under controlled experimental conditions, with a particular focus on generalization performance and overfitting mitigation. Two neural architectures, a Long Short-Term Memory (LSTM) network and a Transformer-based attention model, are trained on publicly available API call sequence data for binary classification of malicious and benign executables. Beyond standard accuracy metrics, the study further examines model stability, convergence behavior, and the impact of long-range dependency modeling on detection robustness. 
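The NLP-inspired treatment of API call traces described above amounts to mapping call names to token ids, as words are mapped in text classification. A minimal sketch follows; the API names, helper functions, and padding scheme are illustrative assumptions, not drawn from the paper's dataset:

```python
def build_vocab(sequences):
    """Assign each distinct API call name an integer id; 0 is reserved for padding."""
    vocab = {"<PAD>": 0}
    for seq in sequences:
        for call in seq:
            vocab.setdefault(call, len(vocab))
    return vocab

def encode(seq, vocab, max_len=8):
    """Encode one API call sequence as a fixed-length list of token ids."""
    ids = [vocab.get(call, 0) for call in seq][:max_len]
    return ids + [0] * (max_len - len(ids))

# Hypothetical API call traces (illustrative names only).
traces = [
    ["CreateFileW", "WriteFile", "RegSetValueExA", "CreateRemoteThread"],
    ["CreateFileW", "ReadFile", "CloseHandle"],
]
vocab = build_vocab(traces)
encoded = [encode(t, vocab) for t in traces]
```

The resulting fixed-length id sequences are the kind of input an LSTM or Transformer embedding layer would then consume.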
Experimental results demonstrate that the Transformer architecture achieves superior performance, attaining 95.54% classification accuracy and consistent improvements in precision, recall, and F1-score, indicating a stronger ability to capture complex behavioral dependencies. These findings highlight the effectiveness of attention mechanisms in behavioral malware modeling and provide empirical evidence that NLP-inspired architectures offer a robust and scalable approach for real-world cybersecurity applications.</p>2026-04-03T00:00:00+00:00Copyright (c) 2026 Anusree K J, Narottam Das Patel, Saravanan D, Adarsh Patelhttps://publikasi.dinus.ac.id/jcta/article/view/15863Attention-Augmented GRU for Stock Forecasting: A Trade-Off Between Directional Accuracy and Price Prediction Error2026-03-15T10:44:21+00:00R. Daniel Hartantodaniel_hartanto@semarangkota.go.idGuruh Fajar Shidikguruh.fajar@research.dinus.ac.idFarrikh Alzamialzami@dsn.dinus.ac.idAhmad Zainul Fanania.zainul.fanani@dsn.dinus.ac.idAris Marjuniaris.marjuni@dsn.dinus.ac.idAbdul Syukurabah.syukur01@dsn.dinus.ac.id<p>Attention mechanisms have been widely incorporated into recurrent neural network architectures for financial time series forecasting, with most prior work reporting improvements in price-level error metrics. This study revisits that claim through a controlled empirical comparison of four deep learning architectures on nearly two decades of Telkom Indonesia (TLKM) closing price data from the Indonesia Stock Exchange (IDX). The models evaluated are a three-layer Gated Recurrent Unit (GRU) baseline, a comparable Long Short-Term Memory (LSTM) network, a Bahdanau end-attention GRU (Attn-GRU-V2), and a multi-head self-attention GRU hybrid (Attn-GRU-V3). Each architecture is trained over 30 independent runs with distinct random seeds, and performance is reported as 95% confidence intervals derived from the t-distribution. 
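The 95% confidence intervals over repeated runs mentioned above follow the standard t-distribution construction. A minimal sketch, with illustrative run values and a table-derived critical value rather than the paper's data:

```python
import math

def mean_ci_95(values, t_crit):
    """Mean and 95% half-width via the t-distribution.

    t_crit is the two-sided critical value for df = n - 1
    (e.g. 2.045 for n = 30 runs as used in the study).
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    half_width = t_crit * math.sqrt(var / n)
    return mean, half_width

# Illustrative RMSE values from hypothetical repeated runs.
runs = [94.0, 95.1, 93.2, 94.8, 93.9, 94.5]
mean, hw = mean_ci_95(runs, t_crit=2.571)  # t critical value for df = 5
```

Reporting mean ± half_width is what yields interval statements of the form 94.02 ± 1.22.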
Statistical comparisons employ the Wilcoxon signed-rank test, a nonparametric paired test appropriate given the confirmed non-normality of residuals. The main finding is a consistent trade-off: the plain GRU achieves the lowest RMSE (94.02 ± 1.22 IDR) across all 30 runs, while Attn-GRU-V2 achieves the highest directional accuracy (45.91 ± 0.09%), surpassing GRU in every independent run. Bahdanau attention weights are nearly uniform across the 30-day lookback window (coefficient of variation: 3.21%), indicating that the mechanism cannot identify selectively informative timesteps in this univariate price series. This finding is consistent with the weak-form Efficient Market Hypothesis for the Indonesian market. An ablation study reveals that a 20-day lookback window maximizes directional accuracy (47.72 ± 0.21%) for the Attn-GRU-V2 model. These results suggest that Bahdanau end-attention consistently and significantly improves directional accuracy relative to a plain GRU baseline, providing an architecturally attributable advantage for direction-based applications, even when absolute price-level error is not reduced. The directional accuracy values remaining below 50% across all models are consistent with a weak-form efficiency characterization of the Indonesian market.</p>2026-04-06T00:00:00+00:00Copyright (c) 2026 R. Daniel Hartanto, Guruh Fajar Shidik, Farrikh Alzami, Ahmad Zainul Fanani, Aris Marjuni, Abdul Syukurhttps://publikasi.dinus.ac.id/jcta/article/view/15870Understanding Customer Churn in Retail Banking through Explainable Predictive Analytics: Evidence of a Product Paradox2026-03-17T02:53:10+00:00Patrick Ndabarishyepatrick.ndabarishye@gmail.comAjay Kumar Singhajay41274@gmail.com<p>The retention of customers in the retail banking sector is a critical economic imperative; however, predictive modeling is frequently hindered by severe class imbalance and the “Black Box” nature of complex algorithms. 
This study proposes a Heterogeneous Stacking Ensemble framework integrating XGBoost, CatBoost, and Random Forest base learners with a Logistic Regression meta-learner to forecast customer attrition. To overcome the pervasive “Majority Class Bias,” we introduce a “Dual-Imbalance Defense” that synergizes the Synthetic Minority Over-sampling Technique (SMOTE) with algorithmic cost-sensitive penalization. Furthermore, moving beyond standard accuracy metrics, the framework mathematically derives a dynamic classification threshold to guarantee a strict 0.90 recall rate, actively optimizing the capture of at-risk capital. Model opacity is addressed through the integration of a SHapley Additive exPlanations (SHAP) TreeExplainer. This cooperative game theory approach provides localized, customer-level “Reason Codes” for regulatory compliance and reveals global systemic vulnerabilities, including non-linear drivers such as the “Product Paradox.” Achieving a 0.90 recall rate and an AUC of 0.8654, this framework provides a statistically robust and operationally transparent tool for targeted customer retention.</p>2026-04-10T00:00:00+00:00Copyright (c) 2026 Patrick Ndabarishye, Ajay Kumar Singhhttps://publikasi.dinus.ac.id/jcta/article/view/15866Log-Transformed Regime-Based Prediction of Cloud Job Length Using Machine Learning2026-03-16T15:11:22+00:00Ardi Pujiyantaardipujiyanta@tif.uad.ac.idBambang Robiinbambang.robiin@tif.uad.ac.idFaisal Fajri Rahanifaisal.fajri@tif.uad.ac.id<p>Cloud job-length prediction remains challenging when the target distribution is highly skewed and contains rare extreme values. This study proposes a log-transformed, regime-based machine learning framework for robust prediction of cloud job length, represented in million instructions (MI). The approach integrates sequential feature engineering, logarithmic target transformation, weighted learning, and regime-aware modeling to distinguish between normal and extreme job-length behavior. 
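The combination of a logarithmic target transform and regime labeling described above can be sketched in a few lines. The job-length values, the 95th-percentile cutoff, and the helper names below are illustrative assumptions, not the paper's implementation:

```python
import math

def split_regimes(lengths, quantile=0.95):
    """Label a job 'extreme' if it reaches the given empirical quantile, else 'normal'."""
    ordered = sorted(lengths)
    threshold = ordered[min(int(quantile * len(ordered)), len(ordered) - 1)]
    return threshold, ["extreme" if x >= threshold else "normal" for x in lengths]

def to_log(y):
    """log1p target transform: the model is trained in compressed log space."""
    return [math.log1p(v) for v in y]

def from_log(y_log):
    """Inverse transform of predictions back to MI via expm1."""
    return [math.expm1(v) for v in y_log]

# Hypothetical job lengths in MI, with one heavy-tail outlier.
jobs = [93000, 120000, 85000, 900000, 101000, 525000, 76000, 99000, 110000, 88000]
threshold, regimes = split_regimes(jobs)
round_trip = from_log(to_log(jobs))
```

The log transform compresses the heavy tail so that rare extreme jobs do not dominate the squared-error loss, while the regime labels allow separate (or weighted) modeling of the tail.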
Using an ordered GoCJ-derived cloud job-length sequence of 1000 jobs, the dataset exhibits a heavy-tailed distribution, with a mean of 129,662 MI, a median of 93,000 MI, a 95th percentile of 525,000 MI, a 99th percentile of 900,000 MI, and a skewness of 3.695. The proposed model is evaluated against sequential baselines and stronger machine learning baselines, including Naive_Last, RollingMean_5, Global_Log_ExtraTrees, RandomForest, GradientBoosting, and MLP_Log. On the main test split, the proposed Regime_Log_ExtraTrees achieved the best RMSE of 206,255.66 and the least negative R² of −0.01062, while Global_Log_ExtraTrees remained competitive in terms of MAE, MedAE, and RMSLE. Additional walk-forward validation confirms that the regime-aware model consistently achieves the best mean RMSE and mean R² across temporal folds. Ablation results further show that regime-aware learning is the primary contributor to robustness, although accurate prediction of extreme jobs remains challenging. These findings indicate that log-transformed, regime-based learning provides a practical and more robust strategy for cloud job-length prediction under heavy-tailed workload conditions.</p>2026-04-21T00:00:00+00:00Copyright (c) 2026 Ardi Pujiyanta, Bambang Robiin, Faisal Fajri Rahanihttps://publikasi.dinus.ac.id/jcta/article/view/15875Dual-Domain Temporal–Spatial Denoising Approach for Autism Spectrum Disorder EEG Signals Based on Stationary Wavelet Transform and SPHARA2026-03-18T00:46:51+00:00Cut Siti Azola Syivacsitia@mhs.usk.ac.idMelinda Melindamelinda@usk.ac.idSyahrial Syahrialsyahrial@usk.ac.idImam Fathur Rahmanimamfth@mhs.usk.ac.idSouvik Dasrndas9@gmail.comM. Ary Heryantom.aryheryanto@dsn.dinus.ac.id<p>Electroencephalography (EEG) signals are highly susceptible to noise and artifacts, which can degrade analysis accuracy, particularly in Autism Spectrum Disorder (ASD) studies. Therefore, effective preprocessing is required to improve signal quality prior to further analysis. 
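For reference, the RMSE and R² metrics reported above follow their standard definitions; a negative R², as observed here, means the model underperforms a constant mean predictor on that split. A minimal sketch of both:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

Predicting the training mean everywhere gives R² = 0, so values slightly below zero, such as −0.01062, indicate performance close to that naive baseline.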
This study proposes an integrated EEG preprocessing pipeline that combines a Finite Impulse Response (FIR) band-pass filter (0.5–70 Hz) with notch filtering and detrending, followed by temporal denoising using the Stationary Wavelet Transform (SWT) with the Daubechies 4 mother wavelet and spatial filtering based on SPHARA. This dual-domain approach is designed to address both temporal and spatial noise in multichannel EEG signals. Experimental results demonstrate that the proposed pipeline combining the FIR filter with SWT and SPHARA consistently outperforms single-domain preprocessing methods, achieving a maximum Signal-to-Noise Ratio (SNR) of 31.93 dB. The proposed method also produces the lowest Mean Absolute Error (MAE) (16.81 µV) and Standard Deviation (SD) (0.75 µV), indicating high signal stability with minimal amplitude distortion. Root Mean Square Error (RMSE) values fall within the range of 29.5–592.3 µV, with a minimum RMSE of 29.5 µV, demonstrating effective noise suppression while preserving signal energy. These results confirm that integrating temporal and spatial preprocessing significantly improves EEG signal quality and supports more reliable EEG analysis for ASD-related studies.</p>2026-04-23T00:00:00+00:00Copyright (c) 2026 Cut Siti Azola Syiva, Melinda Melinda, Syahrial Syahrial, Imam Fathur Rahman, Souvik Das, M. Ary Heryanto
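The SNR figures reported above are expressed in decibels via the standard power-ratio definition. The sketch below estimates noise power from the residual between a reference signal and its denoised version; this residual-based estimate is an assumption for illustration, as the paper may compute SNR differently:

```python
import math

def snr_db(signal, denoised):
    """SNR in dB: 10 * log10(signal power / residual noise power)."""
    p_signal = sum(s * s for s in signal) / len(signal)
    residual = [s - d for s, d in zip(signal, denoised)]
    p_noise = sum(r * r for r in residual) / len(residual)
    return 10 * math.log10(p_signal / p_noise)
```

Under this definition, every 10 dB of SNR corresponds to a tenfold ratio of signal power to noise power, so 31.93 dB implies the retained signal power exceeds the residual noise power by more than three orders of magnitude.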