Journal of Computing Theories and Applications https://publikasi.dinus.ac.id/jcta <div style="border: 3px #086338 dashed; padding: 10px; background-color: #ffffff; text-align: left;"> <ol> <li><strong>Journal Title </strong>: Journal of Computing Theories and Applications</li> <li><strong>Online ISSN </strong>: <a href="https://portal.issn.org/resource/ISSN/3024-9104">3024-9104</a> </li> <li><strong>Frequency </strong>: Quarterly (February, May, August, and November) </li> <li><strong>DOI Prefix</strong>: 10.62411/jcta</li> <li><strong>Publisher </strong>: Universitas Dian Nuswantoro</li> </ol> </div> <div id="focusAndScope"> <p><strong>Journal of Computing Theories and Applications (JCTA)</strong> is a peer-reviewed international journal that covers all aspects of foundations, theories, and applications in computer science. All accepted articles are published online, assigned a <strong>DOI via Crossref</strong>, and made <strong>freely accessible (Open Access)</strong>. The journal follows a <strong>rapid peer-review</strong> process, with the first decision typically provided within two to four weeks. 
JCTA welcomes original research papers in areas including, but not limited to:</p> <p>Artificial Intelligence<br />Big Data<br />Bioinformatics<br />Biometrics<br />Cloud Computing<br />Computer Graphics<br />Computer Vision<br />Cryptography<br />Data Mining<br />Fuzzy Systems<br />Game Technology<br />Image Processing<br />Information Security<br />Internet of Things<br />Intelligent Systems<br />Machine Learning<br />Mobile Computing<br />Multimedia Technology<br />Natural Language Processing<br />Network Security<br />Pattern Recognition<br />Quantum Informatics<br />Signal Processing<br />Soft Computing<br />Speech Processing</p> <p>Special emphasis is given to recent trends related to cutting-edge research within the domain.</p> </div> en-US editorial.jcta@dinus.id (JCTA Editorial) editorial.jcta@gmail.com (JCTA Editorial Support Team) Sun, 31 May 2026 00:00:00 +0000 OJS 3.2.1.4 http://blogs.law.harvard.edu/tech/rss 60 Investigating Security Enhancement in Hybrid Clouds via a Blockchain-Fused Privacy Preservation Strategy: Pilot Study https://publikasi.dinus.ac.id/jcta/article/view/15508 <p>The proliferation of cloud infrastructures has intensified concerns regarding data security, integrity, identity and access management, and user privacy. Despite recent advances, existing solutions often lack comprehensive integration of privacy-preserving mechanisms, dynamic trust management, and cross-provider interoperability. This study proposes an AI-enabled, zero-trust, blockchain-fused identity management framework for secure, privacy-preserving multi-cloud environments. The framework integrates homomorphic encryption with differential privacy for aggregate-level protection and secure multi-party computation for collaborative data processing. The proposed system was validated in a simulated multi-cloud environment using CloudSim, Ethereum blockchain, and AWS EC2. 
Experimental results indicate a homomorphic encryption latency of approximately 450 ms per operation and statistically significant improvements in security (t(128) = 12.47, p &lt; 0.001), privacy (t(95) = 8.93, p &lt; 0.001), and throughput (t(156) = 15.21, p &lt; 0.001). The framework achieved differential privacy with ε = 0.1 while retaining 99.2% data utility, and demonstrated a 34% improvement in processing speed over conventional differential privacy approaches. In addition, the implementation was observed to be 2.3× faster than BGV-based configurations, with 45% lower memory consumption than CKKS and a 67% reduction in ciphertext size relative to baseline implementations. From an operational perspective, the framework shows a 23% reduction in security management costs, a 31% improvement in resource utilization efficiency, and an 18% decrease in compliance audit expenses. The model further indicates a 27% reduction in total cost of ownership (TCO) compared with multi-vendor security solutions, a projected return on investment (ROI) within 14 months, and an 89% reduction in security incident response costs under the evaluated conditions.</p> Tabitha Chukwudi Aghaunor, Eferhire Valentine Ugbotu, Emeke Ugboh, Paul Avwerosuoghene Onoma, Frances Uchechukwu Emordi, Arnold Adimabua Ojugo, Victor Ochuko Geteloma, Rebecca Okeoghene Idama, Peace Oguguo Ezzeh Copyright (c) 2026 Tabitha Chukwudi Aghaunor, Eferhire Valentine Ugbotu, Emeke Ugboh, Paul Avwerosuoghene Onoma, Frances Uchechukwu Emordi, Arnold Adimabua Ojugo, Victor Ochuko Geteloma, Rebecca Okeoghene Idama, Peace Oguguo Ezzeh https://creativecommons.org/licenses/by/4.0 https://publikasi.dinus.ac.id/jcta/article/view/15508 Tue, 24 Feb 2026 00:00:00 +0000 Behavioral Malware Detection via API Call Sequences: A Comparative Study of LSTM and Transformer Architectures Using NLP-Inspired Representations https://publikasi.dinus.ac.id/jcta/article/view/15811 <p>The increasing sophistication of malware has rendered traditional 
signature-based detection methods insufficient, necessitating behavior-driven and adaptive analytical frameworks. This study presents a sequential deep learning framework that models system-level API call sequences as structured linguistic representations for behavioral malware detection. Unlike conventional comparative studies, this work systematically evaluates recurrent and attention-based architectures under controlled experimental conditions, with a particular focus on generalization performance and overfitting mitigation. Two neural architectures, a Long Short-Term Memory (LSTM) network and a Transformer-based attention model, are trained on publicly available API call sequence data for binary classification of malicious and benign executables. Beyond standard accuracy metrics, the study further examines model stability, convergence behavior, and the impact of long-range dependency modeling on detection robustness. Experimental results demonstrate that the Transformer architecture achieves superior performance, attaining 95.54% classification accuracy and consistent improvements in precision, recall, and F1-score, indicating a stronger ability to capture complex behavioral dependencies. 
These findings highlight the effectiveness of attention mechanisms in behavioral malware modeling and provide empirical evidence that NLP-inspired architectures offer a robust and scalable approach for real-world cybersecurity applications.</p> Anusree K J, Narottam Das Patel, Saravanan D, Adarsh Patel Copyright (c) 2026 Anusree K J, Narottam Das Patel, Saravanan D, Adarsh Patel https://creativecommons.org/licenses/by/4.0 https://publikasi.dinus.ac.id/jcta/article/view/15811 Fri, 03 Apr 2026 00:00:00 +0000 Attention-Augmented GRU for Stock Forecasting: A Trade-Off Between Directional Accuracy and Price Prediction Error https://publikasi.dinus.ac.id/jcta/article/view/15863 <p>Attention mechanisms have been widely incorporated into recurrent neural network architectures for financial time series forecasting, with most prior work reporting improvements in price-level error metrics. This study revisits that claim through a controlled empirical comparison of four deep learning architectures on nearly two decades of Telkom Indonesia (TLKM) closing price data from the Indonesia Stock Exchange (IDX). The models evaluated are a three-layer Gated Recurrent Unit (GRU) baseline, a comparable Long Short-Term Memory (LSTM) network, a Bahdanau end-attention GRU (Attn-GRU-V2), and a multi-head self-attention GRU hybrid (Attn-GRU-V3). Each architecture is trained over 30 independent runs with distinct random seeds, and performance is reported as 95% confidence intervals derived from the t-distribution. Statistical comparisons employ the Wilcoxon signed-rank test, a nonparametric paired test appropriate given the confirmed non-normality of residuals. The main finding is a consistent trade-off: the plain GRU achieves the lowest RMSE (94.02 ± 1.22 IDR) across all 30 runs, while Attn-GRU-V2 achieves the highest directional accuracy (45.91 ± 0.09%), surpassing GRU in every independent run. 
Bahdanau attention weights are nearly uniform across the 30-day lookback window (coefficient of variation: 3.21%), indicating that the mechanism cannot identify selectively informative timesteps in this univariate price series. This finding is consistent with the weak-form Efficient Market Hypothesis for the Indonesian market. An ablation study reveals that a 20-day lookback window maximizes directional accuracy (47.72 ± 0.21%) for the Attn-GRU-V2 model. These results suggest that Bahdanau end-attention consistently and significantly improves directional accuracy relative to a plain GRU baseline, providing an architecturally attributable advantage for direction-based applications, even when absolute price-level error is not reduced. The directional accuracy values remaining below 50% across all models are consistent with a weak-form efficiency characterization of the Indonesian market.</p> R. Daniel Hartanto, Guruh Fajar Shidik, Farrikh Alzami, Ahmad Zainul Fanani, Aris Marjuni, Abdul Syukur Copyright (c) 2026 R. Daniel Hartanto, Guruh Fajar Shidik, Farrikh Alzami, Ahmad Zainul Fanani, Aris Marjuni, Abdul Syukur https://creativecommons.org/licenses/by/4.0 https://publikasi.dinus.ac.id/jcta/article/view/15863 Mon, 06 Apr 2026 00:00:00 +0000
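The evaluation protocol described in the last abstract above (30 independent seeded runs, 95% confidence intervals from the t-distribution, paired Wilcoxon signed-rank comparisons) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the RMSE values are synthetic, the default t critical value assumes df = 29 (30 runs), and the Wilcoxon p-value uses the normal approximation without tie correction.

```python
import math
import random

def ci95_halfwidth(samples, t_crit=2.045):
    """Mean and half-width of the 95% CI for the mean.
    t_crit defaults to the t-distribution critical value for df = 29."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    sem = math.sqrt(var / n)                               # standard error
    return mean, t_crit * sem

def wilcoxon_signed_rank(x, y):
    """Two-sided paired Wilcoxon signed-rank test using the normal
    approximation (reasonable for n ~ 30; ignores rank ties)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # Rank absolute differences (1 = smallest).
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return w_plus, p

# Hypothetical per-run RMSE values (IDR) for two models over 30 seeded runs.
rng = random.Random(42)
rmse_gru = [rng.gauss(94.0, 3.3) for _ in range(30)]
rmse_attn = [rng.gauss(97.5, 3.3) for _ in range(30)]

mean, half = ci95_halfwidth(rmse_gru)
print(f"GRU RMSE: {mean:.2f} +/- {half:.2f} IDR")
w, p = wilcoxon_signed_rank(rmse_gru, rmse_attn)
print(f"Wilcoxon signed-rank: W+ = {w:.1f}, p = {p:.4f}")
```

In practice one would use `scipy.stats.wilcoxon`, which handles tie correction and exact small-sample p-values; the hand-rolled version above is only meant to make the reported statistics concrete.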