https://publikasi.dinus.ac.id/jcta/issue/feed Journal of Computing Theories and Applications 2026-05-31T00:00:00+00:00 JCTA Editorial editorial.jcta@dinus.id Open Journal Systems <div style="border: 3px #086338 dashed; padding: 10px; background-color: #ffffff; text-align: left;"> <ol> <li><strong>Journal Title </strong>: Journal of Computing Theories and Applications</li> <li><strong>Online ISSN </strong>: <a href="https://portal.issn.org/resource/ISSN/3024-9104">3024-9104</a> </li> <li><strong>Frequency </strong>: Quarterly (February, May, August, and November) </li> <li><strong>DOI Prefix</strong>: 10.62411/jcta</li> <li><strong>Publisher </strong>: Universitas Dian Nuswantoro</li> </ol> </div> <div id="focusAndScope"> <p><strong>Journal of Computing Theories and Applications (JCTA)</strong> is a peer-reviewed international journal that covers all aspects of foundations, theories, and applications in computer science. All accepted articles are published online, assigned a <strong>DOI via Crossref</strong>, and made <strong>freely accessible (Open Access)</strong>. The journal follows a <strong>rapid peer-review</strong> process, with the first decision typically provided within two to four weeks. 
JCTA welcomes original research papers in areas including, but not limited to:</p> <p>Artificial Intelligence<br />Big Data<br />Bioinformatics<br />Biometrics<br />Cloud Computing<br />Computer Graphics<br />Computer Vision<br />Cryptography<br />Data Mining<br />Fuzzy Systems<br />Game Technology<br />Image Processing<br />Information Security<br />Internet of Things<br />Intelligent Systems<br />Machine Learning<br />Mobile Computing<br />Multimedia Technology<br />Natural Language Processing<br />Network Security<br />Pattern Recognition<br />Quantum Informatics<br />Signal Processing<br />Soft Computing<br />Speech Processing</p> <p>Special emphasis is given to recent trends in cutting-edge research within these domains.</p> </div> https://publikasi.dinus.ac.id/jcta/article/view/15508 Investigating Security Enhancement in Hybrid Clouds via a Blockchain-Fused Privacy Preservation Strategy: Pilot Study 2026-01-11T14:43:54+00:00 Tabitha Chukwudi Aghaunor tabitha.aghaunor@gmail.com Eferhire Valentine Ugbotu eferhire.ugbotu@gmail.com Emeke Ugboh ugboh1972@gmail.com Paul Avwerosuoghene Onoma kenbridge14@gmail.com Frances Uchechukwu Emordi emordi.frances@dou.edu.ng Arnold Adimabua Ojugo ojugo.arnold@fupre.edu.ng Victor Ochuko Geteloma geteloma.victor@fupre.edu.ng Rebecca Okeoghene Idama idama-ro@dsust.edu.ng Peace Oguguo Ezzeh peace.ezzeh@fcetasaba.edu.ng <p>The proliferation of cloud infrastructures has intensified concerns regarding data security, integrity, identity and access management, and user privacy. Despite recent advances, existing solutions often lack comprehensive integration of privacy-preserving mechanisms, dynamic trust management, and cross-provider interoperability. This study proposes an AI-enabled, zero-trust, blockchain-fused identity management framework for secure, privacy-preserving multi-cloud environments. 
The framework integrates homomorphic encryption with differential privacy for aggregate-level protection and secure multi-party computation for collaborative data processing. The proposed system was validated in a simulated multi-cloud environment using CloudSim, Ethereum blockchain, and AWS EC2. Experimental results indicate homomorphic encryption latency of approximately 450 ms per operation and statistically significant improvements in security (t(128) = 12.47, p &lt; 0.001), privacy (t(95) = 8.93, p &lt; 0.001), and throughput (t(156) = 15.21, p &lt; 0.001). The framework achieved differential privacy with ε = 0.1 while retaining 99.2% data utility, and demonstrated a 34% improvement in processing speed over conventional differential privacy approaches. In addition, the implementation was observed to be 2.3× faster than BGV-based configurations, with 45% lower memory consumption than CKKS and a 67% reduction in ciphertext size relative to baseline implementations. From an operational perspective, the framework shows a 23% reduction in security management costs, a 31% improvement in resource utilization efficiency, and an 18% decrease in compliance audit expenses. 
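As a minimal illustration of the differential-privacy component at ε = 0.1, the following sketch applies the generic Laplace mechanism to a bounded mean. It is illustrative only: the authors' actual framework pairs differential privacy with homomorphic encryption and secure multi-party computation, none of which is reproduced here.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sample from Laplace(0, scale); u is drawn from (-0.5, 0.5).
    if scale == 0.0:
        return 0.0
    u = random.random() - 0.5
    while 1.0 - 2.0 * abs(u) <= 0.0:  # guard the measure-zero boundary u = -0.5
        u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon=0.1):
    # epsilon-differentially-private release of a bounded mean (Laplace mechanism).
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)  # L1 sensitivity of the clipped mean
    return true_mean + laplace_noise(sensitivity / epsilon)
```

Smaller ε (stronger privacy) increases the noise scale, which is why retaining 99.2% utility at ε = 0.1 is a notable result.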
The model further indicates a 27% reduction in total cost of ownership (TCO) compared with multi-vendor security solutions, a projected return on investment (ROI) within 14 months, and an 89% reduction in security incident response costs under the evaluated conditions.</p> 2026-02-24T00:00:00+00:00 Copyright (c) 2026 Tabitha Chukwudi Aghaunor, Eferhire Valentine Ugbotu, Emeke Ugboh, Paul Avwerosuoghene Onoma, Frances Uchechukwu Emordi, Arnold Adimabua Ojugo, Victor Ochuko Geteloma, Rebecca Okeoghene Idama, Peace Oguguo Ezzeh https://publikasi.dinus.ac.id/jcta/article/view/15811 Behavioral Malware Detection via API Call Sequences: A Comparative Study of LSTM and Transformer Architectures Using NLP-Inspired Representations 2026-02-27T05:49:55+00:00 Anusree K J anunair0603@gmail.com Narottam Das Patel narottamdaspatel@vitbhopal.ac.in Saravanan D saravanan.d@vitbhopal.ac.in Adarsh Patel adarsh.patel@vitbhopal.ac.in <p>The increasing sophistication of malware has rendered traditional signature-based detection methods insufficient, necessitating behavior-driven and adaptive analytical frameworks. This study presents a sequential deep learning framework that models system-level API call sequences as structured linguistic representations for behavioral malware detection. Unlike conventional comparative studies, this work systematically evaluates recurrent and attention-based architectures under controlled experimental conditions, with a particular focus on generalization performance and overfitting mitigation. Two neural architectures, a Long Short-Term Memory (LSTM) network and a Transformer-based attention model, are trained on publicly available API call sequence data for binary classification of malicious and benign executables. Beyond standard accuracy metrics, the study further examines model stability, convergence behavior, and the impact of long-range dependency modeling on detection robustness. 
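The NLP-inspired treatment of API call traces can be illustrated with a generic vocabulary-encoding sketch, in which each call name becomes a token id, as word tokens do in text classification. The API names, padding scheme, and sequence length below are illustrative assumptions, not the study's exact pipeline.

```python
def build_vocab(sequences, pad="<pad>", unk="<unk>"):
    # Map each distinct API call name to an integer id; 0 = padding, 1 = unknown.
    vocab = {pad: 0, unk: 1}
    for seq in sequences:
        for call in seq:
            vocab.setdefault(call, len(vocab))
    return vocab

def encode(seq, vocab, max_len=8):
    # Encode one API call trace as a fixed-length id sequence (truncate, then pad).
    ids = [vocab.get(call, 1) for call in seq[:max_len]]
    return ids + [0] * (max_len - len(ids))
```

The resulting integer sequences are what an LSTM consumes step by step, while a Transformer attends over all positions at once, which is the long-range-dependency contrast the study evaluates.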
Experimental results demonstrate that the Transformer architecture achieves superior performance, attaining 95.54% classification accuracy and consistent improvements in precision, recall, and F1-score, indicating a stronger ability to capture complex behavioral dependencies. These findings highlight the effectiveness of attention mechanisms in behavioral malware modeling and provide empirical evidence that NLP-inspired architectures offer a robust and scalable approach for real-world cybersecurity applications.</p> 2026-04-03T00:00:00+00:00 Copyright (c) 2026 Anusree K J, Narottam Das Patel, Saravanan D, Adarsh Patel https://publikasi.dinus.ac.id/jcta/article/view/15863 Attention-Augmented GRU for Stock Forecasting: A Trade-Off Between Directional Accuracy and Price Prediction Error 2026-03-15T10:44:21+00:00 R. Daniel Hartanto daniel_hartanto@semarangkota.go.id Guruh Fajar Shidik guruh.fajar@research.dinus.ac.id Farrikh Alzami alzami@dsn.dinus.ac.id Ahmad Zainul Fanani a.zainul.fanani@dsn.dinus.ac.id Aris Marjuni aris.marjuni@dsn.dinus.ac.id Abdul Syukur abah.syukur01@dsn.dinus.ac.id <p>Attention mechanisms have been widely incorporated into recurrent neural network architectures for financial time series forecasting, with most prior work reporting improvements in price-level error metrics. This study revisits that claim through a controlled empirical comparison of four deep learning architectures on nearly two decades of Telkom Indonesia (TLKM) closing price data from the Indonesia Stock Exchange (IDX). The models evaluated are a three-layer Gated Recurrent Unit (GRU) baseline, a comparable Long Short-Term Memory (LSTM) network, a Bahdanau end-attention GRU (Attn-GRU-V2), and a multi-head self-attention GRU hybrid (Attn-GRU-V3). Each architecture is trained over 30 independent runs with distinct random seeds, and performance is reported as 95% confidence intervals derived from the t-distribution. 
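A t-distribution confidence interval over independent runs can be computed as follows. This is a generic sketch, not the authors' code; t_crit = 2.045 is the two-sided 95% critical value at 29 degrees of freedom, which matches 30 runs.

```python
import math
from statistics import mean, stdev

def t_confidence_interval(runs, t_crit=2.045):
    # 95% CI for the mean of n independent runs: mean ± t_crit * s / sqrt(n),
    # with t_crit = t_{0.975, n-1} (2.045 for n = 30).
    n = len(runs)
    half_width = t_crit * stdev(runs) / math.sqrt(n)
    m = mean(runs)
    return m - half_width, m + half_width
```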
Statistical comparisons employ the Wilcoxon signed-rank test, a nonparametric paired test appropriate given the confirmed non-normality of residuals. The main finding is a consistent trade-off: the plain GRU achieves the lowest RMSE (94.02 ± 1.22 IDR) across all 30 runs, while Attn-GRU-V2 achieves the highest directional accuracy (45.91 ± 0.09%), surpassing GRU in every independent run. Bahdanau attention weights are nearly uniform across the 30-day lookback window (coefficient of variation: 3.21%), indicating that the mechanism cannot identify selectively informative timesteps in this univariate price series. This finding is consistent with the weak-form Efficient Market Hypothesis for the Indonesian market. An ablation study reveals that a 20-day lookback window maximizes directional accuracy (47.72 ± 0.21%) for the Attn-GRU-V2 model. These results suggest that Bahdanau end-attention consistently and significantly improves directional accuracy relative to a plain GRU baseline, providing an architecturally attributable advantage for direction-based applications, even when absolute price-level error is not reduced. The directional accuracy values remaining below 50% across all models are consistent with a weak-form efficiency characterization of the Indonesian market.</p> 2026-04-06T00:00:00+00:00 Copyright (c) 2026 R. Daniel Hartanto, Guruh Fajar Shidik, Farrikh Alzami, Ahmad Zainul Fanani, Aris Marjuni, Abdul Syukur https://publikasi.dinus.ac.id/jcta/article/view/15870 Understanding Customer Churn in Retail Banking through Explainable Predictive Analytics: Evidence of a Product Paradox 2026-03-17T02:53:10+00:00 Patrick Ndabarishye patrick.ndabarishye@gmail.com Ajay Kumar Singh ajay41274@gmail.com <p>The retention of customers in the retail banking sector is a critical economic imperative; however, predictive modeling is frequently hindered by severe class imbalance and the “Black Box” nature of complex algorithms. 
This study proposes a Heterogeneous Stacking Ensemble framework integrating XGBoost, CatBoost, and Random Forest base learners with a Logistic Regression meta-learner to forecast customer attrition. To overcome the pervasive “Majority Class Bias,” we introduce a “Dual-Imbalance Defense” that synergizes the Synthetic Minority Over-sampling Technique (SMOTE) with algorithmic cost-sensitive penalization. Furthermore, moving beyond standard accuracy metrics, the framework mathematically derives a dynamic classification threshold to guarantee a strict 0.90 recall rate, actively optimizing the capture of at-risk capital. Model opacity is addressed through the integration of a SHapley Additive exPlanations (SHAP) TreeExplainer. This cooperative game theory approach provides localized, customer-level “Reason Codes” for regulatory compliance and reveals global systemic vulnerabilities, including non-linear drivers such as the “Product Paradox.” Achieving a 0.90 recall rate and an AUC of 0.8654, this framework provides a statistically robust and operationally transparent tool for targeted customer retention.</p> 2026-04-10T00:00:00+00:00 Copyright (c) 2026 Patrick Ndabarishye, Ajay Kumar Singh https://publikasi.dinus.ac.id/jcta/article/view/15866 Log-Transformed Regime-Based Prediction of Cloud Job Length Using Machine Learning 2026-03-16T15:11:22+00:00 Ardi Pujiyanta ardipujiyanta@tif.uad.ac.id Bambang Robiin bambang.robiin@tif.uad.ac.id Faisal Fajri Rahani faisal.fajri@tif.uad.ac.id <p>Cloud job-length prediction remains challenging when the target distribution is highly skewed and contains rare extreme values. This study proposes a log-transformed, regime-based machine learning framework for robust prediction of cloud job length, represented in million instructions (MI). The approach integrates sequential feature engineering, logarithmic target transformation, weighted learning, and regime-aware modeling to distinguish between normal and extreme job-length behavior. 
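The idea of modeling normal and extreme jobs separately in log space can be caricatured as follows. This is a deliberately simplified sketch: the fixed threshold and the geometric-mean predictor are stand-ins for the paper's regime split and tree-based regressors, which are not reproduced here.

```python
import math

def fit_log_regime(y_train, threshold):
    # Split job lengths (MI) into normal/extreme regimes at a threshold, then
    # summarize each regime by its mean in log space (i.e., the geometric mean),
    # which damps the influence of heavy-tailed extreme values.
    normal = [v for v in y_train if v <= threshold]
    extreme = [v for v in y_train if v > threshold]
    def log_mean(vs):
        return math.exp(sum(math.log(v) for v in vs) / len(vs)) if vs else 0.0
    return {"normal": log_mean(normal), "extreme": log_mean(extreme)}
```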
Using an ordered GoCJ-derived cloud job-length sequence of 1000 jobs, the dataset exhibits a heavy-tailed distribution, with a mean of 129,662 MI, a median of 93,000 MI, a 95th percentile of 525,000 MI, a 99th percentile of 900,000 MI, and a skewness of 3.695. The proposed model is evaluated against sequential baselines and stronger machine learning baselines, including Naive_Last, RollingMean_5, Global_Log_ExtraTrees, RandomForest, GradientBoosting, and MLP_Log. On the main test split, the proposed Regime_Log_ExtraTrees achieved the best RMSE of 206,255.66 and the least negative R² of −0.01062, while Global_Log_ExtraTrees remained competitive in terms of MAE, MedAE, and RMSLE. Additional walk-forward validation confirms that the regime-aware model consistently achieves the best mean RMSE and mean R² across temporal folds. Ablation results further show that regime-aware learning is the primary contributor to robustness, although accurate prediction of extreme jobs remains challenging. These findings indicate that log-transformed, regime-based learning provides a practical and more robust strategy for cloud job-length prediction under heavy-tailed workload conditions.</p> 2026-04-21T00:00:00+00:00 Copyright (c) 2026 Ardi Pujiyanta, Bambang Robiin, Faisal Fajri Rahani https://publikasi.dinus.ac.id/jcta/article/view/15875 Dual-Domain Temporal–Spatial Denoising Approach for Autism Spectrum Disorder EEG Signals Based on Stationary Wavelet Transform and SPHARA 2026-03-18T00:46:51+00:00 Cut Siti Azola Syiva csitia@mhs.usk.ac.id Melinda Melinda melinda@usk.ac.id Syahrial Syahrial syahrial@usk.ac.id Imam Fathur Rahman imamfth@mhs.usk.ac.id Souvik Das rndas9@gmail.com M. Ary Heryanto m.aryheryanto@dsn.dinus.ac.id <p>Electroencephalography (EEG) signals are highly susceptible to noise and artifacts, which can degrade analysis accuracy, particularly in Autism Spectrum Disorder (ASD) studies. 
Therefore, effective preprocessing is required to improve signal quality prior to further analysis. This study proposes an integrated EEG preprocessing pipeline that combines a Finite Impulse Response (FIR) band-pass filter (0.5–70 Hz) with notch filtering and detrending, followed by temporal denoising using the Stationary Wavelet Transform (SWT) with the Daubechies 4 mother wavelet and spatial filtering based on SPHARA. This dual-domain approach is designed to address both temporal and spatial noise in multichannel EEG signals. Experimental results demonstrate that the proposed FIR combined with SWT and SPHARA pipeline consistently outperforms single-domain preprocessing methods, achieving a maximum Signal-to-Noise Ratio (SNR) of 31.93 dB. The proposed method also produces the lowest Mean Absolute Error (MAE) (16.81 µV) and Standard Deviation (SD) (0.75 µV), indicating high signal stability with minimal amplitude distortion. Root Mean Square Error (RMSE) values remain stable within the range of 29.5–592.3 µV, with a minimum RMSE of 29.5 µV, demonstrating effective noise suppression while preserving signal energy. These results confirm that integrating temporal and spatial preprocessing significantly improves EEG signal quality and supports more reliable EEG analysis for ASD-related studies.</p> 2026-04-23T00:00:00+00:00 Copyright (c) 2026 Cut Siti Azola Syiva, Melinda Melinda, Syahrial Syahrial, Imam Fathur Rahman, Souvik Das, M. 
Ary Heryanto https://publikasi.dinus.ac.id/jcta/article/view/15943 Enhancing Software Defect Prediction through Hybrid Multi-Filter Feature Selection and Imbalance Handling 2026-04-04T09:03:54+00:00 Muhammad Khalid Maulana khaaliddd055@gmail.com Setyo Wahyu Saputro setyo.saputro@ulm.ac.id Mohammad Reza Faisal reza.faisal@ulm.ac.id Radityo Adi Nugroho radityo.adi@ulm.ac.id As’ary Ramadhan as’ary.ramadhan@ulm.ac.id <p>Software Defect Prediction (SDP) aims to identify defective modules early in the software development lifecycle to improve software quality and reduce maintenance costs. However, SDP datasets commonly suffer from high dimensionality, feature redundancy, and class imbalance, which can degrade model performance and stability. This study proposes a hybrid feature selection framework to address these challenges and enhance prediction performance. The proposed approach integrates Combined Correlation and Mutual Information (CONMI), which combines the Pearson Correlation Coefficient (PCC) and Mutual Information (MI) to capture both linear and nonlinear feature relevance. The selected features are further refined through Top-K selection, correlation-based filtering to reduce multicollinearity, and Backward Elimination (BE) to obtain an optimal feature subset. To address class imbalance, SMOTE-Tomek is applied by combining over-sampling and data cleaning techniques. Experiments are conducted on twelve NASA MDP datasets using Logistic Regression (LR) and Naïve Bayes (NB) classifiers. The results show that the proposed framework consistently achieves the best performance, with Logistic Regression combined with SMOTE-Tomek obtaining the highest average AUC of 0.7923 ± 0.0714, while NB achieves 0.7554 ± 0.0580. Statistical analysis using a paired t-test indicates that the proposed method significantly outperforms MI+SMOTE-Tomek and BE+SMOTE-Tomek for Logistic Regression, whereas no significant differences are observed for NB. 
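A hedged sketch of a CONMI-style relevance score follows, assuming an equal-weight average of min-max-normalized |PCC| and MI values per feature; the paper's exact combination rule may differ, and the scores here are taken as precomputed inputs.

```python
def conmi_scores(pcc, mi):
    # Combine linear relevance (|Pearson correlation|) with nonlinear relevance
    # (mutual information): min-max normalize each, then average per feature.
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    p = minmax([abs(v) for v in pcc])
    m = minmax(mi)
    return [(a + b) / 2 for a, b in zip(p, m)]

def top_k(scores, k):
    # Indices of the k highest-scoring features (the Top-K selection step).
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

In the full framework the Top-K subset would then pass through correlation-based filtering and Backward Elimination before SMOTE-Tomek resampling.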
In addition to improving overall classification performance (AUC), the proposed approach also enhances minority class detection, as reflected in improved Recall and F1-score. Overall, the proposed hybrid framework provides an effective and reliable solution for software defect prediction, particularly for high-dimensional and imbalanced datasets.</p> 2026-04-24T00:00:00+00:00 Copyright (c) 2026 Muhammad Khalid Maulana, Setyo Wahyu Saputro, Mohammad Reza Faisal, Radityo Adi Nugroho, As’ary Ramadhan https://publikasi.dinus.ac.id/jcta/article/view/15921 Effectiveness and Limitations of Preprocessing Methods for Proprioceptive Sensor Noise in Quadruped Robots 2026-03-29T01:18:37+00:00 Mui D. Nguyen ducmui@tnut.edu.vn Minh T. Nguyen nguyentuanminh@tnut.edu.vn Ha T. Nguyen hant@tnu.edu.vn Binh TT. Nguyen binhntt@tnu.edu.vn Long Q. Dinh dqlong@ictu.edu.vn Dung T. Nguyen ntdungcndt@ictu.edu.vn Thang C. Vu vcthang@ictu.edu.vn Duc M. Ngo ngoduc198-tdh@tnut.edu.vn <p>Proprioceptive sensor data, including inertial measurement units (IMU), joint encoders, and torque sensors, plays a critical role in state estimation for quadruped robots operating in dynamic and unstructured environments. However, these signals are often degraded by various sources of error, such as high-frequency noise, bias, drift, and contact-induced disturbances, which directly affect estimation accuracy and stability. This study presents a systematic analysis of sensor-specific noise characteristics and evaluates the effectiveness of preprocessing methods tailored to each sensor modality. Specifically, moving average filtering is applied to encoder signals to mitigate noise amplification during differentiation, while first-order low-pass filtering is employed for IMU and torque signals to suppress high-frequency noise. 
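The two preprocessing filters named above have compact textbook forms; a minimal sketch (window length and smoothing factor are illustrative, not the study's tuned values):

```python
def moving_average(signal, window=5):
    # Causal moving average, as applied to encoder signals before differentiation.
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

def low_pass(signal, alpha=0.1):
    # First-order IIR low-pass, y[n] = alpha*x[n] + (1-alpha)*y[n-1],
    # as applied to IMU and torque channels to suppress high-frequency noise.
    out = [signal[0]]
    for x in signal[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out
```

Both are linear operators, which is precisely why they attenuate broadband high-frequency noise well but leave impulsive, contact-induced spikes largely intact.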
Experimental results on a publicly available quadruped dataset demonstrate that encoder velocity RMSE is reduced by 12.09%, high-frequency energy decreases by 59.63%, and signal-to-noise ratio (SNR) improves by 145.6%. However, variance reductions remain limited (3.39% for IMU and 4.05% for torque), indicating the persistence of impulsive, non-Gaussian noise caused by contact events. These findings highlight that linear preprocessing methods are effective for attenuating high-frequency noise but insufficient for handling non-Gaussian disturbances. The study provides practical insights into the effectiveness and limitations of preprocessing strategies, serving as a foundation for developing more robust signal processing and state estimation frameworks in quadruped robotics.</p> 2026-04-28T00:00:00+00:00 Copyright (c) 2026 Mui D. Nguyen, Minh T. Nguyen, Ha T. Nguyen, Binh TT. Nguyen, Long Q. Dinh, Dung T. Nguyen, Thang C. Vu, Duc M. Ngo https://publikasi.dinus.ac.id/jcta/article/view/15975 Language-Similarity-Guided Transfer Fine-Tuning of Pre-trained Transformer Models for Sentiment Analysis Across 12 Indonesian Regional Languages 2026-04-10T07:27:10+00:00 Brian Rizqi Paradisiaca Darnoto brianrizqi@unej.ac.id Dony Bahtera Firmawan donybf@unej.ac.id <p>Sentiment analysis for Indonesian regional languages faces two persistent challenges: labeled training data is extremely limited for most regional varieties, and transformer models pre-trained on Bahasa Indonesia do not generalize reliably to languages with substantially different morphological structures. Prior work on the NusaX benchmark has primarily relied on direct fine-tuning, treating each regional language independently and without exploiting linguistic proximity between related languages as a transfer signal. 
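Linguistic proximity of this kind is commonly quantified with character n-gram statistics; below is a minimal character-trigram cosine-similarity sketch (illustrative, and not necessarily the paper's exact similarity measure or text preprocessing):

```python
import math
from collections import Counter

def trigrams(text):
    # Character-trigram profile of a text sample, with boundary padding.
    t = f"  {text.lower()} "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def trigram_cosine(a, b):
    # Cosine similarity between two trigram profiles; a pivot language would be
    # the candidate maximizing this score against the target language's text.
    pa, pb = trigrams(a), trigrams(b)
    dot = sum(pa[g] * pb[g] for g in pa)
    na = math.sqrt(sum(v * v for v in pa.values()))
    nb = math.sqrt(sum(v * v for v in pb.values()))
    return dot / (na * nb) if na and nb else 0.0
```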
This paper proposes Language-Similarity-Guided Transfer (LSGT), a sequential fine-tuning strategy that first adapts a pre-trained model to a pivot language selected using character trigram similarity, followed by fine-tuning on the target language. Four transformer models are evaluated across all 12 NusaX languages using the official train/validation/test splits: IndoBERT, NusaBERT, mBERT, and XLM-R. Performance is evaluated using four metrics: accuracy, macro F1, macro precision, and macro recall. Experimental results show that LSGT improves macro F1 in 44 of 48 model-language combinations, demonstrating that the fine-tuning strategy itself is a major factor in low-resource cross-lingual sentiment classification. XLM-R benefits most strongly from LSGT, achieving an average improvement of +0.137 macro F1 and a peak gain of +0.298 on Madurese. SHAP-based token attribution analysis further reveals that predictions rely heavily on named entities and domain-specific nouns rather than sentiment-bearing vocabulary, indicating a dataset-level bias inherited from the original SmSA corpus and propagated through the NusaX translation pipeline.</p> 2026-05-07T00:00:00+00:00 Copyright (c) 2026 Brian Rizqi Paradisiaca Darnoto, Dony Bahtera Firmawan https://publikasi.dinus.ac.id/jcta/article/view/15980 Quantifying the Impact of Text Preprocessing on IndoBERT Fine-Tuning for Indonesian Informal Culinary Sentiment Analysis 2026-04-11T13:24:04+00:00 Rahmat Budianoor 2211016210032@mhs.ulm.ac.id Setyo Wahyu Saputro setyo.saputro@ulm.ac.id Friska Abadi friska.abadi@ulm.ac.id Radityo Adi Nugroho radityo.adi@ulm.ac.id Andi Farmadi andifarmadi@ulm.ac.id <p>Indonesian culinary comments on social media platforms such as Instagram are characterized by informal spelling, regional language mixing, slang expressions, and emojis, posing substantial challenges for automated sentiment classification. 
While IndoBERT has demonstrated strong performance across Indonesian natural language processing tasks, the contribution of individual preprocessing components to fine-tuning performance on informal text remains underexplored, particularly in the culinary domain. This study addresses this gap by conducting a systematic preprocessing ablation study on IndoBERT-Base fine-tuning for Indonesian culinary sentiment classification, accompanied by a comparative evaluation against Naive Bayes with TF-IDF, SVM with TF-IDF, and BiLSTM as representative baselines. A dataset of 3,500 manually labeled Instagram culinary comments across three sentiment classes was used, with a stratified 80/10/10 split. Six preprocessing variants were evaluated under identical experimental conditions to isolate the contribution of each component. The results show that slang normalization is the most impactful single preprocessing step, yielding a macro F1-score gain of +0.0609 over the no-preprocessing baseline, while the full pipeline achieves an accuracy of 0.8800 and a macro F1-score of 0.8465. IndoBERT-Base with the full pipeline outperforms all baselines across all evaluation metrics. Per-class analysis reveals that the negative class achieves the lowest F1-score of 0.7600, with sarcastic expressions and Banjar regional vocabulary identified as primary sources of misclassification. These findings indicate that preprocessing decisions have a measurable and non-uniform effect on IndoBERT fine-tuning performance. 
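Slang normalization of this kind is essentially a lexicon lookup over tokens; a minimal sketch (the slang-to-formal pairs below are invented examples, not the study's actual dictionary):

```python
# Hypothetical slang lexicon; real colloquial-Indonesian lists are far larger.
SLANG = {"bgt": "banget", "enk": "enak", "gk": "tidak"}

def normalize_slang(text, lexicon=SLANG):
    # Replace known slang tokens with their formal equivalents, token by token,
    # bringing informal input closer to the model's pre-training vocabulary.
    return " ".join(lexicon.get(tok, tok) for tok in text.split())
```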
In this study, slang normalization provides the most substantial individual contribution in bridging the vocabulary gap between informal user-generated text and the model’s pre-training distribution.</p> 2026-05-07T00:00:00+00:00 Copyright (c) 2026 Rahmat Budianoor, Setyo Wahyu Saputro, Friska Abadi, Radityo Adi Nugroho, Andi Farmadi https://publikasi.dinus.ac.id/jcta/article/view/15777 A Multi-Branch BiLSTM with Multi-Head Self-Attention for Suspicious Sound Recognition 2026-02-04T01:26:41+00:00 Shehu Mohammed Yusuf smyusuf@abu.edu.ng Hamza Saidu hamzasaidu34@gmail.com Sani Saleh Saminu sssaleh@engineering.abu.edu.ng <p>Suspicious urban sound recognition is a critical component of intelligent public safety and urban monitoring systems, enabling the automated identification of anomalous acoustic events such as gunshots, sirens, and other security-sensitive sounds. However, existing deep learning approaches often struggle to simultaneously capture long-range temporal dependencies and global contextual relationships, particularly under noisy and acoustically complex urban conditions. This limitation can reduce reliability in safety-critical scenarios where missed detections carry significant risk. To address these challenges, this study proposes a Multi-Branch Bidirectional Long Short-Term Memory (BiLSTM) framework with Multi-Head Self-Attention (MHSA) for enhanced sequential and contextual feature modeling. Mel-frequency cepstral coefficients (MFCCs) are extracted from a curated subset of the UrbanSound8K dataset, comprising five suspicious sound classes, and used as input to the proposed architecture. The multi-branch design enables complementary temporal representations, while the self-attention mechanism provides lightweight contextual weighting of BiLSTM outputs. Experimental results demonstrate that the proposed model achieves a test accuracy of 95.59%, outperforming conventional Dense and LSTM-based baseline models under identical experimental settings. 
An ablation study further confirms the contribution of multi-branch integration and attention-based enhancement to overall performance. Class-wise evaluation reveals consistently high recall across all sound categories, particularly for safety-critical classes such as gunshots and sirens. These findings indicate that the proposed framework provides robust and reliable performance, making it suitable for real-time smart city surveillance and public safety applications.</p> 2026-05-12T00:00:00+00:00 Copyright (c) 2026 Shehu Mohammed Yusuf, Hamza Saidu, Sani Saleh Saminu https://publikasi.dinus.ac.id/jcta/article/view/15963 Beyond Dashboards: A Systematic Literature Review of Learning Analytics, Business Intelligence, and Generative AI for Decision-Making in Universities 2026-04-08T03:45:51+00:00 Heri Purwanto hpurwanto@students.undip.ac.id R. Rizal Isnanto rizal_isnanto@yahoo.com Qidir Maulana Binu Soesanto qidirbinu@fisika.fsm.undip.ac.id Agus Nursikuwagus agusnursikuwagus@email.unikom.ac.id Fahmi Reza Ferdiansyah fahmi.reza@ekuitas.ac.id <p>The rapid proliferation of learning analytics, business intelligence (BI), artificial intelligence (AI), and generative AI (GenAI) has significantly expanded universities’ ability to collect, integrate, analyze, and operationalize institutional data. However, despite advances in predictive analytics, dashboards, and AI-driven systems, the translation of analytical outputs into consistent and accountable institutional decision-making remains uneven. This systematic literature review synthesizes contemporary research on analytics-enabled decision-making in higher education with the aim of moving beyond dashboard-centric perspectives toward a socio-technical and computing-oriented understanding of how data are transformed into institutional actions and outcomes. 
Guided by the PRISMA framework, the review synthesizes evidence across four interconnected dimensions: data ecosystems and learning analytics foundations; analytics capability, BI adoption, and digital readiness; AI and advanced analytics for decision support; and human-in-the-loop (HITL) decision routines and institutional outcomes. The findings show that predictive performance and analytical sophistication alone do not guarantee decision value. Instead, effective analytics-enabled decision-making depends on interoperable data ecosystems, organizational analytics capability, governance mechanisms, explainability, and sustained human oversight. Based on these findings, this review contributes a computing-oriented decision-intelligence framework that conceptualizes analytics-enabled decision-making as an end-to-end socio-technical pipeline linking heterogeneous data acquisition, integration, feature construction, analytical modeling, explainability, human validation, governance, and feedback-based refinement. By integrating learning analytics, BI, AI, GenAI, and HITL mechanisms within a unified framework, the review clarifies how universities can move beyond dashboard-based reporting toward accountable, adaptive, and institutionally actionable decision-support infrastructures.</p> 2026-05-14T00:00:00+00:00 Copyright (c) 2026 Heri Purwanto, R. Rizal Isnanto, Qidir Maulana Binu Soesanto, Agus Nursikuwagus, Fahmi Reza Ferdiansyah