Analysis of Hyperparameter Tuning Research Trends in Software Engineering through Systematic Literature Review and Bibliometric Analysis
DOI: https://doi.org/10.52436/1.jpti.817
Keywords: Bibliometrix analysis, Hyperparameter tuning, Software engineering, Systematic Literature Review
Abstract
Hyperparameter tuning is an important aspect of improving the performance of machine learning models in software engineering. Despite its significant impact, studies on the trends and development of hyperparameter tuning research in this field remain limited and have rarely been explored systematically. This study aims to analyze hyperparameter tuning research trends in software engineering through a Systematic Literature Review (SLR) and Bibliometric Analysis. The bibliometric analysis was performed with the Bibliometrix R-Package; 503 articles retrieved from the Scopus database were analyzed to identify dominant tuning methods, the challenges faced, and future research opportunities. The results show fluctuations in the number of publications from 2020 to 2025, with a significant increase in 2024 (132 articles) but a decline in average citations in 2025 (10 articles), which can be attributed to the shorter time newer publications have had to accumulate citations. Wang Y is the most prolific author with 11 articles and the most influential author with a fractionalized score of 1.75. The most relevant and popular source by number of publications in the journal category is IEEE Transactions on Software Engineering (18 articles), and in the conference category, the ACM International Conference Proceeding Series (15 articles). The leading topic trend, based on frequently occurring keywords, is deep learning with 89 occurrences. Research trends also show a significant increase in the exploration of automated tuning techniques to address model complexity and high computational costs. This review provides insight into recent developments and challenges, such as the limited generalizability of tuning results and the need for more adaptive approaches, and opens up research opportunities for innovation in hyperparameter tuning in software engineering.
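To make the tuning workflow concrete, the following minimal sketch (an illustration only, not code from the reviewed study) shows automated hyperparameter tuning via randomized search, assuming scikit-learn and SciPy are available; the synthetic dataset is a hypothetical stand-in for a software defect prediction dataset of the kind covered by the surveyed literature.

# Minimal sketch of automated hyperparameter tuning with randomized search.
# Assumptions: scikit-learn and SciPy installed; synthetic data used as a
# placeholder for a real software defect prediction dataset.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic binary classification data (placeholder for defect data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Search space for a tree-based classifier's hyperparameters.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(3, 20),
    "min_samples_leaf": randint(1, 10),
}

# Randomized search with 5-fold cross-validation, optimizing F1.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions=param_distributions,
    n_iter=25,
    cv=5,
    scoring="f1",
    random_state=42,
)
search.fit(X_train, y_train)

print("Best hyperparameters:", search.best_params_)
print("Test-set F1 of the tuned model:", search.score(X_test, y_test))

Grid search or Bayesian optimization could be substituted for the randomized search above; the trade-off between search cost and tuning quality is one of the challenges highlighted in the review.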