Prioritizing Test Cases Using Supervised Machine Learning Techniques Based on Requirement Correlation and Fault Severity
Abstract
Test case prioritisation plays a vital role in regression testing, as it yields effective test case orderings for regression and other testing methodologies. Many factors can be used to prioritise test cases. Prior researchers created approaches for prioritising test cases based on requirements, but they relied on rigid algorithms for computation, so their outcomes were imprecise and inflexible. Here, a model is developed for test case prioritisation using requirement correlation and fault severity. The study followed an experimental research methodology in which 1,000 test cases, classified as positive or negative by specialists, were used for the experiment. Natural Language Processing (NLP) techniques were employed to pre-process the datasets, which were then used as input to the proposed model. After pre-processing, the Term Frequency-Inverse Document Frequency (TF-IDF) approach was used to vectorize the textual data. The machine learning algorithms Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Naive Bayes (NB), and Decision Tree (DT) were used to build the model. Accuracy across the trials ranged from 74% to 94%, which enabled us to identify and select the model that performed best in our experiment. SVM based on requirement correlation and fault severity performed well for prioritising test cases, while KNN was the only effective technique for determining fault severity. This outcome reflects the fact that SVM outperforms KNN when the number of features is large, whereas KNN outperforms SVM when the number of features is small.
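The pipeline described in the abstract (text pre-processing, TF-IDF vectorization, then classification with SVM, KNN, NB, and DT) can be sketched as follows. This is a minimal illustrative sketch using scikit-learn; the toy test-case descriptions and labels below are hypothetical placeholders, not the study's 1,000-case dataset, and the chosen hyperparameters are assumptions.

```python
# Sketch of the abstract's pipeline: TF-IDF features -> four classifiers.
# The documents and labels here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

# Hypothetical test-case descriptions labelled positive (1) / negative (0)
docs = [
    "verify login succeeds with valid credentials",
    "check payment gateway rejects expired card",
    "ensure session times out after inactivity",
    "confirm report export completes without error",
    "validate search returns relevant results",
    "verify password reset email is sent",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns the textual test cases into numeric feature vectors;
# lowercasing and stop-word removal stand in for the NLP pre-processing step
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(docs)

# The four supervised algorithms compared in the study (settings assumed)
models = {
    "SVM": SVC(kernel="linear"),
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "NB": MultinomialNB(),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X, labels)
    print(name, "training accuracy:", model.score(X, labels))
```

In practice the study would evaluate each classifier on held-out data (e.g. via cross-validation) rather than training accuracy, and the resulting scores would drive the choice of model for prioritisation.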