What is Reliability Testing?
‘Reliable’ means ‘able to be trusted or believed because it works in the way you expect’. Reliability testing checks software reliability: it verifies that the software performs well under the given environmental conditions for a specific period of time without any errors.
Reliability testing ensures that the software is reliable, performs in the expected manner, and satisfies the client’s requirements.
Some of the objectives of reliability testing are to find the pattern of repetitive failures, the number of failures that occur in a specific amount of time, the root cause of failures, and to retest the modules once defects have been eliminated.
Types of Reliability Testing
Following are the three types of reliability testing-
Feature Testing – In feature testing, all the features of the software are checked, and it involves the following steps.
First, test each feature of the software documented in software requirements in isolation.
Next, test pairs of features together, keeping the interaction between them as small as possible.
Make sure each feature is working in the desired manner.
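The steps above can be sketched as a simple test-plan generator. The feature names below are hypothetical; `itertools.combinations` produces the minimal two-feature groupings described in the second step:

```python
from itertools import combinations

# Hypothetical features drawn from a requirements document.
features = ["login", "search", "checkout", "notifications"]

# Step 1: each feature is tested in isolation.
isolated_tests = [(f,) for f in features]

# Step 2: features are tested in pairs, keeping each
# combination as small as possible to limit interaction.
pairwise_tests = list(combinations(features, 2))

print(len(isolated_tests))   # single-feature tests
print(len(pairwise_tests))   # two-feature tests
```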
Load Testing – Load testing is done to ensure that the system is able to handle the required load and does not suffer a malfunction. In this type of testing, multiple users will be using the system and doing similar kinds of operations simultaneously to check its consequences on the system performance.
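A minimal sketch of this idea in Python, using threads to stand in for concurrent users. The `simulated_user` function and the short sleep are placeholders for real requests to the system under test:

```python
import threading
import time

def simulated_user(user_id, results):
    """Stand-in for one user performing a typical operation."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for a real request to the system under test
    results[user_id] = time.perf_counter() - start

# Launch 50 simulated users doing similar operations simultaneously.
results = {}
threads = [threading.Thread(target=simulated_user, args=(i, results))
           for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

avg = sum(results.values()) / len(results)
print(f"{len(results)} users, average response time {avg:.3f}s")
```

A real load test would also ramp users up gradually and record error rates, not just response times.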
Regression Testing – Regression testing is done whenever any change is made to the system, i.e. a new feature is added or an existing feature is changed. It verifies that the changes do not affect the unchanged parts of the system.
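As an illustration, suppose a hypothetical pricing function gains a new bulk-discount feature; regression checks confirm that the pre-existing behavior is unchanged (the function and all values below are made up):

```python
# A hypothetical pricing function recently changed to add a bulk discount.
def total_price(quantity, unit_price):
    subtotal = quantity * unit_price
    if quantity >= 100:          # newly added feature
        subtotal *= 0.9
    return round(subtotal, 2)

# Regression checks: existing behavior must be unaffected by the change.
assert total_price(1, 5.00) == 5.00
assert total_price(10, 2.50) == 25.00

# New-feature check:
assert total_price(100, 1.00) == 90.00

print("regression suite passed")
```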
Categories of Reliability Testing
Following are the three categories of reliability testing-
Modeling – In software modeling techniques, we observe failure data and analyze it. Modeling is of two types-
Prediction Modeling – Such models use historical failure data to predict future failure behavior. Generally, these models are used before the beginning of the development phase, so software reliability can be predicted even before coding starts. Examples of prediction models are Musa’s Execution Time Model and Putnam’s Model.
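For example, Musa’s basic execution-time model expresses the expected number of failures after τ units of execution time as μ(τ) = ν₀(1 − e^(−λ₀τ/ν₀)). A sketch with illustrative parameters (not taken from any real project):

```python
import math

def musa_expected_failures(tau, v0, lambda0):
    """Musa's basic execution-time model:
    mu(tau) = v0 * (1 - exp(-lambda0 * tau / v0))
    v0      -- total failures expected over infinite execution time
    lambda0 -- initial failure intensity (failures per CPU hour)
    tau     -- cumulative execution time (CPU hours)
    """
    return v0 * (1.0 - math.exp(-lambda0 * tau / v0))

# Illustrative parameters, not from any real project:
v0, lambda0 = 150.0, 12.0
for tau in (0, 10, 50, 100):
    print(tau, round(musa_expected_failures(tau, v0, lambda0), 1))
```

The expected failure count grows quickly at first and then flattens out, approaching ν₀ as execution time increases.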
Estimation Modelling – Estimation models use data from ongoing software development and estimate the failure behavior of the system. Generally, these models are used once the development process has started and enough data has been collected. Examples of such models are the Weibull Distribution Model and Exponential Distribution Models.
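A minimal sketch of estimation with the exponential model: for exponentially distributed inter-failure times, the maximum-likelihood estimate of the failure rate is simply the reciprocal of the mean inter-failure time (the data below is illustrative, not from a real project):

```python
# Inter-failure times (hours) collected during testing -- illustrative data.
inter_failure_times = [12.0, 18.5, 25.0, 31.0, 44.0, 52.5]

# For an exponential distribution, the maximum-likelihood estimate of the
# failure rate is the reciprocal of the mean inter-failure time.
mean_tif = sum(inter_failure_times) / len(inter_failure_times)
failure_rate = 1.0 / mean_tif

print(f"estimated MTTF: {mean_tif:.1f} h")
print(f"estimated failure rate: {failure_rate:.4f} failures/h")
```

Note that the growing gaps in the sample hint at reliability growth, which is exactly what the fitted model is used to quantify as more data is collected.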
It is important to note that no individual model is best in all situations. Based on the software requirement, the correct model will have to be chosen by the team.
Measurement – Measuring the reliability of software directly is difficult, but we can identify characteristics related to software reliability and measure reliability metrics based on them. Let’s discuss such metrics in this section.
Following are the sub-categories of the software reliability measurement practices-
Product Metrics – Product metrics are computed from data collected from source code, requirements, design models and test cases. These metrics are useful for assessing various software characteristics and gaining insight into software quality.
Software Size – An important approach to measuring software size is ‘Lines of Code’ (LOC). In this approach, lines of source code are counted, excluding comments and non-executable statements. This is not a perfect measure of software size, as different projects use different languages and some parts of the code may have been reused.
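A rough LOC counter might look like the sketch below; the comment convention (`#`) is Python-specific, and real tools handle many more cases such as block comments and string literals:

```python
def count_loc(source):
    """Count physical lines of code, skipping blank lines and
    full-line '#' comments (Python-style, for illustration)."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """
# configuration loader
import json

def load(path):
    # read and parse
    with open(path) as fh:
        return json.load(fh)
"""
print(count_loc(sample))  # 4 lines counted
```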
Function Point Metric – The Function Point (FP) metric is used to measure the functionality of the developed software. Function points are calculated from countable measures such as the number of external inputs (EIs), number of external outputs (EOs), number of external inquiries (EQs), number of internal logical files (ILFs) and number of external interface files (EIFs). This metric is independent of the programming language used and measures the functional complexity of the software.
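As an illustration, the unadjusted function point count is a weighted sum of these five measures. The weights below are the standard IFPUG average-complexity weights; the counts are hypothetical:

```python
# IFPUG average-complexity weights for each countable measure.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Illustrative counts for a hypothetical application.
counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}

# Unadjusted Function Points = sum of (count * weight).
ufp = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)
print(ufp)  # 150
```

A full FP count would then scale this value by a technical complexity adjustment factor.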
Complexity – Complexity metrics measure the complexity of the software’s control structure by reducing the code to a graphical representation. One such metric is McCabe’s Complexity Metric, in which any software module is described by a Control Flow Graph (CFG) using nodes and edges.
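McCabe’s cyclomatic complexity can then be computed from the CFG as V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's metric for a control flow graph:
    V(G) = E - N + 2P
    edges      -- number of edges in the CFG
    nodes      -- number of nodes in the CFG
    components -- connected components (1 for a single module)
    """
    return edges - nodes + 2 * components

# A module whose CFG has 9 edges and 7 nodes:
print(cyclomatic_complexity(9, 7))  # V(G) = 9 - 7 + 2 = 4
```

V(G) equals the number of linearly independent paths through the module, which is also a useful lower bound on the number of test cases needed for branch coverage.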
Test Coverage – Such metrics are used to estimate faults in the system by performing various types of testing on the software.
Project Management Metrics – Project management metrics are used to evaluate the management process. Better management by developer results in a better development process that will reduce the cost and complete the project on time. Examples of such metrics are Schedule Variance, Effort Variance, Size variance, etc.
Process Metrics – Process metrics focus on process quality by measuring attributes of the software development process. Examples of such metrics are Cost of Quality, Defect Density, Testing Efficiency, etc.
Fault and Failure Metrics – These metrics are used to achieve failure-free software execution. They use faults found by the testing team during testing and failures reported by end users; this data is then gathered and analyzed. One of the important metrics used for this purpose is MTBF, i.e. Mean Time Between Failures.
MTBF is the sum of two other metrics: MTTF, i.e. Mean Time to Failure (the time between two consecutive failures), and MTTR, i.e. Mean Time to Repair (the time required to fix a failure).
In other words, MTBF = MTTF + MTTR
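A quick sketch of computing these metrics from a failure log; the timestamps and repair durations below are illustrative:

```python
# Timestamps (hours of operation) at which failures were observed.
failure_times = [100.0, 220.0, 360.0, 520.0]
# Time spent repairing each failure (hours).
repair_times = [2.0, 3.0, 2.5, 4.5]

# MTTF: mean operating time between consecutive failures.
gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
mttf = sum(gaps) / len(gaps)

# MTTR: mean time required to fix a failure.
mttr = sum(repair_times) / len(repair_times)

# MTBF = MTTF + MTTR
mtbf = mttf + mttr
print(f"MTTF={mttf:.1f} h  MTTR={mttr:.1f} h  MTBF={mtbf:.1f} h")
```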
Improvement – After development and before deployment, the software is tested to identify any defects or bugs in the system. Analysis tools such as Fault Tree Analysis (FTA) and Orthogonal Defect Classification (ODC) can also be used to reduce the possibility of defects.
Approaches used for Reliability Testing
Reliability testing is done to check whether the software works in the desired manner without any faults. It is difficult to calculate exact reliability, but it can be estimated using various approaches. A couple of them are discussed below:
Test-Retest Reliability – In this type of reliability, consistency of results is measured when the same test is performed by or on the same sample at a different time. Once the results are ready, the correlation between the two results is calculated. Usually, if the correlation is greater than 0.8, it is considered a reliable test.
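A minimal sketch of this check using the Pearson correlation coefficient; the score lists are made-up examples of the same test administered at two different times:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative scores from the same test run at two different times.
run1 = [78, 85, 90, 66, 72, 95]
run2 = [75, 88, 91, 70, 69, 93]

r = pearson(run1, run2)
print(f"correlation = {r:.2f}, reliable: {r > 0.8}")
```

The same calculation applies to parallel forms reliability below; only the source of the two score sets differs.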
Parallel Forms Reliability – It is used to measure reliability when two similar kinds of tests are performed by or on the same or two different samples. Responses or results from the samples are collected and their correlation is calculated.
Reliability Testing Tools
Some of the tools or programs available in the market to measure software reliability are mentioned in brief below:
SOFTREL – SoftRel, LLC is a US-based company founded in 1991. It provides multiple products and services for measuring software reliability. Some of its products are ‘Software Reliability Toolkit’, ‘Frestimate Software’, etc.
SoREL – The SoREL tool is used for software reliability analysis and prediction. It offers four reliability growth tests, i.e. the Arithmetical Mean, Laplace, Kendall, and Spearman tests, and supports four types of reliability growth models. SoREL allows two types of failure data processing, i.e. inter-failure data and failure intensity data.
SMERFS – SMERFS stands for Statistical Modeling and Estimation of Reliability Functions for Software. It was created in 1982 and has two versions: SMERFS and SMERFS Cubed. SMERFS collects raw data, and after examination, failure and fault detection rates are predicted.
Software reliability is an important area of software quality. Reliability testing is a costly process and should be done with proper planning. Despite the cost, reliability testing gives the development team and clients confidence that the software being developed is highly reliable and works in the desired manner.