Hypothesis based testing

From Deletionpedia.org: a home for articles deleted from Wikipedia
This article was considered for deletion at Wikipedia on May 19 2017. This is a backup of Wikipedia:Hypothesis_based_testing. All of its AfDs can be found at Wikipedia:Special:PrefixIndex/Wikipedia:Articles_for_deletion/Hypothesis_based_testing, the first at Wikipedia:Wikipedia:Articles_for_deletion/Hypothesis_based_testing.

Hypothesis based testing (HBT) is a personal, scientific test methodology that focuses on leveraging one’s intellect through sharp goal focus, using tools for scientific thinking to rapidly assess the cleanliness of software. The central theme of HBT is to hypothesise potential defect types (PDTs) that can impede the “cleanliness of software” and then “prove their presence”.[1] [2] [3]

The philosophy behind HBT

  • Clarity is key to what we do.
  • Value what we deliver rather than the activities we do.
  • Evolve continuously and adapt.
  • Do less and deliver more.
  • Prove why we do what we do, and the effectiveness of what we do.

The focus of HBT is to enable you to “see clearly” by asking three questions:

  1. Are you clear about what you want to accomplish?
  2. Are you clear about what you need to do?
  3. Are you clear about the outcome of your work?

The process of seeking clarity yields probing questions that enable you to uncover issues.

HBT vs. typical testing

So what is the difference between the typical and the HBT way of testing? Typically, testing is seen as a series of activities whose outcome (goal) is dependent on the individual’s experience. In the case of HBT, goal clarity is sought before any activity commences – “What are you testing, and what are you testing for?” The activities are then powered by well-formed scientific tenets that enable the individual to work smartly.[4] [5] [6] [7]

The table below drills down into the key differences between these across various dimensions:

Dimension           | Typical                   | HBT
Outcomes            | Experience is key         | Techniques are key
Approach            | Hard work                 | Smart work
Effectiveness       | Trust in skill            | Prove test effectiveness
Test approach       | Multiple issues at one go | Staged detection
Automation approach | Do more (repeat), do fast | Do less, adapt quickly
Measure(s)          | Externally observed       | Intrinsic

Scientific approach to testing


A scientific approach is about stating a problem, coming up with hypotheses, and then proving or disproving them.

In the context of testing, we state the problem in terms of end-user expectations, hypothesise the potential types of defects that may be probable, and then prove their presence. The table below outlines this.

Question                                         | Outcome
What is the user expectation of "good quality"?  | Clarify problem: Cleanliness Criteria
What types of issues can result in poor quality? | Hypothesise Potential Defect Types
When should I uncover them?                      | Identify Evaluation Stage
How should I uncover them?                       | Test Types
What are the design techniques?                  | Scenarios/Cases
How do I execute them optimally?                 | Scripts
How good is it? How am I doing?                  | Metrics & Management

Six staged evaluation in HBT

The act of validation in HBT consists of “SIX stages of DOING”. It commences with two stages focused on a scientific understanding of the customer expectations and the context of the software. A key outcome of these first two stages is the “Cleanliness Criteria”, which gives a clear understanding of the expectation of quality. In the third stage, the Cleanliness Criteria and the information acquired so far are used to hypothesise the potential types of defects that are probable in the software. The fourth stage consists of devising a proof to scientifically ensure that the hypothesised defects can indeed be detected cost-efficiently. The fifth stage focuses on building the tooling support needed to execute the proof. The last stage is about executing the proof and assessing whether the software does indeed meet the Cleanliness Criteria.
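The six stages above can be sketched as an ordered pipeline. The following Python sketch is purely illustrative – HBT prescribes no code, and the stage names are paraphrased from the prose:

```python
# Illustrative sketch of the SIX HBT stages as an ordered pipeline.
# Stage names are paraphrased from the description above.
from typing import Optional

HBT_STAGES = [
    ("S1", "Understand customer expectations"),
    ("S2", "Understand the context of the software"),
    ("S3", "Hypothesise potential defect types from the Cleanliness Criteria"),
    ("S4", "Devise a proof that the hypothesised defects can be detected"),
    ("S5", "Build the tooling support needed to execute the proof"),
    ("S6", "Execute the proof and assess the Cleanliness Criteria"),
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage identifier that follows `current`, or None after S6."""
    ids = [sid for sid, _ in HBT_STAGES]
    i = ids.index(current)
    return ids[i + 1] if i + 1 < len(ids) else None
```

The ordering matters: each stage consumes the outputs of the previous one, which is why the Cleanliness Criteria (from S1–S2) must exist before defect hypothesis (S3) can begin.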

HBT architecture

The various activities related to testing are done in SIX stages (S1–S6), and these activities are performed scientifically using EIGHT disciplines of thinking. Each discipline consists of tiny tools that use THIRTY-TWO core concepts that form the core of scientific thinking.[8][9] [10]

HBT Terms & definitions[11]

Needs and expectations: Needs represent the various features/flows of the software/system that enable end-user(s) to perform their tasks. Expectations, on the other hand, are about how well the “needs” (i.e. the features of the system) are fulfilled. Good quality is about meeting expectations.
Cleanliness Criteria (CC): Given that a need has to meet the expectation(s), how do we state the expectation? In HBT it is stated as “cleanliness criteria”, where a criterion represents a characteristic/property that ‘a need’ shall satisfy. The collection of all criteria for ‘a need’ is termed the “cleanliness criteria”, making the expectation clear and testable.
Potential Defect (PD): A possible problem/issue that could exist in the software application.
Potential Defect Type (PDT): A set of similar PDs grouped into a defect category. Note that a CC is not met when PDTs that affect that CC are present.
Feature: A service/function that is offered by the system.
Requirement/Use case: A collection of features that enables an end user to do a job.
Quality level: The act of uncovering PDTs in HBT is seen as a filtration process. In HBT there are NINE levels of filtration. Each filter is focused on uncovering a set of PDTs, thereby meeting some of the cleanliness criteria. Each of these levels is termed a ‘quality level’.
Test type: A specific test that focuses on uncovering a specific set of PDTs.
Test technique: One that enables a formal design of scenarios and test cases.
Test strategy: Strategy in HBT is a Cartesian product of ‘what-to-test’, ‘test-for-what’ and ‘how-to-test’, i.e. what types of tests have to be performed on what features/requirements, what test (design) techniques to use, and how to execute the test cases.
Behaviour-Stimuli approach: Stimulate the system-under-test and check behaviours – the design approach in HBT, wherein the focus is on coming up with ‘stimuli’ to ascertain the correctness of ‘behaviours’. Test cases are the ‘stimuli’, and these validate the scenario (the ‘behaviour’).
Test scenario: In HBT, a test scenario is one that validates ‘a behaviour’ of an entity under test (EUT). Given that the behaviour can be modelled as a set of conditions to be satisfied by an EUT, a scenario represents an instantiation of the values of all the conditions, resulting in a flow. A test scenario with at least one condition violated is termed a “negative scenario”, while a positive scenario is one where none of the conditions are violated.
Test case[12]: A test case is a combination of inputs and represents a data set. A test scenario is thus a collection of test cases.
Fault traceability: Fault traceability is about ensuring that test scenarios/cases indeed have the power to detect the potential defects that have been hypothesised. In requirement traceability, test scenarios/cases are mapped to the entity under test; here, the test scenarios/cases are mapped to the potential defect types (PDTs).
Countability: The ability to justify logically why we have the given number of test scenarios/cases. Useful for judging test adequacy.
Potency: The ability of test cases to uncover an instance of a potential defect type. Potency is about the least number of test cases with the ability to target specific types of defects in specific areas to ensure ‘clean software’.
Immunity: A term indicating ‘potential resistance’ to uncovering PDTs – the software has become immune to the test cases. It denotes that the areas targeted have become strong (good quality) and that the potency of the test cases has weakened.
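The Behaviour-Stimuli definitions above can be made concrete with a small sketch. The following Python model is illustrative only: the class, condition names, and login example are invented to show how a scenario becomes "negative" when at least one of its conditions is violated:

```python
# Illustrative model of the Behaviour-Stimuli definitions above:
# a scenario instantiates a set of conditions; a "negative scenario"
# has at least one condition violated. Names are invented examples.
from dataclasses import dataclass

@dataclass
class TestScenario:
    name: str
    conditions: dict  # condition description -> True (satisfied) / False (violated)

    @property
    def is_negative(self) -> bool:
        # Per the definition: negative iff at least one condition is violated.
        return not all(self.conditions.values())

positive = TestScenario("valid login", {"user exists": True, "password correct": True})
negative = TestScenario("wrong password", {"user exists": True, "password correct": False})
```

Each scenario would then be exercised by several test cases (the 'stimuli'), each a distinct combination of input data.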

Core Concept

A problem can be solved in two ways: (1) using one’s experience, or (2) by applying a well-formed scientific approach that is implemented via a process. The former depends on the “skill of an individual”, while the latter depends on the “strength of the process”. Skill-based problem solving can be seen as an art/craft, while a logical approach can be deemed one based on science and engineering, i.e. scientific principles plus a process for applying those principles.

The focus of HBT is to solve the problem of ‘producing clean software’ using a scientific approach, an embodiment of (2). The logical problem-solving tool-set at the heart of HBT is called the core concepts: a collection of techniques, principles and guidelines. There are THIRTY-TWO of these in total, and they are used in various parts of the SDLC to decompose the problem, identify cleanliness criteria, hypothesise potential defect types, formulate test strategy, design test cases, identify metrics and build appropriate automation.

Cleanliness Criteria (CC)

Given that a need has to meet the expectation(s), how do we state the expectation? In HBT it is stated as “cleanliness criteria”, where a criterion represents a characteristic/property that ‘a need’ shall satisfy.

The collection of all criteria for ‘a need’ is termed the “cleanliness criteria”, making the expectation clear and testable.

Cleanliness criteria are a mirror of the end user’s expectations of the system. In HBT, cleanliness criteria represent testable expectations. They provide a very strong basis for goal-focused testing, allowing one to identify potential types of defects and then formulate an effective strategy resulting in a complete set of test cases. It is important that the cleanliness criteria are not vague or fuzzy.

Cleanliness criteria represent the “intrinsic quality”, i.e. what properties should the final system have to ensure that it is deemed clean? Use the properties of the FIVE aspects – Data, Business logic, Structure, Environment and Usage – as applied to your application to arrive at criteria specific to your application.
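The five aspects can serve as a checklist when recording cleanliness criteria. The sketch below is illustrative only: the example criteria are invented, and the `is_testable` vagueness check is a crude stand-in for the human judgement the text calls for, not part of HBT:

```python
# Illustrative checklist: one or more criteria recorded per HBT aspect.
# The criteria shown here are invented examples.
CLEANLINESS_CRITERIA = {
    "Data": ["Rejects malformed input values with a clear error"],
    "Business logic": ["Computes interest per the published formula"],
    "Structure": ["No resource leaks across repeated operations"],
    "Environment": ["Runs on every supported browser version"],
    "Usage": ["The typical workflow completes in under three steps"],
}

def is_testable(criterion: str) -> bool:
    """Crude illustrative proxy for 'not vague or fuzzy': flag obviously
    subjective words that make a criterion untestable as written."""
    vague = {"good", "nice", "fast", "user-friendly"}
    return not any(word in criterion.lower().split() for word in vague)
```

A criterion such as "is fast and good" would be flagged, whereas "responds within 200 ms" would pass – the point being that each criterion must be concrete enough to test.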

Potential Defect Type (PDT)

A Potential Defect Type (PDT) is a set of Potential Defects (PD – a possible problem/issue that could exist in the software application) grouped into a category. This is based on the HBT core concept of the “defect centricity principle”.

Hypothesising PDTs is done by considering the various aspects of the EUT (Data, Behaviour/Business logic, Internal structure, External environment and Usage) along the three views of Input (error injection), Inherent (fault proneness) and Output (failure), and then asking questions to come up with PDTs.

PDT identification is based on TWO core concepts of HBT: negative thinking, and the EFF model, which is based on aspects and views.
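Crossing the five aspects with the three views yields a grid of fifteen cells, each a prompt from which PDTs can be hypothesised. The sketch below is illustrative; the prompt wording is invented, only the aspect and view names come from the text:

```python
# The aspects-by-views grid described above: 5 aspects x 3 views = 15 cells,
# each cell a prompt for hypothesising potential defect types.
from itertools import product

ASPECTS = ["Data", "Behaviour/Business logic", "Internal structure",
           "External environment", "Usage"]
VIEWS = ["Input (error injection)", "Inherent (fault proneness)",
         "Output (failure)"]

# Invented prompt wording for illustration.
prompts = [f"What {view} defects relate to {aspect}?"
           for aspect, view in product(ASPECTS, VIEWS)]
```

Working through each of the fifteen prompts systematically is what turns defect hypothesis from guesswork into an enumerable exercise.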

Quality Levels

Typically we have looked at the levels of testing – unit, integration and system – from the aspect of the “size” of the entity under test. A unit test is typically understood as being done on the smallest component that can be independently tested. An integration test is typically done once the various units have been integrated. A system test is typically seen as the last stage of validation and is done on the whole system.

What specific types of defects are expected to be uncovered at each of these test levels is typically not clear. This lack of clarity results in test cases being insufficient or redundant. In HBT, the focus shifts to specific types of defects to be detected, and the act of detection is staged to ensure an efficient detection approach.

In HBT, the notion is of quality levels, where each quality level represents a milestone towards meeting the final cleanliness criteria. At each level, the focus is on uncovering certain types of defects and therefore there is a sharp clarity of the objective of what we are looking for at each level. Each quality level represents a step in the staircase of quality.

In HBT, there are NINE pre-defined quality levels: the lowest quality level focuses on input correctness, progressing to the highest quality level, which validates whether the intended business value is indeed delivered.


Test Type

Clarity of purpose is a key tenet of HBT, and therefore the act of uncovering specific types of defects should be intensely goal-focused. This means that a type of test shall uncover only a specific type of defect. Test type identification results in specific types of tests being executed at each quality level, and therefore in defects being uncovered in a staged manner.

Levels, PDTs and Test types

The objective of each level is to focus on specific PDTs (potential defect types), and doing so requires specific types of tests to be executed. The table below shows the relationship between levels, PDTs and test types. Note that PDTs are numbered only; the exact PDTs are not listed.

Level | Focus                       | Test types (PDTs)
L9    | End user value              | End user value test (PDT 57-59)
L8    | Clean deployment            | Installation test (PDT 54-55), Migration test (PDT 56)
L7    | Attributes met              | LSPS test (PDT 44-51), Reliability test (PDT 52), Security test (PDT 53)
L6    | Environment cleanliness     | Good citizen test (PDT 39-41), Compatibility test (PDT 42-43)
L5    | Flow correctness            | Interaction test (PDT 35-38)
L4    | Behaviour correctness       | Functionality test (PDT 24-31), Access control test (PDT 32-34)
L3    | Structural integrity        | Structural test (PDT 14-23)
L2    | Input interface cleanliness | API validation test (PDT 5-7), GUI validation test (PDT 8-13)
L1    | Input cleanliness           | Input validation test (PDT 1-4)

This makes clear the focus at each level, and therefore which test cases belong to each type of test. This partitioning of PDTs by level also enables a clear partition of work between the various stakeholders in the SDLC.
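The level/PDT/test-type mapping in the table lends itself to a direct encoding, so that any hypothesised PDT number can be traced to the quality level and test type expected to uncover it. This Python sketch merely transcribes the table above; `level_for_pdt` is an illustrative helper, not part of HBT:

```python
# Transcription of the levels/PDTs/test-types table above.
# Each entry: (quality level, test type, PDT numbers it targets).
LEVEL_MAP = [
    ("L1", "Input validation test", range(1, 5)),
    ("L2", "API validation test",   range(5, 8)),
    ("L2", "GUI validation test",   range(8, 14)),
    ("L3", "Structural test",       range(14, 24)),
    ("L4", "Functionality test",    range(24, 32)),
    ("L4", "Access control test",   range(32, 35)),
    ("L5", "Interaction test",      range(35, 39)),
    ("L6", "Good citizen test",     range(39, 42)),
    ("L6", "Compatibility test",    range(42, 44)),
    ("L7", "LSPS test",             range(44, 52)),
    ("L7", "Reliability test",      range(52, 53)),
    ("L7", "Security test",         range(53, 54)),
    ("L8", "Installation test",     range(54, 56)),
    ("L8", "Migration test",        range(56, 57)),
    ("L9", "End user value test",   range(57, 60)),
]

def level_for_pdt(pdt: int):
    """Return the (level, test type) pair that targets the given PDT number."""
    for level, test_type, pdts in LEVEL_MAP:
        if pdt in pdts:
            return level, test_type
    raise ValueError(f"no level targets PDT {pdt}")
```

Such a lookup is one way the partitioning of PDTs across levels and stakeholders could be enforced mechanically.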

Countability

‘Countability’ is a property of the test cases that relates to sufficiency. How do we judge that the count of test cases is indeed no more and no less than needed? Is there a way to justify the number of test cases?

Test design in HBT is done in two stages: first generating test scenarios, and later test cases. Test scenarios are derived from the behavioural model of the entity under test, and therefore, based on the modelling technique, the number of test scenarios generated can be justified. To execute these scenarios, we come up with specific test cases, which are really combinations of test data. The data specification model is used to generate the test cases, so the number of test cases can likewise be justified.
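The two-stage justification above can be illustrated numerically: the scenario count falls out of the behavioural model, and the case count out of the data model. All condition names, data classes, and counts below are invented for illustration:

```python
# Illustrative countability argument for the two-stage design above.
from itertools import product

# Behavioural model: two independent binary conditions justify
# exactly 2 x 2 = 4 scenarios - no more, no less.
conditions = {"user exists": [True, False],
              "password correct": [True, False]}
scenarios = list(product(*conditions.values()))

# Data model for one scenario: equivalence classes per input field
# justify 3 x 2 = 6 test cases for that scenario.
data_classes = {"username": ["min length", "typical", "max length"],
                "password": ["typical", "expired"]}
cases_per_scenario = len(list(product(*data_classes.values())))
```

Because each count is derived from an explicit model rather than intuition, one can answer "why exactly this many?" – which is what countability demands.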

Fault Traceability

Requirement traceability is about ensuring that each requirement does indeed have test case(s). So after we design test cases, we map test cases to requirements to ensure that all the requirements are indeed being validated. This is typically used as a measure of test adequacy.

Let us consider a situation wherein there is exactly one test case for each requirement. Now are the test cases adequate? No! Requirement traceability is a necessary condition for test adequacy but not sufficient.

Given that the Requirement could have the PDTs that have been mapped earlier, let us map the designed test cases to the PDTs. The intent of this is to ensure that the designed test cases do have the power to uncover the hypothesised defects.

In addition to requirements traceability, it is expected that the test scenarios and the corresponding test cases are indeed traced to the potential types of defects that they are expected to uncover. This is termed as fault traceability.

Fault traceability in conjunction with requirements traceability makes the condition for test adequacy necessary and sufficient.

If we have a clear notion of the types of defects that could affect the customer experience, and map these to test cases, we have fault traceability. This allows us to be sure that our test cases can indeed detect those defects that will impact the customer experience.[13]
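The adequacy argument above can be sketched as a pair of coverage checks: requirement traceability alone passes even when a hypothesised PDT has no targeting test case, while the combined check fails. All identifiers below are invented for illustration:

```python
# Illustrative adequacy check combining requirement and fault traceability,
# per the necessary-and-sufficient argument above.

def is_adequate(requirements, pdts, req_trace, fault_trace):
    """req_trace: requirement -> list of test cases covering it.
    fault_trace: PDT -> list of test cases with the power to uncover it."""
    every_req_covered = all(req_trace.get(r) for r in requirements)
    every_pdt_covered = all(fault_trace.get(p) for p in pdts)
    return every_req_covered and every_pdt_covered

requirements = ["R1"]
pdts = ["PDT-24", "PDT-25"]
req_trace = {"R1": ["TC1"]}            # one test case per requirement...
fault_trace = {"PDT-24": ["TC1"]}      # ...but PDT-25 has no targeting case
```

Here `is_adequate` returns False despite full requirement coverage, mirroring the "one test case per requirement" situation the text argues is insufficient.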

References

  1. T, Ashok. "Hypothesis Based Testing". http://hbtcentral.org. Retrieved 30 September 2015. 
  2. Swamy, Vimali. "STAG SOFTWARE – In Pursuit of Cleanliness". http://www.siliconindia.com/magazine-articles-in/STAG_SOFTWARE_%E2%80%93_In_Pursuit_of_Cleanliness_-BBAA208004541.html. Retrieved 3 Oct 2010. 
  3. "STEM 2.0 - A Scientific and Disciplined Way to Produce Clean Software". Unisys Technical Report 99 Vol. 28 No.4 (ISSN 0914-9996): 165. February 2009. http://dl.ndl.go.jp/info:ndljp/pid/8561376?tocOpened=1. Retrieved 30 Sep 2015. 
  4. "Accelerate Defect Detection with HBT". PRLog. https://www.prlog.org/11794615-accelerate-defect-detection-with-hbt.html. Retrieved 9 February 2012. 
  5. Bureau, DQI. "Happy Days!". http://www.dqindia.com/happy-days/. Retrieved 30 Sep 2015. 
  6. "Testing waters for bugs!". http://outsourceportfolio.com/testing-waters-bugs/. Retrieved 30 Sep 2015. 
  7. "Scientific method for effective testing". Software Test Press 8 (ISBN 9784774137490): 74. 15 Feb 2009. 
  8. T, Ashok. "An introduction to Hypothesis Based Testing". https://www.slideshare.net/stagsoft/an-introduction-to-hypothesis-based-testing. Retrieved 30 September 2015. 
  9. "Strategic consultancy". http://epicentre.co.uk/central/wp-content/uploads/2012/11/Strategic_consultancy_datasheet.pdf. Retrieved 30 Sep 2015. 
  10. "Hypothesis-based Testing (HBT)". http://www.stepinforum.org/STeP-IN_SUMMIT_2010/tutorials/Ashok2b34.html. Retrieved 30 Sep 2015. 
  11. "Elements of Good Testing". http://docs.wixstatic.com/ugd/c47e45_9cdcc2fd598c48b688f609cf69ba7f39.pdf. Retrieved Jul-Aug 2016. 
  12. "Potency of a Test Case". http://docs.wixstatic.com/ugd/c47e45_ec5d9161ee31e349bb965b7ae7b6df57.pdf. 
  13. "Aesthetics in Software Testing". http://docs.wixstatic.com/ugd/c47e45_da710736706859e661e3d687a65c1f8a.pdf.