Wednesday, August 27, 2014

The Bill Hetzel Principle

After taking a practice test for ISTQB certification, I came across a question asking where 'testing must be planned' is stated, and the answer was the Bill Hetzel Principle.  I wasn't sure what that was, and since this was a chapter 1 quiz I tried to find where it was mentioned in the ISTQB book, with no success, so I decided to do some research.  It turns out Bill Hetzel wrote a book on software testing called 'The Complete Guide to Software Testing' in 1988, and from what I can tell it is still considered a good one.
While I haven't been able to get access to it, I found that some of his main testing principles are:


  • Complete testing is not possible
  • Testing is creative and difficult
  • An important reason for testing is to prevent defects
  • Testing is risk based
  • Testing must be planned
  • Testing requires independence
Then after looking at this I realized that all of these are mentioned in some way or another throughout the first chapter of the ISTQB book; it just doesn't mention Bill Hetzel by name.  For instance, the principle 'Complete testing is not possible' appears as one of the book's testing principles.
'Testing must be planned' can be found in section 1.2, 'What is Testing'.
'Testing requires independence' can be found in the code of ethics, and appears (more or less), along with the remaining principles, in section 1.5, 'The Psychology of Testing'.
The principles 'Testing is creative and difficult', 'An important reason for testing is to prevent defects', and 'Testing is risk based' may not appear word for word in the book, but much of what Chapter 1 discusses expresses the same basic ideas.

Wednesday, August 20, 2014

The Test Process Part 2

Leaving off from the last post, I wrote that test activities can be divided into the five phases of the testing process:


1. Planning and control
2. Analysis and design
3. Implementation and execution
4. Evaluating exit criteria and reporting
5. Test closure activities


I covered the first three, so now I'll finish the list.


Evaluating exit criteria is the point in testing when the tester takes what was done during test execution and assesses it against the defined objectives. ISTQB suggests that this should be done for each test level, so we know when we have done enough testing and can move on to the next level. To assess the risk of deeming a component, activity, or level complete, we need to come up with a set of exit criteria, and those criteria should be set and evaluated for each test level. According to ISTQB, evaluating exit criteria has the following major tasks (a small code sketch of the first task follows the list):
• Check test logs against the exit criteria specified in test planning
• Assess if more tests are needed or if the exit criteria specified should be changed
• Write a test summary report for stakeholders
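To make the first task a bit more concrete, here is a minimal sketch of checking logged results against exit criteria. The criteria names and thresholds (a 95% pass rate, no open critical defects) are my own illustration, not something from the ISTQB syllabus:

```python
# Sketch: checking test-log figures against exit criteria from the test plan.
# The criteria names and thresholds here are illustrative, not from ISTQB.

def exit_criteria_met(tests_run, tests_passed, open_critical_defects,
                      min_pass_rate=0.95, max_open_critical=0):
    """Return True if the logged results satisfy the planned exit criteria."""
    pass_rate = tests_passed / tests_run if tests_run else 0.0
    return pass_rate >= min_pass_rate and open_critical_defects <= max_open_critical

# Example: 190 of 200 tests passed, but 1 critical defect is still open.
if exit_criteria_met(200, 190, 1):
    print("Exit criteria met - write the summary report and move on.")
else:
    print("More testing needed, or the criteria should be revisited.")
```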
Last on the list of test activities are the test closure activities.
During test closure activities, the team gathers data from the finished test activities of everyone involved; this includes checking and filing testware and analyzing facts and figures. We typically do this when software is delivered. There are a number of reasons a testing team would close testing: the testers have gotten all the information they needed, the project is cancelled, a particular milestone is reached after which testing no longer needs to continue, or a maintenance release or update is done. Test closure activities include the following major tasks:
• Check which planned deliverables we actually delivered and ensure all incident reports have been resolved through defect repair or deferral.
• Finalize and archive testware, such as scripts, the test environment, and any other test infrastructure, for later reuse (a small sketch of this follows the list). This can save time if the testers have to test a new version of the same software later.
• Hand over testware to the maintenance organization that will support the software and make any bug fixes or maintenance changes, for use in confirmation testing and regression testing. This group may be separate from the people who build and test the software; the maintenance testers are one of the customers of the development testers, and they will use the library of tests.
• Evaluate how the testing went and analyze lessons learned for future releases and projects. This might include process improvements for the software development life cycle as a whole, as well as improvement of the test processes.
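As a rough idea of what 'finalize and archive testware' could look like when automated, here's a minimal sketch using only Python's standard library. The directory layout, release number, and naming scheme are all my own assumptions for illustration:

```python
# Sketch: finalizing and archiving testware for reuse on a later release.
# The directory layout and naming scheme are assumptions for illustration.
import shutil
from datetime import date

def archive_testware(testware_dir="testware", release="1.0"):
    """Bundle scripts, data, and environment notes into a dated zip archive."""
    archive_name = f"testware-{release}-{date.today().isoformat()}"
    # shutil.make_archive appends the .zip extension itself.
    return shutil.make_archive(archive_name, "zip", root_dir=testware_dir)

print(archive_testware())  # e.g. testware-1.0-2014-08-20.zip
```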

Monday, August 18, 2014

The Test Process Part 1

Now that I've posted a few things about the different kinds of tests and the differences between functional and non-functional testing, I will go a little into the testing process.  Test activities can be divided into the five phases of the testing process:


1. Planning and control
2. Analysis and design
3. Implementation and execution
4. Evaluating exit criteria and reporting
5. Test closure activities




A test plan is defined by ISTQB as 'A document describing the scope, approach, resources, and schedule of intended test activities.'  With a test plan a tester can identify many things, including features to be tested, tasks and who will do them, test environments, design techniques, entry/exit criteria, choice rationale, and risks.  Basically, a test plan records the entire test process.  A related task in this phase is test monitoring, where the status of the project is checked periodically through reports.
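To make that list of contents concrete, here's a rough sketch of the kinds of fields a test plan records, written as a plain Python data structure. Every value below is an invented example, not from any real plan:

```python
# Sketch: the kinds of fields a test plan records, as a plain data structure.
# All values are invented examples, not from any real project.
test_plan = {
    "scope": "Login and checkout features of release 2.1",
    "features_to_test": ["login", "checkout", "order history"],
    "approach": "risk-based; black-box techniques at system level",
    "resources": {"testers": ["Alice", "Bob"], "environments": ["staging"]},
    "schedule": {"start": "2014-09-01", "end": "2014-09-19"},
    "entry_criteria": ["build deployed to staging", "smoke test passed"],
    "exit_criteria": ["95% of planned tests passed", "no open critical defects"],
    "risks": ["test environment availability", "late requirement changes"],
}

for feature in test_plan["features_to_test"]:
    print(f"Planned for testing: {feature}")
```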




In test analysis and design, testers take the test objectives and turn them into test conditions and test cases. The activities in this stage go in the following order, as stated by ISTQB:
1. Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces) and examine the specifications of the software; certain types of tests, such as black-box tests, can be designed at this point as well. As we study the test basis, we often identify gaps and ambiguities in the specifications, because we are trying to identify precisely what happens at each point in the system, and this also prevents defects from appearing in the code.
2. Identify the test conditions based on analysis of the test items, the specifications, and what we know about their behavior and structure. This gives us a high-level list of what we are interested in testing. We use test techniques to help define the test conditions, and from these we can start to identify the type of generic test data we might need.
3. Design test cases, using techniques to help select representative tests that relate to particular aspects of the software which carry risk or are of particular interest, based on the test conditions and going into more detail. I'll create a post later about test design.
4. Evaluate the testability of the requirements and system. The requirements may be written in a way that allows a tester to design tests; for example, if the performance of the software is important, it should be specified in a testable way. If the requirements just say 'the software needs to respond quickly enough', that is not testable, because 'quickly enough' may mean different things to different people. A more testable requirement would be 'the software needs to respond in 5 seconds with 20 people logged on' (see the sketch after this list for one way to check that automatically). The testability of the system depends on aspects such as whether it is possible to set up the system in an environment that matches the operational environment, and whether all the ways the system can be configured or used can be understood and tested. For example, if we test a website, it may not be possible to identify and recreate all the configurations of hardware, operating system, browser, connection, firewall, and other factors that the website might encounter.
5. Design the test environment set-up and identify any required infrastructure and tools. This includes testing tools and support tools such as spreadsheets, word processors, project planning tools, and non-IT tools and equipment - everything we need to carry out our work.
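As promised above, here is a minimal sketch of how the testable requirement 'the software needs to respond in 5 seconds with 20 people logged on' could be checked automatically. The handle_request function is a hypothetical stand-in for whatever the real system call would be:

```python
# Sketch: turning the testable requirement "respond in 5 seconds with
# 20 people logged on" into an automated check. handle_request() is a
# hypothetical stand-in for the real system call.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Placeholder for the real request; returns how long it took."""
    start = time.perf_counter()
    time.sleep(0.1)  # pretend the system does some work
    return time.perf_counter() - start

# Simulate 20 users hitting the system at the same time.
with ThreadPoolExecutor(max_workers=20) as pool:
    response_times = list(pool.map(handle_request, range(20)))

worst = max(response_times)
assert worst <= 5.0, f"Requirement failed: slowest response was {worst:.2f}s"
print(f"All 20 simulated users got responses within {worst:.2f}s")
```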


In the test implementation and execution phase, we take the test conditions, turn them into test cases, set up the test environment, and then actually run the tests.  For test implementation we develop and prioritize our test cases and create test data for them. We also write instructions for carrying out the tests and organize related test cases into collections known as test suites, which allows for more efficient test execution.  Then, for test execution, the testers actually perform the tests they have created and compare the results with the expected results.
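Here's a small sketch of what a test case comparing actual results against expected results might look like, grouped into a suite with Python's unittest module. The calculate_total function is a hypothetical example of code under test, not anything from the post:

```python
# Sketch: test cases comparing actual results against expected results,
# grouped into a suite. calculate_total() is a hypothetical function
# under test, used here only so the example runs.
import unittest

def calculate_total(prices):
    return sum(prices)

class CheckoutTests(unittest.TestCase):
    def test_total_of_several_items(self):
        # Expected result comes from test design; actual from execution.
        self.assertEqual(calculate_total([10, 20, 5]), 35)

    def test_total_of_empty_cart(self):
        self.assertEqual(calculate_total([]), 0)

if __name__ == "__main__":
    unittest.main()  # runs the whole suite and reports pass/fail
```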





Friday, August 8, 2014

Functional vs. Non-Functional Testing

For this post I have decided to go more in depth on the differences between functional and non-functional testing. These are the two categories into which most types of testing are divided.


Functional testing is defined by ISTQB as "Testing based on an analysis of the specification of the functionality of a component or system."  Basically, it's testing done to see whether the system performs the way the specifications and requirements say it should.


Functional testing concentrates on testing activities that verify a specific action or function of the code. These are usually found in the requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question "can the user do this?" or "does this particular feature work?"




An example of a type of functional testing is black-box testing, which tests the functionality of the software without knowing how it works; testers only know what it is supposed to do.
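A minimal sketch of a black-box test: the checks are derived purely from a specification (here, a made-up rule that orders of $100 or more get a 10% discount), never from the code's internals. The apply_discount body below is just a stand-in so the example runs:

```python
# Sketch: a black-box test. We only know the spec: "orders of $100 or
# more get a 10% discount". apply_discount() is treated as opaque; its
# body here is just a stand-in so the example runs.
def apply_discount(total):
    return total * 0.9 if total >= 100 else total

# Checks derived purely from the specification, not from the code:
assert apply_discount(100) == 90.0   # boundary: exactly $100
assert apply_discount(99) == 99      # just below the boundary
assert apply_discount(200) == 180.0  # well above the boundary
print("Black-box checks passed")
```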


ISTQB defines Non-Functional testing as "Testing the attributes of a component or system that do not relate to functionality."  The attributes that don't relate to functionality include reliability, efficiency, usability, portability, and maintainability.


Wikipedia explains that "testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users."
The test types that exercise these non-functional attributes take their names from the attributes themselves: reliability, efficiency, usability, portability, and maintainability testing.
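As a taste of what one of these could look like in practice, here's a crude sketch of a reliability-style check: run the same operation many times and confirm it never fails. The save_record function is a hypothetical stand-in for an operation the real system would perform:

```python
# Sketch: a crude reliability check - run the same operation many times
# and confirm it never fails. save_record() is a hypothetical stand-in.
def save_record(i):
    return {"id": i, "saved": True}

failures = 0
for i in range(1000):
    try:
        if not save_record(i)["saved"]:
            failures += 1
    except Exception:
        failures += 1

print(f"{failures} failures in 1000 runs")
assert failures == 0, "Reliability target not met"
```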


In conclusion, functional testing checks whether the software functions the way the client wants, while non-functional testing covers everything else about how the system behaves.