This is really part II of my testing series (as I have come to call it) - I had an earlier post on <strong>Unit Testing</strong>. I recently had the opportunity to review some of the new stuff coming out of the <strong>Prescriptive Architecture Group</strong> at Microsoft, and here are some of the interesting things they propose.
When you are doing functional testing of your application or code, it can be of two kinds - Black-Box and White-Box. I am going to highlight some of MS's suggestions on Black-Box testing here and have a follow-up post on White-Box testing.
Black-Box testing
Black-box testing assumes no knowledge of the code and is intended to simulate the end-user experience. You can use sample applications to integrate and test the various components of an application for black-box testing. This approach allows you to test all the possible combinations of end-user actions. Some of the testing techniques covered here are:
- Testing all the external interfaces for all possible usage scenarios - that is, every external interface that end users can integrate with their applications.
- Ensure the interfaces meet the requirements and functional specs. This type of testing ensures that the components in an application implement the interfaces required by the functional specifications. It also allows you to develop a test harness. You need to test all the possible ways in which clients of the code block can call the APIs. The usage scenarios include both the expected process flows and random inputs.
- Testing for various types of inputs. The next step is to ensure that the interfaces return the expected output and are robust enough to handle invalid data and exceptional conditions gracefully. The input data can be randomly generated within the range expected by the application, outside that range, or at the boundary of the range. Testing with data outside the specified range ensures that the application is robust, can handle invalid data, and generates error messages that are meaningful to the end user. Boundary testing ensures that the highest and lowest permitted inputs produce the expected output.
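A minimal sketch of what such input-range tests might look like. The function under test, `parse_age`, is a hypothetical stand-in for any component entry point that validates input against a documented range (here 0-130); the real code block and its valid range would of course differ.

```python
def parse_age(value: str) -> int:
    """Hypothetical code under test; valid range is 0-130 inclusive."""
    age = int(value)          # raises ValueError for non-numeric data
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

def run_input_tests():
    # Inside the expected range: should return the parsed value.
    assert parse_age("42") == 42
    # Boundary values: the lowest and highest permitted inputs.
    assert parse_age("0") == 0
    assert parse_age("130") == 130
    # Outside the range, plus invalid data: should fail gracefully
    # with a meaningful error, not crash or return garbage.
    for bad in ("-1", "131", "abc", ""):
        try:
            parse_age(bad)
        except ValueError:
            pass                      # expected behavior
        else:
            raise AssertionError(f"{bad!r} was accepted")

run_input_tests()
```

The same three buckets (in-range, boundary, out-of-range) apply whatever the interface is; only the generator for the test data changes.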
- Performance Testing. You execute performance-related test cases from the test plan in a simulated environment that is close to the real-world deployment. Performance testing is done to verify that the application can perform under expected and peak load conditions, and that it can scale sufficiently to handle increased capacity. There are two main aspects of performance testing, with different end goals. You must plan and execute test cases for both, as described below:
- Load Testing: Use load testing to verify the code's behavior under normal and peak load conditions. This allows you to verify that the application meets the desired performance objectives and does not overshoot the allocated budget for resource utilization such as memory, processor, and network I/O. It also allows you to measure response times and throughput rates for the application. Load testing also helps you identify the overhead (if any) of using the code block to achieve a desired piece of functionality, by testing the application with and without the code block for the same end result.
- Stress Testing: Use stress testing to evaluate the code's behavior when it is pushed beyond normal or peak load conditions. The goal of stress testing is to unearth bugs that surface only under high load, such as synchronization issues, race conditions, and memory leaks.
The analysis from performance tests may serve as input to white-box testing. You may need to do a code review of a suspect module to weed out possible causes of issues, such as a coarse-grained lock that is causing increased wait times for threads. The data analysis from performance tests provides useful input on the types of problems that surface under load, and helps you focus on profiling a particular code path during white-box testing.
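A rough sketch of a load-test harness along these lines. Here `operation` is a placeholder for the real code path being exercised, and the worker/request counts are arbitrary; a real load test would use the tool and numbers from your test plan.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def operation():
    # Placeholder for the real code path under load.
    time.sleep(0.001)

def load_test(workers: int, requests: int) -> dict:
    """Drive `operation` concurrently; report throughput and latency."""
    latencies = []

    def timed_call(_):
        start = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - start)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(timed_call, range(requests)))  # wait for completion
    elapsed = time.perf_counter() - start

    return {
        "throughput_rps": requests / elapsed,
        "mean_latency_s": statistics.mean(latencies),
        "max_latency_s": max(latencies),
    }

# Compare normal vs. peak load conditions.
normal = load_test(workers=5, requests=100)
peak = load_test(workers=50, requests=100)
```

Running the same harness with and without the code block in the call path gives you the overhead figure mentioned above, and pushing `workers` well past the peak figure turns the same sketch into a crude stress test.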
- Security Testing. Security testing is done by simulating the target deployment environments and testing for potential security attacks.
- Globalization testing. The design and implementation have already been reviewed for adherence to globalization best practices. Here you should execute globalization-related test cases to ensure that the code block can handle international input and is ready to be used from various locales around the world. The goal of globalization testing is to detect design problems that would cause either data loss or display issues, and to make sure the code works with any culture/locale setting without breaking functionality. To perform globalization testing, install multiple language groups and ensure that the culture/locale is not your local one. For example, executing test cases in both Japanese and German environments, and a combination of both, can cover most globalization issues.
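A small sketch of the data-loss side of globalization testing. `normalize_name` is a hypothetical component function standing in for the real code block; the point is feeding it sample data from several locales and asserting nothing is mangled or lost.

```python
import unicodedata

def normalize_name(name: str) -> str:
    # Hypothetical code under test: trims and NFC-normalizes a name.
    return unicodedata.normalize("NFC", name.strip())

# German umlauts: composed and decomposed forms must compare equal.
assert normalize_name("Größe") == normalize_name("Gro\u0308ße")
# Japanese text must round-trip without data loss.
assert normalize_name(" 山田太郎 ") == "山田太郎"
# Turkish dotted capital I must pass through unmangled.
assert normalize_name("İstanbul") == "İstanbul"
```

This only covers string handling; a full pass would also run the suite under non-default OS culture/locale settings to catch date, number, and sort-order assumptions.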
For more information on performance testing, check out: