Understand software testing.
Purpose: to ensure software is functional, secure and meets specified business requirements.
Test methods: unit testing e.g. source code testing; integration testing e.g. big bang, top-down, bottom-up; system testing e.g. usability, performance, compatibility, error handling, security; black box testing e.g. test cases based on inputs and expected outputs; white box testing e.g. data flow, branch, path testing; purpose of each; static testing e.g. walkthrough without executing code; dynamic testing e.g. testing from within a debugger environment.
Test stages: e.g. planning, developing test procedures, carrying out tests, reporting (is the software ready?), analysis of results, retesting; alpha e.g. white box testing; beta e.g. usability testing; acceptance e.g. black box testing; non-functional testing; performance testing.
Different types of testing: alpha and beta testing.
Unit testing: identify processes and input and output requirements, identify and isolate code into its smallest testable part, plan test cases, identify test data, debug code.
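A minimal sketch of this process, assuming a hypothetical discount() function isolated as the smallest testable part of a pricing module; the function name, values and test data are illustrative only.

```python
import unittest

def discount(price, percent):
    """The isolated unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_input(self):
        # Planned test case: a known input checked against its expected output.
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_invalid_percentage_rejected(self):
        # Test data chosen to exercise the error-handling path.
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```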
Integration testing: identify units of code that will work together, decide on the approach to be used (top-down, bottom-up), define parameters for the way in which the units will work together, use modules that have been unit tested as the input for the test.
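A sketch of a bottom-up integration test under the same assumptions: two hypothetical units (parse_record and discount), each already unit tested, are exercised together so that the output of one becomes the defined input of the other.

```python
import unittest

def parse_record(line):
    """Unit 1: parse a 'name,price' text record into (name, float price)."""
    name, price = line.split(",")
    return name.strip(), float(price)

def discount(price, percent):
    """Unit 2: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class CheckoutIntegrationTests(unittest.TestCase):
    def test_parsed_price_flows_into_discount(self):
        # The parsed price is passed straight into the discount unit.
        name, price = parse_record("Widget, 80.00")
        self.assertEqual(discount(price, 10), 72.0)

if __name__ == "__main__":
    unittest.main()
```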
Performance testing: define performance goals, identify suitable metrics, deploy manual and automatic testing tools as required, analyse data generated by testing tools.
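A simple sketch of those steps, assuming an illustrative performance goal of sorting 100,000 records in under one second and using elapsed time as the metric; the goal figure and workload are assumptions, not requirements from this unit.

```python
import random
import time

GOAL_SECONDS = 1.0          # the agreed performance goal (assumed figure)
records = [random.random() for _ in range(100_000)]

start = time.perf_counter()
sorted(records)             # the operation whose performance is being measured
elapsed = time.perf_counter() - start

print(f"elapsed: {elapsed:.3f}s (goal: {GOAL_SECONDS}s)")
print("PASS" if elapsed < GOAL_SECONDS else "FAIL")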
System testing: test software as a complete package, plan destructive testing cases, plan non-destructive testing cases, compare performance with functional requirements specification.
Acceptance testing: identify and engage suitable test users, deploy users to test the program in real or simulated use scenarios, gather feedback from test users, compare user feedback against functional and non-functional requirements specification.
Regression testing: fix errors identified in other stages of testing, retest the identified component to check the error is fixed, retest associated components to ensure no unintentional issues have arisen.
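A sketch of the regression step, assuming the earlier unit and integration tests live in a hypothetical tests/ directory: after the fix, the whole existing suite is rediscovered and re-run so associated components are retested alongside the fixed one.

```python
import unittest

def run_regression_suite():
    # The "tests" path is an assumption; point discovery at the real suite.
    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    raise SystemExit(0 if run_regression_suite() else 1)
```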
Load/stress testing: agree acceptable performance parameters (data access speed, load times, number of concurrent users, system availability), identify and deploy browser-level and protocol-level testing, expose site to low, normal, high and extreme levels of traffic, analyse performance of site against agreed parameters.
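A sketch of a protocol-level load test, assuming a hypothetical endpoint https://example.test/, 50 simulated concurrent users and an agreed parameter of responses under 500 ms; the URL, user count and threshold are all assumptions chosen for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import time
import urllib.request

URL = "https://example.test/"   # assumed endpoint
CONCURRENT_USERS = 50
MAX_RESPONSE_SECONDS = 0.5      # assumed acceptable performance parameter

def timed_request(_):
    # Issue one HTTP request and return its response time in seconds.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    slowest = max(timings)
    print(f"slowest response: {slowest:.3f}s")
    print("within agreed parameters" if slowest <= MAX_RESPONSE_SECONDS
          else "outside agreed parameters")
```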
Assessment Criteria