Tuesday, June 1, 2010

Contents

1 Preface
1.1 Purpose
1.2 Introduction
2 Test Stages
2.1 Unit Test
2.2 Integration Test
2.3 Quality Assurance Test
2.4 Regression Test
2.5 System Test
2.6 User Acceptance Test
2.7 Operational Readiness Test
2.8 Production Readiness Test
2.9 Test Stages Activities - Roles and Responsibilities
3 Test Process
3.1 Test Planning
3.2 Test Case Execution
3.3 Test Result Evaluation
4 Test Status Reports
5 Resources and Training
5.1 Resource Involvement
5.2 Training Requirements
6 Test Environment
6.1 Unit Test
6.2 Integration Test
6.3 System Test
6.4 User Acceptance Test
6.5 Operational Readiness Test
6.6 Production Readiness Test
7 Approvals
8 Risks
9 Appendix A: Test Plan Report
10 Appendix B: Test Case and Review Log Form
11 Appendix C: Test Result Report


1 Preface

1.1 Purpose

This document represents the Test Strategy for an Implementation project. The test strategy covers explicit testing of all components of the application, both customized and out-of-the-box, as well as implicit testing of all interfaces that will be part of the project.

1.2 Introduction

Testing is the process of checking that a business requirement has been correctly implemented. The testing process will ensure that all components of the project/application deliver the specified functional, technical and quality requirements. Testing is not intended to complete missing designs, flesh out functional aspects overlooked during the requirements-gathering stage of the project, improve the design, or fix code.

Testing activities will start at the completion of each phase of the development cycle and are usually the first opportunity for a group external to the development team to assess the quality of the system. Testing activities will span the entire system development life cycle and will be integrated into the iterative development cycles of the project, referred to as "builds" or "phases".

The proposed test strategy will help in:

  • Implementing a quality application
  • Easing maintenance
  • Gaining acceptance from end users

2 Test Stages

This section describes the different stages of testing that will be executed throughout the project implementation life cycle, including their objectives, entry and exit criteria, and responsibilities.

2.1 Unit Test

2.1.1 Introduction

In unit testing, all components are tested individually to ensure that each component[1] meets its functional as well as technical specification prior to integration with other components.
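
As an illustration only, a unit test for a single component might look like the following sketch. The discount component, its rules and its boundary values are hypothetical stand-ins for whatever component is actually under test.

    import unittest

    def calculate_discount(order_total):
        """Hypothetical component under test: returns the discount rate for an order."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        if order_total >= 1000:
            return 0.10
        if order_total >= 500:
            return 0.05
        return 0.0

    class CalculateDiscountUnitTest(unittest.TestCase):
        """Validates functional behaviour and boundary conditions of a single component."""

        def test_no_discount_below_lower_boundary(self):
            self.assertEqual(calculate_discount(499.99), 0.0)

        def test_discount_at_boundaries(self):
            self.assertEqual(calculate_discount(500), 0.05)
            self.assertEqual(calculate_discount(1000), 0.10)

        def test_negative_total_is_rejected(self):
            with self.assertRaises(ValueError):
                calculate_discount(-1)

    if __name__ == "__main__":
        unittest.main()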

2.1.2 Objective
  • Validate the business and technical functionality of each individual component of the overall application
  • Validate program execution at all boundary conditions
2.1.3 Responsibility
  • Development Team
2.1.4 Entry Criteria
  • Completed and Reviewed Code
  • Unit Test Plan and Test Cases
  • Sample Test Data
2.1.5 Exit Criteria
  • Unit Tested Code
  • Completed Unit Test Results Report
2.1.6 Approver
  • Project Manager/Quality Coordinator

2.2 Integration Test

2.2.1 Introduction

At the end of the development of each phase/build of the project, an integration test will be conducted to ensure that the components function properly when integrated. Integration testing is organized by technical integration points and transactions, rather than by business events.

2.2.2 Objective
  • Validate that individual components function properly when integrated
  • Validate that navigation, data integrity and cross-component transactions function as per the specification
2.2.3 Responsibility
  • Development and Functional Teams
2.2.4 Entry Criteria
  • Unit Tested Components
  • Integration Test Plan and Test Cases
  • Unit Test Result Report
  • Sample Test Data
2.2.5 Exit Criteria
  • Stable system with proper information flow across the system
  • Integration Test Result Report
2.2.6 Approver
  • Project Manager/Quality Coordinator

2.3 Quality Assurance Test

2.3.1 Introduction

Once the integration test exit criteria have been met, the code used in integration testing will be moved from the development environment to the quality assurance (QA) test environment. A subset of the integration test scripts will then be re-executed to verify that all connections and interfaces perform as expected.

2.3.2 Objective
  • Validate that the migrated code works as intended
2.3.3 Responsibility
  • Functional Team and Quality Coordinator
2.3.4 Entry Criteria
  • Code completely migrated to the QA environment
  • Integration Test Plan, Test Cases and Test Results
  • Sample Test Data
2.3.5 Exit Criteria
  • QA Test Result Report
2.3.6 Approver
  • Project Manager

2.4 Regression Test

2.4.1 Introduction

Regression testing is the process of ensuring that the functionalities of previously tested components are not adversely affected by a change to a particular component.

If the project involves more than one phase/build, regression testing must be conducted before QA testing begins for each subsequent phase.

2.4.2 Objective
  • Validate that added functionality does not impact the existing functionality of the application/component
2.4.3 Responsibility
  • Functional Team
2.4.4 Entry Criteria
  • QA Test Result Report
  • Regression Test Plan and Test Cases
  • Sample Test Data
2.4.5 Exit Criteria
  • Regression Test Result Report
2.4.6 Approver
  • Quality Coordinator/Project Manager

2.5 System Test

2.5.1 Introduction

System testing exercises the functional aspects of the application, concentrating on business requirements and process flow specifications, to ensure the application performs as intended. System testing is organized by business function and each of the functions will be individually tested, executing every process as specified in the requirements according to the scope of the phase/build.

2.5.2 Objective
  • Validate that all the interfaces work as per the specification in design documents
  • Validate all the business and process flow specifications
2.5.3 Responsibility
  • Functional Team
2.5.4 Entry Criteria
  • Regression/QA Test Result Report
  • System Test Plan and Test Cases
  • Snapshot of Production Data
2.5.5 Exit Criteria
  • System Test Result Report
2.5.6 Approver
  • Quality Coordinator

2.6 User Acceptance Test

2.6.1 Introduction

User acceptance testing is the process of familiarizing the end user with the application. Business capabilities will be tested in user-created scenarios to ensure that business processes flow in accordance with the process reengineering design and vision.

A group of personnel will be selected from the end users to form a business users group. These users will assist the project team throughout the life cycle of the project.

2.6.2 Objective
  • Validate that the system works properly from the end users' point of view
2.6.3 Responsibility
  • User Group
2.6.4 Entry Criteria
  • Finalization of User Group
  • System Test Result Report
  • User Test Plan and Test Cases
  • Snapshot of Production Data
2.6.5 Exit Criteria
  • User Test Result Report
2.6.6 Approver
  • Project Steering Committee

2.7 Operational Readiness Test[2]

2.7.1 Introduction

The Operational Readiness Test team is responsible for testing the application not against functional or business requirement criteria but against operational criteria. The operational readiness test includes:

Load / Performance Test

Objective
  • To validate that the response times are acceptable for individual transactions - interfaces, queries, reports and batch processes
  • To verify that the application meets the business expectations and requirements for performance, both in throughput and response time under significant operational load, based on peak hour, number of users and transactions

Security Test

Objective
  • To validate application access and authority as required for different classes of users
  • To validate visibility functions as required for different classes of users
  • To validate controls required to protect the application from improper or illegal penetration or use as per the design specifications

Backup Test

Objective
  • To validate that data is accurately preserved at a designated point in time, to allow refreshing of data when requested

Failure/Load Balancing Test

Objective
  • To check the capability of another server to take over the load of a failed server in the event of a hardware malfunction

Stress Test

Objective
  • To check the capability of the system to handle peak load and the level to which it can do so before failure
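
As a minimal sketch of how load and stress checks can be driven (a dedicated performance-testing tool would normally be used; the URL, user counts and thresholds below are assumptions for illustration only):

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://test-server.example.com/login"   # hypothetical endpoint
    CONCURRENT_USERS = 50                                  # assumed peak-hour concurrency
    REQUESTS_PER_USER = 20

    def timed_request(_):
        """Issue one request and return its response time in seconds."""
        start = time.time()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
            response.read()
        return time.time() - start

    def run_load_test():
        """Emulate concurrent users and summarize the observed response times."""
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            timings = list(pool.map(timed_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))
        print(f"requests issued: {len(timings)}")
        print(f"average response time: {statistics.mean(timings):.3f}s")
        print(f"95th percentile: {sorted(timings)[int(len(timings) * 0.95)]:.3f}s")

    if __name__ == "__main__":
        run_load_test()
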
2.7.2 Responsibility
  • Operational Group/Infrastructure Group
2.7.3 Entry Criteria
  • User Test Result Report
  • Operational Readiness Test Plan and Test Cases
  • Sample Test Data
2.7.4 Exit Criteria
  • Operational Test Result Report
2.7.5 Approver
  • Project Steering Committee

Note: In a multi-phase project, these tests can be conducted for the initial phases and then repeated, if required, in a separate operational readiness environment that mirrors production equipment capacity to the extent possible.

2.8 Production Readiness Test

2.8.1 Introduction

Unit, integration, QA, regression, system and user acceptance tests will be conducted for each phase of the application. At the end of the last phase, the production environment will be prepared with the equipment and system installation required to execute the application.

2.8.2 Objective
  • To ensure the pre-implementation environment and application are ready for use
  • To determine the completeness, usability and functionality of the application
2.8.3 Responsibility
  • Functional Team, User Group and Steering Committee
2.8.4 Entry Criteria
  • User Accepted System and Successful Completion of Operational Readiness Test
  • Production Test Plan and Subset of Integration, System and Operational Readiness Test Cases
  • Snapshot of Production Data
2.8.5 Exit Criteria
  • Completed Production Test Result Report
2.8.6 Approver
  • Project Steering Committee

2.9 Test Stages Activities - Roles and Responsibilities

The following table is a summary of the primary activities related to testing throughout the development life cycle. The teams responsible for performing or assisting with each activity are noted (X = Perform, S = Support, A = Approve).


Activity | Development Team | Functional Team | Operational Readiness | User Group | Project Steering Committee | Quality Coordinator/Project Manager
Develop Requirements |  | X |  | S/A |  |
Document Requirements | X | A |  | X | A |
Write Test Strategy |  | X | S | S | A |
Document Detail Design | X | A |  | S |  | A
Develop Code | X |  |  |  |  |
Unit Test | X |  |  |  |  | A
Integration Test | X | S |  |  |  | A
QA Test | S | X |  |  | A | A/S
Regression Test | S | X |  |  |  | A
System Test | S | X |  | S | S | A
User Acceptance Test | S | S |  | X | S/A |
Operational Readiness Test | S | S | X |  | A |
Production Readiness Test | S | X | S | X | A |


3 Test Process

3.1 Test Planning

This stage outlines the prerequisites that must be satisfied before testing activities begin.

3.1.1 Review Scope / Requirements Documents

The business and technology capabilities contained within each phase will be reviewed in detail to gain an understanding of the scope of the phase, the test objectives to be created, the test cases to be developed and the test plan to be written.

The Development team is responsible for documenting the business and technology requirements as well as the detail design specifications for each component of the application. The Functional team will attend design sessions and observe unit and integration test execution to gain a solid understanding of the application functionality.

3.1.2 Create the Test Plan Report

The Functional team will create a Test Plan Report (TPR) for each test stage. The TPR defines the detailed test plan to be followed, describing each element within the plan.

The objectives of the Test Plan Report are:

  • To crystallize the test plan for the test team preparing it
  • To present the finalized test objectives
  • To verify that the test team has an accurate and comprehensive understanding of the capabilities encompassed in the phase

The template for the TPR is given in Appendix A: Test Plan Report.

The Business Team/Project Steering Committee's approval is required before the TPR can be used.

3.1.3 Identify Entry and Exit Criteria

Entry and exit criteria will be included in the TPR. Entry criteria define what must take place before a test stage can begin. Exit criteria define what must take place within the test stage before the test can be considered complete. The functional team and the project steering committee will be responsible for finalizing the entry and exit criteria.

3.1.4 Identify Components

Each technical component and business capability included in the test stage will be listed in the TPR. Each business requirement will be mapped to an application module to ensure traceability of the requirements. In this way, complete coverage of the components and capabilities of the application can be assured.

3.1.5 Identify Test Stages/Cycles

Cycles are natural divisions of tests, serving to break up the execution of test cases into related stages. Often, dates or chronological order determine the cycles, but it is also appropriate to break logical areas into individual cycles/stages. Defining the test conditions and test cycles as part of the test plan ensures that the test specification is complete and thorough. The functional team and project steering committee will be responsible for identifying the types of test stages/cycles.

3.1.6 Write Test Cases

After studying the requirements and the detailed design for the build, test cases are created. Each test case is an element of the execution phase of the test. The state of each test case determines whether or not processing can progress or must wait for problems to be resolved.

Test cases comprise several items, including a unique test case number, the script name, the objective or test function, a description of the actions to be performed, and the expected results. The test case/review log is used to document the actual results of the test case during execution. See Appendix B: Test Case and Review Log Form for a sample test case form and review log form.
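
Purely as an illustration of the items listed above, a test case and its review log fields could be represented as follows; the field names and example values are assumptions, not a prescribed format.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TestCase:
        """One executable test case, with the review log fields filled in during execution."""
        case_id: str                           # unique test case number
        script_name: str
        objective: str                         # objective or test function
        actions: str                           # description of the actions to be performed
        expected_result: str
        actual_result: Optional[str] = None    # recorded on the review log during execution
        status: Optional[str] = None           # "pass" or "fail"
        tester_initials: Optional[str] = None
        executed_on: Optional[str] = None

    # Example entry prior to execution
    tc = TestCase(
        case_id="UT-001",
        script_name="login_component",
        objective="Validate that the login screen rejects an unknown user",
        actions="Submit the login form with an unregistered user id",
        expected_result="An 'invalid user' message is displayed",
    )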

Test cases must be approved by the quality coordinator/functional team/steering committee before they can be used.

3.1.7 Capture Test Scripts

An automated testing tool[3] will be used to capture and replay test scripts to aid in regression testing and general test execution. The tool can be used to capture the manual test scripts and, once verified, produce automated test cycles to ensure consistent quality across each project phase/build.
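
The exact mechanics depend on the tool selected. Purely as a sketch of the capture-and-replay idea, a recorded script could be stored as a list of steps and replayed against the application, comparing the observed result of each step with the recorded expectation; the step format and the execute_step hook below are hypothetical.

    import json

    def execute_step(action, target, value):
        """Hypothetical hook: drives the application under test and returns the observed result."""
        raise NotImplementedError("bind this to the chosen automation tool")

    def replay(script_path):
        """Replay a captured script and report every step whose result differs from the recording."""
        with open(script_path) as f:
            # e.g. [{"action": "click", "target": "submit", "value": null, "expected": "OK"}, ...]
            steps = json.load(f)
        failures = []
        for number, step in enumerate(steps, start=1):
            actual = execute_step(step["action"], step["target"], step.get("value"))
            if actual != step["expected"]:
                failures.append((number, step["expected"], actual))
        return failures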

3.1.8 Prepare Test Data

Any special test data upon which the test cases are to be based will be identified. The data will be prepared in the test environment according to specifications. In most instances, however, the tester will enter the data for system and user acceptance testing via the application user interface. The development team will be responsible for preparing the test data.

3.2 Test Case Execution

3.2.1 Conduct Test

Once the entry criteria have been met for a given test stage, test execution may begin. During test execution, the tester will use the review log form to note the actual results and the status of the test case (pass/fail), along with his or her initials and the date of execution.

3.2.2 Process Anomalies

In conducting the test and documenting the results, if the actual results differ from the expected results, an anomaly report (Defect Log) is created. This process escalates the differences noted to the appropriate party to determine whether the test case is inaccurate or there is a problem requiring resolution. If code or configuration changes are required, the test case will be flagged for retest. The anomaly resolution process is iterative and will continue until accurate test results are obtained.
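
As a sketch of the information an anomaly report (defect log entry) might carry through this process, with illustrative field names and status values:

    from dataclasses import dataclass

    @dataclass
    class AnomalyReport:
        """Defect log entry raised when actual results differ from expected results."""
        defect_id: str
        test_case_id: str
        expected_result: str
        actual_result: str
        severity: str                  # e.g. fatal / major / minor
        status: str = "open"           # open -> assigned -> fixed -> retest -> closed
        flagged_for_retest: bool = False

    def resolve(report: AnomalyReport, code_change_required: bool) -> AnomalyReport:
        """If a code or configuration change is needed, flag the originating test case for retest."""
        report.flagged_for_retest = code_change_required
        report.status = "retest" if code_change_required else "closed"
        return report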

3.3 Test Result Evaluation

3.3.1 Prepare Test Results Report

At the conclusion of each test event, a formal evaluation will be conducted to determine whether or not the system can pass to the next test event. To assist with this process, a Test Results Report (TRR) will be generated, stating the test cases that have been completed, the test cases that are incomplete (with explanation), and the functions that were not delivered to the test environment (with explanation).

The layout of the Test Result Report is given in Appendix C: Test Result Report.

3.3.2 Stage Containment

Stage containment is an approach used to identify problems in the system before they are passed to the next stage. This helps build quality into the system. For the purpose of stage containment, problems are sorted into three categories:

  • Errors: problems found in the stage where they were created (e.g. a program bug found in unit test).

  • Defects: problems found in stages after the one where they were created (e.g. an interface parameter design error found in integration test rather than in system test).

  • Faults: problems found in production.

The objective of testing should be to minimize defects and faults.

3.3.3 Conduct Test Results Review

The project steering committee and key representatives from each testing stage will sign off on the entry and exit criteria for all test stages.

At the end of the integration test for each phase, the Functional and Development teams will meet to discuss the TRR for the integration test. A go/no-go decision will be made, based on the TRR's support of the exit criteria, to determine whether or not to pass the system into the QA testing environment.

At the end of the system test phase, the Functional team will review the system test TRR and determine whether or not user acceptance testing should begin.

At the end of the user acceptance test phase, the Development Team, Functional Team and Project Steering Committee will discuss the Test Results Report. This will be the formal approval process to determine the successful completion of the phase.

At the end of Production Readiness Testing, the Project Steering Committee will determine whether or not to go live.


4 Test Status Reports

During the test execution process, metrics will be captured to allow the quality of the development and testing process to be measured. Additional reporting metrics will be gathered to analyze anomaly-handling effectiveness and to monitor error recurrence. The following metrics will be collected during testing (a sketch of collecting some of them follows the list):

  • Number of test cases against the total duration of testing
  • Number of defects/errors per testing stage
  • Duration for fixing each defect/error
  • Severity of the defects/errors
  • Duration of test case preparation
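
A minimal sketch of deriving a few of these metrics from a defect log; the record layout and the sample entries are assumed for illustration only.

    from collections import Counter
    from datetime import date

    # Assumed defect log entries: (stage, severity, date raised, date fixed)
    defect_log = [
        ("Unit Test", "minor", date(2010, 5, 3), date(2010, 5, 4)),
        ("Integration Test", "major", date(2010, 5, 10), date(2010, 5, 14)),
        ("System Test", "minor", date(2010, 5, 20), date(2010, 5, 21)),
    ]

    defects_per_stage = Counter(stage for stage, _, _, _ in defect_log)
    defects_by_severity = Counter(severity for _, severity, _, _ in defect_log)
    average_fix_days = sum((fixed - raised).days for _, _, raised, fixed in defect_log) / len(defect_log)

    print("Defects per stage:", dict(defects_per_stage))
    print("Defects by severity:", dict(defects_by_severity))
    print(f"Average time to fix: {average_fix_days:.1f} days")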

These testing metrics will be fundamental in measuring the test process, providing a means to determine the earned testing progress/value, the amount of testing remaining, productivity rates, and time to completion of the test. The analysis of the data collected during the testing phases will help to:

  • Show progress towards goals
  • Serve as a prediction tool for scheduling test and fix activities
  • Enable projections to be made with respect to dates
  • Determine backlog in conjunction with the defect/error fixing metrics
  • Identify error-prone modules

The following reports will be created for effective management of the testing stages.

Progress Report

Progress reporting must map actual progress to planned progress. This will assist program management with determining schedule impacts or potential slippage.

Quantity of Test

Status will be reported on the total number of test conditions tested, along with the breakdown of those passed and failed.

Impact of Test

Impact reporting will focus on the defects discovered, their severity, root cause, and area of functionality. The goal is to understand not only the number of detailed test conditions completed but also their significance to the business.

5 Resources and Training

5.1 Resource Involvement

The following set of tables reflects the groups involved in each of the major test stages conducted throughout the project development life cycle. The involvement has been expressed as a percentage of the overall test time and is a rough guide to the level of contribution only.

5.1.1 Unit Test

Role | Involvement
Project Steering Committee | 5%
Development Team | 90%
Quality Coordinator | 5%

5.1.2 Integration Test

Role | Involvement
Project Steering Committee | 5%
Development Team | 80%
Functional Team | 10%
Quality Coordinator | 5%

5.1.3 QA Test

Role | Involvement
Project Steering Committee | 5%
Functional Team | 70%
Development Team | 20%
Quality Coordinator | 5%

5.1.4 Regression Test

Role | Involvement
Project Steering Committee | 5%
Functional Team | 70%
Development Team | 20%
Quality Coordinator | 5%

5.1.5 System Test

Role | Involvement
Project Steering Committee | 5%
Functional Team - Lead | 5%
Functional Team, Business User and Quality Coordinator | 65%
Development Team | 25%

5.1.6 User Acceptance Test

Role | Involvement
Project Steering Committee | 10%
Functional Team and User Group | 80%
Development Team | 10%

5.1.7 Operations Readiness Test

Role | Involvement
Project Steering Committee | 10%
Operational Team | 60%
Development Team | 15%
Infrastructure Services – Network / HW / SW Support | 15%

5.1.8 Production Readiness Test

Role | Involvement
Project Steering Committee | 15%
Functional Team | 25%
Development Team | 10%
User Group | 25%
Infrastructure Services | 25%

5.2 Training Requirements

5.2.1 Automated Testing Tool Training[4]

All Functional team and User group personnel dedicated to the QA testing efforts will need to become intimately familiar with the automated testing tool to assist with test case generation, test execution and regression testing.

5.2.2 User Group Testing Basics[5]

A subset of the user group identified for the project will be dedicated to assisting with QA testing efforts. After being selected to participate in the testing process, these users will complete a training course to familiarize them with the fundamentals of application testing, the testing process and methodology, how to write test objectives, how to write and use test cases and the test case log, and how to process anomalies.

6 Test Environment

Development, unit test, and integration test activities will be performed in the development environment. In this environment, debugging tools and test harnesses can be used to ease development and integration efforts. The environment will be owned and maintained by the Development team.

Once the application has met the exit criteria established for integration testing, the application components will be migrated to the QA testing environment. The QA testing environment will be owned and tightly controlled by the functional team. No debugging tools will be used in this environment and test harnesses will be limited only to those required to test functions that are available in the current phase. The QA test environment equipment and hardware/software configurations will mirror the production environment to the greatest extent possible – with the exception being the capacity of the servers and network components supporting the application.

Operational readiness testing (performance, failure, fault tolerance, etc.) will be performed in an isolated test lab environment. Production readiness testing will take place in the production environment prior to system conversion.

The following sections define environments for each of the individual test stages:

6.1 Unit Test

The unit test environment must accommodate testing an individual piece of a system. The unit test environment will exist at the workstation level, with associated databases constructed by the tester.

6.2 Integration Test

The integration test environment must allow for the testing of the interactions of related components. This test will occur in the development environment.

6.3 System Test

The system test environment must support the testing of an entire application, including all related interfaces. The system test environment will mirror production to the extent possible. Testing will be based on sample production data.

6.4 User Acceptance Test

The user acceptance test environment will be used to test the business capabilities of the application from an end-user perspective. As in system testing, the data used for user acceptance testing will closely mimic that of production and, if required, a snapshot of production data will be used.

6.5 Operational Readiness Test

A separate Operational Readiness Test environment will be used for the performance, stress, and fault tolerance/failure testing. Stress testing tools are to be used to emulate realistic workload scenarios as well as to determine the maximum volume that can be handled without failure.

6.6 Production Readiness Test

Production Readiness Test will be performed in the production environment to ensure rollout will be successful.

7 Approvals

The following table lists the sign-off responsibilities for testing deliverables.

Deliverable | Approval Responsibility
Test Strategy | Project Steering Committee
Unit Test Cases | Project Manager and Quality Coordinator
Integration Test Plan Report | Project Manager and Quality Coordinator
Integration Test Results Report | Project Manager and Quality Coordinator
System Test Plan Report | Project Manager and Quality Coordinator
System Test Results Report | Project Manager and Quality Coordinator
User Acceptance Test Plan Report | Project Steering Committee and User Group
User Acceptance Test Results Report | Project Steering Committee and User Group
Operational Readiness Test Plan Report | Project Steering Committee
Operational Readiness Test Results Report | Project Steering Committee
Production Readiness Test Plan Report | Project Steering Committee
Production Readiness Test Results Report | Project Steering Committee


8 Risks[6]

The major risks that face any testing effort are:

  • Incomplete test conditions or cycles coming from the test planning phase due to inadequate or inconsistent business and technical requirement documentation

Mitigation Plan: Enforce quality checks on the technical and detail design documentation. Set exit criteria accordingly.

  • Inadequate or incomplete testing at the previous stage of testing

Mitigation Plan: Establish and enforce exit criteria that verify the completeness and quality of the testing process. Check that the exit criteria were met prior to approval.

  • Lack of skilled testing resources in the immediate time frame

Mitigation Plan: Supplement the existing QA testing team with a few resources highly skilled in testing procedures and automated testing tools.

  • Lack of time available for testing, given the tight schedule

Mitigation Plan: Establish that testing is not a "nice-to-have" but a necessity. If the design/development schedule slips or the quality of deliverables is sub-standard, the schedule will need to be readjusted. Supplementing the team with experienced testing resources will lessen the length of time required for each of the testing stages.

  • Lack of consistent user involvement in the testing process

Mitigation Plan: Require a consistent set of resources from the business user group and ensure that the communication channels between the project team and these resources are very solid, even when they are away from the project. Enforce user sign-off at each functional test phase.

  • Lack of scope management

Mitigation Plan: Establish and enforce a formal change management process. Identify an effective way to notify the project steering committee of changes to business and technical capabilities that have cross-team impact.


9 Appendix A: Test Plan Report

Reference Document Id: (document id from which the test plan is derived -- Functional Specification, Technical Specification)

Reference Document Description:

Level of Testing: (Unit/Integration/System)

Test Plan Id:

Test Plan Description:

  • Entry Criteria
  • Exit Criteria

Items to be tested:

  • Feature Id/Functionality to be tested
  • Feature Id/Functionality not to be tested

10 Appendix B: Test Case and Review Log Form

a. Test Case Layout

Test Plan Reference Id:

Feature Id (From Test Plan):

Test Case Id | Test Condition | Expected Behavior | Actual Result

b. Defect Log Layout

Project Code :                Unit Id :

Review Ref. No. :             Date :

Review Team | Name | Preparation Time
Author(s) |  |
Review Team Leader |  |
Review Team Member 1. |  |
Review Team Member 2. |  |
Review Team Member 3. |  |

Total Preparation Time (A) =

Scope:

Review Summary:

Error Summary (error distribution should be categorized under Source of Error 9 - Others):

Error Code | Violation of Standards (V) | Logical Error (L) | Inconsistent with other documents (I) | Omission (M) | Lack of Clarity (CL) | Cosmetic (CO) | Others (O)
No. of Errors |  |  |  |  |  |  |

Defect Summary

Defect Phase | RS / FS (1) | Design (2) | PS (3) | Change Mgmt (4) | Code (5) | User Doc | Others
No. of Defects |  |  |  |  |  |  |

Type of Errors | Numbers
Fatal |
Major |
Minor |

Review Meeting Effort (B) : (person hours)

Total Review Effort (C = A + B) : (person hours)

Size of the Work Item(s):

Select the appropriate UOM - KLOC ( )  FP ( )  Modules ( )  Pages ( )  Functions ( )  Items ( )  Objects/Classes ( )  Screens ( )  Forms ( )  Reports ( )

Total Number of Errors :

Total Number of Defects :

Action : Corrections required / Rework required / Approved

Signature & Date :

Corrections :

Planned Effort (Person Hours) :

Actual Effort (Person Hours) :

Corrections checked and approved by :

Signature & Date :

Rework :

Planned Effort (Person Hours) :

Actual Effort (Person Hours) :

Comments :

11 Appendix C: Test Result Report

Test Plan Id:

Test Phase: (Unit/System/Integration...)

Test Case Id | Status (Complete/Incomplete) | Remark


  • [1] A component can include screens, reports, objects, etc.
  • [2] Inclusion of some of the options given in this stage depends upon the usage of the application and the scope of the project.
  • [3] This phase of the test process depends upon the availability of automated testing tools and the usage of the application.
  • [4] Users need to undergo training in the automated tool if one is used in testing.
  • [5] The training needs will be identified based on the skill set of the user group.
  • [6] The risks identified here are general to the testing phase and may not apply to every implementation project. Project-specific risks and mitigation plans should be addressed in detail.

Software Quality By :

Ahmed Abdelhamid
Software Quality Engineer
Interactive Saudi Arabia Ltd.
An Economic Offset Program Co.
http://www.il.com.sa/ahamid


