Wednesday, October 10, 2007

The 'W' SDLC Model

In earlier lifecycle models, test activities only start after implementation, and the connection between the various test stages and the basis for each test is not clear.

The tight link between test, debug, and change tasks during the test phase is also not shown.

The W-model further clarifies the priority of tasks and the dependencies between development and testing activities. Though it is as simple as the V-model, the W-model makes the importance of testing and the ordering of the individual testing activities clear. It also clarifies that testing and debugging are not the same thing.

Testing usually appears to be an unattractive task carried out after coding. In order to place testing on an equal footing, a second 'V' dedicated to testing is integrated into the model. Both 'V's put together give the 'W' of the W-model.


Tuesday, October 9, 2007

Defect Management Process

Software defects are expensive. Moreover, the cost of finding and correcting defects represents one of the most expensive software development activities. While defects may be inevitable, we can minimize their number and impact on our projects. To do this, development teams need to implement a defect management process that focuses on preventing defects, catching defects as early in the process as possible, and minimizing their impact.

With the help of this defect management model, we can reduce defects and their impact.

The defect management process is based on the following general principles:

The primary goal is to prevent defects. Where this is not possible, the goals are to find each defect as quickly as possible and to minimize its impact.

The defect management process should be risk-driven -- i.e., strategies, priorities, and resources should be based on the extent to which risk can be reduced.

This model contains two main sections. The first, the Defect Management Process, lays out the defect management model; the second, Implementing the Process, gives tips on how to take the model and use it within your company.

Defect Prevention -- Implementation of techniques, methodology and standard processes to reduce the risk of defects.

Deliverable Baseline -- Establishment of milestones where deliverables are considered complete and ready for further development work. When a deliverable is baselined, any further changes are controlled. Errors in a deliverable are not considered defects until after the deliverable has been baselined.

Defect Discovery -- Identification and reporting of defects to the development team. A defect is only termed "discovered" when it has been documented as a valid defect by the development team member(s) responsible for the component(s) in error.

Defect Resolution -- The development team's work to fix a defect. This also includes notification back to the tester to ensure that the resolution is verified.

Process Improvement -- Identification and analysis of the process in which a defect originated to identify ways to improve the process to prevent future occurrences of similar defects. Also the validation process that should have identified the defect earlier is analyzed to determine ways to strengthen that process.

Management Reporting -- Analysis and reporting of defect information to assist management with risk management, process improvement and project management.
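
To make these stages concrete, here is a minimal sketch of a defect record moving through the lifecycle above. The state names and fields are illustrative assumptions, not part of any prescribed model:

    # Minimal sketch of a defect record moving through the stages
    # above. The state names and fields are illustrative assumptions,
    # not part of any prescribed model.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class State(Enum):
        REPORTED = auto()   # defect discovery: documented by a tester
        VALIDATED = auto()  # acknowledged as valid by development
        RESOLVED = auto()   # defect resolution: fix made and checked in
        VERIFIED = auto()   # tester confirms the resolution
        CLOSED = auto()     # analyzed for process improvement/reporting

    @dataclass
    class Defect:
        summary: str
        state: State = State.REPORTED
        history: list = field(default_factory=list)

        def advance(self, new_state):
            """Move to the next lifecycle state, keeping an audit trail."""
            self.history.append((self.state, new_state))
            self.state = new_state

    defect = Defect("Voucher total off by 1 paisa")
    for s in (State.VALIDATED, State.RESOLVED, State.VERIFIED, State.CLOSED):
        defect.advance(s)
    print(defect.state)  # State.CLOSED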

Difference Between Quality Assurance, Quality Control, and Testing

Many people and organizations are confused about the difference between quality assurance (QA), quality control (QC), and testing. They are closely related, but they are different concepts. All three are necessary to effectively manage the risks of developing and maintaining software, and they are defined below:

* Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
* QA aims to prevent issues before they occur.
* QA is the responsibility of the entire team.

* Quality Control: A set of activities designed to evaluate a developed work product.
* QC aims to detect issues and resolve them.
* QC is the responsibility of the tester.

* Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail? In contrast, QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements? Testing is one example of a QC activity, but there are others, such as inspections.
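
To illustrate the last point, here is a minimal sketch of testing as a QC activity: executing a piece of code with the intent of finding defects. The discount function is a hypothetical stand-in for a real deliverable:

    # Minimal sketch of testing as a QC activity: execute the system
    # with the intent of finding defects. `discount` is a hypothetical
    # stand-in for a real deliverable.
    import unittest

    def discount(price, percent):
        """Deliverable under test: apply a percentage discount."""
        return round(price * (1 - percent / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(discount(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()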

Controversy can arise around who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation, but both QA and QC activities are generally required for successful software development.

* While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.

Friday, September 28, 2007

Difference between Patch, Build and Pilot

Patch - If something is wrong with the application and a user has complained about it, we release a patch containing the fix. For example, suppose you have a finance application that miscalculates by 1 paisa whenever you create a voucher, so that your reports show a mismatch between DR and CR. You can send a patch which, when run, corrects the database entries, and then release a new version of the product.
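
As a rough illustration, a patch of this kind might ship as a small data-fix script. The schema, table names, and figures below are hypothetical; amounts are stored in paise (integers) so the arithmetic is exact:

    # Minimal sketch of a hypothetical data-fix patch. The schema and
    # amounts are assumptions; amounts are stored in paise (integers)
    # so the arithmetic is exact.
    import sqlite3

    def apply_patch(conn):
        """Recompute each voucher total from its line items and repair
        any rounding mismatch (the 1-paisa DR/CR discrepancy)."""
        cur = conn.execute("""
            SELECT v.id, v.total, SUM(l.amount)
            FROM vouchers v JOIN voucher_lines l ON l.voucher_id = v.id
            GROUP BY v.id, v.total
        """)
        fixed = 0
        for voucher_id, stored_total, correct_total in cur.fetchall():
            if stored_total != correct_total:
                conn.execute("UPDATE vouchers SET total = ? WHERE id = ?",
                             (correct_total, voucher_id))
                fixed += 1
        conn.commit()
        return fixed

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE vouchers (id INTEGER PRIMARY KEY, total INTEGER);
            CREATE TABLE voucher_lines (voucher_id INTEGER, amount INTEGER);
            INSERT INTO vouchers VALUES (1, 9999);  -- 1 paisa short
            INSERT INTO voucher_lines VALUES (1, 5000), (1, 5000);
        """)
        print("vouchers fixed:", apply_patch(conn))  # -> vouchers fixed: 1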

Build - A complete piece of software, or a combination of modules, that is ready to be tested, or ready to be delivered to the client after testing.

Pilot - The primary purposes of a pilot are to demonstrate that your design works in the production environment as you expected and that it meets your organization's business requirements. Conducting a pilot reduces your organization's risk of encountering problems during full-scale deployment. To further minimize risk during deployment, you might want to conduct multiple pilots, consisting of separate pilots for different technologies or operating systems, or you might conduct a full-scale pilot in phases.

How do you test an application with over 500 users without using any performance tools?

It is quite impossible, as bringing in so many people is neither feasible nor advisable; it is always better to use virtual users for this type of testing. But if we had to do it manually, my approach would be: create a batch of 10-20 users (a feasible number) and put stress on the server by gradually increasing the number of users, watching the performance manually through Task Manager. From this you can extrapolate a rough figure for 500 users: if the response time is 15 seconds for 20 users, then for 500 users it would be roughly 15 s x (500 / 20) = 375 s, i.e. about 6 to 6.5 minutes.

The same measurement can also be made by putting the load of 20 users on simultaneously rather than gradually.
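
A minimal sketch of this manual approach, assuming a hypothetical login endpoint: each batch fires N concurrent requests, times them, and extrapolates linearly as in the example above:

    # A rough sketch of the manual approach above. The URL is a
    # placeholder for the application under test; the batch sizes are
    # the "feasible numbers" from the answer.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/login"  # hypothetical endpoint

    def one_user(_):
        """Time a single simulated user's request."""
        start = time.perf_counter()
        urlopen(URL).read()
        return time.perf_counter() - start

    def batch(n_users):
        """Fire n_users concurrent requests; return the slowest response."""
        with ThreadPoolExecutor(max_workers=n_users) as pool:
            return max(pool.map(one_user, range(n_users)))

    if __name__ == "__main__":
        for n in (10, 20):  # gradually increase the load
            t = batch(n)
            # naive linear extrapolation, as in the 15 s -> 375 s example
            print(f"{n} users: {t:.1f} s; projected 500: {t * 500 / n:.0f} s")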

You have new functionality to be tested and also you have a bug fix that needs to be tested. How do you go about testing each of these?

Testing new functionality needs more time, since we also have to write test cases for it, whereas testing an existing bug fix is just regression testing. So if a release is imminent, I would prefer to test the bug fix first.
If asked to explain the process, describe test scenario creation, test case creation, and the entire process for the new functionality, and explain the regression testing process for the bug fix.

Thursday, September 27, 2007

Difference between CMM and CMMi

CMM is a reference model of mature practices in a specified discipline, e.g. Systems Engineering CMM, Software CMM, People CMM, Software Acquisition CMM, etc. These models were difficult to integrate as and when needed, so CMMI evolved as a more mature set of guidelines, built by combining the best components of the individual CMM disciplines (Software CMM, People CMM, etc.). It can be applied to product manufacturing, people management, software development, and so on.

The Capability Maturity Model is a baseline of key practices that should be implemented by any entity developing or maintaining a product that is completely or partially software. With the SW-CMM the emphasis is on software practices, whereas with the CMMI we find both software and systems practices. These models can be seen as the outcome of broad consultation on successful projects, documented into a model of what to do to carry out projects successfully and improve continuously, and recommended to the software and systems engineering community.