Wednesday, October 10, 2007

The 'W' SDLC Model

In the V-model, test activities do not start until after implementation, and the connection between the various test stages and their test bases is not clear. Nor does the V-model make explicit the tight link between the test, debug, and change tasks during the test phase.

The W-model clarifies the priority of these tasks and the dependencies between the development and testing activities. Though as simple as the V-model, the W-model makes the importance of testing and the ordering of the individual testing activities clear. It also clarifies that testing and debugging are not the same thing.

Testing usually appears to be an unattractive task carried out after coding. In order to place testing on an equal footing, a second 'V' dedicated to testing is integrated into the model. Both 'V's put together give the 'W' of the 'W-model'.


Tuesday, October 9, 2007

Defect Management Process

Software defects are expensive. Moreover, the cost of finding and correcting defects represents one of the most expensive software development activities. While defects may be inevitable, we can minimize their number and impact on our projects. To do this, development teams need to implement a defect management process that focuses on preventing defects, catching defects as early in the process as possible, and minimizing their impact.

With the help of this defect management model, we can reduce defects and their impact.

The defect management process is based on the following general principles:

The primary goal is to prevent defects; where that is not possible, to find them as quickly as possible and minimize their impact.

The defect management process should be risk driven -- i.e., strategies, priorities, and resources should be based on the extent to which risk can be reduced.


This model contains two main sections. The first, Defect Management Process, lays out the defect management model; the second, Implementing the Process, gives tips on how to take the model and use it within your company.


Defect Prevention -- Implementation of techniques, methodology and standard processes to reduce the risk of defects.

Deliverable Baseline -- Deliverables are considered complete and ready for further development work when they are baselined. Once a deliverable is baselined, any further changes are controlled. Errors in a deliverable are not considered defects until after the deliverable is baselined.

Defect Discovery -- Identification and reporting of defects to the development team. A defect is only termed discovered when it has been documented as a valid defect by the development team member(s) responsible for the component(s) in error.

Defect Resolution -- The development team's work to fix a defect. This also includes notification back to the tester to ensure that the resolution is verified.

Process Improvement -- Identification and analysis of the process in which a defect originated to identify ways to improve the process to prevent future occurrences of similar defects. Also the validation process that should have identified the defect earlier is analyzed to determine ways to strengthen that process.

Management Reporting -- Analysis and reporting of defect information to assist management with risk management, process improvement and project management.
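The stages above can be sketched as a simple defect lifecycle. The state names and transitions below are illustrative assumptions for this blog, not taken from any specific defect-tracking tool:

```python
# A minimal sketch of the defect lifecycle described above.
# State names and allowed transitions are illustrative only.

ALLOWED = {
    "reported":   {"discovered"},  # documented as a valid defect
    "discovered": {"resolved"},    # fixed by the development team
    "resolved":   {"verified"},    # tester confirms the resolution
    "verified":   {"analyzed"},    # process-improvement analysis
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.state = "reported"

    def advance(self, new_state):
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state

d = Defect("1-paisa rounding error in voucher totals")
for step in ("discovered", "resolved", "verified"):
    d.advance(step)
print(d.state)  # verified
```

The point of the guard in `advance` is the same as the process rule above: an error is not a "discovered" defect until it has been documented, and a fix is not done until the tester has verified it.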

Difference Between Quality Assurance, Quality Control, And Testing.

Many people and organizations are confused about the difference between quality assurance (QA), quality control (QC), and testing. They are closely related, but they are different concepts, and all three are necessary to effectively manage the risks of developing and maintaining software. They are defined below:

* Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
* QA aims to prevent issues and correct the process.
* QA is the responsibility of the entire team.

* Quality Control: A set of activities designed to evaluate a developed work product.
* QC aims to detect issues and resolve them.
* QC is the responsibility of the team member.

* Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project -- e.g., are requirements being defined at the proper level of detail?

In contrast, QC activities focus on finding defects in specific deliverables -- e.g., are the defined requirements the right requirements? Testing is one example of a QC activity, but there are others, such as inspections.

Controversy can arise around who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation, but both QA and QC activities are generally required for successful software development.

* While line management should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective.

Friday, September 28, 2007

Difference between Patch, Build and Pilot.

Patch - If something is wrong with the application and a user has complained about it, we release a patch containing the fix. For example, suppose you have a finance application, and whenever you create a voucher the system miscalculates by 1 paisa, so your reports show a mismatch between DR and CR. You can send a patch that, when run, corrects the database entries, and then release a new version of your product.

Build - A complete piece of software, or a combination of modules, that is ready to be tested, or ready to be delivered to the client after testing.

Pilot - The primary purposes of a pilot are to demonstrate that your design works in the production environment as you expected and that it meets your organization's business requirements. Conducting a pilot reduces your organization's risk of encountering problems during full-scale deployment. To further minimize risk during deployment, you might conduct multiple pilots, with separate pilots for different technologies or operating systems, or you might conduct a full-scale pilot in phases.

How do you test an application with over 500 users without using any performance tools?

It is practically impossible, as bringing in so many people is neither feasible nor advisable; virtual users are always preferable for this type of testing. But if we must do it manually, my approach would be: create a batch of 10-20 users (a feasible number) and put stress on the server by gradually increasing the number of users, monitoring performance manually with Task Manager. From that we can extrapolate a rough figure for 500 users: if the response time is 15 seconds for 20 users, then for 500 users the response time would be roughly 6-6.5 minutes (15 s x 500/20 = 375 s).

The same thing can be done by putting a load of 20 users on simultaneously.
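The extrapolation above can be sketched in a couple of lines. The numbers are the ones from the answer; the assumption that response time scales linearly with user count is a rough approximation, not a guarantee about any real server:

```python
# Rough linear extrapolation of response time with user count,
# as described above. Real systems rarely degrade this linearly,
# so treat the result as a ballpark figure only.

def extrapolate_response_time(measured_users, measured_seconds, target_users):
    """Scale the measured response time linearly to the target user count."""
    return measured_seconds * (target_users / measured_users)

seconds = extrapolate_response_time(20, 15.0, 500)
print(f"{seconds:.0f} s = {seconds / 60:.2f} min")  # 375 s = 6.25 min
```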

You have new functionality to be tested and also you have a bug fix that needs to be tested. How do you go about testing each of these?

Testing new functionality needs more time, since we must also write test cases for it, whereas testing an existing bug fix is just regression testing. So if a release is pending, I would prefer to test the bug fix first.
If asked to explain the process, describe test scenario creation, test case creation, and the entire process for the new functionality; for the bug fix, explain the regression testing process.

Thursday, September 27, 2007

Difference between CMM and CMMi

CMM is a reference model of matured practices in a specified discipline, e.g. Systems Engineering CMM, Software CMM, People CMM, Software Acquisition CMM, etc. But these models were difficult to integrate as and when needed. So CMMi evolved as a more matured set of guidelines, built by combining the best components of the individual CMM disciplines (Software CMM, People CMM, etc.). It can be applied to product manufacturing, people management, software development, and so on.

The Capability Maturity Model is a baseline of key practices that should be implemented by any entity developing or maintaining a product which is completely or partially software. With the SW-CMM, the emphasis is on software practices, whereas with CMMI we find both software and systems practices. These models can be thought of as the outcome of vast consultation on successful projects, documented as a model of what to do to carry out projects successfully and improve continuously, and recommended to the software and systems engineering community.

Difference between Test Methodology and Test strategy

Test methodology covers the planning and the method/approach used to test the software. Testing methodology is generally of three types: black box, white box, and gray box testing.
Testing techniques apply black box and white box methods such as path testing, code coverage, boundary value analysis, equivalence partitioning, etc.
A test strategy describes the different types of testing used to test the software overall and in each phase, which methodology is used (BBT or WBT), and when we will test manually versus when we will automate.

Difference between Web Server and Application-server application

Web Server
A web server serves pages for viewing in a web browser and exclusively handles HTTP requests.
The web server delegation model is fairly simple: when a request comes in, the web server simply passes it to the program best able to handle it (a server-side program). It may not support transactions or database connection pooling.
A web server serves static HTML pages, GIFs, JPEGs, etc., and can also run code written in CGI, JSP, etc. A web server handles the HTTP protocol. Examples of web servers are IIS and Apache.
A J2EE application server runs servlets and JSPs (in fact, a part of the app server called the web container is responsible for running servlets and JSPs) that are used to create HTML pages dynamically. In addition, a J2EE application server can run EJBs, which are used to execute business logic.

Application server
An application server exposes business logic to client applications through any number of protocols.

An application server is more capable of dynamic behavior than a web server. We can also configure an application server to work as a web server; simply put, an application server is a superset of a web server.
An application server is used to run business logic or dynamically generated presentation code. It can be either .NET based or J2EE based (BEA WebLogic Server, IBM WebSphere, and JBoss).

What is 3- and n-tier architecture


N-Tier Architecture:

N-tier architecture has become a buzzword among serious developers. Simply put, n-tier architecture means carefully separating your application into any number of logical, functional layers. For example, you may separate your data, your business logic, and your presentation. This allows your application to be better organized and maintained, and to be deployed in a wider variety of ways.

Developing an application using n-tier architecture usually (though not always) takes more time, so it is important to identify which level of n-tier to use.
Easy installation, simple upgrades, and excellent security are only a few of the benefits that come when an application is divided into multiple layers.
As an example, imagine a company with users accessing the application from the office, from home, and remotely using wireless devices. If the company wants to change the application (e.g. use a more powerful database, or add some business logic), they can do so with little effort.
Client-tier-
Is responsible for the presentation of data, receiving user events, and controlling the user interface. The actual business logic (e.g. calculating value added tax) has been moved to an application server. Today, Java applets offer an alternative to traditionally written PC applications.
Application-server-tier-
The application server exposes business logic to client applications through various protocols.
Furthermore, the term "component" is also found here. Today the term predominantly describes visual components on the client side; in the non-visual area of the system, components on the server side can be defined as configurable objects which can be put together to form new application processes.

Data-server-tier-
This tier is responsible for data storage. Besides the widespread relational database systems, existing legacy systems databases are often reused here.
It is important to note that the boundaries between tiers are logical: it is quite easily possible to run all three tiers on one and the same (physical) machine. What matters is that the system is neatly structured and that there is a well-planned definition of the software boundaries between the different tiers.
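The logical separation of the three tiers can be sketched in a few lines of code, even when all tiers run in one process. The class names, prices, and the VAT rate below are illustrative assumptions, not from any real system:

```python
# Sketch of logical tier separation: data, business logic (e.g.
# calculating value added tax), and presentation. All names and
# numbers here are made up for illustration.

class DataTier:
    """Data storage: a dict standing in for a database."""
    PRICES = {"notebook": 250.0, "pen": 20.0}

    def net_price(self, item):
        return self.PRICES[item]

class LogicTier:
    """Business logic: computes gross prices from raw data."""
    VAT_RATE = 0.18  # assumed rate, illustrative only

    def __init__(self, data):
        self.data = data

    def gross_price(self, item):
        return self.data.net_price(item) * (1 + self.VAT_RATE)

class PresentationTier:
    """Presentation: formats results for display."""
    def __init__(self, logic):
        self.logic = logic

    def show(self, item):
        return f"{item}: {self.logic.gross_price(item):.2f}"

ui = PresentationTier(LogicTier(DataTier()))
print(ui.show("notebook"))  # notebook: 295.00
```

Because the boundaries are logical, swapping the dict-backed `DataTier` for a real database changes nothing in the other two classes, which is exactly the maintainability argument made above.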


Clear Explanation for 2 and 3 –tier architectures with examples:

Let's suppose I'm going to write a piece of software that students at a school can use to find out what their current grade is in all their classes. I structure the program so that a database of grades resides on the server, and the application resides on the client (the computer the student is physically interacting with).

When the student wants to know his grades, he manipulates my program (by clicking buttons, menu options, etc). The program fires off a query to the database, and the database responds with all the student's grades. Now my application uses all this data to calculate the student's grade, and displays it for him.

This is an example of a 2-tier architecture. The two tiers are:

1. Data server: the database serves up data based on SQL queries submitted by the application.
2. Client application: the application on the client computer consumes the data and presents it in a readable format to the student.

Now, this architecture is fine if you've got a school with 50 students. But suppose the school has 10,000 students. Now we've got a problem. Why?

Because every time a student queries the client application, the data server has to serve up large queries for the client application to manipulate. This is an enormous drain on network resources.

So what do we do? We create a 3-tier architecture by inserting another program at the server level. We call this the server application. Now the client application no longer directly queries the database; it queries the server application, which in turn queries the data server.

What is the advantage to this? Well, now when the student wants to know his final grade, the following happens:

1. The student asks the client application.
2. The client application asks the server application.
3. The server application queries the data server.
4. The data server serves up a recordset with all the student's grades.
5. The server application does all the calculations to determine the grade.
6. The server application serves up the final grade to the client application.
7. The client application displays the final grade for the student.

It's a much longer process on paper, but in reality it's much faster. Why? Notice step 6. Instead of serving up an entire recordset of grades, which has to be passed over a network, the server application is serving up a single number, which is a tiny amount of network traffic in comparison.

There are other advantages to the 3-tier architecture, but that at least gives you a general idea of how it works.

Incidentally, this website is a 3-tier application. The client application is your web browser. The server application is the ASP code, which queries the database (the third tier) for the question-and-answer you requested.
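The grade example above can be sketched as follows. The in-memory "database" and the function names are illustrative stand-ins for the real tiers:

```python
# Sketch of the 3-tier flow described above: the server application
# queries the data tier, computes the average, and returns a single
# number to the client. The grade data is made up for illustration.

GRADE_DB = {"student_42": [78, 85, 91, 88]}  # data server (tier 3)

def server_app_final_grade(student_id):
    """Server application (tier 2): query the data, do the calculation."""
    grades = GRADE_DB[student_id]      # full recordset stays server-side
    return sum(grades) / len(grades)   # only one number crosses the wire

def client_app(student_id):
    """Client application (tier 1): display the single returned number."""
    return f"Final grade: {server_app_final_grade(student_id):.1f}"

print(client_app("student_42"))  # Final grade: 85.5
```

In the 2-tier version, the whole grade list in `GRADE_DB` would travel over the network to every client; here only the computed average does, which is the network-traffic saving described in step 6.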

Good Test Case

A good test case is effective. That is, it will find a fault. This does not mean that a test case that does not find a fault is not a good one, since this implies that the fault that the test case could have found is not present (i.e. it gives us some confidence in the software, and this in itself has value). Perhaps we should say that a good test case has the potential to find a fault.

A good test case is exemplary, meaning that it does more than one thing for us (it is capable of finding more than one fault).

A good test case is evolvable. As the software changes, so too will some of the tests need changing to reflect different functionality, new features, etc. The effort to update test cases is usually very significant. However, much can be done when designing test cases to reduce or minimize the maintenance effort needed to keep them compatible with later versions of the software.

A good test case is economical. A test case that requires 50 people to come into the office on a Saturday morning and all be poised at their keyboards at 9 am is expensive to perform, and it can only be run once a week. A test case that can be run at the touch of a button and lasts only two seconds is much more economical.
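An economical, push-button test case can be as small as a function and an assertion. The voucher-total function under test here is hypothetical, echoing the 1-paisa patch example earlier, and is not from any real product:

```python
# A hypothetical voucher-total function and an automated test for it.
# Running this takes a fraction of a second and can repeat on every
# build - the "touch of a button" economy described above.

def voucher_total(line_amounts_paise):
    """Sum voucher lines in paise (integer arithmetic avoids rounding drift)."""
    return sum(line_amounts_paise)

def test_voucher_total_balances():
    debits = [10_050, 2_499]   # amounts in paise
    credits = [12_549]
    assert voucher_total(debits) == voucher_total(credits)

test_voucher_total_balances()
print("ok")  # ok
```

Note that working in whole paise rather than fractional rupees is itself a defect-prevention choice: it sidesteps the floating-point rounding that caused the hypothetical 1-paisa mismatch.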

Wednesday, September 26, 2007

Test flow

Information flow for testing follows a simple pattern. Two types of input are given to the test process: (1) a software configuration; (2) a test configuration. Tests are performed and all outcomes are considered; test results are compared with expected results. When erroneous data is identified, an error is implied and debugging begins. The debugging procedure is the most unpredictable element of the testing procedure. An "error" that indicates a discrepancy of 0.01 percent between the expected and the actual results can take hours, days, or months to identify and correct. It is this uncertainty in debugging that makes testing difficult to schedule reliably.

Software testing

Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; it furnishes a criticism or comparison of the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance, which encompasses all business process areas, not just testing.