HP recently released the newest version of Quality Center, now called HP Application Lifecycle Management, or ALM (also referred to as QC 11). Quite exciting! I feel the name suits the objective, as ALM is designed to adapt to the dynamics of Agile software development more aggressively and productively.
HP Application Lifecycle Management provides a centralized platform for managing and automating the application lifecycle, from inception to retirement. It empowers application teams to plan, build and release better-quality applications with fewer delays.
HP Application Lifecycle Management features:
- Centrally manage and track all application projects
- Attain real-time visibility into the application lifecycle
- Centrally manage and enforce consistent workflows and processes
- Reduce duplication of effort across projects
- Provide an aggregated, cross-application project view of quality status and defect trends
- Facilitate collaboration and communication among internal and external stakeholder groups, across multiple projects
Here is a video of a webinar given by Raziel Tabib, Senior Product Manager at HP for the HP ALM suite; it briefly provides an orientation to the features of the latest version of QC 11.
Many businesses are asked to undertake User Acceptance Testing (UAT) when an IT change is implemented within their organisation. For users who have never tested before this can be a daunting task.
Often users take their steer on how to conduct UAT from the IT test phases which occur beforehand. Whilst it is essential to work with the IT test teams on any implementation, this approach can often result in a repetition of the system testing undertaken previously, with errors making their way into the production environment. To prevent this, users must ensure that UAT focuses on all the tools which support their business processes, of which the IT system is only one.
A user will most likely wish to consult their organisational test strategy before starting UAT; however, the following briefly outlines the broad concepts which underpin this phase of testing.
Imagine a business process runs horizontally from A to Z and is supported by the following four key pillars:
- The documented procedure/process (Procedure)
- The IT System which enables the service (System)
- The person who uses the IT system (User)
- The training which is delivered to the user either personally or via a user guide (Training)
User Acceptance Testing should aim to test that these four pillars work in harmony to deliver the end-to-end business process, or “user experience” as the case may be.
Let's first understand what User Acceptance Testing is.
Once the application is ready to be released, the crucial step is User Acceptance Testing.
In this step a group representing a cross section of end users tests the application.
The user acceptance testing is done using real world scenarios and perceptions relevant to the end users.
What is User Acceptance Testing?
User Acceptance Testing is often the final step before rolling out the application.
Usually the end users who will be using the applications test the application before ‘accepting’ the application.
This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.
This testing also helps nail bugs related to usability of the application.
User Acceptance Testing – Prerequisites:
Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration and System) are already completed before User Acceptance Testing is done. As various levels of testing have been completed most of the technical bugs have already been fixed before UAT.
User Acceptance Testing – What to Test?
To ensure effective User Acceptance Testing, test cases are created.
These Test cases can be created using various use cases identified during the Requirements definition stage.
The Test cases ensure proper coverage of all the scenarios during testing.
During this type of testing the specific focus is the exact real world usage of the application. The Testing is done in an environment that simulates the production environment.
The Test cases are written using real world scenarios for the application
User Acceptance Testing – How to Test?
The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.
However, it is useful if the User acceptance Testing is carried out in an environment that closely resembles the real world or production environment.
The steps taken for User Acceptance Testing typically involve one or more of the following:
- User Acceptance Test (UAT) Planning
- Designing UA Test Cases
- Selecting a Team that would execute the (UAT) Test Cases
- Executing Test Cases
- Documenting the Defects found during UAT
- Resolving the issues/Bug Fixing
- Sign Off
User Acceptance Test (UAT) Planning:
As always the Planning Process is the most important of all the steps. This affects the effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas, entry and exit criteria.
Designing UA Test Cases:
The User Acceptance Test Cases help the Test Execution Team to test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios.
The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. Inputs from Business Analysts and Subject Matter Experts are also used when creating them.
Each User Acceptance Test Case describes in a simple language the precise steps to be taken to test something.
The Business Analysts and the Project Team review the User Acceptance Test Cases.
Selecting a Team that would execute the (UAT) Test Cases:
Selecting a Team that would execute the UAT Test Cases is an important step.
The UAT Team is generally a good representation of the real world end users.
The Team thus comprises the actual end users who will be using the application.
Executing Test Cases:
The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.
Documenting the Defects found during UAT:
The Team logs their comments and any defects or issues found during testing.
Resolving the issues/Bug Fixing:
The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.
Upon successful completion of User Acceptance Testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales: once the users “accept” the software delivered, they indicate that it meets their requirements.
The users are now confident of the software solution delivered, and the vendor can be paid for the same.
What are the key deliverables of User Acceptance Testing?
In the Traditional Software Development Lifecycle successful completion of User Acceptance Testing is a significant milestone.
The key deliverables of the User Acceptance Testing phase typically are:
- The Test Plan- This outlines the Testing Strategy
- The UAT Test cases – The Test cases help the team to effectively test the application
- The Test Log – This is a log of all the test cases executed and the actual results.
- User Sign Off – This indicates that the customer finds the product delivered to their satisfaction
After losing all my previous blogs on software testing, it kinda took me a while to get over this trauma and start blogging on my core competency again. Well, as I said in my previous blogs, time should move on and so should my blogging.
So I have decided to start with a mix of both the absolute basics of software testing and the more complex topics. To kick off, I wanted to first and foremost answer a question which many people ask me: “What's the future of QA and software testing, and does this field of expertise have scope for a long-term professional career?” Well, to start off, “software” was introduced with the sole objective of achieving productive business, and over the years every single line of business has come to be done using software applications and systems. So literally there cannot be any software/hardware/process/business without QA intervention, and the scope for a career in software testing shows phenomenal growth.
So is software QA testing the right career path for you or not?
Let me first explain software testing briefly in layman's terms. Software testing and quality control are the processes by means of which application quality is improved. Software testing is done in each phase of the product life cycle, i.e. from requirement specifications and design, through coding, to user acceptance.
Many complex software structures require in-depth analytical and technical skill to test. Knowledge of programming languages is required for unit testing, and scripting skills are essential for automation testing.
Now let us speak about your career in software testing. No one can guide you in choosing your career better than you! That's right: you are the only person who can decide your career.
Do a self-assessment to figure out where you can fit well. Study your skills, interests, strengths and weaknesses.
Ask yourself some questions like:
- What is your goal in life?
- What will increase your satisfaction and skill?
- What is your interest?
- Which skills have you developed in your life till now?
- Which training have you done that can be applied to a future job?
By answering these questions you will automatically come to decision.
What skills will you require to switch to a software testing career? That, I think, is the most important question.
In my previous post, “What makes a good test engineer”, I mentioned some of the skills required for software testing.
1. Communication: Customer communication as well as team communication are most important for this job. Written communication as well!
2. Technical skill: As I mentioned earlier, technical and domain skills, including programming languages, are important for testing.
Some of the Testing skills are:
- Project life cycle,
- Testing concepts,
- Knowledge of testing types,
- Programming languages familiarity,
- Database concepts,
- Test plan idea,
- Ability to analyze requirements,
- Documentation skill,
- Testing tools
3. Leadership quality
4. Analytical and judging skill
Don’t worry if you don’t have some of the skills mentioned above. You can always learn if you have the interest. Non-IT people can also grow fast by gaining the necessary skills.
All the best!!
1) How is run-time data (parameterization) handled in QTP?
A) You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations without programming, in order to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
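As a small sketch (object, file and column names here are illustrative, not from the original post), run-time data can be read from or loaded into the Data Table like this:

```vbscript
' Read a value from the "UserName" column of the Global sheet
userName = DataTable("UserName", dtGlobalSheet)

' Import an external Excel workbook sheet into the run-time Data Table
DataTable.ImportSheet "C:\TestData\users.xls", "Sheet1", "Global"

' Write a value back, e.g. to record an actual result for this iteration
DataTable("ActualResult", dtGlobalSheet) = "Passed"
```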
2) What is keyword view and Expert view in QTP?
A) With QuickTest's keyword-driven approach, test automation experts have full access to the underlying test and object properties via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.
Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that Quick Test Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.
3) Explain about the Test Fusion Report of QTP?
A) Once a tester has run a test, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with QuickTest Professional, you can share reports across an entire QA and development team.
4) Which environments does QTP support?
A) QuickTest Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.
5) What is QTP?
A) QuickTest is a graphical-interface record-playback automation tool. It is able to work with any web, Java or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QuickTest Professional also enables you to test Java applets and applications, multimedia objects in applications, as well as standard Windows applications, Visual Basic 6 applications and .NET framework applications.
6) Explain QTP testing process?
A) The QuickTest testing process consists of 6 main phases:
1. Create your test plan: Prior to automating there should be a detailed description of the test including the exact steps to follow, data to be input, and all items to be verified by the test. The verification information should include both data validations and existence or state verifications of objects in the application.
2. Recording a session on your application: As you navigate through your application, QuickTest graphically displays each step you perform in the form of a collapsible, icon-based test tree. A step is any user action that causes or makes a change in your site, such as clicking a link or image, or entering data in a form.
3. Enhancing your test: Inserting checkpoints into your test lets you search for a specific value of a page, object or text string, which helps you identify whether or not your application is functioning correctly.
NOTE: Checkpoints can be added to a test as you record it or after the fact via the Active Screen. It is much easier and faster to add the checkpoints during the recording process.
Broadening the scope of your test by replacing fixed values with parameters lets you check how your application performs the same operations with multiple sets of data. Adding logic and conditional statements to your test enables you to add sophisticated checks to your test.
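In the Expert View, a checkpoint and a simple hand-added conditional check might look like the following sketch (the object and checkpoint names are hypothetical):

```vbscript
' A checkpoint inserted during recording, as shown in the Expert View
Browser("MyApp").Page("Home").Check CheckPoint("Home")

' A hand-added conditional check using a run-time object property
If Browser("MyApp").Page("Home").WebEdit("Total").GetROProperty("value") = "" Then
    Reporter.ReportEvent micFail, "Total check", "Total field is empty"
End If
```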
4. Debugging your test: If changes were made to the script, you need to debug it to check that it operates smoothly and without interruption.
5. Running your test on a new version of your application: You run a test to check the behavior of your application. While running, QuickTest connects to your application and performs each step in your test.
6. Analyzing the test results: You examine the test results to pinpoint defects in your application.
7. Reporting defects: As you encounter failures in the application when analyzing test results, you will create defect reports in Defect Reporting Tool.
7) Explain the QTP Tool interface.
A) It contains the following key elements:
- Title bar, displaying the name of the currently open test
- Menu bar, displaying menus of QuickTest commands
- File toolbar, containing buttons to assist you in managing tests
- Test toolbar, containing buttons used while creating and maintaining tests
- Debug toolbar, containing buttons used while debugging tests.
Note: The Debug toolbar is not displayed when you open QuickTest for the first time. You can display the Debug toolbar by choosing View > Toolbars > Debug.
- Action toolbar, containing buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow.
Note: The Action toolbar is not displayed when you open QuickTest for the first time. You can display the Action toolbar by choosing View > Toolbars > Action. If you insert a reusable or external action in a test, the Action toolbar is displayed automatically.
- Test pane, containing two tabs to view your test: the Tree View and the Expert View
- Test Details pane, containing the Active Screen
- Data Table, containing two tabs, Global and Action, to assist you in parameterizing your test
- Debug Viewer pane, containing three tabs to assist you in debugging your test: Watch Expressions, Variables, and Command. (The Debug Viewer pane can be opened only when a test run pauses at a breakpoint.)
- Status bar, displaying the status of the test.
8) How does QTP recognize objects in the AUT?
A) QuickTest stores the definitions for application objects in a file called the Object Repository. As you record your test, QuickTest will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by QuickTest), and will contain a set of properties (type, name, etc) that uniquely identify each object.
Each line in the QuickTest script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.
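For instance, a recorded step refers to the object only by its logical name, while the identifying properties stay in the Object Repository (the names below are illustrative):

```vbscript
' "CustomerName" is the logical name assigned by QuickTest; the physical
' properties (html tag, name, type, ...) are stored in the Object Repository
Browser("Orders").Page("Orders").WebEdit("CustomerName").Set "Jane Doe"
Browser("Orders").Page("Orders").WebButton("Submit").Click
```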
9) What are the types of Object Repositories in QTP?
A) QuickTest has two types of object repositories for storing object information: shared object repositories and action object repositories. You can choose which type of object repository you want to use as the default type for new tests, and you can change the default as necessary for each new test.
The object repository per-action mode is the default setting. In this mode, QuickTest automatically creates an object repository file for each action in your test so that you can create and run tests without creating, choosing, or modifying object repository files. However, if you do modify values in an action object repository, your changes do not have any effect on other actions. Therefore, if the same test object exists in more than one action and you modify an object’s property values in one action, you may need to make the same change in every action (and any test) containing the object.
10) Explain the checkpoints in QTP.
A) A checkpoint verifies that expected information is displayed in an application while the test is running. You can add eight types of checkpoints to your test for standard web objects using QTP.
- A page checkpoint checks the characteristics of a page in an application.
- A text checkpoint checks that a text string is displayed in the appropriate place in an application.
- An object checkpoint (standard) checks the values of an object in an application.
- An image checkpoint checks the values of an image in an application.
- A table checkpoint checks information within a table in an application.
- An accessibility checkpoint checks the web page for Section 508 compliance.
- An XML checkpoint checks the contents of individual XML data files or XML documents that are part of your Web application.
- A database checkpoint checks the contents of databases accessed by your web site.
11) In how many ways can we add checkpoints to an application using QTP?
A) We can add checkpoints while recording the application, or we can add them after recording is completed using the Active Screen.
(Note: for the second method, the Active Screen must be enabled while recording.)
12) How does QTP identify objects in the application?
A) QTP identifies an object in the application by its Logical Name and Class.
13) If an object's name changes frequently, i.e. while recording it has the name “Window1” and while running it is “Window2”, how does QTP handle this?
A) QTP handles such situations using “Regular Expressions”.
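As a sketch, the title property can be given a regular-expression value, either in the Object Repository or in descriptive programming as below, so that both names match. The property name regexpwndtitle is the usual one for standard Windows windows, but treat the details here as an assumption:

```vbscript
' Matches "Window1", "Windows2", and similar variations of the title
Window("regexpwndtitle:=Window[a-z]*\d+").Close
```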
14) What is Parameterizing Tests?
A) When you test your application, you may want to check how it performs the same operations with multiple sets of data. For example, suppose you want to check how your application responds to ten separate sets of data. You could record ten separate tests, each with its own set of data. Alternatively, you can create a parameterized test that runs ten times: each time the test runs, it uses a different set of data.
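A parameterized step reads its value from the Data Table, so one script covers all ten data sets; a sketch with hypothetical object and column names:

```vbscript
' Hard-coded (recorded) step:
' Browser("App").Page("Search").WebEdit("Query").Set "laptops"

' Parameterized step: each test iteration uses the next row of the
' Global sheet's "Query" column
Browser("App").Page("Search").WebEdit("Query").Set DataTable("Query", dtGlobalSheet)
```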
15) What is test object model in QTP?
A) The test object model is a large set of object types or classes that QuickTest uses to represent the objects in your application. Each test object class has a list of properties that can uniquely identify objects of that class and a set of relevant methods that QuickTest can record for it.
A test object is an object that QuickTest creates in the test or component to represent the actual object in your application. QuickTest stores information about the object that will help it identify and check the object during the run session.
A run-time object is the actual object in your Web site or application on which methods are performed during the run session.
When you perform an operation on your application while recording, QuickTest:
- identifies the test object class that represents the object on which you performed the operation, and creates the appropriate test object;
- reads the current values of the object's properties in your application and stores the list of properties and values with the test object;
- chooses a unique name for the object, generally using the value of one of its prominent properties;
- records the operation that you performed on the object using the appropriate QuickTest test object method.
For example, suppose you click a Find button on a web page.
QuickTest identifies the object that you clicked as a WebButton test object. It creates a WebButton object with the name Find, and records the properties and values for the Find WebButton. It also records that you performed a Click method on the WebButton. QuickTest displays your step like this:
Browser(“Mercury Interactive”).Page(“Mercury Interactive”).WebButton(“Find”).Click
16) What is Object Spy in QTP?
A) Using the Object Spy, you can view the properties of any object in an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the selected object’s hierarchy tree and its properties and values in the Properties tab of the Object Spy dialog box.
17) What is the difference between an image checkpoint and a bitmap checkpoint?
A) Image checkpoints enable you to check the properties of a Web image. A bitmap checkpoint lets you check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object; you can check an entire object or any area within an object. QuickTest captures the specified object as a bitmap and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space. For example, suppose you have a Web site that can display a map of a city the user specifies, and the map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map.
Using the bitmap checkpoint, you can check that the map zooms in correctly. You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded). Note: the results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.
18) How many ways we can parameterize data in QTP?
A) There are four types of parameters:
– Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test.
– Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.
– Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that QuickTest generates for you based on conditions and options you choose.
– Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have QuickTest generate a random number and insert it in a number of tickets edit field.
19. How do you do batch testing in WR, and is it possible in QTP? If so, explain.
Ans: Batch testing in WR is nothing but running the whole test set by selecting “Run Testset” from the “Execution Grid”. The same is possible with QTP as well: if our test cases are automated, then by selecting “Run Testset” all the test scripts can be executed. In this process the scripts get executed one by one, with all the remaining scripts kept in “Waiting” mode.
20. What does it mean when a checkpoint is in red color? What do you do?
Ans: A red color indicates failure. Here we analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue.
21. What is the TestDirector Test Lab window called?
Ans: The “Execution Grid”. It is the place from where we run all manual/automated scripts.
22. How do you create new test sets in TD?
- Login to TD.
- Click on “Test Lab” tab.
- Select the Desired folder under which we need to Create the Test Set. (Test Sets can be grouped as per module.)
- Click on “New Test Set or Ctrl+N” Icon to create a Test Set.
23. How do you run several tests in succession in QTP?
Ans : You can use Test Batch Runner to run several tests in succession. The results for each test are stored in their default location.
Using Test Batch Runner, you can set up a list of tests and save the list as an .mtb file, so that you can easily run the same batch of tests again, at another time. You can also choose to include or exclude a test in your batch list from running during a batch run.
24. How do you import data from a “.xls” file to the Data Table during runtime?
- Datatable.Import “…XLS file name…”
- DataTable.ImportSheet(FileName, SheetSource, SheetDest)
- DataTable.ImportSheet "C:\name.xls", 1, "name"
25. How do you export data present in the Data Table to an “.xls” file?
Ans : DataTable.Export “….xls file name…”
26. What is the syntax for calling one script from another, and the syntax to call one “Action” in another?
Ans: RunAction ActionName, [IterationMode , IterationRange , Parameters]
Here the actions become reusable on making this call to any Action.
IterationRange (String; not always required): indicates the rows for which action iterations will be performed. Valid only when IterationMode is rngIterations. Enter the row range (e.g. “1-7”), or enter rngAll to run iterations on all rows.
If the action called by the RunAction statement includes an ExitAction statement, the RunAction statement can return the value of the ExitAction’s RetVal argument.
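For example (action names and parameter values here are hypothetical):

```vbscript
' Run the reusable action "Login" once, passing one input parameter
RunAction "Login", oneIteration, "admin"

' Run "ProcessOrders" on Data Table rows 1-7 and capture the value
' returned by its ExitAction statement
retVal = RunAction("ProcessOrders", rngIterations, "1-7")
```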
27. How do you export QTP results to an “.xls” file?
Ans : By default it creates an “XML” file and displays the results.
28. Differences between QTP & WinRunner?
- QTP uses object-based scripting (VBScript), whereas WinRunner uses TSL (C-based) scripting.
- QTP supports “.NET” application automation, which is not available in WinRunner.
- QTP has “Active Screen” support, which captures the application; this is not available in WR.
- QTP has a “Data Table” to store script values and variables, which WR does not have.
- Using a “point and click” capability you can easily interface with objects and their definitions, and create checkpoints after having recorded a script, without having to navigate back to that location in your application as you have to with WinRunner. This greatly speeds up script development.
29. How do you add a runtime parameter to a datasheet?
Ans: By using the LocalSheet property. The following example uses the LocalSheet property to return the local sheet of the run-time Data Table in order to add a parameter (column) to it:
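The example itself is missing from the post; a minimal sketch along the lines of the product documentation (the parameter name and value are illustrative):

```vbscript
' Add a new parameter (column) named "Time" with the value "8:45"
' to the current action's local sheet, and report the name assigned
paramName = DataTable.LocalSheet.AddParameter("Time", "8:45").Name
Reporter.ReportEvent micDone, "Data Table", "Added parameter: " & paramName
```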
30. What scripting language is QTP based on?
Ans : VB Script.
31. Analyzing the Checkpoint results
Standard Checkpoint: By adding standard checkpoints to your tests or components, you can compare the expected values of object properties to the object’s current values during a run session. If the results do not match, the checkpoint fails.
32. Table and DB Checkpoints: By adding table checkpoints to your tests or components, you can check that a specified value is displayed in a cell in a table on your application. By adding database checkpoints to your tests or components, you can check the contents of databases accessed by your application.
The results displayed for table and database checkpoints are similar. When you run your test or component, QuickTest compares the expected results of the checkpoint to the actual results of the run session. If the results do not match, the checkpoint fails.
You can check that a specified value is displayed in a cell in a table by adding a table checkpoint to your test or component. For ActiveX tables, you can also check the properties of the table object. To add a table checkpoint, you use the Checkpoint Properties dialog box.
Table checkpoints are supported for Web and ActiveX applications, as well as for a variety of external add-in environments.
You can use database checkpoints in your test or component to check databases accessed by your Web site or application and to detect defects. You define a query on your database, and then you create a database checkpoint that checks the results of the query.
Database checkpoints are supported for all environments supported by QuickTest, by default, as well as for a variety of external add-in environments.
There are two ways to define a database query:
- Use Microsoft Query. You can install Microsoft Query from the custom installation of Microsoft Office.
- Manually define an SQL statement.
The Checkpoint timeout option is available only when creating a table checkpoint. It is not available when creating a database checkpoint.
33. Checking Bitmaps:
A.) You can check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. QuickTest captures the specified object as a bitmap, and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space.
When you run the test or component, QuickTest compares the object or selected area of the object currently displayed on the Web page or application with the bitmap stored when the test or component was recorded. If there are differences, QuickTest captures a bitmap of the actual object and displays it with the expected bitmap in the details portion of the Test Results window. By comparing the two bitmaps (expected and actual), you can identify the nature of the discrepancy.
For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly.
You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded).
Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.
34. Text/Text Area Checkpoint: In the Text/Text Area Checkpoint Properties dialog box, you can specify the text to be checked as well as which text is displayed before and after the checked text. These configuration options are particularly helpful when the text string you want to check appears several times or when it could change in a predictable way during run sessions.
Note: In Windows-based environments, if there is more than one line of text selected, the Checkpoint Summary pane displays [complex value] instead of the selected text string. You can then click Configure to view and manipulate the actual selected text for the checkpoint.
QTP automatically displays the Checked Text in red and the text before and after the Checked Text in blue. For text area checkpoints, only the text string captured from the defined area is displayed (Text Before and Text After are not displayed).
To designate parts of the captured string as Checked Text and other parts as Text Before and Text After, click the Configure button. The Configure Text Selection dialog box opens.
35. Checking XML: XML (Extensible Markup Language) is a meta-markup language for text documents that is endorsed as a standard by the W3C. XML makes the complex data structures portable between different computer environments/operating systems and programming languages, facilitating the sharing of data.
XML files contain text with simple tags that describe the data within an XML document. These tags describe the data content, but not the presentation of the data. Applications that display an XML document or file use either Cascading Style Sheets (CSS) or XSL Formatting Objects (XSL-FO) to present the data.
You can verify the data content of XML files by inserting XML checkpoints. A few common uses of XML checkpoints are described below:
- An XML file can be a static data file that is accessed in order to retrieve commonly used data for which a quick response time is needed, for example, country names, zip codes, or area codes. Although this data can change over time, it is normally quite static. You can use an XML file checkpoint to validate that the data has not changed from one application release to another.
- An XML file can consist of elements with attributes and values (character data). There is a parent and child relationship between the elements, and elements can have attributes associated with them. If any part of this structure (including data) changes, your application’s ability to process the XML file may be affected. Using an XML checkpoint, you can check the content of an element to make sure that its tags, attributes, and values have not changed.
- XML files are often an intermediary that retrieves dynamically changing data from one system. The data is then accessed by another system using Document Type Definitions (DTD), enabling the accessing system to read and display the information in the file. You can use an XML checkpoint and parameterize the captured data values in order to check an XML document or file whose data changes in a predictable way.
- XML documents and files often need a well-defined structure in order to be portable across platforms and development systems. One way to accomplish this is by developing an XML schema, which describes the structure of the XML elements and data types. You can use schema validation to check that each item of content in an XML file adheres to the schema description of the element in which the content is to be placed.
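Outside of QuickTest itself, the kind of content check an XML checkpoint performs can be sketched in a few lines of Python; the XML document and the expected values below are hypothetical:

```python
# Illustration of what an XML checkpoint verifies: that an element's
# tag, attributes, and character data have not changed. The document
# and the expected values are hypothetical.
import xml.etree.ElementTree as ET

doc = ET.fromstring('<codes><zipcode country="US">10001</zipcode></codes>')
expected = {"tag": "zipcode", "attrib": {"country": "US"}, "text": "10001"}

elem = doc.find("zipcode")
ok = (elem.tag == expected["tag"]
      and elem.attrib == expected["attrib"]
      and elem.text == expected["text"])
print("checkpoint passed" if ok else "checkpoint failed")
```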
36. Object Repository types: which to use, and when?
A.) To choose the default object repository mode and the appropriate object repository mode for each test, you need to understand the differences between the two modes. In general, the object repository per-action mode is easiest to use when you are creating simple record and run tests, especially under the following conditions:
- You have only one, or very few, tests that correspond to a given application, interface, or set of objects.
- You do not expect to frequently modify test object properties.
- You generally create single-action tests.
Conversely, the shared object repository mode is generally the preferred mode when:
- You have several tests that test elements of the same application, interface, or set of objects.
- You expect the object properties in your application to change from time to time and/or you regularly need to update or modify test object properties.
- You often work with multi-action tests and regularly use the Insert Copy of Action and Insert Call to Action options.
37. Can we script a test case without an object repository, or is using one mandatory?
Ans: Using an object repository is not mandatory. You can script without one by working with window handles, or by spying on objects to learn the logical names and properties available and using them directly in the script.
38. How to execute a WinRunner Script in QTP?
(a) TSLTest.RunTest TestPath, TestSet [, Parameters ] -> used in QTP 6.0 for backward compatibility
Parameters: The test set within Quality Center in which test runs are stored. Note that this argument is relevant only when working with a test in a Quality Center project. When the test is not saved in Quality Center, this parameter is ignored.
e.g.: TSLTest.RunTest "D:\test1", ""
(b) TSLTest.RunTestEx TestPath, RunMinimized, CloseApp [, Parameters ]
e.g.: TSLTest.RunTestEx "C:\WinRunner\Tests\basic_flight", TRUE, FALSE, "MyValue"
CloseApp: Indicates whether to close the WinRunner application when the WinRunner test run ends.
Parameters: Up to 15 WinRunner function arguments.
39. How to handle Run-time errors?
Ans: On Error Resume Next causes execution to continue with the statement immediately following the statement that caused the run-time error, or with the statement immediately following the most recent call out of the procedure containing the On Error Resume Next statement. This allows execution to continue despite a run-time error, so you can build the error-handling routine inline within the procedure using the "Err" object:
msgbox "Error no: " & Err.Number & " " & Err.Description & " " & Err.Source & " " & Err.HelpContext
40. How to change the run-time value of a property for an object?
Ans : SetTOProperty changes the property values used to identify an object during the test run. Only properties that are included in the test object description can be set.
41. How to retrieve the property of an object?
Ans : using “GetRoProperty”.
42. How to open any application during Scripting?
Ans : Use the SystemUtil object to open and close applications and processes during a run session. A SystemUtil.Run statement is automatically added to your test when you run an application from the Start menu or the Run dialog box while recording a test.
E.g : SystemUtil.Run “Notepad.exe”
SystemUtil.CloseDescendentProcesses (Closes all the processes opened by QTP)
43. Types of properties that Quick Test learns while recording?
Ans : (a) Mandatory (b) Assistive .
In addition to recording the mandatory and assistive properties specified in the Object Identification dialog box, QuickTest can also record a backup ordinal identifier for each test object. The ordinal identifier assigns the object a numerical value that indicates its order relative to other objects with an otherwise identical description (objects that have the same values for all properties specified in the mandatory and assistive property lists). This ordered value enables QuickTest to create a unique description when the mandatory and assistive properties are not sufficient to do so.
44. What is the extension of script and object repository files?
Ans : Object Repository: .tsr, Script: .mts, Excel: Default.xls
45. How to suppress warnings from the "Test Results" page?
Ans : In the Test Results viewer, go to Tools > Filters and make sure "Warnings" is unchecked.
46. When we try to use test run option “Run from Step”, the browser is not launching automatically why?
Ans : This is default behaviour.
47. How to “Turn Off” QTP results after running a Script?
Ans : Go to Tools > Options > Run tab and deselect "View results when run session ends". This suppresses only the results window; a log is still created and can be viewed manually, and its creation cannot be prevented.
48. How to verify the Cursor focus of a certain field?
Ans : Use the "focus" property with the "GetRoProperty" method.
49. How to make arguments optional in a function?
Ans : This is not possible, as VBScript does not support optional arguments. Instead, you can pass a blank string and assign a default value inside the function when the argument is not required.
50. How to convert a string to an integer?
Ans : CInt(), a conversion function available in VBScript.
51. Inserting a call to an action does not import all the columns of the global sheet in the Data Table. Why?
Ans : Inserting a call to an action imports only the columns used by the called action.
One of the key skills a QA tester should have is the ability to use UNIX. The following are some of the basics you need to know to work with and validate applications that have a UNIX middle tier. This post is re-blogged from the topic "Basic UNIX Command Line (shell) navigation" by Cliff at Feeengineer.org.
File and directory paths in UNIX use the forward slash "/" to separate directory names in a path.
/ “root” directory
/usr directory usr (sub-directory of / “root” directory)
/usr/STRIM100 STRIM100 is a subdirectory of /usr
Moving around the file system:
pwd Show the "present working directory", or current directory.
cd Change current directory to your HOME directory.
cd /usr/STRIM100 Change current directory to /usr/STRIM100.
cd INIT Change current directory to INIT which is a sub-directory of the current directory.
cd .. Change current directory to the parent directory of the current directory.
cd $STRMWORK Change current directory to the directory defined by the environment variable STRMWORK.
cd ~bob Change the current directory to the user bob's home directory (if you have permission).
Listing directory contents:
ls list a directory
ls -l list a directory in long ( detailed ) format
$ ls -l
drwxr-xr-x 4 cliff user 1024 Jun 18 09:40 WAITRON_EARNINGS
-rw-r--r-- 1 cliff user 767392 Jun 6 14:28 scanlib.tar.gz
^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^
| | | | | | | | | | |
| | | | | owner group size date time name
| | | | number of links to file or directory contents
| | | permissions for world
| | permissions for members of group
| permissions for owner of file: r = read, w = write, x = execute -=no permission
type of file: - = normal file, d=directory, l = symbolic link, and others...
ls -a List the current directory including hidden files. Hidden files start with a dot (.)
ls -ld * List all the file and directory names in the current directory using
long format. Without the “d” option, ls would list the contents
of any sub-directory of the current. With the “d” option, ls
just lists them like regular files.
Changing file permissions and attributes
chmod 755 file Changes the permissions of file to be rwx for the owner, and rx for
the group and the world. (7 = rwx = 111 binary. 5 = r-x = 101 binary)
chgrp user file Makes file belong to the group user.
chown cliff file Makes cliff the owner of file.
chown -R cliff dir Makes cliff the owner of dir and everything in its directory tree.
You must be the owner of the file/directory or be root before you can do any of these things.
Moving, renaming, and copying files:
cp file1 file2 copy a file
mv file1 newname move or rename a file
mv file1 ~/AAA/ move file1 into sub-directory AAA in your home directory.
rm file1 [file2 …] remove or delete a file
rm -r dir1 [dir2…] recursively remove a directory and its contents BE CAREFUL!
mkdir dir1 [dir2…] create directories
mkdir -p dirpath create the directory dirpath, including all implied directories in the path.
rmdir dir1 [dir2…] remove an empty directory
Viewing and editing files:
cat filename Dump a file to the screen in ASCII.
more filename Progressively dump a file to the screen: ENTER = one line down
SPACEBAR = page down q=quit
less filename Like more, but you can use Page-Up too. Not on all systems.
vi filename Edit a file using the vi editor. All UNIX systems will have vi in some form.
emacs filename Edit a file using the emacs editor. Not all systems will have emacs.
head filename Show the first few lines of a file.
head -n filename Show the first n lines of a file.
tail filename Show the last few lines of a file.
tail -n filename Show the last n lines of a file.
The behavior of the command line interface will differ slightly depending on the shell program that is being used. Depending on the shell used, some extra behaviors can be quite nifty.
You can find out what shell you are using with the command:
echo $SHELL
Of course you can create a file with a list of shell commands and execute it like a program to perform a task. This is called a shell script. This is in fact the primary purpose of most shells, not the interactive command line behavior.
You can teach your shell to remember things for later using environment variables.
For example under the bash shell:
export CASROOT=/usr/local/CAS3.0 Defines the variable CASROOT with the value /usr/local/CAS3.0.
export LD_LIBRARY_PATH=$CASROOT/Linux/lib Defines the variable LD_LIBRARY_PATH with
the value of CASROOT with /Linux/lib appended, i.e. /usr/local/CAS3.0/Linux/lib.
By prefixing $ to the variable name, you can evaluate it in any command:
cd $CASROOT Changes your present working directory to the value of CASROOT
echo $CASROOT Prints out the value of CASROOT, or /usr/local/CAS3.0
printenv CASROOT Does the same thing in bash and some other shells.
In bash and tcsh (and sometimes other shells), you can use the up-arrow key to access your previous commands, edit them, and re-execute them.
In bash and tcsh (and possibly other shells), you can use the TAB key to complete a partially typed filename. For example, if you have a file called constantine-monks-and-willy-wonka.txt in your directory and want to edit it, you can type 'vi const', hit the TAB key, and the shell will fill in the rest of the name for you (provided the completion is unique).
Bash is the way cool shell. Bash will even complete the names of commands and environment variables. And if there are multiple completions, hitting TAB twice makes bash show you all of them. Bash is the default user shell for most Linux systems.
grep string filename > newfile Redirects the output of the above grep
command to a file ‘newfile’.
grep string filename >> existfile Appends the output of the grep command
to the end of ‘existfile’.
The redirection directives, > and >> can be used on the output of most commands to direct their output to a file.
The pipe symbol “|” is used to direct the output of one command to the input of another.
ls -l | more This command takes the output of the long format directory list command
"ls -l" and pipes it through the more command (also known as a filter).
In this case a very long list of files can be viewed a page at a time.
du -sc * | sort -n | tail
The command "du -sc" lists the sizes of all files and directories in the
current working directory. That is piped through "sort -n" which orders the
output from smallest to largest size. Finally, that output is piped through "tail"
which displays only the last few (which just happen to be the largest) results.
You can use the output of one command as an input to another command in another way, called command substitution. Command substitution is invoked by enclosing the substituted command in backward single quotes (backticks).
cat `find . -name aaa.txt`
which will cat ( dump to the screen ) all the files named aaa.txt that exist in the current directory or in any subdirectory tree.
Searching for strings in files: The grep command
grep string filename prints all the lines in a file that contain the string
Searching for files : The find command
find search_path -name filename
find . -name aaa.txt Finds all the files named aaa.txt in the current directory or
any subdirectory tree.
find / -name vimrc Find all the files named ‘vimrc’ anywhere on the system.
find /usr/local/games -name “*xpilot*”
Find all files whose names contain the string ‘xpilot’ which
exist within the ‘/usr/local/games’ directory tree.
Reading and writing tapes, backups, and archives: The tar command
The tar command stands for “tape archive”. It is the “standard” way to read and write archives (collections of files and whole directory trees).
Often you will find archives of stuff with names like stuff.tar, or stuff.tar.gz. This is stuff in a tar archive, and stuff in a tar archive which has been compressed using the gzip compression program respectively.
Chances are that if someone gives you a tape written on a UNIX system, it will be in tar format, and you will use tar (and your tape drive) to read it.
Likewise, if you want to write a tape to give to someone else, you should probably use tar as well.
tar xv Extracts (x) files from the default tape drive while listing (v = verbose)
the file names to the screen.
tar tv Lists the files from the default tape device without extracting them.
tar cv file1 file2
Write files 'file1' and 'file2' to the default tape device.
tar cvf archive.tar file1 [file2...]
Create a tar archive as a file "archive.tar" containing file1 (and file2, and so on, if specified).
tar xvf archive.tar extract from the archive file
tar cvfz archive.tar.gz dname
Create a gzip compressed tar archive containing everything in the directory
'dname'. This does not work with all versions of tar.
tar xvfz archive.tar.gz
Extract a gzip compressed tar archive. Does not work with all versions of tar.
tar cvfI archive.tar.bz2 dname
Create a bz2 compressed tar archive. Does not work with all versions of tar
File compression: compress, gzip, and bzip2
The standard UNIX compression commands are compress and uncompress. Compressed files have a suffix .Z added to their name.
compress part.igs Creates a compressed file part.igs.Z
uncompress part.igs Uncompresses part.igs from the compressed file part.igs.Z.
Note the .Z is not required.
Another common compression utility is gzip (and gunzip). These are the GNU compress and
uncompress utilities. gzip usually gives better compression than standard compress,
but may not be installed on all systems. The suffix for gzipped files is .gz
gzip part.igs Creates a compressed file part.igs.gz
gunzip part.igs Extracts the original file from part.igs.gz
The bzip2 utility has (in general) even better compression than gzip, but at the cost of longer
times to compress and uncompress the files. It is not as common a utility as gzip, but is
becoming more generally available.
bzip2 part.igs Creates a compressed file part.igs.bz2
bunzip2 part.igs.bz2 Uncompresses the compressed file.
Looking for help: The man and apropos commands
Most commands have a manual page, which gives a sometimes useful, often more or less detailed, sometimes cryptic and unfathomable description of their usage.
man ls Shows the manual page for the ls command
You can search through the man pages using apropos
apropos build Shows a list of all the man pages whose descriptions contain the word "build"
Do a man apropos for detailed help on apropos.
Basics of the vi editor
Opening a file: vi filename
Edit modes: These keys enter editing modes and type in the text
of your document.
i Insert before current cursor position
I Insert at beginning of current line
a Insert (append) after current cursor position
A Append to end of line
r Replace 1 character
R Replace mode
ESC Terminate insertion or overwrite mode
Deletion of text
x Delete single character
dd Delete current line and put in buffer
ndd Delete n lines (n is a number) and put them in buffer
J Attaches the next line to the end of the current line (deletes carriage return).
u Undo last command
cut and paste
yy Yank current line into buffer
nyy Yank n lines into buffer
p Put the contents of the buffer after the current line
P Put the contents of the buffer before the current line
^d Page down
^u Page up
:n Position cursor at line n
:$ Position cursor at end of file
^g Display current line number
h,j,k,l Left, Down, Up, and Right respectively. Your arrow keys should also work
if your keyboard mappings are anywhere near sane.
:n1,n2:s/string1/string2/[g] Substitute string2 for string1 on lines
n1 to n2. If g is included (meaning global),
all instances of string1 on each line
are substituted. If g is not included,
only the first instance per matching line is substituted.
^ matches start of line
. matches any single character
$ matches end of line
These and other “special characters” (like the forward slash) can be “escaped” with \ i.e to match the string “/usr/STRIM100/SOFT” say “\/usr\/STRIM100\/SOFT”
:1,$:s/dog/cat/g Substitute 'cat' for 'dog', every instance
for the entire file - lines 1 to $ (end of file)
:23,25:s/frog/bird/ Substitute 'bird' for 'frog' on lines
23 through 25. Only the first instance
on each line is substituted.
Saving and quitting and other “ex” commands
These commands are all prefixed by pressing colon (:) and then entered in the lower left corner of the window. They are called "ex" commands because they are commands of the ex text editor, the precursor line editor to the screen editor vi. You cannot enter an "ex" command when you are in an edit mode (typing text onto the screen). Press ESC to exit from an editing mode.
:w Write the current file.
:w new.file Write the file to the name ‘new.file’.
:w! existing.file Overwrite an existing file with the file currently being edited.
:wq Write the file and quit.
:q! Quit with no changes.
:e filename Open the file ‘filename’ for editing.
:set number Turns on line numbering
:set nonumber Turns off line numbering
Testing the Extract, Transform and Load (ETL) process for data-warehouse applications has become increasingly popular and significant, as businesses focus on the collection and organization of data for strategic decision-making. The ability to review historical trends and monitor near real-time operational data has become a key competitive advantage.
There is an exponentially increasing cost associated with finding software defects later in the development lifecycle. In data warehousing, this is compounded because of the additional business costs of using incorrect data to make critical business decisions. Given the importance of early detection of software defects, let's first review some general goals of testing an ETL application:
- Data completeness : Ensures that all expected data is loaded.
- Data transformation: Ensures that all data is transformed correctly according to business rules and/or design specifications.
- Data quality: Ensures that the ETL application correctly rejects, substitutes default values, corrects or ignores and reports invalid data.
- Performance and scalability: Ensures that data loads and queries perform within expected time frames and that the technical architecture is scalable.
- Integration testing: Ensures that the ETL process functions well with other upstream and downstream processes.
- User-acceptance testing: Ensures the solution meets users’ current expectations and anticipates their future expectations.
- Regression testing: Ensures existing functionality remains intact each time a new release of code is completed.
One of the most basic tests of data completeness is to verify that all expected data loads into the data warehouse. This includes validating that all records, all fields and the full contents of each field are loaded. Strategies to consider include:
- Comparing record counts between source data, data loaded to the warehouse and rejected records.
- Comparing unique values of key fields between source data and data loaded to the warehouse. This is a valuable technique that points out a variety of possible data errors without doing a full validation on all fields.
- Utilizing a data profiling tool that shows the range and value distributions of fields in a data set. This can be used during testing and in production to compare source and target data sets and point out any data anomalies from source systems that may be missed even when the data movement is correct.
- Populating the full contents of each field to validate that no truncation occurs at any step in the process. For example, if the source data field is a string(30) make sure to test it with 30 characters.
- Testing the boundaries of each field to find any database limitations. For example, for a decimal(3) field include values of -99 and 999, and for date fields include the entire range of dates expected. Depending on the type of database and how it is indexed, it is possible that the range of values the database accepts is too small.
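The count, key, and truncation checks above can be sketched with a small, illustrative script; the record sets and the "id"/"name" fields here are hypothetical stand-ins for real source extracts and warehouse query results:

```python
# Illustrative data-completeness checks: record counts, distinct key
# values, and string-length (truncation) boundaries. All data is
# hypothetical; in practice it would come from source files and
# warehouse queries.

source_rows = [
    {"id": 1, "name": "A" * 30},  # full-width string(30) value
    {"id": 2, "name": "Beta"},
    {"id": 3, "name": "Gamma"},
]
target_rows = [dict(r) for r in source_rows]  # pretend the load succeeded
rejected_rows = []

def check_completeness(source, target, rejected, key="id"):
    issues = []
    # 1. Every source record should be either loaded or rejected.
    if len(source) != len(target) + len(rejected):
        issues.append("record count mismatch")
    # 2. Distinct key values of loaded data should match the source,
    #    minus whatever was rejected.
    expected_keys = {r[key] for r in source} - {r[key] for r in rejected}
    if expected_keys != {r[key] for r in target}:
        issues.append("key set mismatch")
    # 3. The longest loaded value should be as long as the longest
    #    source value; otherwise something was truncated.
    if max(len(r["name"]) for r in source) != max(len(r["name"]) for r in target):
        issues.append("possible truncation of 'name'")
    return issues

print(check_completeness(source_rows, target_rows, rejected_rows))  # → []
```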
Validating that data is transformed correctly based on business rules can be the most complex part of testing an ETL application with significant transformation logic. One typical method is to pick some sample records and “stare and compare” to validate data transformations manually. This can be useful but requires manual testing steps and testers who understand the ETL logic. A combination of automated data profiling and automated data movement validations is a better long-term strategy. Here are some simple automated data movement techniques:
- Create a spreadsheet of scenarios of input data and expected results and validate these with the business customer. This is a good requirements elicitation exercise during design and can also be used during testing.
- Create test data that includes all scenarios. Elicit the help of an ETL developer to automate the process of populating data sets with the scenario spreadsheet to allow for flexibility because scenarios will change.
- Utilize data profiling results to compare range and distribution of values in each field between source and target data.
- Validate correct processing of ETL-generated fields such as surrogate keys.
- Validate that data types in the warehouse are as specified in the design and/or the data model.
- Set up data scenarios that test referential integrity between tables. For example, what happens when the data contains foreign key values not in the parent table?
- Validate parent-to-child relationships in the data. Set up data scenarios that test how orphaned child records are handled.
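The scenario-spreadsheet approach above can be sketched as follows; the transformation rule (normalize a country code, defaulting unknown codes to "XX") is a hypothetical example, and the point is pairing each input with its expected result and reporting any mismatches:

```python
# Scenario-driven transformation check: each scenario pairs an input
# with the expected warehouse value. The country-code rule below is
# hypothetical; real rules would come from the design specification.

def transform_country(code):
    known = {"us", "gb", "de"}
    code = code.strip().lower()
    return code.upper() if code in known else "XX"

scenarios = [
    ("us", "US"),
    (" gb ", "GB"),  # leading/trailing whitespace should be stripped
    ("zz", "XX"),    # unknown code falls back to the default
]

# Collect every scenario whose actual result differs from the expectation.
failures = [(inp, exp, transform_country(inp))
            for inp, exp in scenarios
            if transform_country(inp) != exp]
print(failures)  # an empty list means every scenario passed
```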
For the purposes of this discussion, data quality is defined as “how the ETL system handles data rejection, substitution, correction and notification without modifying data.” To ensure success in testing data quality, include as many data scenarios as possible. Typically, data quality rules are defined during design, for example:
- Reject the record if a certain decimal field has nonnumeric data.
- Substitute null if a certain decimal field has nonnumeric data.
- Validate and correct the state field if necessary based on the ZIP code.
- Compare product code to values in a lookup table, and if there is no match load anyway but report to users.
Depending on the data quality rules of the application being tested, scenarios to test might include null key values, duplicate records in source data and invalid data types in fields (e.g., alphabetic characters in a decimal field). Review the detailed test scenarios with business users and technical designers to ensure that all are on the same page. Data quality rules applied to the data will usually be invisible to the users once the application is in production; users will only see what’s loaded to the database. For this reason, it is important to ensure that what is done with invalid data is reported to the users. These data quality reports present valuable data that sometimes reveals systematic issues with source data. In some cases, it may be beneficial to populate the “before” data in the database for users to view.
Performance and Scalability
As the volume of data in a data warehouse grows, ETL load times can be expected to increase, and performance of queries can be expected to degrade. This can be mitigated by having a solid technical architecture and good ETL design. The aim of the performance testing is to point out any potential weaknesses in the ETL design, such as reading a file multiple times or creating unnecessary intermediate files. The following strategies will help discover performance issues:
- Load the database with peak expected production volumes to ensure that this volume of data can be loaded by the ETL process within the agreed-upon window.
- Compare these ETL loading times to loads performed with a smaller amount of data to anticipate scalability issues. Compare the ETL processing times component by component to point out any areas of weakness.
- Monitor the timing of the reject process and consider how large volumes of rejected data will be handled.
- Perform simple and multiple join queries to validate query performance on large database volumes. Work with business users to develop sample queries and acceptable performance criteria for each query.
Typically, system testing only includes testing within the ETL application. The endpoints for system testing are the input and output of the ETL code being tested. Integration testing shows how the application fits into the overall flow of all upstream and downstream applications. When creating integration test scenarios, consider how the overall process can break and focus on touchpoints between applications rather than within one application. Consider how process failures at each step would be handled and how data would be recovered or deleted if necessary.
Most issues found during integration testing are either data-related or the result of false assumptions about the design of another application. Therefore, it is important to integration test with production-like data. Real production data is ideal, but depending on the contents of the data, there could be privacy or security concerns that require certain fields to be randomized before using it in a test environment. As always, don’t forget the importance of good communication between the testing and design teams of all systems involved. To help bridge this communication gap, gather team members from all systems together to formulate test scenarios and discuss what could go wrong in production. Run the overall process from end to end in the same order and with the same dependencies as in production. Integration testing should be a combined effort and not the responsibility solely of the team testing the ETL application.
The main reason for building a data warehouse application is to make data available to business users. Users know the data best, and their participation in the testing effort is a key component to the success of a data warehouse implementation. User-acceptance testing (UAT) typically focuses on data loaded to the data warehouse and any views that have been created on top of the tables, not the mechanics of how the ETL application works. Consider the following strategies:
- Use data that is either from production or as near to production data as possible. Users typically find issues once they see the “real” data, sometimes leading to design changes.
- Test database views comparing view contents to what is expected. It is important that users sign off and clearly understand how the views are created.
- Plan for the system test team to support users during UAT. The users will likely have questions about how the data is populated and need to understand details of how the ETL works.
- Consider how the users would require the data loaded during UAT and negotiate how often the data will be refreshed.
Regression testing is revalidation of existing functionality with each new release of code. When building test cases, remember that they will likely be executed multiple times as new releases are created due to defect fixes, enhancements or upstream systems changes. Building automation during system testing will make the process of regression testing much smoother. Test cases should be prioritized by risk in order to help determine which need to be rerun for each new release. A simple but effective and efficient strategy to retest basic functionality is to store source data sets and results from successful runs of the code and compare new test results with previous runs. When doing a regression test, it is much quicker to compare results to a previous execution than to do an entire data validation again.
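The store-and-compare strategy above can be sketched by diffing the current run's result set against a stored baseline; the row tuples here are hypothetical:

```python
# Store-and-compare regression sketch: the baseline is the result set
# saved from a previous successful run; the current set comes from the
# new release. Row tuples (id, country, amount) are hypothetical.

baseline = {(1, "US", 12.5), (2, "GB", 7.0), (3, "DE", 3.25)}
current  = {(1, "US", 12.5), (2, "GB", 7.0), (3, "DE", 3.25)}

missing = baseline - current     # rows the new release dropped or changed
unexpected = current - baseline  # rows the new release added or changed

if not missing and not unexpected:
    print("regression check passed")
else:
    print("missing rows:", sorted(missing))
    print("unexpected rows:", sorted(unexpected))
```

In practice the baseline would be persisted to a file or table after each successful run, so each regression cycle is a quick set comparison rather than a full re-validation.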
Taking these considerations into account during the design and testing portions of building a data warehouse will ensure that a quality product is produced and prevent costly mistakes from being discovered in production.