We have developed some UI automation test cases, and we are currently executing them against an application that is still under development. We have observed that, during execution, the majority of scripts fail due to application performance issues (e.g., a window did not load properly, or a window took longer than expected to load).
To work around this, we are planning to detect which step failed during execution and re-execute it, checking whether the window has loaded properly, and if so, to continue execution. However, I have a feeling that this approach may mask some application performance issues, and I am not sure whether we should follow it.
I would like to know whether this can be counted as a best practice.

Related question: http://stackoverflow.com/questions/1916580/how-do-you-write-your-qtp-tests – Albert Gareev Nov 29 '10 at 15:32
4 Answers
If you implement a mechanism for retrying the operation that just failed, you'll keep falling into holes, because sometimes a retry is not possible due to the app being in an unexpected UI state, or similar things.
Usually, each application has an expected and a worst-case response time. Take that time and use it as the maximum timeout in your playback configuration.
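In QTP this can be done through the Setting object (a minimal sketch; the 20-second value is an assumption, substitute your application's worst-case response time):

```vbscript
' A minimal sketch: raise QTP's object-synchronization timeout to the
' application's worst-case response time (assumed 20 seconds here).
Setting("DefaultTimeout") = 20000   ' milliseconds
' For web tests, the browser navigation timeout can be raised similarly:
Setting("WebTimeout") = 20000       ' milliseconds
```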
Always try to predict what should happen when, and script accordingly. Making your script tolerate unexpected UI states (like long delays) turns your testing effort into more of a "passive" automation effort.
As a rather crude measure, you could design a recovery scenario that retries the operation at least once (or for a specific period of time). This could help you get a "stable" playback without finding out which timeouts to use.
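Such a retry could look like the following sketch (the RetryFor helper and the Window("MyApp") repository name are hypothetical, not QTP built-ins):

```vbscript
' Hypothetical helper: retry waiting for an object for a bounded period.
' Returns True on success, False once the time budget is exhausted.
Public Function RetryFor(obj, timeBudgetSec)
    Dim startTime
    startTime = Timer
    RetryFor = False
    Do While Timer - startTime < timeBudgetSec
        If obj.Exist(1) Then   ' Exist() is a standard QTP test-object method
            RetryFor = True
            Exit Function
        End If
    Loop
End Function

' Usage: wait up to 30 seconds for a window before failing the step.
If Not RetryFor(Window("MyApp"), 30) Then
    Reporter.ReportEvent micFail, "Sync", "Window did not appear in time"
End If
```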
But generally: if a window takes too long to show up, it is a defect. If your timeout is too low, it is a bug -- in your test robot's configuration. If it is not defined what "takes too long" means, get the performance requirements.
Thus: Fix accordingly.
That's my 2 (OK -- 3) cents :)

Not the "best" but working practice.
Scripts must be portable. From environment to environment (and we all know, that test environments are much slower than UAT/Pre-prod, or Production) - with minimal / zero effort on maintenance.
Therefore:
- use synchronization
- don't hard-code what can change
- make scripts configurable from outside the QTP IDE (see the sketch after this list)
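For example, environment-specific values such as timeouts can be loaded from an external environment file rather than hard-coded (a sketch; the file path and the SyncTimeoutMs variable name are assumptions):

```vbscript
' Sketch: load environment-specific settings from an external file instead
' of hard-coding them inside the script.
Environment.LoadFromFile "C:\QTP\Config\test_env.xml"
Setting("DefaultTimeout") = CLng(Environment.Value("SyncTimeoutMs"))
```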
With regard to the GUI step automation itself, here's a general heuristic and an acronym to remember it by: SEED NATALI.
The SEED NATALI acronym stands for the following.
- Synchronize till object
- Exists
- Enabled
- Displayed
- verify Number of Arguments
- verify Type of Arguments
- Log test flow
- Investigate any issues that occurred
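The "SEED" part (synchronize till the object Exists, is Enabled, and is Displayed) could be expressed as a single helper function, sketched below; the GetROProperty names "enabled" and "visible" vary by add-in and technology, so treat them as assumptions:

```vbscript
' Hypothetical sketch of "Synchronize till object Exists, Enabled, Displayed".
Public Function SyncTillObject(obj, timeoutSec)
    Dim startTime
    startTime = Timer
    SyncTillObject = False
    Do While Timer - startTime < timeoutSec
        If obj.Exist(1) Then
            ' Property names depend on the add-in/technology under test.
            If CBool(obj.GetROProperty("enabled")) And _
               CBool(obj.GetROProperty("visible")) Then
                SyncTillObject = True
                Exit Function
            End If
        End If
        Wait 1   ' QTP's Wait statement, in seconds
    Loop
End Function
```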
Thank you,
Albert Gareev
http://automation-beyond.com/

If the objective is to perform functional testing, it helps to define a benchmark for the response time of the application in each environment. For example, for one web application the maximum load time may be defined as 20 seconds, while for another application it is 10 seconds. Once you have a clear benchmark, you are in a position to catch the real issues.
Please note that while defining the benchmark of an application there are many criteria (like network bandwidth and server types) that need to be taken into consideration.
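One way to check a step against such a benchmark in QTP is to time it with MercuryTimers and report pass or fail (a sketch; the URL, the repository object names, and the 20-second benchmark are assumptions):

```vbscript
' Sketch: measure a page load and compare it against the agreed benchmark
' (assumed 20 seconds for this web application).
Const BENCHMARK_SEC = 20

MercuryTimers("PageLoad").Start
Browser("MyApp").Navigate "http://myapp.example.com/"   ' illustrative URL
Browser("MyApp").Page("Home").Sync
MercuryTimers("PageLoad").Stop

elapsedSec = MercuryTimers("PageLoad").ElapsedTime / 1000   ' ms -> seconds
If elapsedSec <= BENCHMARK_SEC Then
    Reporter.ReportEvent micPass, "Load time", elapsedSec & "s, within benchmark"
Else
    Reporter.ReportEvent micFail, "Load time", elapsedSec & "s, exceeds benchmark"
End If
```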

If you're adding the retries now for a phase of application development in which performance is not yet stable, make sure to remove them once the application stabilizes.
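One way to make the retries easy to remove later is to gate them behind an externally configured flag (a sketch; the EnableRetries environment variable is an assumption, not a built-in QTP setting):

```vbscript
' Sketch: a retry gated behind an external flag, so retries can be switched
' off (EnableRetries=False) once the application's performance stabilizes.
Public Function ClickWithOptionalRetry(btn)
    On Error Resume Next
    btn.Click
    If Err.Number <> 0 And Environment.Value("EnableRetries") = "True" Then
        Err.Clear
        Wait 2   ' brief pause before the single retry
        btn.Click
    End If
    ClickWithOptionalRetry = (Err.Number = 0)
    On Error GoTo 0
End Function
```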
QTP is sufficient for testing the performance of a desktop or client-server application for a single user; if you want to test performance for many users of a client-server application (e.g. web), you should perhaps consider a load testing tool like LoadRunner.
