At OpenWorld, I was asked about the proper way to set up synthetic transactions for monitoring applications. It was a good question, and I wanted to document my answer in some sort of whitepaper or technote. So far I still haven't gotten around to writing the formal document, so I am just going to post it on this blog. Perhaps I can evolve this into the actual document later.
As I discussed in the post “Response Time Monitoring - Real User vs. Synthetic”, there is a place for both real user and synthetic monitoring of applications. There are several challenges in using synthetic transactions, however, and these challenges are not unique to Oracle Enterprise Manager. You would have to consider them no matter which tool you use.
First, unless carefully designed, the tests may not be representative of actual end-user activities, reducing the usefulness of the measurements. Therefore, you must be very careful in defining those tests. It is a good idea to sit down with real users and observe how they use the applications. If the application has not yet been launched, work with the developers or, if there is one, the UI interaction designer to define the flow. In addition, work with your business sponsors to understand where the application will be used and the distribution of the user population. You will want to place your synthetic test drivers at the locations where it is most important to measure user experience.
Second, some synthetic transactions are very hard to create and may introduce noise into business data. While it is usually relatively easy to create query-based synthetic transactions, it is much harder to create transactions that create or update data. For example, if synthetic transactions are to test for successful checkouts on an e-commerce website, the tests must be constructed carefully so that the test orders are not mis-categorized as actual orders.
To mitigate these potential problems, you should set up dedicated test account(s) to make it easier to tell whether something running on the application came from real users or the synthetic tests. For operations that involve changing data, determine ways to exclude those data from your reports. If it is possible, look for ways to purge the test data out of the system. It is not always possible or easy to do this, as some business processes do not allow changes to data after a certain point. If you are working with a custom application, consider building a “test mode” into the application to make it easier to roll back changes.
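As a rough illustration, once a dedicated test account exists, excluding its activity from business reports can be as simple as filtering on a known set of account names. This is just a sketch; the account names and order records below are hypothetical, and in practice the filter would live in your reporting queries rather than application code:

```python
# Sketch: excluding synthetic-test orders from a revenue report.
# The account names and order records below are hypothetical examples.

TEST_ACCOUNTS = {"synthetic_monitor_1", "synthetic_monitor_2"}

def real_orders(orders):
    """Return only orders placed by real users, dropping test-account orders."""
    return [o for o in orders if o["account"] not in TEST_ACCOUNTS]

orders = [
    {"account": "alice", "total": 42.50},
    {"account": "synthetic_monitor_1", "total": 9.99},  # synthetic test order
    {"account": "bob", "total": 17.00},
]

report_total = sum(o["total"] for o in real_orders(orders))
# 59.5 -- the test order is excluded from the business figure
```

The same idea carries over to SQL reporting: a `WHERE account NOT IN (...)` clause against the test accounts keeps synthetic orders out of the numbers.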
Third, security and authorization policies might impact the tests as well. You need to make sure that whatever test user account you use has the proper privileges to access the application elements being tested. If authorization policies change, verify that the tests are not affected. The same consideration applies to passwords: if you are required to change passwords due to password aging policies, make sure those changes are reflected in your test setup.
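One way to keep password rotations from silently breaking the tests is to hold the test account's credentials in a single external location, such as environment variables, that every script reads. A minimal sketch, with variable names that are my own invention rather than from any particular tool:

```python
import os

# Sketch: look up the synthetic-test account's credentials from the
# environment instead of hard-coding them in each script, so a password
# rotation requires updating only one place.
# The variable names (SYNTH_TEST_USER/SYNTH_TEST_PASSWORD) are hypothetical.

def get_test_credentials(env=os.environ):
    user = env.get("SYNTH_TEST_USER", "synthetic_monitor")
    password = env["SYNTH_TEST_PASSWORD"]  # fail loudly if the password is missing
    return user, password
```

Failing loudly on a missing password is deliberate: a test that cannot authenticate should raise an obvious error rather than quietly report the application as down.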
Fourth, synthetic tests introduce load on your application, so be judicious when setting up test frequency to avoid overloading it. This also means you may not want to simply reuse all your functional or load test scripts for production monitoring. Those scripts were created for different purposes (verifying functionality and stress testing the application), and they may be overkill when all you need is to confirm that key operations are working.
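Before settling on a polling interval, it can help to sanity-check how much load the tests will add. A back-of-the-envelope sketch, with hypothetical numbers:

```python
# Sketch: estimate the extra load synthetic tests add, to sanity-check the
# polling interval before deploying. All numbers below are hypothetical.

def synthetic_load_per_hour(num_locations, interval_minutes, steps_per_test):
    """Requests per hour generated by the synthetic tests across all locations."""
    tests_per_hour = 60 / interval_minutes
    return num_locations * tests_per_hour * steps_per_test

# 5 test-driver locations, each running an 8-step transaction every 10 minutes:
load = synthetic_load_per_hour(num_locations=5, interval_minutes=10, steps_per_test=8)
# 240 requests/hour -- likely negligible next to real traffic, but worth checking
```

If the estimate looks large relative to real traffic, lengthen the interval or trim the transaction down to the few steps that actually tell you whether the application is healthy.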
Lastly, make sure your monitoring scripts log out of the application at the end of execution. This is especially important for applications that maintain some sort of session state on the mid-tier. If you do not log out, those resources will not be freed in a timely manner, which can impact the scalability of the application. By the same token, be sure to allocate resources to account for test connections on top of the connections made by regular users.
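The logout should happen even when a test step fails partway through; a `try`/`finally` block is the natural way to guarantee that. A sketch, where `AppSession` is a hypothetical stand-in for whatever client your monitoring tool provides:

```python
# Sketch: a synthetic check that always logs out, even if a step fails.
# AppSession is a hypothetical toy client, not a real library.

class AppSession:
    """Toy session object standing in for a real application client."""
    def __init__(self):
        self.logged_in = False

    def login(self, user, password):
        self.logged_in = True  # a real client would authenticate here

    def check_homepage(self):
        if not self.logged_in:
            raise RuntimeError("not authenticated")

    def logout(self):
        self.logged_in = False  # frees the mid-tier session state

def run_synthetic_check(session, user, password):
    session.login(user, password)
    try:
        session.check_homepage()  # the key operation being monitored
        return True
    finally:
        session.logout()  # always release the session, pass or fail
```

Because the `finally` clause runs on both success and failure, a broken test step cannot leak sessions and slowly starve the mid-tier.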