Tuesday, December 9, 2008

A Holistic Approach to Siebel CRM Monitoring

What should we monitor on Siebel CRM?

It turns out to be a rather common question, even for some of our long-time customers. In fact, I was on a call with a customer this morning and heard a rather lively discussion amongst its staff on this topic. I probably should write a white paper about this. However, knowing how much work I have to finish before taking some time off for Christmas, it could be a while before I can publish a formal white paper, so let me try to share some of my thoughts in real time. Consider this the first installment of a best practice white paper.

Before I talk about what needs to be monitored, let me define what I mean by monitoring. Monitoring, as defined by the Webster Dictionary, is to watch, to keep track of, or to check, usually for a specific purpose. In a technical sense, it is the set of activities to gather telemetry data from a piece of hardware or software, analyze the data, and provide some sort of notification if exceptions are found. Monitoring is closely related to diagnostics. In fact, the same piece of telemetry can be used for both purposes. One might monitor CPU usage with data gathered in real time, and examine a time series of the same CPU data when diagnosing a performance problem. Personally, I tend to classify monitoring as the set of tasks that lead to the realization of an exception, and diagnostics as the set of tasks that follow to determine the problem's root causes. In ITIL terms, monitoring may lead to the creation of an incident, while diagnostics are carried out in incident and problem management.

Now that I have defined what I mean by monitoring, let's talk about what needs to be monitored.

The obvious things to monitor are CPU, memory, disk space, and I/O (disk, network, etc.). These are the most basic computing resources that Siebel and its underlying database depend on, and they are finite, so it makes sense to monitor them. However, they are not the only things to watch, nor are they necessarily the most important.

One thing that makes monitoring Siebel different from monitoring other technologies is that Siebel is an application. As an application, it interacts with users directly, whereas most users do not deal directly with the database, or the load balancer, or the storage devices, and so on. Consequently, the primary purpose of application monitoring is to make sure that the application is providing the service level that users expect in order to do their jobs.

Many things can impact application service level. In fact, every component in a Siebel environment, including but not limited to the Siebel application server, web server, gateway server, report server, CTI, database, storage devices, servers, network switches, routers, load balancers, and WAN, can impact service level. Therefore, it is important to monitor everything, right? Yes and no.

Traditionally, application monitoring means monitoring all the components, and the health of the application is the aggregate health of all the components. However, this kind of bottom-up approach is becoming less effective, both because of the amount of redundancy now built into production application environments and because many applications are becoming more and more service oriented. For example, with RAID, it is no big deal to lose a disk. With Oracle RAC, you can lose a database server node and the database will keep on running. With Siebel app server clustering, you can lose an app server altogether and the application will continue to function (yes, users logged onto that server would need to log on again). The point I want to make is that while component failures are still bad, they no longer have the catastrophic service level impacts that they used to.

The starting point of Siebel monitoring should be from the top: monitor from the end user perspective by focusing on interactive user sessions and batch jobs, and then move downward to the components. If users have problems accessing application functionality or getting good response times, or if batch jobs are not completing within the targeted batch window, you clearly have a problem with the application, and those problems may be caused by component level outages. On the other hand, if a server goes down but interactive user sessions and batch jobs are working just fine, you have less to worry about. You will still want to find and fix the problem, because the service level of your Siebel environment may drop below target if another server goes down, but the server outage is less urgent than it used to be. In the traditional component-based monitoring approach, a server outage would be a fatal problem that demanded immediate action. In this top-down, end user focused approach, a server outage would most likely be a warning unless there is no redundancy for the component.

Both active and passive approaches should be used for monitoring interactive user workload, and critical alerts should be generated if exceptions occur. I wrote about these two monitoring approaches in two previous postings (1, 2), so you can refer to those articles for more details. For batch workload, the key things to focus on are whether the jobs finish on time and whether errors or warnings are generated while processing the entries. Most of the data that you need to watch is in Siebel log files.
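
As an illustration of the active approach, a synthetic probe can periodically fetch the Siebel login page, time the response, and classify the result for alerting. This is only a sketch: the URL and the response time thresholds below are placeholder assumptions, not Siebel recommendations.

```python
import time
import urllib.request

# Hypothetical values -- substitute your own Siebel web entry point
# and your own response time targets.
LOGIN_URL = "http://siebel.example.com/callcenter_enu/start.swe"
WARN_SECONDS = 3.0
CRIT_SECONDS = 10.0

def classify(elapsed, ok):
    """Turn one probe result into an alert level."""
    if not ok or elapsed > CRIT_SECONDS:
        return "CRITICAL"
    if elapsed > WARN_SECONDS:
        return "WARNING"
    return "OK"

def probe(url=LOGIN_URL, timeout=30):
    """Synthetic end user check: fetch the login page and time it."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return classify(time.time() - start, ok)
```

A scheduler (cron, for example) would run `probe()` every few minutes and raise an alert on anything other than "OK".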

The next set of things to monitor is resources. They are important to monitor because resources tend to be finite; if they run out, processing either stops or is delayed. Keep in mind the relative importance of these resources at the component level, though: a resource outage may not be a critical event in the grand scheme of things. Traditional resources to monitor include CPU, memory, disk space and I/O, but don't forget Siebel-specific artifacts such as task counts, and when monitoring traditional resources, you need to do it in the context of Siebel. In other words, you should monitor not only server level CPU, but also the CPU consumption specific to the Siebel processes.
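
To sketch what "CPU in the context of Siebel" might look like, the snippet below aggregates CPU utilization per process name from `ps` output on a Unix-style server. The process names are typical Siebel server executables, but treat them as an assumption to verify against your own environment.

```python
import subprocess
from collections import defaultdict

# Typical Siebel server executable names -- an assumption; check the
# actual process names running in your own environment.
SIEBEL_PROCESSES = {"siebmtsh", "siebmtshmw", "siebproc", "siebsvc"}

def siebel_cpu(ps_output=None):
    """Sum %CPU per Siebel process name from `ps -eo comm,pcpu` output."""
    if ps_output is None:
        ps_output = subprocess.check_output(
            ["ps", "-eo", "comm,pcpu"], text=True)
    usage = defaultdict(float)
    for line in ps_output.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) >= 2 and parts[0] in SIEBEL_PROCESSES:
            try:
                usage[parts[0]] += float(parts[1])
            except ValueError:
                continue  # ignore rows that do not parse cleanly
    return dict(usage)
```

The same server-level metric (total CPU) thus gets a Siebel-specific breakdown, which is the context that matters for application monitoring.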

Lastly, monitor for exceptions. These can be errors showing up in log files, or summarized Siebel server and component statistics such as the number of level 0 and level 1 errors, the number of component crashes and restarts, or even the number of database connection retries. While a single exception may not be a critical problem, a flood of these errors within a relatively small time window is usually a bad sign, and may point to problems that could cause service level targets to be missed.
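
The "many errors in a small time window" idea can be expressed as a simple sliding window counter. The five-minute window and threshold of 20 below are purely illustrative numbers, not Siebel recommendations.

```python
from collections import deque

class ErrorBurstDetector:
    """Flag a burst when too many errors land inside a sliding time window.

    The window size and threshold defaults are illustrative; tune them
    to your own environment and baseline error rates.
    """
    def __init__(self, window_seconds=300, threshold=20):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self._times = deque()

    def record(self, ts):
        """Record one error at epoch time ts; return True if a burst is on."""
        self._times.append(ts)
        # Drop errors that have aged out of the window.
        while self._times and ts - self._times[0] > self.window_seconds:
            self._times.popleft()
        return len(self._times) >= self.threshold
```

A log tailer would call `record()` with the timestamp of each matching error line and raise a critical alert whenever it returns True.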

What about the other Siebel server and component statistics? For the most part, they are useful for diagnostic and performance tuning purposes, but not very useful for generating alerts. For example, it is not really practical to set an absolute threshold on a metric such as Average Reply Size, which shows the amount of data Siebel returns; what would a good threshold value even be? On the other hand, it is useful to capture the information and see how the value changes before and after a major application change in order to understand performance impacts. Statistics such as this one should be collected and saved into a database so that trend analysis can be performed.
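
One way to sketch that collect-and-trend idea: store each sample in a small database and compare the average before and after a change. The table layout and the metric name are illustrative assumptions, not an actual Siebel schema.

```python
import sqlite3
import statistics

def init_db(conn):
    """Create a simple table for time-stamped statistic samples."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stats (metric TEXT, ts REAL, value REAL)")

def record(conn, metric, ts, value):
    """Save one sample of a statistic."""
    conn.execute("INSERT INTO stats VALUES (?, ?, ?)", (metric, ts, value))

def before_after(conn, metric, change_ts):
    """Mean of a metric before vs. after an application change."""
    rows = conn.execute(
        "SELECT ts, value FROM stats WHERE metric = ?", (metric,)).fetchall()
    before = [v for t, v in rows if t < change_ts]
    after = [v for t, v in rows if t >= change_ts]
    return statistics.mean(before), statistics.mean(after)
```

With samples accumulating over time, the same table supports trend charts as well as simple before-and-after comparisons around a release.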

I have just scratched the surface of what should be monitored. There is more, as some of the more critical components require specific approaches. I guess I had better add the white paper to my to-do list.

Friday, November 21, 2008

Three New Leading Practices of Application Management Webinars for December

Last month, Oracle launched a new webinar series on Leading Practices of Application Management. We are following up the initial set of events with new subjects for December.

Webinars in December include:

The first three are new subjects, while the last two are re-runs that are scheduled at times more suitable to the APAC audience. One of the challenges that we face when scheduling these webinars is the global nature of Oracle's customer base. There is no single time that works. In fact, we have to come up with three time slots - 11 a.m. Pacific for the Americas, 3 p.m. GMT for EMEA, which covers everything from U.K. to Turkey, and 5 a.m. GMT, which should be a suitable time for everyone from India to Australia.

Oracle E-Business Suite Install and Cloning Techniques Deep Dive
December 2, 2008 at 11 a.m. Pacific / 2 p.m. Eastern
Registration: https://strtc.oracle.com/imtapp/app/conf_enrollment.uix?mID=55734151&preLogin=true

Three Steps to Better Performance and User Adoption for Siebel CRM
December 9, 2008 at 10 a.m. Eastern / 3 p.m. GMT
Registration: https://strtc.oracle.com/imtapp/app/conf_enrollment.uix?mID=55734284&preLogin=true

PeopleSoft Service Level Management Best Practices
December 16, 2008 at 11 a.m. Pacific / 2 p.m. Eastern
Registration: https://strtc.oracle.com/imtapp/app/conf_enrollment.uix?mID=55734323&preLogin=true

Business Intelligence Management Pack Overview
December 23, 2008 at 5 a.m. GMT / 10:30 a.m. New Delhi / 1 p.m. Beijing / 4 p.m. Melbourne
Registration: https://strtc.oracle.com/imtapp/app/conf_enrollment.uix?mID=55734387&preLogin=true

Application Management Pack for Oracle E-Business Suite Overview
December 30, 2008 at 5 a.m. GMT / 10:30 a.m. New Delhi / 1 p.m. Beijing / 4 p.m. Melbourne
Registration: https://strtc.oracle.com/imtapp/app/conf_enrollment.uix?mID=55734394&preLogin=true

Thursday, November 13, 2008

People, Process, Technology - ITIL v3

From my previous post, you probably get the idea that I view ITIL favorably. It is a comprehensive framework that provides a lot of good advice, and it provides a common language for IT practitioners.

While it is useful, learning about ITIL can be a challenge in itself, as it is like learning another language, even if parts of that language are already somewhat familiar. Last year, we conducted a survey at OpenWorld that asked several questions about service level management and change management practices. Many people checked the boxes indicating that they had some sort of processes in place. Yet, when we asked whether they were implementing ITIL, the same people stated that they were not. That was rather strange, as service level management and change management are two of the ITIL processes, so either people did not know what ITIL stood for, or they did not think their process implementations were up to the standard that ITIL defined. We think the former reason is more probable.

So what is in the “ITIL v3 language?”

ITIL v3 is made up of five service lifecycle phases, which Wikipedia describes as:

Service Strategy - focuses on the identification of market opportunities for which services could be developed in order to meet a requirement on the part of internal or external customers. The output is a strategy for the design, implementation, maintenance and continual improvement of the service as an organizational capability and a strategic asset. Key areas of this volume are Service Portfolio Management and Financial Management.

Service Design - focuses on the activities that take place in order to develop the strategy into a design document which addresses all aspects of the proposed service, as well as the processes intended to support it. Key areas of this volume are Availability Management, Capacity Management, Continuity Management and Security Management.

Service Transition - focuses on the implementation of the output of the service design activities and the creation of a production service or modification of an existing service. There is an area of overlap between Service Transition and Service Operation. Key areas of this volume are Change Management, Release Management, Configuration Management and Service Knowledge Management.

Service Operation - focuses on the activities required to operate the services and maintain their functionality as defined in the Service Level Agreements with the customers. Key areas of this volume are Incident Management, Problem Management and Request Fulfillment. A new process added to this area is Event Management, which is concerned with normal and exception condition events. Events have been defined into three categories:
- Informational events -- which are logged
- Warning events -- also called alerts, where an event exceeds a specified threshold
- Critical events -- which typically will lead to the generation of Incidents
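
The three categories above map naturally to a simple threshold rule. The sketch below is my own illustration of that mapping, not part of ITIL, and the thresholds would come from your own monitoring policy.

```python
def categorize_event(value, warn_threshold, crit_threshold):
    """Map a measured value to the ITIL v3 event categories.

    Thresholds are illustrative placeholders, not part of ITIL itself.
    """
    if value >= crit_threshold:
        return "critical"       # typically leads to generation of an Incident
    if value >= warn_threshold:
        return "warning"        # an alert: a specified threshold was exceeded
    return "informational"      # simply logged
```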

Continual Service Improvement - focuses on the ability to deliver continual improvement to the quality of the services that the IT organization delivers to the business. Key areas of this volume are Service Reporting, Service Measurement and Service Level Management.
If you are familiar with ITIL v2, you probably recognize that many of these processes are similar to those in v2. I think one way to look at v3 is as an improved superset of v2.

For more details on these processes, you need to get the official books from the Office of Government Commerce, the United Kingdom agency that serves as the official publisher of this methodology.

Thursday, November 6, 2008

People, Process, Technology - Process Frameworks

In an earlier post, I made a point that people, process, and technology are all pre-requisites for achieving success in enterprise application projects. I am going to focus today's post on processes, specifically process frameworks around application lifecycle management.

Application Lifecycle Management is not a new thing, as there exists not just one, but many process frameworks that address its various aspects. Examples include:

- Information Technology Infrastructure Library (ITIL)
- Control Objectives for Information and Related Technology (COBIT)
- Oracle Unified Method (OUM)
- Oracle Application Implementation Method (AIM)
- Siebel Results Roadmap
- PeopleSoft Compass
- Microsoft Operations Framework (MOF)
- Rational Unified Process

You probably notice that Oracle alone has several methodologies, so there is no shortage of processes to follow. By the way, in case you are wondering, the various Oracle methodologies are eventually supposed to be merged into the Oracle Unified Method. That can be the subject of a whole different discussion.

I am not sure if there is such a thing as the perfect methodology. Some of these methodologies are more development focused, while others are more operations focused, but I think there is increasing realization that an application project's complete lifecycle starts from the moment when the project is kicked off and ends when the application is retired, so a comprehensive application lifecycle management process framework needs to address both development as well as operational needs.

Take ITIL as an example. In ITIL v2, much of the focus was on operational management. The two core books of Service Delivery and Service Support addressed the processes of:

- Service Level Management
- Capacity Management
- Availability Management
- Continuity Management
- Financial Management
- Change Management
- Configuration Management
- Release Management
- Incident Management
- Problem Management

Infrastructure Management, Security Management, Asset Management and Application Management were in separate books. Except for the Application Management book, which depicted an application management lifecycle, ITIL v2 did not point out explicitly that many of the Service Management processes really need to start before the application goes into production.

In fact, the Application Management book separated the pre-production activities in the first three phases as Application Development activities, while the activities in the last three phases were classified as Service Management activities. That to me was a bit odd, as it implied that the work to come up with a service level agreement does not take place until the application is ready to go into production. In reality, many of the considerations for defining service level goals should be worked out as part of overall planning in an application project.

In ITIL v3, which was released in May 2007, the lifecycle aspect of application projects took center stage. All the existing ITIL functional processes were re-oriented into five lifecycle phases, which include:

- Service Strategy
- Service Design
- Service Transition
- Service Operation
- Continual Service Improvement

These five phases cover everything from initial planning to the ongoing postmortem analysis needed to drive continual improvements. To me, this makes a lot more sense. If someone needs to manage capacity or ensure availability of the application, activities traditionally seen as operational, the planning aspects really need to be carried out by the operations team up front, while the functionality of the application is being implemented in parallel by the developers. Even something as "operations centric" as monitoring has a pre-production component: one needs to plan what needs to be monitored and instrument the application accordingly.

As the ITIL v3 evolution illustrates, a comprehensive framework that provides holistic recommendations on application lifecycle management best practices needs to cover all phases of the lifecycle and address both development and operational activities. As an added bonus, ITIL also provides a common, vendor-neutral language for talking about various issues. Many terms are so overloaded, especially with vendors (Oracle included) all using them to suit their needs, that it can be difficult to discuss many issues without first re-defining the terminology at the beginning of the discussion. ITIL pretty much eliminates this confusion. Therefore, I am going to make reference to it in my future blog posts.

Monday, October 27, 2008

Oracle Launches Leading Practices of Application Management Webinar Series

Oracle is launching a new webinar series on application management. We have seen that oftentimes, technology is not the only source of challenges for customers. To achieve targeted application service levels cost effectively, one also needs to consider organizational and process issues holistically. In this weekly webinar series, we plan to talk about not only the technologies for managing applications, but more importantly, the leading practices and how various tools can be used to facilitate the implementation of those practices.

In November, we plan to present overviews of our application management packs, the centerpiece of our management tools for packaged applications. After that, we will begin our deep dive into specific topics for each application domain. See the bottom of this post for some example topics.

You may click here to get the summarized list of upcoming webinars, and here to get a more detailed description for each event. These webinars will be recorded and made available for on demand playback.

November 2008 Schedule

For schedule and registration options visit webcasts page on OTN.

Upcoming Topics in December and Beyond
  • Oracle E-Business Suite Install and Cloning Techniques Deep Dive
  • Key Steps that You can Take to Improve the Performance and Availability of Siebel Applications
  • Service Level Management Best Practices for PeopleSoft Enterprise
  • Oracle@Oracle: Managing Oracle’s Internal Implementation of Front and Back Office Applications

Saturday, October 18, 2008

People, Process, Technology

Last week, I went to Europe to present our application management products at a field training event. To start off the presentation, I wanted to make an attention grabbing point in order to set the stage for the rest of the discussion, so I cited cases of enterprise application projects that failed spectacularly. If you wonder what those projects were, let's just say they involved software made by a German company, as well as software made by a U.S. company. I tried to maintain balance in my critique.

In every one of those cases, the project failed not only because of the software involved, but also because of the organizations and their project management. Common problems were ill-defined requirements, lack of testing (functional and load), unrealistic timelines, undersized capacity, lack of operational management discipline, lack of training (for developers, administrators, and end users) and, last but not least, software technical problems (they were software implementation projects after all). The truly annoying thing is that these enterprise application project failures were not isolated incidents. They keep on happening.

In my opinion, a key part of the problem is that while many organizations aspire to achieve the benefits that enterprise applications provide, they underestimate the amount of "homework" that they need to do to realize those benefits. Technology is only one of the components to consider in the homework assignments. The people and process aspects are equally, if not more, important. Together, people, process, and technology form the basis of achieving success in enterprise application projects, and they must be managed properly.

Take CRM applications as an example. There are proven cases where the use of CRM applications, when implemented as part of a business process re-engineering effort to streamline marketing, sales, or service activities, can lead to superior business results. However, those results are predicated on:

  • the proper business process design

  • the proper application design to support the business processes effectively

  • the proper implementation that adheres to the application design

  • the proper functional testing to make sure that the implementation is done according to the specification

  • the proper load testing to make sure that the application provides the required response time and scales to projected usage pattern

  • the proper service level management practices to set operational targets, measure actual service levels, report results, and make improvements so that the application provides the needed service levels required by the business

  • the proper availability management practices and technologies to ensure that business availability requirements are met

  • the proper performance management practices to ensure the necessary response time and batch performance to support various business activities while utilizing computing resources effectively

  • the proper configuration and change management practices to ensure that changes are made with proper impact analysis, control and accountability

  • the proper security management practices to ensure the confidentiality, integrity, and availability of the application system and the information

  • the proper data quality management practices to ensure the usefulness of the information

  • the proper training for developers, administrators and end users so people know how to implement, manage and use the software properly

  • the proper organization of people and effective communications amongst stakeholders

  • the proper alignment of interests of various stakeholders and the organization's goals

  • the proper management of vendor relationships so tasks are carried out properly by the vendors and the proper support is provided; this is especially critical if contractors are used

and last but not least, the proper governance. Almost every one of these factors involves getting the right people to do the right thing at the right time using the right tool, and many of these factors matter whether one uses SaaS, hosts the application with an application hosting provider, or runs the application in-house. This is all homework that organizations need to work on if they want to realize the full benefits of deploying enterprise applications.

Oftentimes, this homework does not get done, or it does not get done properly. One reason I believe organizations don't do the proper homework is that they underestimate the complexity of enterprise applications. A friend of mine, who happens to work for an aerospace firm designing rockets, once commented to me that he couldn't figure out why companies have such a hard time implementing business applications. Running an application to store and retrieve some data from a database didn't seem like rocket science to him.

Implementing an application is definitely not rocket science. After all, an application is not a rocket, so the science involved should be "application science" instead of rocket science. But even though implementing an application is not rocket science, that doesn't mean it is easy. Getting a cruise missile to hit a target hundreds of miles away is hard, but so is maintaining sub-second response times for 10,000 concurrent call center users, or for thousands of students who all try to register for classes at the last minute, or for thousands of users on an eCommerce website all trying to buy the on-sale item at the peak of the Christmas shopping season. Complexities in enterprise applications exist because of the complexities of the problems that they try to solve. While many vendors, Oracle certainly included, are on a quest to simplify the applications, let's just say that for the foreseeable future, implementing an enterprise application will remain more complex than installing a Nintendo Wii at home.

By the way, the complexities and the associated "homework requirements" are not that different from those of other complex systems. Even something as commonplace as the automobile requires regular maintenance (oil changes, tune-ups, proper tire pressure, etc.) to run smoothly and maintain good gas mileage. Implementing applications without doing the proper homework is like running a car without maintenance: sooner or later, the application will stop working, just like the car would.

It's almost noon as I get to this part of the blog, so I think I am going to head out to grab something to eat. Let me resume this discussion on people, process and technology on my next blog entry.

Saturday, September 27, 2008

Notes from Oracle OpenWorld 2008

OpenWorld is over!!! As much as I enjoyed the event, I felt a sense of relief when I stepped out of my last meeting at the Customer Visit Center at Moscone North on Thursday afternoon. One of my colleagues saw me and commented that I looked "dead serious". I told him to leave out the "serious" part. I was just "dead" after being sleep deprived the whole week. I slept on my bus ride back to headquarters.

Friday was a regular workday (no vacation!). A product review meeting for the next release of Enterprise Manager, a planning conference call for field events, a brief chat with my boss, a conversation with my team about the to-do list for this quarter, reviewing the notes that I took at the event, and following up with a long list of people I met kept me quite busy.

Speaking of notes, I took plenty of them during the interesting conversations at the event. Here is a sample.

Help us standardize – One of the pain points that I heard from customers was that operating silos have made it difficult to manage their applications. Different teams use different tools, which don't work well with each other. Different teams have developed different practices, which in some cases conflict with each other. Common tools and common best practice recommendations from Oracle are highly desirable.

Help us manage changes – I heard this over and over throughout the week, whether in the discussion at Sunday's OAUG Change Management SIG, conversations at Demoground, or at Thursday's application management roundtable. Change is hard even with tools such as iSetup and ADM, as they do not yet cover the complete change workflow. Another dimension of change management is access control: different members of the team need different authorizations for changing different parts of the applications, and our software needs to be smart enough to enforce the separation.

Help us figure out the proper way to use your software – One particular example was whether people should set up a single Enterprise Manager Grid Control environment or multiple environments. Our default recommendation is a single instance, but there are technical as well as organizational factors that might make it better to have multiple instances. I will write more about this in a future article.

Speaking of organizational factors, I believe that it takes more than just software to solve many of the problems that were discussed throughout the week. Ultimately, it takes a combination of people, process and technology to get things done. People refers to all of us working at Oracle, at partner companies and at customer organizations to overcome the various application management challenges, and we need to keep talking amongst ourselves to exchange ideas. Do the issues above sound familiar? Submit your comments either on my blog or on the discussion forums.

We have created several forums on mix.oracle.com to carry out our conversations. These groups include:

Oracle E-Business Suite Lifecycle Management

Siebel Install / Manage / Upgrade


If you are an architect or senior IT manager and wish to talk about more strategic or policy issues, here are the groups for you.

E-Business Suite Architects

Siebel Architects

PeopleSoft Architects

Friday, September 19, 2008

Share Your Application Management Ideas with Oracle at Oracle Mix

When I logged onto Oracle.com this morning, I was greeted by this page. This is Oracle Mix, a new Oracle platform for connecting with the Oracle community, networking, sharing ideas, and getting answers.

I think this is a cool idea. As a product manager, one of the most important things that I have to do is gain insight into our customers' needs. However, unlike my peers on the consumer side of the technology business, especially those who manage web-based products, I find it a lot more difficult to conduct broad-based customer research. Traditional mechanisms such as customer advisory boards, while important, can be rather slow. Oracle Mix could be a great additional tool for understanding our customers' needs if it is used properly.

Here are a couple suggestions to make this an effective tool for all of us:
1. Participate. This tool is not going to work unless we all use it.
2. Give meaningful titles, use the proper tags, and associate your post with the right product when asking questions and making suggestions. This helps channel your postings to the right people.
3. When proposing improvements, state the underlying problems that you need to solve.

#3 is especially important. We sometimes get enhancement requests that basically tell us to "add a knob" here or "take out something" there. While the requests may sound very specific, it can actually be very hard to use the information. Different customers tend to have different ideas for solving the same problem. While it is good to hear specific recommendations, following them blindly could lead to piecemeal product changes that undermine the integrity of the product. Therefore, it is much better to find out the underlying problems, so that we can learn the rationale behind the requests and come up with solutions that address the root causes.

Click here to access Oracle Mix.

Friday, September 12, 2008

Oracle Delivers Oracle Application Testing Suite

Lots of news is coming out of Oracle on the application lifecycle management front. In addition to announcing the ClearApp acquisition on 9/2 and Oracle IT Service Management Suite's PinkVERIFY certification on 9/9, Oracle also announced the availability of Oracle Application Testing Suite (ATS). With ATS, Oracle now provides tools that cover the complete application lifecycle, from development to test to production management.

Oracle Application Testing Suite is the first release of the product since it was acquired from Empirix earlier this year. Amongst the many improvements is an open and integrated scripting platform for both load and functional testing. This is one of the product's key strengths, as some competing products require users to build separate scripts for functional and load tests, creating unnecessary rework.

In a way, ATS is not a completely new product to Oracle customers. Empirix was one of Siebel's test automation tool partners, and released one of the first tools designed specifically for managing Siebel. The latest ATS release continues this effort by providing functional and load test accelerators for Siebel. Along with Application Management Pack for Siebel, ATS is part of Oracle's complete solution for managing Siebel application lifecycle.

More information about ATS can be found here.

You may download a trial copy of ATS from Oracle Technology Network.

Wednesday, September 10, 2008

Oracle IT Service Management Suite Achieves PinkVERIFY Certification for ITIL Compatibility

At ItSMF Fusion 2008 conference, Oracle announced that its IT Service Management Suite has been certified as ITIL compatible through Pink Elephant's PinkVERIFY IT Service Management certification program. The certification is achieved for six core ITIL processes: Incident, Problem, Change, Configuration, Release and Service Level Management. Oracle's IT Service Management Suite is made up of Oracle Enterprise Manager, Siebel Helpdesk and Oracle Business Intelligence Enterprise Edition.

The call to adopt more rigorous business disciplines in running IT has grown louder each year, a trend that can be seen in the increasing adoption of ITIL practices. In a way, running IT like a business and making IT decisions according to business needs should really be a no-brainer. Conceptually, IT management shares many common problems with other management domains, from project management to finance to operations.

Indeed, we have seen customers applying many of the Oracle technologies that they use to run various business functions to manage IT. Standardizing on the same technologies helps simplify the IT environment, leading to better economies of scale and cost savings. Furthermore, it is easier to integrate IT management processes with core business processes when the same software is used.

More information about Oracle's IT Service Management Suite can be found here.

More information about the certification can be found here.

Tuesday, September 2, 2008

Oracle Buys ClearApp

Following the acquisitions of Moniforce, Auptuma and the Empirix eTest Suite product line, Oracle announced today the acquisition of ClearApp, a supplier of application management software for composite applications. This acquisition, focused on SOA application management, complements Oracle Enterprise Manager in creating a comprehensive application management solution to help Oracle customers achieve enhanced service levels, reduced system downtime and improved return on SOA investments.

More information about this latest acquisition can be found at www.oracle.com/ClearApp.

Tuesday, August 26, 2008

Oracle OpenWorld 2008 Application Management Preview

The Olympics is over. The political conventions are over. What is the next mega-event? Oracle OpenWorld of course. In two weeks, we will once again pack Downtown San Francisco with tens of thousands of Oracle customers, partners and employees to talk about the latest developments in enterprise software.

We will feature our strongest-ever lineup of breakout sessions dedicated to application management. Here is a preview.

E-Business Suite Management

Customer Case Study: Centrally Managing Your Oracle E-Business Suite, Using Oracle Application Management Pack
Benjamin Cabanas, General Electric, Infrastructure; Biju Mohan, Oracle

Oracle E-Business Suite Release 12: Install and Cloning Techniques Deep Dive
Max Arderius, Oracle; Biju Mohan, Oracle

Managing Oracle E-Business Suite Customizations and Patches, Using Oracle Enterprise Manager
Uma Prabhala, Oracle

Oracle E-Business Suite Management: Performance Optimization Best Practices Using Oracle Enterprise Manager
Chung Wu, Oracle

Improve Performance of Your Oracle E-Business Suite and Siebel Applications with Oracle's Real User Experience Insight
Henk de Koning, Oracle; Jurgen de Leijer, Oracle

Siebel Management

Improve Performance of Your Oracle E-Business Suite and Siebel Applications with Oracle's Real User Experience Insight
Henk de Koning, Oracle; Jurgen de Leijer, Oracle

Siebel Application Management: Three Steps to Better Performance and Better User Adoption
Chung Wu, Oracle

Performance Optimization Best Practices with Siebel Application Response Measurement and Oracle Enterprise Manager
Sandra Cheevers, Oracle; Chung Wu, Oracle

The Oracle Stack Promise for Siebel Customer Relationship Management: MAA, Oracle Clusterware, and Oracle Real Application Clusters
Richard Exley, Oracle; James Qiu, Oracle

PeopleSoft Management

Leveraging Oracle Enterprise Manager to Manage and Monitor Your PeopleSoft Applications
Scott Schafer, Oracle

Oracle Business Intelligence Management

Oracle Business Intelligence Management: Achieving High Performance and Availability with Oracle Enterprise Manager
Amjad Afanah, Oracle; Vishal Doshi, Fiberlink Communications

General Application Management Topics

Go Beyond Web Analytics: Build Business Intelligence with Oracle Real User Experience Insight
Rajiv Taori, Oracle

How Real User Monitoring Can Improve Application Performance: Go Beyond Web Analytics and Systems Monitoring
Michel Knops, Measureworks; Mark McGill, Oracle; Jurgen de Leijer, Oracle

Application Transaction Management with Oracle Enterprise Manager: The Key to End-to-End Monitoring
Virag Saksena, Oracle; Rajiv Taori, Oracle

Application Diagnostics for DBAs: Visibility into Your Application That the Middle-Tier Administrator Cannot Provide You
Shiraz Kanga, Oracle; Rajagopal Marripalli, Oracle

Optimizing Application Performance: Application Testing Suite to the Rescue
Matthew Demeusy, Oracle; Joe Fernandes, Oracle

Application Upgrade Secrets: Avoid Surprises While Making Database Changes
Jagan Athreya, Oracle; Sandra Cheevers, Oracle; Ravi Pattabhi, Oracle

Managing Your Service Bus with Oracle Enterprise Manager
Nadu Bharadwaj, Oracle; Arvind Maheshwari, Oracle

Tips and Tricks for Managing Your Oracle Forms and Web Applications with Oracle Enterprise Manager
Nadu Bharadwaj, Oracle; Richard Mertz, City of Evanston, Ill; Daniel Brint, SUNY

Active User Monitoring: Measure Your User’s Experience Without Instrumenting Your Applications
Rajagopal Marripalli, Oracle; Richard Mertz, City of Evanston, Ill

More details on these sessions can be found on this website: http://www28.cplan.com/cc208/catalog.jsp

See you in San Francisco!

Monday, August 18, 2008

Application Management Pack for Oracle E-Business Suite 10gR4 is Available

Application Management Pack for Oracle E-Business Suite 10gR4 (version 2.0.2) is now available. This release runs on Oracle Enterprise Manager Grid Control and supports the following operating system platforms:

  • Linux x86 and x86-64 (the same patch is applicable to both Linux-based platforms)

  • Solaris SPARC

  • AIX5L-based

  • HP-UX Itanium

  • Windows (Note: The 2.0 and 2.0.2 versions of the pack are supported on the Windows platform only with Oracle Enterprise Manager 10g Release 4.)

The pack can be used to manage the following E-Business Suite releases:

  • Release 11.5.10 CU2 ATG PF RUP4.H

  • Release 12.0

This version of the pack can be downloaded immediately through Oracle Metalink, as patch 6809246. It is an OPatch rollup update on top of the pack’s earlier 10gR3 release (version 2.0). This release of the pack is the first version that is certified to run on Oracle Enterprise Manager Grid Control 10gR4 and contains a cumulative collection of bug fixes.

Wednesday, July 30, 2008

Additional platforms supported for Enterprise Manager 10gR4 agents

Windows x64, Linux x64 and Linux Itanium based Enterprise Manager Grid Control 10gR4 agents are now available. To get them, download the Mass Agent Deployment package from OTN.


Friday, July 18, 2008

New Application Management Pack for Siebel Customer Self Study Training Available

Application Management Pack for Siebel’s eStudy is now available. This self-paced online course is a tutorial for deploying, configuring and using our Siebel pack. The course assumes familiarity with the base EM capabilities taught in the five-day instructor-led Enterprise Manager Grid Control training course, and complements other training that covers EM features such as Service Level Management and Configuration Management in greater depth. You may access this training at the following URL.


Thursday, July 10, 2008

What can I do with this SARM data?

I woke up this morning to find a message in my inbox from an ITtoolbox subscriber asking about SARM. Wow, I thought, someone is trying to use my baby. So I replied to the message over breakfast. The information may be interesting to others who want to know more about SARM, so I am re-posting it here.


I was the original product manager for SARM. Let me provide some explanations.

Back in Siebel 6, it was easy to figure out performance problems, as the Siebel client ran as a Win32 program on a PC and connected directly to the database. Each user connected via a separate database session, and all the business logic ran on the client PC.

Things got a bit more complicated with Siebel 7. With the web-based interface, Siebel 7 enabled organizations to be more agile, as they could revise their applications more easily to reflect changing business processes without pushing software out to thousands of users. However, the architecture also placed more demand on the mid-tier servers. In addition, a single transaction request (query, save, navigation, etc.) had to travel from web browser to web server, from web server to Siebel App Server, and from Siebel App Server to database server. Connections to the database could be shared via database connection pooling. Tracing a transaction from the user to the database in order to identify the root cause of a performance bottleneck became very difficult, as it was hard to tell which user initiated which database request, and there was no way to figure out what the mid-tier was doing. In Siebel's own IT department, every time a performance problem occurred, the IT staff would summon a couple of engineers from our product development organization to figure it out. This was very expensive in terms of engineering productivity, and most customers did not have this option.

In Siebel 7.5, I asked our IT operations director what we could give his staff to make life easier. The answer was a way to see what goes on inside the Siebel app server environment. SARM was born as a result.

SARM is made up of three parts. The first is the instrumentation framework. The second is the collection of instrumentation. The third is the tool for analyzing the data. SARM instrumentations are strategically placed in various parts of the Siebel software stack.

When a transaction request enters the Siebel server layer, the first timer goes off. As the request makes its way down the stack (think of it as a call graph), additional timers go off. These instrumentation points capture timing information, CPU/memory utilization, and contextual information about that instrumentation point. The data for each instrumentation point makes up a single SARM entry.

Each SARM entry includes the identification of the instrumentation point (AreaDesc). For example, the workflow engine would be one of those instrumentation points. For workflow, the application string field also stores the name of the workflow, so that you can tell not only that the workflow engine was invoked, but also which particular workflow was run and the amount of time spent running it. User IDs, business component names, view names and applet names are also stored in the entries of the respective areas.

Let's say you have a transaction request that ran for 15 seconds. You want to find out the breakdown of the time spent. Using the area description and the text string, you can find out how much time is spent at each Siebel layer, and find out the exact workflow, business service or script that is causing the problem. You would know who initiated the transaction request since the user id is recorded, and which part of the application (view and applet name) the request came from.
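The kind of breakdown described above is easy to sketch once SARM data has been exported to CSV, as the command-line tool mentioned below does. The column names used here (AreaDesc, DurationMs) are illustrative, not the exact export schema:

```python
import csv
from collections import defaultdict

def time_by_area(csv_path):
    """Aggregate total response time per instrumentation area from a
    SARM-style CSV export. Column names are hypothetical stand-ins."""
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["AreaDesc"]] += float(row["DurationMs"])
    # Largest contributor first: the layer to investigate first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Sorting by total time immediately surfaces which layer (workflow, script, database, and so on) consumed most of those 15 seconds.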

It took a couple of releases to get SARM fully done. 7.5 was the first release. Naturally, as with any version 1.0 technology, there were some shortcomings, and there were not many instrumentation points to capture data. In 7.7, the technology became much more mature, with better optimization to minimize overhead and broader instrumentation coverage, after a companywide effort in which we asked every single development team to instrument their code with SARM. 7.8 was an application functional release, so SARM in 7.8 was pretty much the same as in 7.7. We made further improvements in 8.0 to give administrators more ways to fine-tune SARM data collection.

In addition, one thing that had been missing from SARM was a good graphical tool for analyzing all the rich information. The command-line tool, which was intended to convert the binary SARM data to CSV so that people could import it into a spreadsheet, just didn't cut it. The reason why we didn't come up with a graphical tool initially was resource constraints. We needed to focus our energy on making sure we could collect good SARM data first, and do it in an efficient way. Otherwise, the best analytical tool in the world wouldn't help.

The lack of a good graphical tool also changed in the Siebel 8 timeframe. One thing that is cool about being part of Oracle is that Oracle has a lot more people. We actually have a whole division focused on building management tools. So after we became part of Oracle, we shipped a graphical tool (Siebel Diagnostic Tool) as part of Application Management Pack for Siebel. In fact, we have done more for the management tooling of Siebel in the past year than in the 10+ years when Siebel was an independent company. The tool is now fully integrated with Application Management Pack for Siebel, which runs as part of Oracle Enterprise Manager 10gR4.

I wrote about SARM on my blog in May, and will probably write about it more in the coming months, so check it out for more discussion. I will also be presenting a session on SARM at this year's OpenWorld in September so stop by if you are going to attend the conference. http://appmanagementblog.blogspot.com/2008/05/demystifying-siebel-application.html

Visit this site if you want to learn more about Application Management Pack for Siebel. http://www.oracle.com/technology/products/oem/prod_focus/app_mgmt.html

Friday, June 6, 2008

Building Application Management into Your Capacity Plan

One of the most common questions I get asked when showing Oracle Enterprise Manager to customers is how much processing overhead the tool introduces to the environment. It is a valid concern. After all, you don't want a management tool that is supposed to help prevent performance problems to end up introducing new performance problems of its own. However, treating management as purely “overhead” may not be a productive way to think about the problem either.

First, let me state that it does take resources to run management tools. It takes CPU cycles, memory, disk space and I/O bandwidth to collect and process information about the health of an application environment. Since all these resources cost money, it means that it costs money to use management tools. But cost isn't the problem. The question is whether you recoup the costs through the benefits that the tools deliver. In other words, it is about return on investment.

What would be the alternative to not incurring the management costs? You would have to try managing the systems manually, with no data. You would have to sit in front of the terminal yourself and watch everything to make sure things are working. If something breaks, you would have to take many guesses to try to fix the problem, which would probably take you much longer, forcing you to stay late at work and miss other important things in life. Meanwhile, your application's availability goes down, your end users' productivity is impacted, and your organization might even lose business because of it. Would you rather incur these costs than dedicate the 5% or even 10% of CPU cycles it takes to run your shop properly?

So as you do capacity planning for deploying or upgrading an application, build in a capacity budget for management as well. You'll be glad you did.
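If you take this advice, the arithmetic is simple: size for peak application load, add the expected management overhead (the 5% to 10% discussed above), and check the total against your utilization ceiling. A hedged sketch with illustrative numbers:

```python
def capacity_with_management(peak_app_cpu_pct, mgmt_overhead_pct=5.0,
                             target_util_pct=80.0):
    """Return required capacity as a multiple of current capacity, so
    that peak application load plus management overhead stays under the
    target utilization ceiling. All percentages are illustrative."""
    total_demand = peak_app_cpu_pct + mgmt_overhead_pct
    return total_demand / target_util_pct

# Example: 70% peak app CPU + 5% management overhead against an 80%
# ceiling fits in ~0.94x current capacity; at 10% overhead it needs 1.0x.
```

The point is not the exact numbers but that the management budget appears as an explicit line item in the plan rather than being discovered as "overhead" after go-live.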

Friday, May 30, 2008

Oracle Enterprise Manager Grid Control 10gR4 is Available on HP/UX PA-RISC

EM Grid Control 10gR4 is available on HP/UX PA-RISC. With this release, 10gR4 is now available for Linux, Linux x86-64, Windows, Solaris, AIX, HP/UX Itanium and HP/UX PA-RISC. You may download them from OTN or ARU.


Thursday, May 22, 2008

Lessons from Extreme Data Center Makeover

Several months ago, I blogged about our OpenWorld hands-on lab setup project, which I referred to as “Extreme Makeover, Data Center Edition”, as we had to set up an environment with a large amount of complex enterprise-class software in a very short amount of time. It was both a fun and stressful project, and I learned a couple of lessons from the exercise. I planned to blog about the lessons, but kept postponing as other topics, from the Gartner conference to the EM release to Collaborate, took precedence. Well, here is a belated follow-up to the original post.

Lesson #1 – No project is too small when it comes to applying good IT practices

“How hard could setting up a demo environment be?” - that was the initial thought that came across my mind. However, it became apparent very soon that it was a serious project with all the attributes of a real deployment. For example, after we came back from lunch on day 2 of setup, we found that one of the servers could no longer speak to the network. That was weird, as it had worked just before lunch. After checking the network cable connection to the machine and agonizing over all the network configuration parameters on the box, we discovered that it was actually a change someone made on the network switch during lunch that caused the problem. We wasted half the afternoon troubleshooting. A little bit of discipline in the form of configuration management would have prevented that problem.

Lesson #2 – Be very careful about making assumptions

When we specified the hardware for our demo environment, we put down the usual requirements about CPU, memory, disk space, etc. What we did not specify, and assumed we would get, were DVD drives on the server machines. We got CD-ROM drives instead, and we lost at least a day from this simple omission.

Lesson #3 – Expect the unexpected

A 5.6 earthquake hit the San Francisco Bay Area one night while we were about to system-test our client-server connection. That disrupted our work for the night, as we didn't feel safe working in a mid-rise building not knowing whether there would be more shaking to come. I am not sure how to plan for something like this, but almost all projects run into “unforeseen” difficulties that are very hard to predict. In our case, there was very little we could have done other than work longer hours the next day to make up for the lost time. If we had had more time to work with at the beginning, we would have built extra buffer time into the schedule.

The examples above might seem trivial, but they introduced days of delay when their cumulative effects were added together. We managed to pull the project through thanks to hard work by the whole team, and we will keep these lessons in mind when we set up for the next OpenWorld or other similar projects.

Thursday, May 15, 2008

Demystifying Siebel Application Response Measurement

Siebel Application Response Measurement (SARM) is a performance-tracing framework that was originally introduced in Siebel 7.5. Even though the technology has existed for almost five years, it seems there are still some misconceptions about its design and intended use. Since I was the original product manager for SARM, I guess I can try to offer some explanations.

Myth #1 – SARM is Siebel ARM

Back when Siebel was an independent company, our strategy for providing Siebel management tools was to instrument the Siebel platform and work with 3rd-party ISVs to adapt their tools to work with Siebel. As part of this strategy, we thought it would be a good thing to try to comply with industry standards such as Application Response Measurement (ARM), so that tools supporting ARM could be used to monitor and diagnose Siebel performance. Therefore, it is possible to consume SARM data using an ARM-compliant tool.

However, strictly speaking, SARM is not an implementation of ARM. The problem with standards is that they often have to sacrifice capabilities for compatibility and provide the lowest common denominator solution. We found that ARM, specifically ARM 2.0, was not rich enough to capture Siebel-specific performance data. As a result, we built SARM to capture a superset of the information, and pass a subset of that to the ARM API. Specifically, contextual information such as the names of the Siebel UI views, business components, workflow processes and scripts are not passed through the ARM API, which would make it a bit difficult to tell what goes on in processing transaction requests.

In other words, to fully take advantage of the rich information captured by SARM, you need a tool that processes the native SARM data stream.
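The superset/subset relationship can be pictured as a simple projection: the native entry carries Siebel context, while only generic timing fields fit through an ARM 2.0-style API. The field names below are hypothetical, for illustration only:

```python
# Hypothetical field names; the real SARM binary format differs.
native_entry = {
    "area": "Workflow Engine",
    "workflow": "Submit Order",       # Siebel context: dropped by ARM
    "view": "Order Entry View",       # Siebel context: dropped by ARM
    "user": "JSMITH",                 # Siebel context: dropped by ARM
    "duration_ms": 1250,
    "cpu_ms": 310,
}

ARM_FIELDS = ("area", "duration_ms")  # generic timing fields only

def to_arm(entry):
    """Project a rich native SARM-style entry down to the generic
    subset an ARM 2.0-style API can carry; Siebel context is lost."""
    return {k: entry[k] for k in ARM_FIELDS}
```

This is why an ARM-compliant tool can tell you that a transaction was slow, but only a native SARM consumer can tell you which workflow, view or user was involved.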

Myth #2 – SARM has high overhead

The driver behind SARM was the need for a way to identify transaction request performance bottlenecks, especially for interactive user workloads. It used to be rather straightforward to do this in the Siebel 2000 (version 6) days, as Siebel applications were deployed in 2-tier client/server topologies with direct connections from clients to the database. In Siebel 7, the topology became truly multi-tiered, and with database connection pooling, there was no deterministic way to tie a database transaction to the user request. SARM was intended to be the remedy, providing a way to trace transaction requests throughout the Siebel mid-tier.

As a performance management tool, the last thing we needed was for SARM to introduce more performance problems. Consequently, we were obsessed with squeezing every last bit of performance out of the tool and making its overhead as low as possible. This was achieved through several means:
- Record timing information while doing as little secondary processing as possible in real-time
- Use highly optimized buffered I/O to persist performance data
- Provide various throttling mechanisms to control the amount of SARM data captured

Prior to releasing SARM, we ran SARM through numerous load-testing scenarios. For example, in the Call Center 1 load tests, which simulated hundreds of simultaneous users running against a single Siebel app server, we observed SARM overhead to be less than 3%, well within our product performance requirement. We thought this was a reasonable cost to realize the benefit of having good management data for optimizing the application.
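The overhead figure quoted above is just the relative change in mean response time between two otherwise identical load-test runs, one with instrumentation off and one with it on. A minimal sketch:

```python
def overhead_pct(baseline_ms, instrumented_ms):
    """Relative overhead of instrumentation, computed from the mean
    response times of two otherwise identical load-test runs."""
    base = sum(baseline_ms) / len(baseline_ms)
    inst = sum(instrumented_ms) / len(instrumented_ms)
    return 100.0 * (inst - base) / base

# e.g. a 500 ms baseline against 512 ms with instrumentation on
# works out to a 2.4% overhead, under the 3% figure quoted above.
```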

Myth #3 – SARM is only for production diagnostics

While a lot of the initial discussions about SARM were for performance diagnostics, we have always intended SARM to be a framework that supports the full set of application performance lifecycle activities.

SARM really is just a set of timers that measure the timing of transaction requests, as well as the timing at various points in the “call graph” of the Siebel software stack that processes those requests. SARM doesn’t care whether the timing came from actual user operations while the application is live or from activities generated by pre-production load tests. In production, the data it captures can be used for day-to-day monitoring and diagnostics, as well as longer-range capacity management.

Friday, May 9, 2008

Steps to Fusion - Centralize the Management of Your Applications on Oracle Enterprise Manager

eWeek recently reviewed Oracle Enterprise Manager Grid Control 10gR4. In the article, Cameron Sturdevant referred to Enterprise Manager as “a high-powered ecosystem management platform that uses its home field advantage in Oracle shops to provide administrators with top-notch tools”. He went on to say that he recommends “administrators [to] consider a management strategy that brings Enterprise Manager in over time to take care of Oracle databases, application servers, web applications from Oracle and its fleet of acquired products from PeopleSoft, Siebel, JD Edwards and others.”

Wow, “home field advantage” . . . I never thought of this metaphor when we planned our products, but it is the right idea. It is safe to say that Oracle, more than any other vendor, cares about whether customers can properly manage Oracle products, be they database, middleware or applications. They are all parts of our home field. It is also safe to say that Oracle, more than any other vendor, possesses the domain expertise for managing Oracle products. We built all this software in the first place, so we know really well how it works, and we can build tools for managing it properly.

Oracle has made quite a bit of progress in solidifying its application management portfolio in the past 18 months. This started with the release of three application management packs designed specifically for managing Oracle E-Business Suite, PeopleSoft Enterprise and Siebel. It continued with the introduction of Application Diagnostics for Java, the acquisition of Moniforce for end user monitoring, the acquisition of the e-Test product suite from Empirix for application functional and load testing, and the release of Enterprise Manager Grid Control 10gR4, which included, amongst many things, improved service level management, SOA management, data masking, and a new management pack for Oracle Business Intelligence applications.

So what does all this development mean if you are an Oracle application customer? It means you now have a fantastic new set of options to consider when acquiring tools for managing your applications, as these Enterprise Manager tools cover everything from configuration management to monitoring to diagnostics to pre-production testing, and they are designed specifically for managing Oracle application products. It also means that you have one fewer vendor to deal with by choosing these tools from Oracle.

In addition, you would have taken the first step to Fusion from an IT operations management perspective by centralizing the management of your applications on Oracle Enterprise Manager today. Oracle Enterprise Manager is the tool for managing Fusion Middleware, the foundation for Fusion Applications. Since all these technologies may be managed through Oracle Enterprise Manager, you may evolve your IT management setup incrementally as you modernize your application environments through products such as WebCenter, Business Intelligence and Oracle Application Integration Architecture.

Let's consider the following example with Siebel CRM for front office and Oracle E-Business Suite for back office. At the beginning, you manage these two applications separately using the bundled tools.

Step #1 is to connect these applications to Oracle Enterprise Manager Grid Control using Application Management Pack for Siebel and Application Management Pack for Oracle E-Business Suite, respectively. You gain advanced monitoring, centralized event management, configuration management, transaction diagnostics for Siebel, advanced cloning automation for E-Business Suite, end user monitoring and service level management.

For step #2, you decide to connect Siebel with E-Business Suite so that orders captured in Siebel can be submitted to E-Business Suite for fulfillment. You deploy Oracle Process Integration Pack for Order-to-Cash, running the integration processes on Oracle SOA Suite. For management, you activate the SOA Management Pack on the same Oracle Enterprise Manager Grid Control instance. You may now manage both Siebel and E-Business Suite, along with the integration between the two applications, as a single logical system.

For step #3, you want to expose information to your users in a unified portal using Oracle WebCenter, and provide business insights using data from both front and back office systems using Oracle Business Intelligence. As you deploy these products, activate Oracle Middleware Management Packs and Oracle Business Intelligence Management Pack on the same Oracle Enterprise Manager Grid Control environment, and manage these Fusion middleware components along with Siebel, E-Business Suite, and SOA Suite as a single logical system.

Fusion Applications arrive, and in step #4, you decide that you want to start uptaking these new functionalities and run the new applications along with your existing Siebel and E-Business Suite applications. No problem, just add Fusion Applications to the same Oracle Enterprise Manager Grid Control, and you may then use it to manage your Siebel, E-Business Suite, SOA Suite, WebCenter, Business Intelligence, and Fusion Applications as a single logical system.

As you can see, as you evolve your application environment to meet changing business needs, one thing may remain constant: the tool for managing your applications, as it evolves with you. This approach provides continuity for your IT operations while at the same time giving you access to a comprehensive set of tools designed specifically for your application environment. Sounds good? Take your first step today.

p.s.: These diagrams came from the slide deck that I used at Collaborate. You may find the full presentation on the conference CD.

Wednesday, April 30, 2008

Oracle OpenWorld 2008 Registration is Now Open

Oracle OpenWorld 2008 is now open for registration. This year's event will take place quite a bit earlier, from September 21-25, at the Moscone Center in San Francisco.

We have already started planning the breakout sessions. If there is any particular topic on Oracle Enterprise Manager and application management that you want us to cover, leave us a comment.

You may find out more about the event here: http://www.oracle.com/openworld/2008/index.html

Link to registration page: http://www.oracle.com/openworld/2008/registration.html

See you in San Francisco in September!

Thursday, April 24, 2008

Updated Oracle Maintenance Wizard for E-Business Suite

Oracle Maintenance Wizard 2.10, which provides step-by-step guidance for maintenance and upgrade tasks, is available. Enhancements include:

- A new, more secure encryption method
- Updates to Upgrade Assistant 12 that take you directly to 12.0.4 in one upgrade
- Additional automation and bug fixes

Upgrade paths now included in the Maintenance Wizard are:
- 10.7 -> (via the Upgrade Assistant 11.5.10)
- 11.0.3 -> (via the Upgrade Assistant 11.5.10)
- 11.5.3+ -> (via the Maintenance Pack Assistant 11.5.10)
- 11.5.8+ -> 12.0.4 (via the Upgrade Assistant 12)
- RDBMS 8i -> 10g (via the Database Assistant 10g)
- RDBMS 9i -> 10g (via the Database Assistant 10g)

You need to start using this version of the tool if you are still on an older (v1.x) release, as the 1.x versions are already desupported.

For more information on the Maintenance Wizard, review note 215527.1 (login required). For information on training for the Maintenance Wizard, review note 418301.1 (login required).

Friday, April 18, 2008

Comparing Application Management and Traditional Systems Management

Collaborate 2008 is over. Presenting at Collaborate was a different experience from presenting at OpenWorld. OpenWorld was an Oracle show, so I had to worry about a bunch of the logistics of putting things together. Collaborate, on the other hand, was run by our customers. I just had to show up, present, attend a couple of sessions myself, party, and speak with people, which I seemed to have more time to do at this event.

In one of the conversations, a question came up on the difference between application management and traditional systems management. I thought this may be an interesting topic for readers of this blog, so I am going to share that discussion with you.

Gartner Group defines application management as the monitoring, diagnostics, tuning, administration, and configuration of packaged and custom applications. This seems to make sense. Application management is about the management of, well, applications. But what is an application, and how is managing an application different from managing other IT components?

An application helps end users accomplish a specific task. Siebel CRM, PeopleSoft Enterprise, Oracle E-Business Suite, Oracle Collaboration Suite, and the custom Java EE-based software that you write are all examples of applications, since end users can use these tools directly to perform their day-to-day work. Oracle RDBMS and Oracle Application Server are not applications, since end users typically do not write SQL statements or Java code to run on these infrastructure products. Therefore, application management must be about managing this end-user-visible software, right? Yes, but it is not so simple.

Consider this. The performance and availability of a modern distributed application, whether it is written in Java EE or .NET or built on integrated application stacks such as those provided by Siebel, PeopleSoft, and JD Edwards EnterpriseOne, are determined not only by the application layer but also by the middleware, database, operating system, network, and storage layers. Successful application management therefore calls for a holistic approach that manages the entire environment supporting the application.

In addition, because applications are used by end users in support of business activities, it is very important to manage applications according to business requirements and their potential impact on business operations. This means defining performance and availability requirements according to the particular tasks that end users perform. In other words, application management needs to be done top-down: from the top, where the end users are, down to the bottom of the technology stack. This is rather different from traditional systems management, in which the approach is much more bottom-up and the focus is much more on the health of individual components. It also means tracking a whole new set of information, such as the activities that users perform on the applications and the experience that they get out of them.

Friday, April 11, 2008

Application Management Pack for Oracle E-Business Suite is Available on HP/UX

The first update to Application Management Pack for Oracle E-Business Suite is now available for HP/UX PA-RISC as well as Itanium, in addition to the other O/S platforms that the pack supports. You need Enterprise Manager Grid Control 10gR3 to run this pack, and you may download it through Metalink as patch 5489352.

The pack extends Enterprise Manager Grid Control to manage Oracle E-Business Suite systems. It supports E-Business Suite R11i (requires 11.5.10 ATG RUP4) and R12. Key capabilities include service level management, application performance management, configuration management, and automation of cloning processes.

I will be covering this pack in my breakout session at Collaborate next week and at the Enterprise Manager demo booth at the Oracle demoground.

Step to Fusion – Centralize the Managing of Your Current Oracle Applications on Oracle Enterprise Manager
Wednesday, 4:30-5:30 p.m.

Drop by if you are at the conference. See you in Denver!

Thursday, April 10, 2008

Six New Monitoring Plug-ins Are Available for Oracle Enterprise Manager

Oracle just announced the availability of six new system monitoring plug-ins that extend Oracle Enterprise Manager Grid Control's ability to monitor third-party applications and technologies. These plug-ins support two commonly used applications, Microsoft Exchange and SAP R/3. They also cover infrastructure technologies such as EMC CLARiiON, VMware ESX, Apache Tomcat, and Sybase Adaptive Server.

You may notice that many of these are products that compete with Oracle products. What's even more interesting is that five of these six plug-ins were developed by Oracle. You may ask, why would Oracle want to invest resources in managing other companies' products? The reason is simple. These are all technologies used in conjunction with Oracle products. For Oracle Enterprise Manager to provide a holistic view of the health of Oracle products, it needs to cover the adjacent technologies that are integrated with them as well. Unlike another infrastructure software vendor whose heterogeneous management strategy relies primarily on partners to do the work, Oracle has taken a much more hands-on approach by investing its own resources.

Friday, April 4, 2008

Collaborate 2008 Preview

Collaborate 2008 is coming to Denver, Colorado in just over two weeks. For those of you who haven't attended the event, Collaborate is the combined annual conference of the three major independent Oracle user groups: the IOUG (Independent Oracle Users Group), the OAUG (Oracle Applications Users Group), and Quest (the PeopleSoft user group). Contrary to what an IT trade magazine journalist recently reported, the Oracle community is alive and well. The early word is that the user groups expect over 7,000 people to attend the event. That's a double-digit increase in attendance compared to last year, and quite a feat to pull off in this economy.

Oracle will be a guest at the event, and we have numerous sessions planned around manageability of various Oracle applications and technologies. Here is a preview.

Top-Down Application Management – Oracle's Blueprint for Managing Applications from the Business Perspectives
Monday, 10:30-11:30 a.m.

Application Change Management and Masking for DBAs
Monday, 9:15-10:15 a.m.

Performance Diagnostic and Tuning Best Practices: What DBAs Must Know about Managing DB Performance
Tuesday, 3:30-4:30 p.m.

Improving IT Operations: Automated Provisioning and Patching, and Managing Configurations of Oracle Fusion Middleware Deployments
Wednesday, 8:30 a.m.-9:30 a.m.

Fool-Proof and Fast Track Strategies for a Successful Upgrade: Database Replay, DBUA and More
Wednesday, 11:00 a.m.-noon

Step to Fusion – Centralize the Managing of Your Current Oracle Applications on Oracle Enterprise Manager
Wednesday, 4:30-5:30 p.m.

With the exception of the last session “Step to Fusion”, all the sessions are listed under the IOUG conference agenda. “Step to Fusion” is listed under the OAUG agenda.

The “Step to Fusion” session is targeted at people who run Siebel, PeopleSoft Enterprise, and Oracle E-Business Suite applications. We will cover how you can use Oracle Enterprise Manager to manage these applications today, and discuss the roadmap for evolving your application management toolset to facilitate your eventual adoption of Fusion technologies.

See you in Denver!

Thursday, March 27, 2008

Oracle Acquires e-TEST from Empirix

Oracle announced this morning that it has entered into an agreement to acquire the e-TEST suite of products from Empirix. This follows the acquisition of Moniforce, a maker of end user monitoring products, in December 2007. e-TEST is made up of three components: e-Load, for scalability, performance, and load testing; e-Tester, for automated functional and regression testing; and e-Manager Enterprise, for test process management, including test requirements management, test management, test execution, and defect tracking. The combination of the e-TEST suite and Oracle Enterprise Manager is expected to create a best-of-breed application management portfolio spanning the entire application life cycle, from development and testing to production deployment and application performance management.

Monday, February 11, 2008

First Update to Application Management Pack for Oracle E-Business Suite Available

The first update of Application Management Pack for Oracle E-Business Suite is now available. This is an OPatch rollup update on top of the original release. It contains bug fixes in the areas of cloning and also supports management of some of Oracle E-Business Suite's advanced topologies.

The key fixes / capabilities include:

- 6141071: Ability for users to choose custom directories for installing APPL_TOP and DB TOP while cloning EBS R12.
- 5976900: Ability for users to perform scale up or scale down clone of DB TOP.
- 6155177: Ability for clone to support the capability to skip optional steps specified in the Clone Procedure.
- 5876590: Support cloning of Individual EBS components (Database Techstack, Data Top, Application Techstack, and Application Top).
- 5892625: Ability to apply an EBS image on an existing E-Business Suite Target.

Command Line Interface (CLI) for discovering and registering E-Business Suite systems: in addition to the EM Grid Control user-interface-based EBS discovery process, you can now choose to discover using the CLI. The discovery mechanism remains the same in both approaches.

Certified Oracle E-Business Suite Topology
- EBS deployed on shared file system (NFS): Customers can now use Application Management Pack to monitor and manage Oracle E-Business Suite systems deployed on a shared file system. However the cloning capability is still pending certification.
- SSL enabled EBS System: Using this updated Application Management Pack, you can monitor and manage Oracle E-Business Suite systems that are SSL enabled.

This pack is available through Oracle Metalink as patch 5969524. It requires Oracle Enterprise Manager 10gR3.

Thursday, January 31, 2008

Best Practices for Active Response Time Monitoring

At OpenWorld, I was asked about the proper way to set up synthetic transactions for monitoring applications. It was a good question, and I wanted to document my answer in some sort of whitepaper or technote. So far I still haven't gotten around to writing the formal document, so I am just going to post it on this blog. Perhaps I can evolve this into the actual document later.

As I discussed in the post “Response Time Monitoring - Real User vs. Synthetic”, there is a place for both real user and synthetic monitoring of applications. There are several challenges in using synthetic transactions, however, and these challenges are not unique to Oracle Enterprise Manager. You would have to consider them no matter which tool you use.

First, unless carefully designed, the tests may not be representative of actual end user activities, reducing the usefulness of the measurements. Therefore, you must be very careful in defining those tests. It is a good idea to sit down with real users to observe how they use the application. If the application has not yet been launched, work with the developers or, if there is one, the UI interaction designer to define the flow. In addition, work with your business sponsors to understand where the application will be used and how the user population is distributed. You want to place your synthetic test drivers at the locations where it is important to measure user experience.

Second, some synthetic transactions are very hard to create and may introduce noise into business data. While it is usually relatively easy to create query-based synthetic transactions, it is much harder to create transactions that create or update data. For example, if synthetic transactions are to test for successful checkouts on an e-commerce website, the tests must be constructed carefully so that the test orders are not mis-categorized as actual orders.

To mitigate these potential problems, set up dedicated test account(s) to make it easier to tell whether activity on the application came from real users or from the synthetic tests. For operations that change data, determine ways to exclude those data from your reports. If possible, look for ways to purge the test data from the system. This is not always possible or easy, as some business processes do not allow changes to data after a certain point. If you are working with a custom application, consider building a “test mode” into the application to make it easier to roll back changes.
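One way to keep synthetic transactions out of business reports is to filter on the dedicated test accounts. Here is a minimal sketch in Python; the account names, order records, and function name are all hypothetical, not from any particular tool:

```python
# Hypothetical dedicated accounts used only by the synthetic test drivers.
TEST_ACCOUNTS = {"synthetic_probe_east", "synthetic_probe_west"}

def split_real_and_test(orders):
    """Partition orders into real business data and synthetic-test data."""
    real, test = [], []
    for order in orders:
        (test if order["account"] in TEST_ACCOUNTS else real).append(order)
    return real, test

orders = [
    {"id": 1, "account": "acme_corp", "total": 120.0},
    {"id": 2, "account": "synthetic_probe_east", "total": 1.0},
    {"id": 3, "account": "globex", "total": 75.5},
]
real_orders, test_orders = split_real_and_test(orders)
print(len(real_orders), len(test_orders))  # 2 1
```

The same partition can drive both reporting (report on `real_orders` only) and cleanup (purge `test_orders` where the business process allows it).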

Third, security and authorization policies might impact the tests as well. Make sure that the test user account has the proper privileges to access the application elements being tested. If authorization policies change, verify that the tests are not affected. The same consideration applies to passwords: if you are required to change passwords due to password aging policies, make sure those changes are reflected in your test setup.

Fourth, synthetic tests introduce load on your application, so be judicious when setting test frequency to avoid overloading it. This means you may not want to simply reuse all your functional or load test scripts for production monitoring. Those scripts were created for different purposes, namely testing functionality and stress testing the application, and they may be overkill when all you need is to verify that key operations are working.
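The back-of-the-envelope math here can be sketched as follows; all of the numbers and the function name are illustrative, not taken from any product:

```python
def synthetic_load_share(locations, runs_per_hour, requests_per_run,
                         real_requests_per_hour):
    """Fraction of total traffic contributed by the synthetic tests."""
    synthetic = locations * runs_per_hour * requests_per_run
    return synthetic / (synthetic + real_requests_per_hour)

# 5 test locations, one 20-request script every 5 minutes, against a
# site doing roughly 118,800 real requests per hour.
share = synthetic_load_share(locations=5, runs_per_hour=12,
                             requests_per_run=20,
                             real_requests_per_hour=118_800)
print(f"{share:.1%}")  # 1.0%
```

Keeping this share down to a percent or so is one simple sanity check before turning a heavyweight load test script into a production monitor.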

Lastly, make sure your monitoring scripts log out of the application at the end of execution. This is especially important for applications that maintain session state on the mid-tier. If you do not log out, resources will not be freed up in a timely manner, which may impact the scalability of the application. By the same token, be sure to allocate resources to account for test connections on top of the connections made by regular users.
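The logout discipline above can be sketched in a few lines of Python. The `Session` class here is a hypothetical stand-in for whatever client your monitoring tool drives; the point is simply the try/finally structure that guarantees logout even when a step fails:

```python
class Session:
    """Hypothetical application client that records the calls made on it."""
    def __init__(self):
        self.calls = []
    def login(self, user):
        self.calls.append(("login", user))
    def run_step(self, name, fail=False):
        self.calls.append(("step", name))
        if fail:
            raise RuntimeError(f"step {name} failed")
    def logout(self):
        self.calls.append(("logout", None))

def synthetic_check(steps):
    """Run the scripted steps; always release the server-side session."""
    session = Session()
    session.login("synthetic_probe")  # dedicated test account
    try:
        for name, fail in steps:
            session.run_step(name, fail)
        return session, True
    except RuntimeError:
        return session, False
    finally:
        session.logout()  # frees mid-tier session state promptly

# Even though the second step fails, the last call is still the logout.
session, ok = synthetic_check([("open_home", False), ("search", True)])
print(ok, session.calls[-1][0])  # False logout
```

The same shape applies regardless of the actual client library: put the scripted steps in the try block and the logout in the finally block.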

Tuesday, January 8, 2008

Response Time Monitoring - Real User vs. Synthetic

Response time monitoring is a very important aspect of application management. In a way, it is nothing new. People have been monitoring response time ever since the days of green screen mainframe terminal applications. The only variable is the technologies involved, both on the application and the tools for monitoring them.

When I speak with customers, I sometimes get questions about the pros and cons of monitoring the response times of real users vs. synthetic transactions, also referred to as passive vs. active monitoring, and whether one approach should be used over the other. The truth is that both approaches are relevant, and they complement each other. Here's why.

Real user monitoring is obviously important because it measures the actual experience of actual end users. Despite its appeal, however, it is not a one-size-fits-all solution for measuring the performance of an application. First, there can be a lot of noise in real end user response time data, which may make it difficult to determine the relative performance of an application over time. Usage patterns can vary a lot at different times, and the performance of transaction requests can vary a lot depending on the data being processed. Consequently, we may end up with a lot of apples-vs.-oranges comparisons when trying to compare response time measurements. Second, real user monitoring only works when there are real users on the system. For example, if there are no end users on the system at 2 a.m., data collected from real user monitoring won't tell you whether the application is working.

On the other hand, synthetic transactions can be used even when real users are not around. Because the tests are well defined and their executions are controlled, it is also easier to do long term trending analysis to see if application performance has improved or degraded over time. Synthetic transactions do have their own shortcomings, however. First, unless carefully designed, the tests may not be representative of actual end user activities, reducing the usefulness of the measurements. Second, some synthetic transactions are very hard to create and may introduce noise into business data. While it is usually relatively easy to create query-based synthetic transactions, it is much harder to create transactions that create or update data. For example, if synthetic transactions are to test for successful checkouts on an e-commerce website, the tests must be constructed carefully so that the test orders are not mis-categorized as actual orders.
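To illustrate why controlled execution makes trending easier, here is a small sketch; the response time samples are made up, and the idea is simply that comparing medians of the same scripted transaction week over week is an apples-to-apples comparison:

```python
from statistics import median

def weekly_trend(last_week_ms, this_week_ms):
    """Relative change in median response time of one scripted transaction."""
    old, new = median(last_week_ms), median(this_week_ms)
    return (new - old) / old

# Hypothetical medians-in-the-making: the same synthetic checkout,
# sampled on the same schedule, over two consecutive weeks.
change = weekly_trend([410, 395, 420, 405, 415], [450, 470, 440, 460, 455])
print(f"{change:+.1%}")  # +11.0%
```

With real user data, the same calculation is muddied by shifting usage patterns and data volumes; with synthetic data, a double-digit change like this is a much stronger signal of genuine degradation.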

As you can see, there are tradeoffs to both approaches, and they are complementary. It is not real user vs. synthetic; it should be real user and synthetic.