Designing WebSphere Application Server for performance: an evolutionary approach

WebSphere Application Server is IBM's Java-based Web application server that supports the deployment and management of Web applications, ranging from simple Web sites to powerful e-business solutions. (1) WebSphere Application Server performance, and in particular its scalability and resiliency, has consistently improved from release to release. We describe here our evolutionary approach to improving the performance of WebSphere Application Server and the design principles we adopted that enabled these performance improvements.

WebSphere Application Server is, at its heart, a Java 2 Platform, Enterprise Edition (J2EE**) (2) Web application server, similar to a number of other Web application servers, such as BEA WebLogic** Server (3) and Oracle Application Server. (4) J2EE is a platform for building distributed enterprise applications that includes a specification, a reference implementation, and a set of testing suites. Its core components are Enterprise JavaBeans** (EJBs**), JavaServer Pages** (JSPs**), Java** servlets, and a variety of interfaces for linking to databases and other information resources in the enterprise. The components of the various types are deployed in "containers" that provide services for those components: servlets are deployed into a Web container, whereas EJBs are deployed into an EJB container.
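
To make the container model concrete, the following minimal servlet is a sketch of a component that would be deployed into a Web container; the class name and response text are illustrative, not taken from the paper.

```java
// A minimal J2EE Web component: the Web container manages this servlet's
// life cycle, threading, and request dispatch; the class supplies only the
// request-handling logic.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>Hello from the Web container</body></html>");
    }
}
```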

Most Web applications written for J2EE share a common architecture, generally referred to as the model-view-controller (MVC) architecture. (5) Its main advantage, and the reason for its widespread adoption, is the separation of design concerns: data persistence and behavior, presentation, and control. Thus, control is centralized, code duplication is reduced, and code changes are more localized. For example, changes to the presentation (view) of the data are limited to the JSP components, whereas changes to the business logic (model) are limited to the business model components. The controller, usually implemented as a Java servlet and associated classes, mediates between the view and the model and coordinates application flow.
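
As an illustration of the controller role, the sketch below shows a servlet that invokes the model and forwards the result to a JSP view. The AccountService class and the account.jsp page are hypothetical names used only for this example.

```java
// Controller: mediates between model and view. Business logic stays in the
// model class; presentation stays in the JSP.
import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountController extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Model: the servlet delegates business behavior rather than
        // embedding it in presentation code.
        AccountService service = new AccountService();
        req.setAttribute("account", service.findAccount(req.getParameter("id")));

        // View: forward to a JSP; a change in page layout touches only the
        // JSP, not this controller or the model.
        RequestDispatcher view = req.getRequestDispatcher("/account.jsp");
        view.forward(req, resp);
    }
}

// Stand-in model class so the sketch is self-contained.
class AccountService {
    String findAccount(String id) {
        return "Account " + id;
    }
}
```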

The challenge to vendors of J2EE application servers is to support the deployment of applications that serve hundreds or thousands of simultaneous users. Such a load is greater than any single machine can handle. Moreover, because a single machine is susceptible to hardware failure, the design of such systems has to include failover. Failover is a backup mode in which the functions of a system component (e.g., a processor, network, or database) are assumed by other components when the primary component becomes unavailable through either failure or scheduled down time. The design also has to support the administration of multiple servers, as well as the management of workload and performance across these servers.

In our work to improve application performance measures, we have used three main design principles, or themes: (1) optimize performance for failure-free, normal operation (that is, treat failover as a special case), (2) make finite resources appear infinite, and (3) minimize cross-process calls. While this paper is primarily a historical overview of the evolution of these major themes in the product, we expect that developers of WebSphere Application Server applications will gain additional insight into the features of WebSphere Application Server and will thus be able to optimize their applications. We also expect that readers interested in distributed systems will gain some insight into the way the three design principles have been used in WebSphere Application Server and will be able to apply some of the lessons we learned to their own systems.

The rest of the paper consists of three main sections, each of which resonates with one of these three themes. In the next section, we examine the evolution of WebSphere Application Server scalability and resiliency (also known as availability). This section is based on the principle "optimize for failure-free operation" as applied to workload management and data partitioning. In the following section, in which we examine the evolution of resource management, we apply the principle "make finite resources appear infinite." In this section, we also describe the interplay between application performance and the WebSphere Application Server infrastructure, and the role WebSphere Application Server has played in providing high-performing standard interfaces for application developers. The principle "minimize cross-process calls" anchors the section that follows, in which we cover the evolution of caching and EJBs. We continue to examine the relationship between the WebSphere Application Server infrastructure and the application, discussing application design and deployment topology. We also discuss the unique content-caching capabilities of WebSphere Application Server that go well beyond the existing J2EE specification. We conclude with a brief summary.

The evolution of scalability and resiliency: Optimize for normal processing

Figure 1 shows the major components in a WebSphere Application Server installation: network sprayer, Web server, WebSphere Application Server, Web container, EJB container, WebSphere Application Server plug-in, and ORB (Object Request Broker). (6) The workload, consisting of client HTTP (HyperText Transfer Protocol) requests, is routed through these components. Each client request is eventually mapped to an execution thread that processes it. As shown in Figure 1, there are three major routing points in WebSphere Application Server: the network sprayer, the WebSphere Application Server plug-in, and the ORB. (7)

Figure 1 shows two deployment options for Web containers and EJB containers. Each container can be deployed by itself within an application server (the two application servers in the upper part of Figure 1), or the two can be co-deployed within a single application server (the application server in the lower part of Figure 1).

The network sprayer routes the arriving HTTP request to a Web server. The Web server is "WebSphere Application Server-enabled"; that is, it is equipped with a WebSphere Application Server plug-in that forwards requests from the Web server to a WebSphere Application Server. Because of its strategic positioning as the first point of WebSphere Application Server presence in the installation, the plug-in has built-in functions for workload management, security, and caching.
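
As a rough sketch of the plug-in's forwarding decision, the code below maps URI prefixes of deployed applications to application-server endpoints and leaves unmatched requests (for example, static content) to the Web server. The prefixes, endpoint strings, and the class itself are illustrative assumptions; the actual plug-in is configured declaratively rather than coded this way.

```java
// Illustrative routing decision: forward application URIs to an application
// server; return null to let the Web server serve the request itself.
import java.util.LinkedHashMap;
import java.util.Map;

public class PluginRouter {
    // URI prefixes of deployed applications, mapped to application servers.
    private final Map<String, String> mappings = new LinkedHashMap<>();

    public void addMapping(String uriPrefix, String appServerEndpoint) {
        mappings.put(uriPrefix, appServerEndpoint);
    }

    public String route(String uri) {
        for (Map.Entry<String, String> entry : mappings.entrySet()) {
            if (uri.startsWith(entry.getKey())) {
                return entry.getValue();
            }
        }
        return null; // not an application URI; Web server handles it
    }
}
```

With a mapping such as addMapping("/shop/", "appserver1:9080"), a request for /shop/cart would be forwarded to the application server, whereas a request for /images/logo.gif would be served by the Web server directly.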

The ORB, the third routing point in our typical installation, routes EJB method calls to an application server that hosts an EJB container. The ORB also supports failover by resending failed requests to another application server. The ORB uses the Internet Inter-ORB Protocol (IIOP**), which has the advantage that requests not originating in a Web browser can still benefit from workload management. (7)
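
The sketch below illustrates the failover idea at the client level: a remote EJB call that fails with a communication error is retried so that it can be directed to another server. The home interface, JNDI name, and business method are hypothetical; in an actual WebSphere installation this rerouting is performed transparently by the workload-managed ORB itself rather than by application code.

```java
import java.rmi.RemoteException;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical EJB home and remote interfaces, declared here so the sketch
// is self-contained.
interface OrderHome extends javax.ejb.EJBHome {
    Order create() throws RemoteException, javax.ejb.CreateException;
}

interface Order extends javax.ejb.EJBObject {
    String submit(String item) throws RemoteException;
}

public class OrderClient {
    public String placeOrder(String item) throws Exception {
        InitialContext ctx = new InitialContext();
        Object ref = ctx.lookup("ejb/OrderHome"); // hypothetical JNDI name
        OrderHome home =
                (OrderHome) PortableRemoteObject.narrow(ref, OrderHome.class);

        RemoteException last = null;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                // Remote call over IIOP; the workload-managed ORB selects
                // the target server and can reroute on failure.
                return home.create().submit(item);
            } catch (RemoteException e) {
                last = e; // target server may have failed; try again
            }
        }
        throw last;
    }
}
```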

We now describe our approach to enhancing the scalability and resiliency of the system in Figure 1. As we will show, the techniques used to make a system scalable often make it resilient (highly available) as well. Over the past three releases of WebSphere Application Server, a pattern has emerged that has become our blueprint for enhancing the scalability and resiliency of the product. Our approach involves the use of the following techniques: clustering, workload management, data partitioning, caching, and data replication.

Clustering. Our primary technique for achieving scalability and resiliency is clustering. (See Figure 2.) When a single application server cannot support a site's performance requirements, we can obtain significant improvements in application throughput and response time by running multiple copies of an application on a cluster of application servers (Figure 2B). If the application is well-written and the WebSphere Application Server system is properly configured and provisioned, then close to linear scalability can be achieved. Figure 2C illustrates a failover scenario, in which a failure of one of the nodes in the three-node cluster is handled by having the remaining nodes take over the entire load.
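
A minimal sketch of the routing idea behind clustering appears below: a routing point distributes requests over cluster members in round-robin order and skips members known to be down, which is what turns load balancing into failover. The member list, endpoint strings, and selection policy are illustrative assumptions, not the product's actual algorithm.

```java
// Round-robin selection over cluster members, skipping unavailable ones.
import java.util.List;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinRouter {
    private final List<String> members;               // cluster member endpoints
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinRouter(List<String> members) {
        this.members = members;
    }

    // Returns the next available member; throws if the whole cluster is down.
    public String route(Set<String> unavailable) {
        for (int i = 0; i < members.size(); i++) {
            int idx = Math.floorMod(next.getAndIncrement(), members.size());
            String member = members.get(idx);
            if (!unavailable.contains(member)) {
                return member;
            }
        }
        throw new IllegalStateException("no cluster member available");
    }
}
```

With three members and one marked unavailable, successive calls to route alternate between the two surviving members, so the remaining nodes absorb the entire load, matching the failover scenario of Figure 2C.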