Weblogic Exercises PDF
Contents
1 Introduction
2 JVM Tuning
3 Deployment
4 Diagnostic Framework
5 Class Loading
6 Security
7 Configure Resources
8 Clustering
9 Scripting
1 Introduction
The install files, referenced in the assignments, are located in the ${COURSE_HOME}/installation directory. The
application files (ear and war), referenced in the assignments, are located in the ${COURSE_HOME}/software
directory.
Two users are present: root and weblogic (both with password magic12c). Install the WebLogic Server software under
the user weblogic.
• Use the hostname utility to change the host name: hostname middleware-magic.com
• Run it again without any parameter to see if the host name has been changed
• Restart the network to apply the changes: service network restart
• Logout and login again
Large pages improve the performance of applications that access memory frequently. When large pages are used, the application uses the translation look-aside buffer (TLB) in the processor more effectively. The TLB is a cache of recently used virtual-to-physical address translations stored in the processor. To obtain data from memory, the processor looks up the TLB to find the physical addresses that hold the required data. With large pages, a single TLB entry can represent a large contiguous address space, potentially reducing the TLB look-up frequency and avoiding frequent look-ups in the hierarchical page table stored in main memory.
In setting up WebLogic Server, we will use the following installation and configuration directory structure:
/home/weblogic
/apache - ${APACHE_HOME}
/grinder-3.4 - ${GRINDER_HOME}
/jrockit-jdk1.6.0_29-R28.2.2-4.1.0 - ${JAVA_HOME}
/weblogic12.1.1 - ${MIDDLEWARE_BASE}
/installation - ${MIDDLEWARE_HOME}
/coherence_3.6
/wlserver_12.1 - ${WL_HOME}
/configuration
/applications
/base_domain
/domains
/base_domain - ${DOMAIN_HOME}
/nodemanagers
/base_domain - ${NODEMGR_HOME}
Install JRockit
• Run the file jrockit-jdk1.6.0_29-R28.2.2-4.1.0-linux-x64.bin
• Click next in the welcome screen
• Define the product installation directory, the default will suffice in our case, click next
• Do not select optional components and click next
• Click done, when the installation is finished
Configure a domain
A WebLogic domain is a logical grouping of server instances that are controlled through an admin server. When
creating a domain our first step is to set up the admin server, i.e., create the files that define the admin server. To this
end we run the configuration wizard. Navigate to the ${WL_HOME}/common/bin directory and run config.sh:
• Select Create a New WebLogic Domain and click next
• Select Generate a Domain Configured Automatically to Support the Following Products: Select WebLogic
Server (Required) and click next
• Specify domain information
◦ domain name: base_domain
◦ domain location: /home/weblogic/weblogic12.1.1/configuration/domains
• Click next and configure the admin user and password
◦ name: weblogic
◦ user password: magic12c
Note that the admin server is used to configure, manage and monitor servers in a domain. The admin server is a
WebLogic server instance with extra applications deployed on it that provide administrative capabilities. Other
WebLogic server instances (managed servers) also contain extra applications that the admin server uses to send
information to them. The admin server further maintains an XML repository in the ${DOMAIN_HOME}/config
directory. One thing to note is that the admin server is not clusterable and when it goes down, we cannot administer our
domain. In general, we can just restart the admin server and if the node manager was used to start it, the node manager
will restart it for us.
Getting acquainted
If we choose production mode as the configuration mode, the command line will prompt for a username and password on start-up. To overcome this we add a boot.properties file. First create the directory ${DOMAIN_HOME}/servers/AdminServer/security (mkdir -p) and add a new file, boot.properties. Open the file in a text editor and add the following name-value pairs:
username=weblogic
password=magic12c
Note that when the server is started these values will be encrypted.
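A shell sketch of these steps (assuming ${DOMAIN_HOME} is set in the environment):

# Create the security directory for the admin server
mkdir -p ${DOMAIN_HOME}/servers/AdminServer/security
# Write the boot identity; WebLogic encrypts these values on the next start
cat > ${DOMAIN_HOME}/servers/AdminServer/security/boot.properties <<EOF
username=weblogic
password=magic12c
EOF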
To start the admin server, open a command shell, navigate to the ${DOMAIN_HOME} directory and run startWebLogic.sh. The admin console can be reached at http://hostname:7001/console. Sometimes the security initialization is slow. This is related to the machine's entropy and the JVM reading random bytes from a particular secure-random source, usually /dev/urandom. To overcome this problem, edit the java.security file (located in the ${JAVA_HOME}/jre/lib/security directory) and change securerandom.source=file:/dev/urandom to securerandom.source=file:/dev/./urandom.
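For example (a one-line sketch; it assumes write access to the JDK installation):

# Point the secure-random source at /dev/./urandom to avoid blocking on low entropy
sed -i 's|securerandom.source=file:/dev/urandom|securerandom.source=file:/dev/./urandom|' \
  ${JAVA_HOME}/jre/lib/security/java.security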
The node manager requires authentication to start and stop managed servers. The first time the node manager is started
it communicates with the admin server to obtain a user name and password that will be used by the admin server to
authenticate the node manager. As we created the domain in production mode, random node manager credentials are
created. When we want to use, for example, nmConnect we have to set the node manager user name and password to
known values. To this end start the admin server and open the admin console:
• Click on base_domain, security, general and click on the advanced link
• Edit the NodeManager User name and Password. In our case, we set these to respectively nodemanager and
magic12c
2 JVM Tuning
When using WebLogic (or any other application server for that matter) it is beneficial to tune for application throughput. WebLogic is a highly multi-threaded environment; to let it run as smoothly as possible we need to give the threads as many resources as possible, hence the choice of throughput as the optimization strategy. By choosing throughput as the optimization strategy the following defaults are present:
• The nursery size (-Xns) is automatically sized to 50% of free heap
• The compaction is configured as -XXcompaction:abortable=false, percentage=6.25, heapParts=4096,
maxReferences=299900
• The thread local area size is configured as -XXtlasize:min=2k, preferred=16k, wastelimit=2k (Note that the
preferred size depends on the heap size and lies between 16k and 64k)
Note that the USER_MEM_ARGS variable overrides the JVM parameters. To make the changes effective the admin
server must be restarted.
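For example, a sketch that illustrates the override behavior (flag values taken from the tuning discussed above):

# USER_MEM_ARGS replaces the memory arguments configured by the start scripts
export USER_MEM_ARGS="-Xms1024m -Xmx1024m -Xgc:throughput"
cd ${DOMAIN_HOME}
./startWebLogic.sh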
When the flight recording is finished, click on the memory, gc graph tab to see where the most garbage collection time is spent. Note that the initial and final collections are due to the flight recorder collecting extra information. By using the events bar you can zoom in on certain events.
The code environment shows information about the packages and classes in which the application spent the most time executing. The code generation tab shows information about the JIT compilation and optimization.
In the events environment we can combine the different events to get insight into what effect the garbage collection had on the application. Enable the following events:
• Java application - Java blocked
• Java application - Java wait
3 Deployment
Set-up a stand-alone managed server
To deploy an application, we are going to use a standalone managed server. Make sure the admin server is running and
open the admin console
• In the admin console, click environment, servers and click new
• Enter the following parameters:
◦ server name: security-server
◦ listen address: host name (or IP address) of the machine where the server will be running
◦ server listen port: 8001
◦ select no, this is a stand-alone server
• Click next, review the summary and click finish
Create a machine
• In the admin console, click environment, machines and click new
• Enter the following parameters:
◦ name: machine1
◦ machine os: unix
• Click next and enter the following parameters:
◦ type: ssl
◦ listen address: host name (or IP address) of the machine where the node manager will be running
◦ listen port: 5556
• Click finish
• Click machine1 and enable post-bind uid and post-bind gid and set both the post-bind uid and gid to weblogic
• Click save
• Click on the servers tab and add security-server to the machine
Before starting the node manager, set the node manager home directory, for example:
NODEMGR_HOME="/home/weblogic/weblogic12.1.1/configuration/nodemanagers/base_domain"
Run ./startNodeManager.sh in order to create the nodemanager.properties file and stop the node manager again (ctrl+c). Open the nodemanager.properties file (located in the specified ${NODEMGR_HOME} directory) and set the property StartScriptEnabled to false. When doing this, we need to copy the ${WL_HOME}/endorsed directory to the ${JAVA_HOME}/jre/lib/endorsed directory.
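A sketch of the copy step (assuming both environment variables are set):

# With StartScriptEnabled=false, servers no longer run through the start scripts,
# so the WebLogic endorsed JARs must be available to the JVM directly
mkdir -p ${JAVA_HOME}/jre/lib/endorsed
cp -r ${WL_HOME}/endorsed/* ${JAVA_HOME}/jre/lib/endorsed/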
Create a nodemanager.domains file in the ${NODEMGR_HOME} directory and add the following key-value pair:
base_domain=/home/weblogic/weblogic12.1.1/configuration/domains/base_domain
Note that when the configuration wizard is run such a file is created in the ${WL_HOME}/common/nodemanager directory. Start the node manager by using ./startNodeManager.sh. In order to test the set-up use the admin console to start the managed server:
start the managed server:
• Click on environment, servers and subsequently on the control tab
• Select security-server, click start and click yes (you can click on the polling icon to monitor the starting
process)
In general, it is recommended to start the node manager when the machine boots. In this case, we need to know where
to put our custom commands that will be called when the system boots. Note that Unix-based systems specify so-called
run levels, and that for each run level, scripts can be defined that start a certain service. These scripts are located in
the /etc/rc.d/init.d directory. This allows for services to be started when the system boots or to be stopped on system
shutdown.
Shutdown all the servers by using the admin console (do it one by one, first the managed server, then the admin server) and then shutdown the node manager (ctrl+c).
In the ${MIDDLEWARE_BASE} directory, create a new directory scripts and copy the contents of the ${COURSE_HOME}/voorbeelden/configuratie/linux/weblogic12.1.1/scripts directory to this directory. Open the environment.properties file and make sure the values reflect your environment, for example,
domain_name=base_domain
domain_home=/home/weblogic/weblogic12.1.1/configuration/domains/base_domain
listen_address_machine1=middleware-magic.com
listen_address_machine2=...
node_manager_username=nodemanager
node_manager_password=magic12c
node_manager_listen_port=5556
node_manager_listen_address=middleware-magic.com
node_manager_home=/home/weblogic/weblogic12.1.1/configuration/nodemanagers/base_domain
admin_username=weblogic
admin_password=magic12c
admin_server_listen_port=7001
admin_server_url=t3://middleware-magic.com:7001
Log in as root and create a node manager boot script in the /etc/rc.d/init.d directory (make sure it has execution rights,
for example, by using chmod 755 nodemanager). The scripts directory contains an example of a node manager boot
script (note that in the script the user weblogic is assumed). By using the chkconfig command we can update the run
level information for system services, for example, chkconfig --add nodemanager. To test the set-up shut the system
down and start it again. To check if the node manager is running we can use either ps -ef | grep java or netstat -anp | grep :5556, which assumes the node manager is listening on port 5556.
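A minimal sketch of what such a boot script can look like (hypothetical; the course's scripts directory contains the actual example):

#!/bin/sh
# chkconfig: 345 85 15
# description: WebLogic node manager
NODEMGR_HOME=/home/weblogic/weblogic12.1.1/configuration/nodemanagers/base_domain
WL_HOME=/home/weblogic/weblogic12.1.1/installation/wlserver_12.1

case "$1" in
  start)
    # Run the node manager in the background as the weblogic user
    su - weblogic -c "NODEMGR_HOME=${NODEMGR_HOME} ${WL_HOME}/server/bin/startNodeManager.sh > ${NODEMGR_HOME}/nodemanager.out 2>&1 &"
    ;;
  stop)
    # Stop the node manager process
    pkill -f weblogic.NodeManager
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0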
Start and stop the admin server and managed server by using WLST
We can use the DomainStartService and DomainStopService scripts to respectively start and stop servers in the domain. These scripts in turn call a so-called WLST script (more on WLST later). Check in the startDomain.py script if the following lines are present (the rest needs to be commented out, by using # at the beginning of a line):
print 'CONNECT TO NODE MANAGER ON MACHINE1';
nmConnect(node_manager_username, node_manager_password, listen_address_machine1,
node_manager_listen_port, domain_name, domain_home, 'ssl');
When the properties in the environment.properties are correct, you can use ./DomainStartService.sh to start the servers in the domain. Note that the servers are now running in the background and can be stopped by using ./DomainStopService.sh.
Deploy application
Create the following directory structure:
/home/weblogic/weblogic12.1.1
/configuration
/applications
/base_domain
Next, within the base_domain directory create the following directory structure:
/applications/base_domain
/security
/app
/plan
By using this directory structure WebLogic Server will automatically create a deployment plan (or pick a deployment
plan when it is already present). Copy the SecurityComplete.war (${COURSE_HOME}/software) to the /app directory
and rename the war file to Security.war.
Create a mod_wl.conf file in the ${APACHE_HOME}/conf directory with the following contents:
<IfModule weblogic_module>
ConnectTimeoutSecs 10
ConnectRetrySecs 2
DebugConfigInfo ON
WLSocketTimeoutSecs 2
WLIOTimeoutSecs 300
Idempotent ON
FileCaching ON
KeepAliveSecs 20
KeepAliveEnabled ON
DynamicServerList ON
WLProxySSL OFF
</IfModule>
<Location /Security>
SetHandler weblogic-handler
WebLogicHost host name (or IP address)
WebLogicPort 8001
</Location>
To let the Apache HTTP Server pick up the configuration we add the following to the httpd.conf file:
# put it near the end of the file where all the other includes are present
# mod_wl configuration
Include conf/mod_wl.conf
Restart the HTTP Server (./apachectl -k stop and ./apachectl -k start). To test the configuration we can use the URL:
http://hostname:8888/Security/?__WebLogicBridgeConfig. To reach the application we can use the URL:
http://hostname:8888/Security/faces/overview.xhtml.
4 Diagnostic Framework
A detailed example can be found in the post: Performing Diagnostics in a WebLogic environment -
http://middlewaremagic.com/weblogic/?p=6016.
Logging
By using the admin console, we can see portions of the log files:
• Click diagnostics, log files
On the operating system the log files are located in the ${DOMAIN_HOME}/servers/server-name/logs directory. Note
that when the node manager has been used to start the server, relevant information is located in the .out files.
Harvesting
• Click diagnostics, diagnostic modules and click new
• Enter the following parameters:
◦ name: security-server-module
• Click ok
• Click security-server-module, click the targets tab and target the module to the security-server
• Click on the collected metrics, configuration tab, click new and select ServerRuntime
• Click next and select the WorkManagerRuntimeMBean from the list
• Click next and select all the attributes
• Click next and select com.bea:ApplicationRuntime=Security, Name=default, ServerRuntime=security-server,
Type=WorkManagerRuntime (in the admin console mouse over to view the full text)
• Click finish
An example that shows how to obtain run-time information using the WebLogic Scripting Tool is presented in the post:
Using WLST to obtain WebLogic Runtime Information - http://middlewaremagic.com/weblogic/?p=7505. An example
that uses JMX can be found in the post: Wicket Spring in Hibernate on WebLogic -
http://middlewaremagic.com/weblogic/?p=7478.
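A minimal WLST sketch along these lines (a sketch, not the post's script; it assumes the set-up and the collected metrics configured above):

connect('weblogic', 'magic12c', 't3://middleware-magic.com:7001');
domainRuntime();
# Navigate to the default work manager runtime of the Security application
cd('ServerRuntimes/security-server/ApplicationRuntimes/Security/WorkManagerRuntimes/default');
print 'completed requests:', get('CompletedRequests');
print 'pending requests:', get('PendingRequests');
disconnect();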
Dashboard
WebLogic collects run time information in the form of JMX run time MBeans. These MBeans can be accessed by using
WLST, an example of which can be found in the post Using WLST to obtain WebLogic Runtime Information -
http://middlewaremagic.com/weblogic/?p=7505.
A nice way to access the run time information is by using the dashboard (http://hostname:7001/console/dashboard). By
using the dashboard, we can create our own views based on different harvesters
• Open the dashboard
• Click my views and click on the new view icon (and enter a name)
• Click on the metric browser tab
• Select security-server from the servers list, select the collected metrics only option and click go
• Select WorkManager (in the types section), default, security (in the instances section) and drag and drop the
completedrequests and pendingrequests metrics (from the metrics section) on the graph
• Click start and issue some requests to the application
Note that by using the dashboard we can get graphical insight into the load balancing when more than one server is present.
5 Class Loading
The posts Classloading: Making Hibernate work on WebLogic - http://middlewaremagic.com/weblogic/?p=5861 and Classloading and Application Packaging - http://middlewaremagic.com/weblogic/?p=6725 show in detail how class loading works in WebLogic Server.
Shared libraries
Copy the libraries coherence3_7.war, JBossRichFaces3_3.war and jsf1_2.war from the ${COURSE_HOME}/software
directory to the ${WL_HOME}/common/deployable-libraries directory.
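The libraries can then be registered as shared libraries, for example with weblogic.Deployer (a sketch; it assumes the WebLogic environment is set, and the library names and versions are read from each war's manifest):

# Run setWLSEnv.sh first so weblogic.jar is on the classpath
java weblogic.Deployer -adminurl t3://middleware-magic.com:7001 \
  -username weblogic -password magic12c \
  -deploy -library -targets security-server \
  ${WL_HOME}/common/deployable-libraries/jsf1_2.war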
Stop and delete (undeploy) the security application. Copy the SecurityApplication.war file to the security/app directory,
delete the old Security.war file and rename SecurityApplication.war to Security.war.
Adjust the deployment plan to incorporate the shared libraries. Add the following to the weblogic.xml file (located in the security/plan/WEB-INF directory):
<weblogic-web-app ...>
<library-ref>
<library-name>coherence</library-name>
<specification-version>3.7</specification-version>
<implementation-version>3.7.1</implementation-version>
<exact-match>true</exact-match>
</library-ref>
<library-ref>
<library-name>JSF</library-name>
<specification-version>1.2</specification-version>
<implementation-version>1.2.14</implementation-version>
<exact-match>true</exact-match>
</library-ref>
<library-ref>
<library-name>jbossrichfaces</library-name>
<specification-version>3.3</specification-version>
<implementation-version>3.3.1</implementation-version>
<exact-match>true</exact-match>
</library-ref>
</weblogic-web-app>
6 Security
Detailed information about security can be found in the posts:
- Securing the WebLogic Server - http://middlewaremagic.com/weblogic/?p=6479
- WebLogic Identity Management - http://middlewaremagic.com/weblogic/?p=7527
- WebLogic Access Management - http://middlewaremagic.com/weblogic/?p=7558
- Using Access Manager to Secure Applications Deployed on WebLogic - http://middlewaremagic.com/weblogic/?p=7600
Security mapping
Stop and delete (undeploy) the security application. Copy the SecurityApplicationSecured.war file to the security/app
directory, delete the old Security.war file and rename SecurityApplicationSecured.war to Security.war.
7 Configure Resources
The resources to be configured will be used in the cluster; to this end we first set up a WebLogic cluster.
Create cluster
• Click environment, clusters and click new
• Enter the following parameters:
◦ name: loadtest-cluster
◦ messaging mode: unicast
• Click ok
Clone a server
• Click environment, servers, select cluster-server1 and click clone
• Enter the following parameters:
◦ server name: cluster-server2
◦ server listen address: host name (or IP address)
◦ server listen port: 9002
• Click ok
On the connection pool, configuration tab we can fine-tune the connection pool settings. Here, we can set the initial size and the maximum size of the pool. When PreparedStatement objects are used, the statement cache can be configured. The advanced area contains options to make the connection pool more resilient to failures. The test connections on reserve attribute, for example, enables a feature that validates connections before they are given to an application. Note that the validation is done synchronously and will thus add some overhead. By using the test frequency option, unused connections are tested on a regular basis. Another important feature to enable is the connection retry frequency. When this is set to a value other than zero, WebLogic will periodically retry creating the pool's database connections when the database is temporarily unavailable.
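These settings can also be scripted; a WLST sketch under the assumption of a data source named loadtest-ds (the attribute names are the standard JDBCConnectionPoolParams ones):

edit();
startEdit();
cd('/JDBCSystemResources/loadtest-ds/JDBCResource/loadtest-ds/JDBCConnectionPoolParams/loadtest-ds');
set('InitialCapacity', 5);
set('MaxCapacity', 15);
# Validate connections before handing them to the application
set('TestConnectionsOnReserve', true);
# Test unused connections every two minutes
set('TestFrequencySeconds', 120);
# Retry connection creation when the database is temporarily unavailable
set('ConnectionCreationRetryFrequencySeconds', 10);
save();
activate();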
Note that when using a cluster we can create a persistent store/JMS server pair for every server in the cluster.
Create a subdeployment (a mechanism to group resources of a JMS module and target them to a server, for example a
JMS server):
• Click on loadtest-jms-module, on the tab subdeployments, click new and enter the following parameters:
◦ Subdeployment name: loadtest-jms-subdeployment
Deploy application
Use the admin console to start the managed servers in the cluster:
• Click environment, servers
• Click the control tab, select cluster-server1 and cluster-server2 and click start
By using this directory structure WebLogic Server will automatically create a deployment plan. Copy the LoadTest6.ear
(${COURSE_HOME}/software) to the /app directory.
The Java EE specification requires that EJB components invoked through their remote interfaces must use pass-by-
value semantics, meaning that method parameters are copied during the invocation. Changes made to a parameter in the
bean method are not reflected in the caller's version of the object. Copying method parameters is required in the case of
a true remote invocation, of course, because the parameters are serialized by the underlying RMI infrastructure before
being provided to the bean method. Pass-by-value semantics are also required between components located in different
enterprise applications in the same Java virtual machine due to class loader constraints. EJB components located in the
same enterprise application archive (.ear) file are loaded by the same class loader and have the option of using pass-by-
reference semantics for all invocations, eliminating the unnecessary copying of parameters passed during the invocation
and improving performance. By setting the enable-call-by-reference parameter to true in weblogic-ejb-jar.xml, we
enable this feature for a specific bean in the application. Local references always use pass-by-reference semantics and
are unaffected by the enable-call-by-reference setting. When we deploy an EJB with a remote interface and do not
enable call by reference, WebLogic will issue a warning of the performance cost, i.e.,
<Feb 9, 2012 1:44:11 PM CET> <Warning> <EJB> <BEA-010202> <Call-by-reference is not enabled for EJB
Company. The server will have better performance if it is enabled. To enable call-by-reference, set
the enable-call-by-reference element to True in the weblogic-ejb-jar.xml deployment descriptor or
corresponding annotation for this EJB.>
When the deployment plan has been generated, the files Plan.xml and weblogic.xml are created automatically. The other files, weblogic-application.xml and weblogic-ejb-jar.xml, we have to create ourselves, according to the following directory structure:
/loadtest
/app
LoadTest6.ear
/plan
/LoadTest6.ear
/META-INF
weblogic-application.xml
/Model.jar
/META-INF
weblogic-ejb-jar.xml
/Web.war
/WEB-INF
weblogic.xml
Plan.xml
in which the deployment overrides (weblogic-application.xml, weblogic-ejb-jar.xml and weblogic.xml) have the
following contents:
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-application
http://xmlns.oracle.com/weblogic/weblogic-application/1.4/weblogic-application.xsd">
</weblogic-application>
<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar
http://xmlns.oracle.com/weblogic/weblogic-ejb-jar/1.1/weblogic-ejb-jar.xsd">
<weblogic-enterprise-bean>
<ejb-name>Company</ejb-name>
<enable-call-by-reference>True</enable-call-by-reference>
</weblogic-enterprise-bean>
</weblogic-ejb-jar>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-web-app
http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
</weblogic-web-app>
8 Clustering
The posts
- Setting-up a High Available Tuned Java EE environment using WebLogic - http://middlewaremagic.com/weblogic/?p=7265
- WebLogic 12c in Action - http://middlewaremagic.com/weblogic/?p=7716
provide detailed information about clustering.
Load balancing
Add the following to the mod_wl.conf file (located in the ${APACHE_HOME}/conf directory):
<Location /LoadTest6>
SetHandler weblogic-handler
WebLogicCluster hostname:9001,hostname:9002
</Location>
Setting up automatic service migration requires a leasing policy. Leasing is the process WebLogic uses to manage
services that are required to run on only one server in the cluster at a time. In this case we will configure consensus-
based leasing. This style of leasing keeps the leasing table in-memory. One server in the cluster is designated as the
cluster leader. The cluster leader controls leasing in that it holds a copy of the leasing table in-memory and other servers
in the cluster communicate with the cluster leader to determine lease information. The leasing table is replicated across
the cluster to ensure high availability. To configure consensus-based leasing we take the following steps:
• Use the cluster's migration, configuration tab and set the migration basis to consensus (consensus-based
leasing requires a node manager on every machine hosting managed servers within the cluster. The node
manager is required to get health monitoring information about the involved servers)
• Select machine1 as candidate machine for migratable servers
• Click save
To configure migration:
• Click environment and then migratable targets
• Select cluster-server1 (migratable) and click the migration, configuration tab
• Set the service migration policy to Auto-Migrate Exactly-Once Services
• Select the user-preferred server, i.e., the server to host the service (this is automatically set to cluster-server1)
• Specify constrained candidate servers that can host the service should the user-preferred server fail; select both servers
A remark is in order. When using a uniform distributed queue WebLogic creates the necessary members on the JMS
servers to which the uniform distributed queue is targeted. In our case the uniform distributed queue is targeted only to
one JMS server and thus in order to be highly available the JMS server needs to be migrated to the other server.
Normally, we would set up a JMS server on every managed server. When we do this it is also possible to use Auto-Migrate Failure Recovery Services as the automatic service migration policy, i.e., when one managed server fails the JMS environment will continue to function without the failed service because other members are still available.
Retarget the persistent store and JMS server to the cluster-server1 migratable target. To do this without errors:
• Click services, messaging, JMS servers, select jms-server1 and set the target to none
• Click save
• Click services, persistent stores, select filestore1 and set the target to cluster-server1 (migratable)
• Click save
• Click services, messaging, JMS servers, select jms-server1 and set the target to cluster-server1 (migratable)
• Click save
• Click activate changes
Next, all the servers need to be restarted. Stop the servers by using the admin console. First stop the servers in the cluster and when these are shut down, stop the admin server. Adjust the WLST scripts to start (startDomain.py) and stop (stopDomain.py) the servers, for example,
print 'CONNECT TO NODE MANAGER ON MACHINE1';
nmConnect(node_manager_username, node_manager_password, listen_address_machine1,
node_manager_listen_port, domain_name, domain_home, 'ssl');
Start the servers by using DomainStartService.sh. Start the HTTP Server by using apachectl -k start. Do not forget to
start-up the application. To see if it all works enter the URL: http://hostname:8888/LoadTest6/testservlet. To test the
migration, shutdown cluster-server1. To see if the migration worked, click environment, migratable targets and click the
control tab. You should see something like:
Cluster Current Hosting Server Candidate Servers Status of Last Migration
loadtest-cluster cluster-server2 cluster-server1, cluster-server2 Succeeded
When the migration has succeeded, hit the URL http://hostname:8888/LoadTest6/testservlet again to see that the requests are being failed over to cluster-server2 (click deployments, LoadTest6, workload monitoring tab).
Check in the setGrinderEnv.sh script if JAVA_HOME is set correctly. In the test.py script adjust the test URL. To start
the test, first start startConsole.sh and subsequently start startAgent.sh. In the console click action, start processes to
start the test.
Let us perform some monitoring. For example, during the load test we are interested in monitoring the paging. The
Linux memory handler manages the allocation of physical memory by freeing portions of physical memory when
possible. All processes use memory, but each process does not need all its allocated memory all the time. Taking
advantage of this fact, the kernel frees up physical memory by writing some or all of a process' memory to disk until it
is needed again. The kernel uses paging and swapping to perform this memory management. Paging refers to writing
portions (pages) of a process' memory to disk. Swapping refers to writing the entire process to disk. When pages are
written to disk, the event is called a page-out, and when pages are returned to physical memory, the event is called a
page-in. A page fault occurs when the kernel needs a page, finds it does not exist in physical memory because it has
been paged-out, and re-reads it in from disk. When the kernel detects that memory is running low, it attempts to free up
memory by paging out. Though this may happen briefly from time to time, if page-outs are plentiful and constant, the
kernel can reach a point where it is actually spending more time managing paging activity than running the applications,
and system performance suffers. To monitor paging we can use, for example, vmstat 60 10 (which runs vmstat with ten updates, 60 seconds apart). The following shows an example output:
# Output machine1 (where server1 and server3 are running)
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 0 1185576 40744 521408 0 0 63 10 384 347 6 3 90 1 0
2 0 0 1178368 40852 527824 0 0 0 132 2921 2509 21 10 68 0 0
1 0 0 1174384 40924 530588 0 0 0 63 2442 2130 17 9 74 0 0
0 0 0 1169796 41020 534336 0 0 0 77 2798 2408 19 10 71 0 0
0 0 0 1162728 41124 540188 0 0 1 130 2941 2614 18 11 70 0 0
1 0 0 1155028 41216 544020 0 0 0 76 2899 2549 21 10 69 0 0
0 0 0 1150812 41316 548804 0 0 0 105 3130 2790 21 11 68 0 0
0 0 0 1140256 41420 554224 0 0 0 120 2947 2595 19 11 69 0 0
1 0 0 1121904 41528 561132 0 0 0 140 2896 2557 19 11 70 0 0
3 0 0 1110760 41628 567212 0 0 0 123 2963 2657 18 13 68 0 0
# Output machine2 (where the admin server and server2 are running)
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 1622200 34556 574508 0 0 86 15 642 305 2 0 97 1 0
0 0 0 1617552 34656 578236 0 0 0 87 2685 1967 5 2 93 0 0
0 0 0 1615476 34748 580480 0 0 0 55 2309 1562 3 1 95 0 0
0 0 0 1612872 34844 582992 0 0 0 64 2616 1898 4 2 95 0 0
1 0 0 1609964 34948 585576 0 0 0 73 2735 2017 4 2 94 0 0
1 0 0 1605508 35052 589116 0 0 0 81 2712 1985 4 2 94 0 0
1 0 0 1602076 35152 592352 0 0 0 77 2913 2313 5 2 93 0 0
0 0 0 1596448 35260 595104 0 0 0 69 2773 2095 4 2 94 0 0
0 0 0 1590000 35360 599212 0 0 0 92 2693 2038 4 2 94 0 0
0 0 0 1587164 35456 601748 0 0 0 106 2633 1861 5 2 93 0 0
Memory
swpd: the amount of virtual memory used.
free: the amount of idle memory.
buff: the amount of memory used as buffers.
cache: the amount of memory used as cache.
inact: the amount of inactive memory. (-a option)
active: the amount of active memory. (-a option)
Swap
si: Amount of memory swapped in from disk (/s).
so: Amount of memory swapped to disk (/s).
IO
bi: Blocks received from a block device (blocks/s).
bo: Blocks sent to a block device (blocks/s).
System
in: The number of interrupts per second, including the clock.
cs: The number of context switches per second.
CPU
These are percentages of total CPU time.
us: Time spent running non-kernel code. (user time, including nice time)
sy: Time spent running kernel code. (system time)
id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
The values for si and so are both zero, indicating there are no page-ins and page-outs.
Let us look at some of the diagnostics collected by WebLogic. To this end, open the admin console, click deployments
and click the monitoring tab. This page displays monitoring information for all applications deployed to the domain.
The JMS tab displays monitoring information for all JMS destinations. Note that the application only sends a message when a new person is added.
The EJB (stateless and message-driven) tab displays monitoring information for all the Enterprise JavaBeans (EJBs), an
example output looks as follows:
Company example - Pooled Beans Current Count: 3, Access Total Count: 348936,
Transactions Committed Total Count: 348899,
Transactions Rolled Back Total Count: 37
CompanyMDB example - Access Total Count: 6979, Processed Message Count: 2347,
Transactions Committed Total Count: 6979
The JDBC tab displays monitoring information for all JDBC data sources, an example output looks as follows:
DataSource server1 - Active Connections High Count: 2, Connection Delay Time: 155,
PrepStmt Cache Access Count: 105396, Reserve Request Count: 116715,
Waiting For Connection Total: 0
DataSource server2 - Active Connections High Count: 2, Connection Delay Time: 127,
PrepStmt Cache Access Count: 10173, Reserve Request Count: 116716,
Waiting For Connection Total: 0
DataSource server3 - Active Connections High Count: 2, Connection Delay Time: 143,
PrepStmt Cache Access Count: 99600, Reserve Request Count: 116698,
Waiting For Connection Total: 0
The workload tab shows statistics for the Work Managers, constraints, and policies that are configured for application
deployments, an example output looks as follows:
default server1 - Pending Requests: 0, Completed Requests: 118664
default server2 - Pending Requests: 0, Completed Requests: 118616
default server3 - Pending Requests: 0, Completed Requests: 118635
You can use a JRockit Mission Control flight recording to see if there are any hiccups due to garbage collections. In general, JVM instances running on the same machine will typically not run the garbage collection at the same time. This means that we will have a JVM available to process application requests on other available CPUs. This is an advantage of vertical scaling that leads to a higher application throughput. To see how the application is performing we can add the WebLogic pack to JRockit Mission Control. To this end:
• If not already done so, start JRockit Mission Control (use java -Dhttp.proxyHost=your-proxy-host
-Dhttp.proxyPort=3128 -jar ${JAVA_HOME}/missioncontrol/mc.jar when behind a proxy)
• Click help and choose install plug-ins
• Open the tree JRockit Mission Control experimental update site, flight recorder plug-ins and check the
WebLogic tab pack option
• Click next and accept the license agreement
• Click next, review the features to be installed and click finish
• Restart JRockit Mission Control.
The WebLogic Diagnostic Framework (WLDF) can be configured to generate event data from components such as servlets, EJBs, JDBC, JTA and JMS. These events can be captured by a flight recording. The ability to generate event data is controlled by the WLDF diagnostic volume configuration:
• In the WebLogic console, click environment, servers
• Choose a specific server, for example AdminServer
• Click on the general, configuration tab
• Set the diagnostic volume option to the desired option
As the load rises, WLDF automatically throttles the number of requests that are selected for event generation.
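The same setting can be scripted with WLST, for example (a sketch assuming the AdminServer and a volume of High):

edit();
startEdit();
# The diagnostic volume lives on the server's diagnostic configuration
cd('/Servers/AdminServer/ServerDiagnosticConfig/AdminServer');
set('WLDFDiagnosticVolume', 'High');
save();
activate();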
Start a flight recording (with the load test running) and let it run for about 30 minutes. To get some insight into what effect the garbage collection had on the application we use the events environment of the flight recording. We enable the following events:
following events:
• Java application - Java blocked
• Java application - Java wait
• Java virtual machine - GC - garbage collection
• WebLogic Server - EJB - EJB business method invocation
• WebLogic Server - JDBC - JDBC statement execute
• WebLogic Server - Servlet - Servlet invocation
The thread group 'thread group for queue: weblogic.socket.Muxer' contains the muxer threads. As a native muxer is used and we have 2 CPUs available, the number of threads is equal to #CPUs + 1 = 3. These threads always show this behavior, i.e., one thread at a time is active, picking requests off the sockets and putting them in the execute queue. There they are picked up by an execute thread from the thread group 'pooled threads', which processes them, i.e., the servlet invocation (light green), the EJB business method invocation (blue) and the JDBC statement execution (purple). Note that during the garbage collection the execute threads have to wait (yellow), which introduces a pause in the processing of the work defined for the execute thread.
Multiple machines
For this you need your neighbor's collaboration (or a big enough machine to run two VMs). The post Deploy WebLogic12c to Multiple Machines - http://middlewaremagic.com/weblogic/?p=7795 shows how to add extra managed servers to different machines. Refer to the post for detailed steps.
9 Scripting
The following shows an example of how to set up a domain as was done in the assignments above. We first define a number of parameters:
beahome = '/home/weblogic/weblogic12.1.1';
pathseparator = '/';
adminusername = 'weblogic';
adminpassword = 'magic12c';
adminservername='AdminServer';
adminserverurl='t3://hostname:7001';
domainname = 'script_domain';
domaindirectory = beahome + pathseparator + 'configuration' + pathseparator + 'domains' +
pathseparator + domainname;
domaintemplate = beahome + pathseparator + 'wlserver_12.1' + pathseparator + 'common' +
pathseparator + 'templates' + pathseparator + 'domains' + pathseparator + 'wls.jar';
jvmdirectory = '/home/weblogic/jrockit-jdk1.6.0_29-R28.2.2-4.1.0';
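Using these parameters, the domain itself can be created from the WebLogic Server template; a sketch using offline WLST (the course script may differ in details):

readTemplate(domaintemplate);
# Configure the admin server
cd('Servers/AdminServer');
set('ListenPort', 7001);
# Set the admin credentials (the template's default domain name is base_domain)
cd('/Security/base_domain/User/weblogic');
cmo.setName(adminusername);
cmo.setPassword(adminpassword);
# Write the domain in production mode and close the template
setOption('ServerStartMode', 'prod');
setOption('OverwriteDomain', 'true');
writeDomain(domaindirectory);
closeTemplate();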
To make changes to the created domain, we start the admin server and connect to it:
print 'START ADMIN SERVER';
startServer(adminservername, domainname, adminserverurl, adminusername, adminpassword,
domaindirectory);
server1.getServerStart().setJavaVendor('Oracle');
server1.getServerStart().setArguments('-jrockit -Xms1024m -Xmx1024m -Xns256m -Xgc:throughput');
We created managed servers that are coupled to machine1 and the cluster. The JVM is tuned by using -jrockit -Xms1024m -Xmx1024m -Xns256m -Xgc:throughput. By choosing throughput as the optimization strategy the following defaults are present:
• The compaction is configured as -XXcompaction:abortable=false, percentage=6.25, heapParts=4096,
maxReferences=299900
• The thread local area size is configured as -XXtlasize:min=2k, preferred=16k, wastelimit=2k. Note that the
preferred size depends on the heap size and lies between 16k and 64k
Additional tuning may be necessary when compaction causes long garbage collection pauses. To find out the impact compaction has on the garbage collection pause time, we can run a flight recording and examine the compaction pause parts of old garbage collections. In general, the compaction pause time depends on the compaction ratio (percentage, or externalPercentage and internalPercentage) and the maximum number of references. In multi-threaded applications where threads allocate lots of objects, it might be beneficial to increase the TLA size. Caution must be taken, however, not to make the TLA size too large, as this increases fragmentation and as a result more garbage collections need to be run in order to allocate new objects.
migratabletargetserver = migratabletargets[0];
Configures a migration service that uses consensus-based leasing. The migration policy is set to Auto-Migrate Exactly-Once Services, which means that the service will run if at least one candidate server is available in the cluster. Note that
this can lead to the case that all migratable targets are running on a single server. The migratable target performs health
monitoring on the deployed migratable services and has a direct communication channel to the leasing system. When
bad health is detected the migratable target requests the lease to be released in order to trigger a migration:
• In the case of JTA, the server defaults to shutting down if the JTA system reports itself unhealthy, for example,
if an I/O error occurs when accessing the default store. When a server fails, JTA is migrated to a candidate
server
• In the case of JMS, the JMS server communicates its health to the monitoring system. When a dependent
service such as a persistent store fails, for example due to errors in the I/O layer, it is detected by the migration
framework. In this case the JMS server along with the persistent store (and path service when configured) is
migrated to a candidate server
targets.remove(migratabletargetserver);
targets.append(cluster);
targets.remove(cluster);
targets.append(jmsserver);
resource = module.getJMSResource();
targets.remove(jmsserver);
targets.append(cluster);
• JMS Server - JMS servers act as management containers for the queues and topics in the JMS modules that are
targeted to them. A JMS server's primary responsibility for its destinations is to maintain information on what
persistent store is used for any persistent messages that arrive on the destinations, and to maintain the states of
durable subscribers created on the destinations
• Path Service - A path service is a persistent map that can be used to store the mapping of a group of messages in
a Message Unit-of-Order to a messaging resource in a cluster. It provides a way to enforce ordering by pinning
messages to a member of a cluster hosting servlets, distributed queue members, or store-and-forward agents.
Note that the FileStore, JMSServer and PathService are all targeted to a migratable target
• JMS Module - JMS system resources are configured and stored as modules similar to standard Java EE
modules. Such resources include queues, topics, connection factories, templates, destination keys, quota,
distributed queues, distributed topics, foreign servers, and JMS store-and-forward (SAF) parameters. The JMS
Module is targeted to the cluster
• JMS Resources:
◦ Connection Factory is XA enabled and has the UnitOfOrder set to System. The connection factory is
targeted directly to the JMS module
◦ Uniform Distributed Queue with a round-robin load balancing policy and has a unit of order routing that
uses the PathService. Note that the uniform distributed queue is targeted to a SubDeployment that is
targeted to the JMSServer
Creates a data source that has global transactions enabled by using the Logging Last Resource protocol.
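A WLST sketch of such a data source (the names, driver, and URL are assumptions; the key setting is the LoggingLastResource transaction protocol):

edit();
startEdit();
jdbcsr = cmo.createJDBCSystemResource('loadtest-ds');
resource = jdbcsr.getJDBCResource();
resource.setName('loadtest-ds');
# JNDI name used by the application (assumed)
resource.getJDBCDataSourceParams().setJNDINames(['jdbc/loadtest']);
# Enable global transactions with Logging Last Resource
resource.getJDBCDataSourceParams().setGlobalTransactionsProtocol('LoggingLastResource');
driver = resource.getJDBCDriverParams();
driver.setDriverName('com.mysql.jdbc.Driver');
driver.setUrl('jdbc:mysql://localhost:3306/loadtest');
driver.setPassword('magic12c');
driver.getProperties().createProperty('user').setValue('weblogic');
jdbcsr.addTarget(getMBean('/Clusters/loadtest-cluster'));
save();
activate();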
Try to create a script, by using the examples above, that sets up the cluster we created in the assignments and deploys the load-test application. Also create start and stop scripts.