Wednesday, March 4, 2015

keep first 4 characters of email address and replace the rest with * in java code

String s = "msdgsalasdasdehi@gmail.com";
int atIndex = s.indexOf("@");
String localPart = String.format("%-" + atIndex + "s", s.substring(0, 4)).replaceAll("\\s", "*");
String domainPart = s.substring(atIndex);
System.out.println(localPart + domainPart);

Tuesday, February 17, 2015

Set Debian to boot to a shell

Edit the following file:
/etc/default/grub
$ sudo vi /etc/default/grub

1- Find the following line and comment it out:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet"

2- Find the following line:
        GRUB_CMDLINE_LINUX=""
and change it to:
        GRUB_CMDLINE_LINUX="text"
If the line is not present, you can add it to the file.

3- Uncomment the following line:
        GRUB_TERMINAL=console
If the line is not present, you can add it to the file.

4- Save the file and run:
$ sudo update-grub

I found it here:
ask.xmodulo.com/boot-into-command-line-ubuntu-debian.html

Sunday, December 29, 2013

heroku postgresql connection extra properties needed in java

ssl=true
sslfactory=org.postgresql.ssl.NonValidatingFactory
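
As a minimal sketch of applying these two properties from Java (assuming the PostgreSQL JDBC driver is on the classpath; the host, database name, and credentials below are placeholders, since on Heroku the real values come from the DATABASE_URL config variable):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class HerokuPgConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");     // placeholder
        props.setProperty("password", "dbpass"); // placeholder
        // the two extra properties needed for Heroku PostgreSQL:
        props.setProperty("ssl", "true");
        props.setProperty("sslfactory", "org.postgresql.ssl.NonValidatingFactory");
        // placeholder URL
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://host:5432/dbname", props)) {
            System.out.println("connected: " + !con.isClosed());
        }
    }
}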

Wednesday, November 20, 2013

deploy a grails app on heroku using git

  1. cd <project-root-directory>
  2. grails run-app (to verify the app runs locally)
  3. git init
  4. git add application.properties grails-app test web-app lib --force
  5. git commit -m init
  6. heroku create (creates the app and adds the heroku git remote, if not already done)
  7. git push heroku master

how to install a third-party app you have a binary for, to /usr/bin on debian

  1. sudo update-alternatives --install "/usr/bin/<app-name>" "<app-name>" "<full-path-to-app-executable>" 1
  2. sudo update-alternatives --config <app-name>
For example, assuming you are going to install an unzipped package of NetBeans located in /opt/netbeans-7.4:
  1. sudo update-alternatives --install "/usr/bin/netbeans" "netbeans" "/opt/netbeans-7.4/bin/netbeans" 1
  2. sudo update-alternatives --config netbeans

Saturday, August 24, 2013

convert unicode string to hexadecimal in java code

import javax.swing.JOptionPane;

public class HexConverter {

    public static void main(String[] args) {
        System.out.println(toHex(JOptionPane.showInputDialog("input")));
    }

    // Convert each UTF-16 char of the input into a 4-digit hex code, separated by '|'.
    public static String toHex(String arg) {
        StringBuilder res = new StringBuilder();
        for (char c : arg.toCharArray()) {
            res.append('|').append(String.format("%4s", Integer.toHexString(c)).replace(' ', '0'));
        }
        return res.toString();
    }
}
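
For example, the input Hi! produces |0048|0069|0021.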

Saturday, May 25, 2013

db2 9.7 Diagnosing and resolving locking problems

To resolve a locking problem, you need to start by diagnosing the type of lock event causing the SQL query performance slowdown, or query completion failure, and the SQL statement or statements involved. The steps to help in diagnosing the type of locking problem and the steps that can then be taken to help resolve the locking issue are provided here.

Introduction

A locking problem is the proper diagnosis if applications are failing to complete their tasks, or if SQL query performance is slowing down, because of locks. Therefore, the ideal objective is not to have any lock timeouts or deadlocks on a database system, both of which result in applications failing to complete their tasks.
Lock waits are normal expected events, but if the time spent waiting for a lock becomes large, then lock waits can slow down both SQL query performance and completion of an application. Excessive lock wait durations have a risk of becoming lock timeouts which result in the application not completing its tasks.
Lock escalations are a consideration as a locking problem when they contribute to causing lock timeouts. Ideally, the objective is not to have any lock escalations, but a small number can be acceptable if adverse effects are not occurring.
It is suggested that you monitor lock wait, lock timeout, and deadlock locking events at all times; typically at the workload level for lock waits, and at the database level for lock timeouts and deadlocks.
The diagnosis of the type of locking problem that is occurring and its resolution begins with the collection of information and looking for diagnostic indicators. The following sections help to guide you through this process.

Collect information

In general, to be able to objectively assess that your system is demonstrating abnormal behavior which can include processing delays and poor performance, you must have information that describes the typical behavior (baseline) of your system. A comparison can then be made between your observations of suspected abnormal behavior and the baseline. Collecting baseline data, by scheduling periodic operational monitoring tasks, is a key component of the troubleshooting process. For more detailed information about establishing the baseline operation of your system, see: "Operational monitoring of system performance".
To confirm what type of locking problem is the reason for your SQL query performance slowdown or query completion failure, it is necessary to collect information that helps to identify what type of lock event is involved, which application is requesting or holding the lock, what the application was doing during the event, and which SQL statements are noticeably slow.
The creation of a locking event monitor, use of a table function, or use of the db2pd command can collect this type of information. The information gathered by the locking event monitor can be categorized into three main categories:
  • Information about the lock in question
  • Information about the application requesting this lock and its current activities. In the case of a deadlock, this is information about the statement referred to as the victim.
  • Information about the application owning the lock and its current activities. In the case of a deadlock, this is information about the statement referred to as the participant.
For instructions about how to monitor lock wait, lock timeout, and deadlock locking events, see: Monitoring locking events.
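
As a sketch of the table-function approach from JDBC (the connection URL and credentials are placeholders; lock_waits, lock_timeouts, deadlocks, and lock_wait_time are the monitor elements named elsewhere in this article, but verify the column set against your server):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LockCounters {
    public static void main(String[] args) throws Exception {
        // connection URL and credentials are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/sample", "db2inst1", "password");
             Statement st = con.createStatement();
             // '' means all workloads; -2 means all members
             ResultSet rs = st.executeQuery(
                 "SELECT workload_name, lock_waits, lock_timeouts, deadlocks, lock_wait_time"
               + " FROM TABLE(MON_GET_WORKLOAD('', -2)) AS t")) {
            while (rs.next()) {
                System.out.printf("%-24s waits=%d timeouts=%d deadlocks=%d waitTime=%d%n",
                        rs.getString(1), rs.getLong(2), rs.getLong(3),
                        rs.getLong(4), rs.getLong(5));
            }
        }
    }
}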

Look for diagnostic indicators

The locking event monitor, a table function, or running the db2pd command can collect information that can help isolate the nature of a locking problem. Specifically, the following topics contain diagnostically indicative information to help you to diagnose and confirm the particular type of locking problem you are experiencing.
  • If you are experiencing long wait times and no lock timeouts, then you likely have a lock wait problem. To confirm: t0055234.html
  • If you are experiencing a greater number of deadlocks than the baseline number, then you likely have a deadlock problem. To confirm: t0055236.html
  • If you are experiencing an increased number of lock timeouts and the locktimeout database configuration parameter is set to a nonzero time value, then you likely have a lock timeout problem. To confirm (also consider lock wait problem): t0055235.html
  • If you are experiencing a higher than typical number of lock waits and the locking event monitor indicates that lock escalations are occurring (Yes), then you likely have a lock escalation problem. To confirm: t0055237.html

db2 9.7 Resolving lock escalation problems

After diagnosing a lock escalation problem, the next step is to attempt to resolve the issue resulting from the database manager automatically escalating locks from row level to table level. The guidelines provided here can help you to resolve the lock escalation problem you are experiencing and help you to prevent such future incidents.

About this task

The objective is to minimize lock escalations, or eliminate them, if possible. A combination of good application design and database configuration for lock handling can minimize or eliminate lock escalations. Lock escalations can lead to reduced concurrency and potential lock timeouts, so addressing lock escalations is an important task. The lock_escals monitor element and messages written to the administration notification log can be used to identify and correct lock escalations.
First, ensure that lock escalation information is being recorded. Set the value of the mon_lck_msg_lvl database configuration parameter to 1. This is the default setting. When a lock escalation event occurs, information regarding the lock, workload, application, table, and error SQLCODEs is recorded. The query is also logged if it is a currently executing dynamic SQL statement.

Before you begin

Confirm that you are experiencing a lock escalation problem by taking the necessary diagnostic steps for locking problems outlined in Diagnosing and resolving locking problems.

Procedure

Use the following steps to diagnose the cause of the unacceptable lock escalation problem and to apply a remedy:
  1. Gather information from the administration notification log about all tables whose locks have been escalated and the applications involved. This log file includes the following information:
    • The number of locks currently held
    • The number of locks needed before lock escalation is completed
    • The table identifier and table name of each table being escalated
    • The number of non-table locks currently held
    • The new table-level lock to be acquired as part of the escalation. Usually, an S or X lock is acquired.
    • The internal return code that is associated with the acquisition of the new table-level lock
  2. Use the administration notification log information about the applications involved in the lock escalations to decide how to resolve the escalation problems. Consider the following options:
    • Check and possibly adjust either the maxlocks or locklist database configuration parameters, or both. In a partitioned database system, make this change on all database partitions. The value of the locklist configuration parameter may be too small for your current workload. If multiple applications are experiencing lock escalation, this could be an indication that the lock list size needs to be increased. Growth in workloads or the addition of new applications could cause the lock list to be too small. If only one application is experiencing lock escalations, then adjusting the maxlocks configuration parameter could resolve this. However, you may want to consider increasing locklist at the same time you increase maxlocks - if one application is allowed to use more of the lock list, all the other applications could now exhaust the remaining locks available in the lock list and experience escalations.
    • You might want to consider the isolation level at which the application and the SQL statements are being run, for example RR, RS, CS, or UR. RR and RS isolation levels tend to cause more escalations because locks are held until a COMMIT is issued. CS and UR isolation levels do not hold locks until a COMMIT is issued, and therefore lock escalations are less likely. Use the lowest possible isolation level that can be tolerated by the application.
    • Increase the frequency of commits in the application, if business needs and the design of the application allow this. Increasing the frequency of commits reduces the number of locks that are held at any given time. This helps to prevent the application from reaching the maxlocks value, which triggers a lock escalation, and helps to prevent all the applications from exhausting the lock list.
    • You can modify the application to acquire table locks using the LOCK TABLE statement. This is a good strategy for tables where concurrent access by many applications and users is not critical; for example, when the application uses a permanent work table (as opposed to a DGTT) that is uniquely named for this instance of the application. Acquiring table locks is a good strategy in this case because it reduces the number of locks held by the application and increases performance, since row locks no longer need to be acquired and released on the rows accessed in the work table. (A JDBC sketch of this approach follows the list.)
      If the application does not have work tables and you cannot increase the values for the locklist or maxlocks configuration parameters, then you can have the application acquire a table lock. However, care must be taken in choosing the table or tables to lock. Avoid tables that are accessed by many applications and users, because locking these tables will lead to concurrency problems which can affect response time and, in the worst case, can lead to applications experiencing lock timeouts.
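
As a rough illustration of the LOCK TABLE option above (the connection details and table name are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TableLockDemo {
    public static void main(String[] args) throws Exception {
        // connection URL, credentials, and table name are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/sample", "db2inst1", "password")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // take one table-level lock up front instead of many row locks,
                // so the lock list does not fill up and escalation is avoided
                st.execute("LOCK TABLE app_work_table IN EXCLUSIVE MODE");
                st.executeUpdate("UPDATE app_work_table SET processed = 1");
                con.commit(); // the table lock is released at commit
            }
        }
    }
}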

What to do next

Rerun the application or applications to ensure that the locking problem has been eliminated by checking the administration notification log for lock-related entries.

db2 9.7 Monitoring database locking

Diagnosing and correcting lock contention situations in large DB2® environments can be complex and time consuming. The lock event monitor and other facilities are designed to simplify this task by collecting locking data.

Introduction

The lock event monitor is used to capture descriptive information about lock events at the time that they occur. The information captured identifies the key applications involved in the lock contention that resulted in the lock event. Information is captured for both the lock requestor (the application that received the deadlock or lock timeout error, or waited for a lock for more than the specified amount of time) and the current lock owner.
The information collected by the lock event monitor is written in binary format to an unformatted event table in the database. The captured data is processed in a post-capture step, which improves the efficiency of the capture process.
You can also directly access DB2 relational monitoring interfaces (table functions) to collect lock event information by using either dynamic or static SQL.
Determining if a deadlock or lock timeout has occurred is also simplified. Messages are written to the administration notification log when either of these events occurs; this supplements the SQL0911N (sqlcode -911) error returned to the application. In addition, a notification of lock escalations is also written to the administration notification log; this information can be useful in adjusting the size of the lock table and the amount of the table an application can use. There are also counters for lock timeouts (lock_timeouts), lock waits (lock_waits), and deadlocks (deadlocks) that can be checked.
The types of activities for which locking data can be captured include the following:
  • SQL statements, such as:
    • DML
    • DDL
    • CALL
  • LOAD command
  • REORG command
  • BACKUP DATABASE command
  • Utility requests
The lock event monitor replaces the deprecated deadlock event monitors (CREATE EVENT MONITOR FOR DEADLOCKS statement and DB2DETAILDEADLOCK) and the deprecated lock timeout reporting feature (DB2_CAPTURE_LOCKTIMEOUT registry variable) with a simplified and consistent interface for gathering locking event data, and adds the ability to capture data on lock waits.

Functional overview

Two steps are required to enable the capturing of lock event data using the locking event monitor:
  1. You must create a LOCK EVENT monitor using the CREATE EVENT MONITOR FOR LOCKING statement. You provide a name for the monitor and the name of an unformatted event table into which the lock event data will be written.
  2. You must specify the level for which you want lock event data captured by using one of the following methods:
    • You can specify particular workloads by either altering an existing workload, or by creating a new workload using the CREATE or ALTER WORKLOAD statements. At the workload level you must specify the type of lock event data you want captured (deadlock, lock timeout or lock wait), and whether you want the SQL statement history and input values for the applications involved in the locking. For lock waits you must also specify the amount of time that an application will wait for a lock, after which data is captured for the lock wait.
    • You can collect data at the database level and affect all DB2 workloads by setting the appropriate database configuration parameter:
      mon_lockwait
      This parameter controls the generation of lock wait events
      Best practice is to enable lock wait data collection at the workload level.
      mon_locktimeout
      This parameter controls the generation of lock timeout events
      Best practice is to enable lock timeout data collection at the database level if lock timeouts are unexpected by the application. Otherwise, enable it at the workload level.
      mon_deadlock
      This parameter controls the generation of deadlock events
      Best practice is to enable deadlock data collection at the database level.
      mon_lw_thresh
      This parameter controls the amount of time spent in lock wait before an event for mon_lockwait is generated
The capturing of SQL statement history and input values incurs additional overhead, but this level of detail is often needed to successfully debug a locking problem.
After a locking event has occurred, the binary data in the unformatted event table can be transformed into an XML or a text document using a supplied Java-based application called db2evmonfmt. In addition, you can format the binary event data in the unformatted event table BLOB column into either an XML report document, using the EVMON_FORMAT_UE_TO_XML table function, or into a relational table, using the EVMON_FORMAT_UE_TO_TABLES procedure.
To aid in the determination of what workloads should be monitored for locking events, the administration notification log can be reviewed. Each time a deadlock or lock timeout is encountered, a message is written to the log. These messages identify the workload in which the lock requestor and lock owner or owners are running, and the type of locking event. There are also counters at the workload level for lock timeouts (lock_timeouts), lock waits (lock_waits), and deadlocks (deadlocks) that can be checked.
Information collected for a locking event
Some of the information for lock events collected by the lock event monitor includes the following:
  • The lock that resulted in an event
  • The application holding the lock that resulted in the lock event
  • The applications that were waiting for or requesting the lock that resulted in the lock event
  • What the applications were doing during the lock event
Limitations
  • There is no automatic purging of the lock event data written to the unformatted event table. You must periodically purge data from the table.
  • You can output the collected event monitor data to only the unformatted event table. Outputs to file, pipe, and table are not supported.
  • It is suggested that you create only one locking event monitor per database. Each additional event monitor only creates a copy of the same data.

Deprecated lock monitoring functionality

The deprecated detailed deadlock event monitor, DB2DETAILDEADLOCK, is created by default for each database and starts when the database is activated. The DB2DETAILDEADLOCK event monitor must be disabled and removed, otherwise both the deprecated and new event monitors will be collecting data and will significantly affect performance.
To remove the DB2DETAILDEADLOCK event monitor, issue the following SQL statements:
SET EVENT MONITOR DB2DETAILDEADLOCK STATE 0
DROP EVENT MONITOR DB2DETAILDEADLOCK

db2 9.7 Collecting lock event data and generating reports

You can use the lock event monitor to collect lock timeout, lock wait, and deadlock information to help identify and resolve locking problems. This task describes how to obtain a readable text report from the lock event data, which is collected in binary form in an unformatted event table.

About this task

The lock event monitor collects relevant information that helps with the identification and resolution of locking problems. For example, some of the information the lock event monitor collects for a lock event is as follows:
  • The lock that resulted in a lock event
  • The applications requesting or holding the lock that resulted in a lock event
  • What the applications were doing during the lock event
This task provides instructions for collecting lock event data for a given workload. You might want to collect lock event data under the following conditions:
  • You notice that lock wait values are longer than usual when using the MON_GET_WORKLOAD table function.
  • An application returns a -911 SQL return code with reason code 68 in the administration notification log, stating that "The transaction was rolled back due to a lock timeout." See also message SQL0911N for further details.
  • You notice a deadlock event message in the administration notification log (-911 SQL return code with reason code 2, stating that "The transaction was rolled back due to a deadlock."). The log message indicates that the lock event occurred between two applications, for example, Application A and B, where A is part of workload FINANCE and B is part of workload PAYROLL. See also message SQL0911N for further details.
Restrictions
To view data values, you need the EXECUTE privilege on the EVMON_FORMAT_UE_* routines, which the SQLADM and DBADM authorities hold implicitly. You also need SELECT privilege on the unformatted event table, which by default is held by users with the DATAACCESS authority and by the creator of the event monitor and the associated unformatted event table.

Before you begin

To create the locking event monitor and collect lock event monitor data, you must have DBADM or SQLADM authority.

Procedure

To collect detailed information regarding potential future lock events, perform the following steps:
  1. Create a lock event monitor called lockevmon by using the CREATE EVENT MONITOR FOR LOCKING statement, as shown in the following example:
    CREATE EVENT MONITOR lockevmon FOR LOCKING
       WRITE TO UNFORMATTED EVENT TABLE
    Note: The following lists important points to remember when creating an event monitor:
    • You can create event monitors ahead of time and not worry about using up disk space since nothing is written until you activate the data collection at the database or workload level
    • In a partitioned database environment, ensure that the event monitors are placed in a partitioned table space across all nodes. Otherwise, lock events will be missed at partitions where the partitioned table space is not present.
    • Ensure that you set up the table space and buffer pool so that accesses to the tables to obtain event data cause minimal interference with high-performance work.
  2. Activate the lock event monitor called lockevmon by running the following statement:
    SET EVENT MONITOR lockevmon STATE 1
  3. To enable lock event data collection at the workload level, issue the ALTER WORKLOAD statement with one of the following COLLECT clauses: COLLECT LOCK TIMEOUT DATA, COLLECT DEADLOCK DATA, or COLLECT LOCK WAIT DATA. Specify the WITH HISTORY option on the COLLECT clause. Setting the corresponding database configuration parameter instead enables lock event data collection at the database level, which affects all workloads.
    For lock wait events
    To collect lock wait data for any lock acquired after 5 seconds for the FINANCE application and to collect lock wait data for any lock acquired after 10 seconds for the PAYROLL application, issue the following statements:
    ALTER WORKLOAD finance COLLECT LOCK WAIT DATA WITH HISTORY AND VALUES
       FOR LOCKS WAITING MORE THAN 5 SECONDS
    ALTER WORKLOAD payroll COLLECT LOCK WAIT DATA 
       FOR LOCKS WAITING MORE THAN 10 SECONDS WITH HISTORY
    To set the mon_lockwait database configuration parameter with the HIST_AND_VALUES input data value for the SAMPLE database, and to set the mon_lw_thresh database configuration parameter to 10 seconds (the value is specified in microseconds), issue the following commands:
    db2 update db cfg for sample using mon_lockwait hist_and_values
    db2 update db cfg for sample using mon_lw_thresh 10000000
    For lock timeout events
    To collect lock timeout data for the FINANCE and PAYROLL applications, issue the following statements:
    ALTER WORKLOAD finance COLLECT LOCK TIMEOUT DATA WITH HISTORY
    ALTER WORKLOAD payroll COLLECT LOCK TIMEOUT DATA WITH HISTORY
    To set the mon_locktimeout database configuration parameter with HIST_AND_VALUES input data value for the SAMPLE database, issue the following command:
    db2 update db cfg for sample using mon_locktimeout hist_and_values
    For deadlock events
    To collect data for the FINANCE and PAYROLL applications, issue the following statements:
    ALTER WORKLOAD finance COLLECT DEADLOCK DATA WITH HISTORY
    ALTER WORKLOAD payroll COLLECT DEADLOCK DATA WITH HISTORY
    To set the mon_deadlock database configuration parameter with HIST_AND_VALUES input data value for the SAMPLE database, issue the following command:
    db2 update db cfg for sample using mon_deadlock hist_and_values
  4. Rerun the workload in order to receive another lock event notification.
  5. Connect to the database.
  6. Obtain the locking event report using one of the following approaches:
    1. Use the XML parser tool, db2evmonfmt, to produce a flat-text report based on the event data collected in the unformatted event table and using the default stylesheet, for example:
      java db2evmonfmt -d db_name -ue table_name -ftext -u user_id -p password
    2. Use the EVMON_FORMAT_UE_TO_XML table function to obtain an XML document.
    3. Use the EVMON_FORMAT_UE_TO_TABLES procedure to output the data into a relational table.
  7. Analyze the report to determine the reason for the lock event problem and resolve it.
  8. Turn OFF lock data collection for both FINANCE and PAYROLL applications by running the following statements or resetting the database configuration parameters:
    For lock wait events
    ALTER WORKLOAD finance COLLECT LOCK WAIT DATA NONE
    ALTER WORKLOAD payroll COLLECT LOCK WAIT DATA NONE
    To reset the mon_lockwait database configuration parameter to the default NONE input data value for the SAMPLE database, and to reset the mon_lw_thresh database configuration parameter back to its default value of 5 seconds, issue the following commands:
    db2 update db cfg for sample using mon_lockwait none
    db2 update db cfg for sample using mon_lw_thresh 5000000
    For lock timeout events
    ALTER WORKLOAD finance COLLECT LOCK TIMEOUT DATA NONE
    ALTER WORKLOAD payroll COLLECT LOCK TIMEOUT DATA NONE
    To reset the mon_locktimeout database configuration parameter with the default NONE input data value for the SAMPLE database, issue the following command:
    db2 update db cfg for sample using mon_locktimeout none
    For deadlock events
    ALTER WORKLOAD finance COLLECT DEADLOCK DATA NONE
    ALTER WORKLOAD payroll COLLECT DEADLOCK DATA NONE
    To reset the mon_deadlock database configuration parameter with the default WITHOUT_HIST input data value for the SAMPLE database, issue the following command:
    db2 update db cfg for sample using mon_deadlock without_hist

db2 9.7 Types of data to collect for operational monitoring

Types of data to collect for operational monitoring

Several types of data are useful to collect for ongoing operational monitoring.
  • A basic set of DB2 system performance monitoring metrics.
  • DB2 configuration information
    Taking regular copies of database and database manager configuration, DB2 registry variables, and the schema definition helps provide a history of any changes that have been made, and can help to explain changes that arise in monitoring data.
  • Overall system load
    If CPU or I/O utilization is allowed to approach saturation, this can create a system bottleneck that might be difficult to detect using just DB2 snapshots. As a result, the best practice is to regularly monitor system load with vmstat and iostat (and possibly netstat for network issues) on Linux and UNIX-based systems, and perfmon on Windows. You can also use the administrative views, such as ENV_SYS_RESOURCES, to retrieve operating system, CPU, memory, and other information related to the system (a query sketch appears at the end of this section). Typically you look for changes in what is normal for your system, rather than for specific one-size-fits-all values.
  • Throughput and response time measured at the business logic level
    An application view of performance, measured above DB2, at the business logic level, has the advantage of being most relevant to the end user, plus it typically includes everything that could create a bottleneck, such as presentation logic, application servers, web servers, multiple network layers, and so on. This data can be vital to the process of setting or verifying a service level agreement (SLA).
 The DB2 system performance monitoring elements and system load data are compact enough that even if they are collected every five to fifteen minutes, the total data volume over time is irrelevant in most systems. Likewise, the overhead of collecting this data is typically in the one to three percent range of additional CPU consumption, which is a small price to pay for a continuous history of important system metrics. Configuration information typically changes relatively rarely, so collecting this once a day is usually frequent enough to be useful without creating an excessive amount of data.
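
As a rough sketch of reading such an administrative view from Java (the connection URL and credentials are placeholders; the columns are printed generically, since the exact column set of ENV_SYS_RESOURCES varies by platform):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class SysResources {
    public static void main(String[] args) throws Exception {
        // connection URL and credentials are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/sample", "db2inst1", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM SYSIBMADM.ENV_SYS_RESOURCES")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // print every column name/value pair without assuming the schema
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    System.out.println(md.getColumnName(i) + " = " + rs.getString(i));
                }
            }
        }
    }
}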

db2 9.7 Basic set of system performance monitor elements

About 10 metrics of system performance provide a good basic set to use in an on-going operational monitoring effort.
There are hundreds of metrics to choose from, but collecting all of them can be counter-productive due to the sheer volume of data produced. You want metrics that are:
  • Easy to collect - You don't want to have to use complex or expensive tools for everyday monitoring, and you don't want the act of monitoring to significantly burden the system.
  • Easy to understand - You don't want to have to look up the meaning of the metric each time you see it.
  • Relevant to your system - Not all metrics provide meaningful information in all environments.
  • Sensitive, but not too sensitive - A change in the metric should indicate a real change in the system; the metric should not fluctuate on its own.
This starter set includes about 10 metrics (a JDBC sketch that computes a couple of them follows the list):
  • The number of transactions executed:
    TOTAL_APP_COMMITS 
    This provides an excellent base level measurement of system activity.
  • Buffer pool hit ratios, measured separately for data, index, and temporary data:
    100 * (POOL_DATA_L_READS - POOL_DATA_P_READS) / POOL_DATA_L_READS
    100 * (POOL_INDEX_L_READS - POOL_INDEX_P_READS) / POOL_INDEX_L_READS
    100 * (POOL_TEMP_DATA_L_READS - POOL_TEMP_DATA_P_READS) / POOL_TEMP_DATA_L_READS 
    100 * (POOL_TEMP_INDEX_L_READS - POOL_TEMP_INDEX_P_READS)
      / POOL_TEMP_INDEX_L_READS
    Buffer pool hit ratios are one of the most fundamental metrics, and give an important overall measure of how effectively the system is exploiting memory to avoid disk I/O. Hit ratios of 80-85% or better for data and 90-95% or better for indexes are generally considered good for an OLTP environment, and of course these ratios can be calculated for individual buffer pools using data from the buffer pool snapshot.
    Although these metrics are generally useful, for systems such as data warehouses that frequently perform large table scans, data hit ratios are often irretrievably low, because data is read into the buffer pool and then not used again before being evicted to make room for other data.
  • Buffer pool physical reads and writes per transaction:
    (POOL_DATA_P_READS + POOL_INDEX_P_READS +
      POOL_TEMP_DATA_P_READS + POOL_TEMP_INDEX_P_READS)
      / TOTAL_APP_COMMITS
    
    (POOL_DATA_WRITES + POOL_INDEX_WRITES)
      / TOTAL_APP_COMMITS
    These metrics are closely related to buffer pool hit ratios, but have a slightly different purpose. Although you can consider target values for hit ratios, there are no possible targets for reads and writes per transaction. Why bother with these calculations? Because disk I/O is such a major factor in database performance, it is useful to have multiple ways of looking at it. As well, these calculations include writes, whereas hit ratios only deal with reads. Lastly, in isolation, it is difficult to know, for example, whether a 94% index hit ratio is worth trying to improve. If there are only 100 logical index reads per hour, and 94 of them are in the buffer pool, working to keep those last 6 from turning into physical reads is not a good use of time. However, if a 94% index hit ratio were accompanied by a statistic that each transaction did twenty physical reads (which could be further broken down by data and index, regular and temporary), the buffer pool hit ratios might well deserve some investigation.
    The metrics are not just physical reads and writes, but are normalized per transaction. This trend is followed through many of the metrics. The purpose is to decouple metrics from the length of time data was collected, and from whether the system was very busy or less busy at that time. In general, this helps ensure that similar values for metrics are obtained, regardless of how and when monitoring data is collected. Some amount of consistency in the timing and duration of data collection is a good thing; however, normalization reduces it from being critical to being a good idea.
  • The ratio of database rows read to rows selected:
    ROWS_READ / ROWS_RETURNED
    This calculation gives an indication of the average number of rows that are read from database tables in order to find the rows that qualify. Low numbers are an indication of efficiency in locating data, and generally show that indexes are being used effectively. For example, this number can be very high in the case where the system does many table scans, and millions of rows need to be inspected to determine if they qualify for the result set. On the other hand, this statistic can be very low in the case of access to a table through a fully-qualified unique index. Index-only access plans (where no rows need to be read from the table) do not cause ROWS_READ to increase.
    In an OLTP environment, this metric is generally no higher than 2 or 3, indicating that most access is through indexes instead of table scans. This metric is a simple way to monitor plan stability over time - an unexpected increase is often an indication that an index is no longer being used and should be investigated.
  • The amount of time spent sorting per transaction:
    TOTAL_SORT_TIME / TOTAL_APP_COMMITS
    This is an efficient way to handle sort statistics, because any extra overhead due to spilled sorts automatically gets included here. That said, you might also want to collect TOTAL_SORTS and SORT_OVERFLOWS for ease of analysis, especially if your system has a history of sorting issues.
  • The amount of lock wait time accumulated per thousand transactions:
    1000 * LOCK_WAIT_TIME / TOTAL_APP_COMMITS
    Excessive lock wait time often translates into poor response time, so it is important to monitor. The value is normalized to one thousand transactions because lock wait time on a single transaction is typically quite low. Scaling up to one thousand transactions simply provides measurements that are easier to handle.
  • The number of deadlocks and lock timeouts per thousand transactions:
    1000 * (DEADLOCKS + LOCK_TIMEOUTS) / TOTAL_APP_COMMITS
    Although deadlocks are comparatively rare in most production systems, lock timeouts can be more common. The application usually has to handle them in a similar way: re-executing the transaction from the beginning. Monitoring the rate at which this happens helps avoid the case where many deadlocks or lock timeouts drive significant extra load on the system without the DBA being aware.
  • The number of dirty steal triggers per thousand transactions:
    1000 * POOL_DRTY_PG_STEAL_CLNS / TOTAL_APP_COMMITS
    A "dirty steal" is the least preferred way to trigger buffer pool cleaning. Essentially, the processing of an SQL statement that is in need of a new buffer pool page is interrupted while updates on the victim page are written to disk. If dirty steals are allowed to happen frequently, they can have a significant impact on throughput and response time.
  • The number of package cache inserts per thousand transactions:
    1000 * PKG_CACHE_INSERTS / TOTAL_APP_COMMITS
    Package cache insertions are part of normal execution of the system; however, in large numbers, they can represent a significant consumer of CPU time. In many well-designed systems, after the system is running at steady-state, very few package cache inserts occur, because the system is using or reusing static SQL or previously prepared dynamic SQL statements. In systems with a high traffic of ad hoc dynamic SQL statements, SQL compilation and package cache inserts are unavoidable. However, this metric is intended to watch for a third type of situation, one in which applications unintentionally cause package cache churn by not reusing prepared statements, or by not using parameter markers in their frequently executed SQL.
  • The time an agent waits for log records to be flushed to disk:
    LOG_WRITE_TIME 
      / TOTAL_APP_COMMITS
    The transaction log has significant potential to be a system bottleneck, whether due to high levels of activity, or to improper configuration, or other causes. By monitoring log activity, you can detect problems both from the DB2® side (meaning an increase in number of log requests driven by the application) and from the system side (often due to a decrease in log subsystem performance caused by hardware or configuration problems).
  • In partitioned database environments, the number of fast communication manager (FCM) buffers sent and received between partitions:
    FCM_SENDS_TOTAL, FCM_RECVS_TOTAL
    These give the rate of flow of data between different partitions in the cluster, and in particular, whether the flow is balanced. Significant differences in the numbers of buffers received from different partitions might indicate a skew in the amount of data that has been hashed to each partition.
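
As a rough illustration, this JDBC sketch (connection details are placeholders) computes two of the metrics above, aggregated across workloads and members via the MON_GET_WORKLOAD table function, guarding against division by zero on an idle system; verify the monitor element names against your server:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BasicMetrics {
    public static void main(String[] args) throws Exception {
        // connection URL and credentials are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/sample", "db2inst1", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT SUM(pool_data_l_reads), SUM(pool_data_p_reads),"
               + " SUM(lock_wait_time), SUM(total_app_commits)"
               + " FROM TABLE(MON_GET_WORKLOAD('', -2)) AS t")) {
            if (rs.next()) {
                long lReads = rs.getLong(1), pReads = rs.getLong(2);
                long lockWait = rs.getLong(3), commits = rs.getLong(4);
                if (lReads > 0) {
                    // 100 * (POOL_DATA_L_READS - POOL_DATA_P_READS) / POOL_DATA_L_READS
                    System.out.printf("data hit ratio: %.1f%%%n",
                            100.0 * (lReads - pReads) / lReads);
                }
                if (commits > 0) {
                    // 1000 * LOCK_WAIT_TIME / TOTAL_APP_COMMITS
                    System.out.printf("lock wait time per 1000 tx: %d%n",
                            1000 * lockWait / commits);
                }
            }
        }
    }
}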

Cross-partition monitoring in partitioned database environments

Almost all of the individual monitoring element values mentioned above are reported on a per-partition basis.
In general, you expect most monitoring statistics to be fairly uniform across all partitions in the same DB2 partition group. Significant differences might indicate data skew. Sample cross-partition comparisons to track include:
  • Logical and physical buffer pool reads for data, indexes, and temporary tables
  • Rows read, at the partition level and for large tables
  • Sort time and sort overflows
  • FCM buffer sends and receives
  • CPU and I/O utilization

Thursday, February 14, 2013

Hate the new Unity launcher menu of Ubuntu 12.04?

If you don't like the launcher menu being always available vertically on your desktop.

If your screen is not wide.

If you don't like the global application menu of Unity.

If you want your classic gnome menus back!

Just open a terminal and type:

sudo apt-get install gnome-panel

You will be able to select "Gnome Classic" as your session type on the Ubuntu login page.

Friday, January 4, 2013

read image from file, resize BufferedImage and write back the thumbnail to another file all in java code

import java.awt.Graphics;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.imageio.ImageIO;

/**
 *
 * @author Masoud Salehi Alamdari
 */
public class ImageResizer {

    private final Logger logger = Logger.getLogger(ImageResizer.class.getName());

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        ImageResizer imageResizer = new ImageResizer();
    }

    public ImageResizer() {
        BufferedImage image;
        try {
            image = ImageIO.read(new File("p1.jpg"));
            BufferedImage resizedImage = resize(image, 150, 150);
            ImageIO.write(resizedImage, "jpg", new File("p1_s.jpg"));
        } catch (IOException e) {
            logger.log(Level.WARNING, e.getMessage(), e);
        }
    }

    private BufferedImage resize(BufferedImage source, int width, int height) {
        BufferedImage newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics g = newImage.getGraphics();
        g.drawImage(source.getScaledInstance(width, height, Image.SCALE_SMOOTH), 0, 0, null);
        g.dispose();
        return newImage;
    }
}

Wednesday, December 19, 2012

javascript sync callback

js synchronous callback example:

<html>
    <head>
        <title>Haghe ostadi</title>
    </head>
    <body>

        <script type="text/javascript">
            function before() {
                this.call = function () {
                    alert('call back!!');
                };

                after(this);
                alert('this would have been called first if it was async');
            }
            function after(x) {
                // busy loop to simulate a long-running task
                for (var i = 0; i < 100000000; i++) {
                    var t = 'x2' == 'x1';
                }
                x.call();
            }
            window.onload = before; // assign the handler; do not call it here
        </script>

    </body>
</html>

Saturday, September 22, 2012

Subflow sample code for spring web flow 2.3.1.RELEASE

 The outer flow's path:
       src/main/webapp/WEB-INF/flows/outer/outer-flow.xml

 The inner flow's path:
       src/main/webapp/WEB-INF/flows/inner/inner-flow.xml


Inside outer flow:


    <subflow-state id="current-step" subflow="inner">
        <input name="actionrequest" value="100" type="java.lang.String"/>
        <transition on="submit" to="next-step"/>
    </subflow-state>


Inside inner flow:

    <input name="actionrequest"/>
    <view-state id="select-customer" >
        <transition on="submit" to="submit"/>
    </view-state>

    <end-state id="submit" commit="true" />

decision-state sample for spring web flow 2.3.1.RELEASE

Flow definition:

    <decision-state id="customerSelectionRequired">
        <if test="xBean.conditionCheck(flowRequestContext)"
            then="nextStep1"
            else="nextStep2"/>
    </decision-state>

    <view-state id="nextStep1"/>

    <view-state id="nextStep2"/>

      
xBean is a JSF bean, and we send "flowRequestContext" to the bean to have access to the flow details inside the "conditionCheck" method.

Here is the body of "conditionCheck":


    public boolean conditionCheck(RequestContext context) {
        context.getActiveFlow().getId(); // The name of the flow!

        /*
         * implementation
         */
        return true; // placeholder result
    }

Monday, June 25, 2012

XML versus JSON - What is Best for Your App?

One of the biggest debates in Ajax development today is JSON versus XML. This is at the heart of the data end of Ajax since you usually receive JSON or XML from the server side (although these are not the only methods of receiving data). Below I will be listing pros and cons of both methods.
If you have been developing Ajax applications for any length of time you will more than likely be familiar with XML data. You also know that XML data is very powerful and that there are quite a few ways to deal with the data. One way to deal with XML data is to simply apply an XSLT style sheet to the data (I won't have time in this post to go over the inconsistent browser support of XSLT, but it is something to look into if you want to do this). This is useful if you just want to display the data. However, if you want to do something programmatically with the data (like in the instance of a web service) you will need to parse the data nodes that are returned to the XMLHttpRequest object (this is done by going through the object tag by tag and getting the needed data). Of course there are quite a few good pre-written libraries that can make going through the XML data easier and I recommend using a good one (I won't go into depth as to what libraries I prefer here, but perhaps in a future post). One thing to note is that if you want to get XML data from another domain you will have to use a server-side proxy, as the browser will not allow receiving data across domains this way.
JSON is designed to be a more programmatic way of dealing with data. JSON (JavaScript Object Notation) is designed to return data as JavaScript objects. In an Ajax application using JSON you would receive text through the XMLHttpRequest object (or by directly getting the data through the script tag, which I will touch on later) and then pass that text through an eval statement or use DOM manipulation to pass it into a script tag (if you haven't already read my post on using JSON without using eval click here to read the post). The power of this is that you can use the data in JavaScript without any parsing of the text. The downside would be that if you just wanted to display the data there is no easy way to do this with JSON. JSON is great for web services that are coming from different domains, since if you load the data through a script tag then you can get the data without a domain constraint.
The type of data that you use for your application will depend on quite a few factors. If you are going to be using the data programmatically then in most cases JSON is the better data method to use. On the other hand, if you just want to display the data returned I would recommend XML. Of course there may be other factors, such as if you are using a web service, which could dictate the data method. If you are getting data from a different domain and JSON is available this may be the better choice. For Ruby on Rails developers, if you would prefer to use JSON and XML is all that is available, the 2.0 release allows you to change XML into JSON. One of the biggest reasons that people use JSON is the size of the data. In most cases JSON uses a lot less data to send to your application (of course this may vary depending on the data and how the XML is formed).
I would recommend that you take a good look at the application that you are building and decide based on the above which type of data you should deal with. There may be more factors than the above including corporate rules and developer experience, but the above should have given you a good idea as to when to use either data method.
If you would like to contact me regarding any of the above you can make me your friend on Social Ajaxonomy and send a message to me through the service (Click here to go to my profile on Social Ajaxonomy).

By David Hurth
Source : http://www.ajaxonomy.com/2007/xslt/xml-versus-json-what-best-your-app

WSDL and WADL

Defining the Contract
An important part of any web service is the contract (or interface) which it defines between the service and any clients that might use it. This is important for a number of reasons: visualization with tools, interaction with other specifications (e.g., web service choreography), code generation, and enforcing a high-level agreement between the client and the service provider (one that still gives the service freedom to change the underlying implementation). Taken together, they give pretty compelling use cases for having web services contracts, although advocates of minimalism may disagree.
When IBM, Microsoft, and Ariba submitted WSDL 1.1 to the W3C in 2001 as a language for describing web services in conjunction with SOAP 1.1, HTTP POST and GET, and MIME, it quickly became a standard used by every SOAP toolkit. This happened in spite of the fact that it never progressed beyond being a W3C Note (which, according to W3C, is a document available for "discussion" and not officially endorsed by the W3C). In fact, though there is both a WSDL 1.1 and 1.2, WSDL 2.0 is the only version of the specification officially endorsed by the W3C.
With the rise in popularity of RESTful web services, a need arose to describe contracts for these types of web services as well. Although WSDL 2.0 attempts to fill the gap by providing support for HTTP binding, another specification fills this need in an arguably better way: WADL, a specification developed at Sun by Marc Hadley. Though it has not been submitted to any official standards body (OASIS, W3C, etc.), WADL is promising because of its more comprehensive support for REST-style services.

Contract-First Development
In general there are two different approaches to development of web services in the real world: code-first or contract-first. Code-first is where existing code (generally methods/functions) is turned into a web service using tooling, e.g. the java2wsdl script in Apache Axis. Contract-first is where the actual web services contract is developed first (usually in WSDL); this is then associated with the appropriate implementation--often using code generation with a tool such as the wsdl2java script in Apache Axis.
Though code-first is a highly popular approach, contract-first is generally considered to be best practice in order to shield the consumers of a service from changes in the underlying code base. By providing an XML-based contract, you are also protecting the client from the vagaries of how different Web Service toolkits generate contracts from code, differences in the way that language types are translated to XML types, etc. Though writing WSDL or WADL rather than code may involve some additional learning curve at the beginning, it pays off in the long run with more robustly designed services.

WSDL 1.1
An official W3C standard, the Web Services Description Language (WSDL) is an XML language for describing web services. WSDL 1.1 (which is still in wide use) has five major elements: types, message, portType, binding, and service, in that order (figure 1 below); all these major elements may be defined 0 or more times in a WSDL document, except for <types>, which may appear 0 or 1 times. Here's a short description of each:
  • <types>: This is where XML types to be used in the WSDL document are defined. Traditionally, this has meant using XML Schema, but newer versions of WSDL also support Relax NG.
  • <message>: This is the section where the input or output parts of an operation are defined, i.e. the "parameters" or "return types". It may have multiple child <part> elements, though WS-I forbids the use of more than one part per message in a document literal style service. The <part> itself may have an element (referring to a qualified XML element) or a type (referring to an XML Schema type) attribute; the latter is used in RPC/encoded style services, the former in RPC/literal or Document/literal style services (see WSDL Styles).
  • <portType>: Here is where the operations that a web service offers are defined in terms of messages (input and output, with faults). Faults (referring to SOAP faults here) are the web service equivalent of the exception in languages like C++ or Java; most SOAP toolkits will translate SOAP faults into exceptions at runtime.
  • <binding>: This is the "how" of a service, specifying the binding of the operations defined in the portType(s) to specific protocols, such as SOAP.
  • <service>: This is the "where" of the service, specifying the address where a bound operation may be found.
These sections do not necessarily have to reside in the same XML document. In fact, it is common for there to be at least two different WSDL files, where one imports the other (see Abstract and Concrete WSDLs).
<definitions>
     <types>?
        <!-- Defines the XML types used in the WSDL -->
     </types>
     <message>*
        <part element="..." or type="..."/>*
     </message>
     <portType>*
       <!-- Defines the web service "methods" -->
       <operation>*
            <input message="..."/>?
            <output message="..."/>?
            <fault message="..."/>*
       </operation>
     </portType>
     <binding>*
        <operation>
           <!-- Binding of the operation to a protocol, e.g. SOAP -->
        </operation>
     </binding>
     <service>*
        <port name="..." binding="...">
            <!-- Specifies the address of a service,
            e.g., with soap:address -->
        </port>
     </service>
</definitions>
Figure 1: Major elements of WSDL 1.1. (1)
At first blush, having all these different parts of WSDL seems a bit overly complex--after all, do you really need to define both a part (message) for an operation as well as an operation separately (this was my first reaction...)? Well, WSDL 1.1 was created to be highly decoupled, and to maximize reuse of every possible piece; for example, one can define a message that can be used both as an input or an output, or can be used by multiple port type operations. The end result of this structure, however, was a document that was somewhat redundant and hard to read, so the authors of WSDL 2.0 improved this by removing the <message> section and using defined elements instead.

WSDL 2.0
WSDL underwent a major renovation in version 2.0, changing the root tag to <description>, and ushering in many other changes and additions. I've already covered much of the structure in WSDL 1.1, so here I will describe mainly the differences:
  • <interface>: As the name implies, this section tends to resemble interfaces in Java, which makes sense since they serve very similar purposes. Like interfaces, they can define multiple operation "signatures" and can be extended for reusability. The <interface> replaces the <portType> of WSDL 1.1, and adds explicit input faults and output faults. The child <operation> elements here can also explicitly define message-exchange patterns in their pattern attribute (see below).
  • <binding>: This element has children that are identical to those of the interface, so that a binding can be specified for each. The major difference over version 1.1 is that bindings are re-usable. To be re-usable the binding simply omits the interface attribute; it may be specified later in the service declaration.
  • <service>: Child <port> elements are replaced by similar <endpoint> elements.
WSDL 2.0 also defines an explicit HTTP binding to all the methods: GET, POST, PUT, and DELETE. This becomes important for RESTful style web services. In essence, though, WSDL is service rather than resource oriented, so the fit with RESTful services is not as natural as it is in WADL.
<description>
     <types>?
        <!-- Defines the XML types used in the WSDL, as in 1.1 -->
     </types>
     <interface name="..." extends="...">*
          <fault element="..."/>*
          <operation pattern="..message pattern uri..">*
             <input element="..."/>*
             <output element="..."/>*
             <infault ref="..some fault..."/>*
             <outfault ref="..some fault"/>*
          </operation>
     </interface>
     <binding interface="..."?>
        <!-- The binding of a protocol to an interface, same structure
             as the interface element -->
     </binding>
     <service interface="...">
        <!-- Defines the actual addresses of the bindings, as in 1.1,
             but now "ports" are called "endpoints" -->
        <endpoint binding="..." address="..."/>*
     </service>
</description>
Figure 2: Major elements of WSDL 2.0. (1)

WSDL Styles
This developerWorks article does a great job of explaining the different styles of WSDL, so I will only summarize briefly here.
In general, an RPC (or "remote procedure call") style service will define the references in its message parts as XML Schema types; a Document style service will define element references on its message parts (the soap:binding will use the appropriate style attribute). An Encoded style will encode the types of the children of the soap:body in the SOAP message; a literal style will not encode them, but leave them as literal XML elements (the binding of input and output messages will have the appropriate use attribute).
RPC vs. Document WSDL definitions (where "ns:myElement" is a reference to a defined element)
<!-- RPC request message -->
<message name="input">
     <part name="param" type="xsd:int"/>
</message>

<!-- Document request message -->
<message name="input">
     <part name="param" element="ns:myElement"/>
</message>

Encoded vs. Literal SOAP Messages

<!-- Encoded SOAP request -->
<soap:body>
   <param xsi:type="xsd:int">1</param>
</soap:body>

<!-- Literal SOAP request -->
<soap:body>
   <param>1</param>
</soap:body>
There are, generally speaking, four distinct styles of WSDL: RPC/Encoded, RPC/Literal, Document/Literal, and Document/Literal Wrapped. As explained in Part 1, RPC/Encoded, once ubiquitous, is now pretty much dead: unless you have to interact with a legacy web service, use something else. Of the remaining styles, RPC/Literal has the drawback that you cannot really validate the types in the SOAP message. With Document/Literal, you can validate the types but you lose the name of the operation in the SOAP. This is where the Document/Literal Wrapped style comes in handy: it "wraps" the body of the document payload in an element that represents the operation name (it also has the additional benefit of enforcing only one child of soap:body as mandated by WS-I). The only real drawback of Document/Literal Wrapped is that you cannot "overload" web service operation names, but this is a minor quibble. Generally speaking, using this style of WSDL is your best bet, unless your SOAP toolkit is unable to work with it.

Message Exchange Patterns
Message exchange patterns are the "handshake protocol" of web services. They let a client know what type (in/out) of messages or faults must be exchanged, and in what order.
WSDL 1.1 defined 4 basic message exchange patterns:
  • One-way: An operation only receives an <input>.
  • Request-response: An operation receives a request, then issues a response. Here the <input> child of <operation> is defined before the <output>.
  • Solicit-response: An operation sends a request, then waits for a response. Here the <output> would be defined before the <input>.
  • Notification: An operation sends a message only.
Using the document ordering of elements to establish the message exchange pattern was obviously a little too subtle, so WSDL 2.0 uses an explicit pattern attribute to define it. WSDL 2.0 also expands the number to 8 message exchange patterns, which can be categorized as inbound MEPs (if the service receives the first message) or outbound MEPs (if the service sends the first message); a short Java sketch of the two most common patterns follows the list:
  • In-only: Here a service operation only receives an inbound message, but does not reply. This MEP cannot use a fault. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/in-only".
  • Robust In-only: Identical to In-only, except that this type of MEP can trigger a fault. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/robust-in-only".
  • In-Out: Identical to the request-response of WSDL 1.1. A fault here replaces the out message. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/in-out".
  • In-Optional Out: Similar to In-Out, except that the out message is optional. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/in-opt-out".
  • Out-Only: The service operation produces an out-only message, and cannot trigger a fault. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/out-only".
  • Robust Out-Only: Similar to Out-Only, except that this type of MEP can trigger a fault. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/robust-out-only".
  • Out-In: The mirror image of In-Out: the service sends an outbound message first and expects an inbound response. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/out-in".
  • Out-Optional In: The service produces an out message first, which may optionally be followed by an inbound response. When referred to by an operation's pattern attribute, it has the value "http://www.w3.org/ns/wsdl/out-opt-in".
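
As promised, a minimal JAX-WS sketch of the two inbound patterns you will actually meet in Java code (class and method names are hypothetical; the outbound MEPs have no direct JAX-WS annotation). An ordinary @WebMethod maps to In-Out, while @Oneway marks an In-Only operation:

import javax.jws.Oneway;
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public class NotificationService {

    // In-Out: a return value (or a declared fault) implies the
    // http://www.w3.org/ns/wsdl/in-out pattern.
    @WebMethod
    public String ping(String msg) {
        return "pong: " + msg;
    }

    // In-Only: @Oneway requires a void return type and no checked
    // exceptions; nothing, not even a fault, flows back to the client.
    @WebMethod
    @Oneway
    public void logEvent(String event) {
        System.out.println("received: " + event);
    }
}
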
Abstract and Concrete WSDLs
A WSDL document can be divided into "abstract" and "concrete" portions that by convention often are defined in two or more files (where the concrete file imports the abstract one). The abstract elements are <types>, <message>, and <portType> (or <interface> in 2.0); the concrete ones are <binding> and <service>. Separating these two sections allows for maximal reuse and flexibility in defining services.
A great illustration of this principle is with WS-RP (Web Services for Remote Portlets), a specification essentially for exchanging portlet content between different servers (e.g., a Java application server and, say, Microsoft Sharepoint). WS-RP defines in its specifications all of the types and operations that will be used in the web service of the "producer". The producer server only has to specify the actual concrete WSDL.

WADL
WADL, or Web Application Description Language, is a specification developed as an alternative to WSDL with specific support for RESTful web services. Whether or not WADL will be widely adopted is still an open question--certainly it would help if it were submitted to a standards body--but it is interesting nevertheless to present it in contrast with WSDL. Here, instead of providing a more comprehensive overview (the 11/09/2006 specification is very easy to read), I'll give a flavor of how it works with the ever-popular stock quote example (Figure 3 below). Notice how it defines both resources and representations, as well as the methods that can be used to manipulate the resources.
<application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://research.sun.com/wadl/2006/10 wadl.xsd"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
     xmlns:ex="http://www.example.org/types"
     xmlns="http://research.sun.com/wadl/2006/10">

     <grammars>
        <include href="ticker.xsd"/>
     </grammars>

     <resources base="http://www.example.org/services/">
        <resource path="getStockQuote">
           <method name="GET">
                <request>
                   <param name="symbol" style="query" type="xsd:string"/>
                </request>
                <response>
                   <representation mediaType="application/xml"
                       element="ex:quoteResponse"/>
                   <fault status="400" mediaType="application/xml"
                       element="ex:error"/>
                </response>
           </method>
        </resource>
     </resources>

</application>
Figure 3: A WADL example.
WADL does a nice job of capturing the style of REST. As with any other technology, though, most developers will hold off on using it until it sees some significant adoption.
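
For a Java point of comparison, here is roughly what the resource in Figure 3 could look like as a JAX-RS resource class (the class name and payload are hypothetical; Jersey, the JAX-RS reference implementation, can generate a WADL much like Figure 3 from such a class):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

// Maps to <resource path="getStockQuote"> in the WADL above
@Path("getStockQuote")
public class StockQuoteResource {

    // Mirrors the <method name="GET"> with its query-style <param>
    // and its XML <representation>
    @GET
    @Produces("application/xml")
    public String getQuote(@QueryParam("symbol") String symbol) {
        // Hypothetical payload standing in for ex:quoteResponse
        return "<quoteResponse><symbol>" + symbol
                + "</symbol><price>45.25</price></quoteResponse>";
    }
}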

Conclusion
This has certainly been a whirlwind tour of WSDL and WADL. We've covered some of the most important points here in a fairly concise fashion, but there is quite a lot more that can be said about the subject. I encourage anyone who wants to dive deeper to look at the references at the bottom of the original version of the article on the web.

By Brennan Spies
Source : http://www.ajaxonomy.com/2008/xml/web-services-part-2-wsdl-and-wadl

SOAP vs. REST

Developers new to web services are often intimidated by the parade of technologies and concepts required to understand them: REST, SOAP, WSDL, XML Schema, Relax NG, UDDI, MTOM, XOP, WS-I, WS-Security, WS-Addressing, WS-Policy, and a host of other WS-* specifications that seem to multiply like rabbits. Add to that the Java specifications, such as JAX-WS, JAX-RPC, SAAJ, etc., and the conceptual weight begins to become heavy indeed. In this series of articles I hope to shed some light on the dark corners of web services and help navigate the sea of alphabet soup (1). Along the way I'll also cover some tools for developing web services, and create a simple web service as an example. In this article I will give a high-level overview of both SOAP and REST.

Introduction
There are currently two schools of thought in developing web services: the traditional, standards-based approach (SOAP) and the conceptually simpler, trendier new kid on the block (REST). The decision between the two will be your first choice in designing a web service, so it is important to understand the pros and cons of each. It is also important, in the sometimes heated debate between the two philosophies, to separate reality from rhetoric.

SOAP
In the beginning there was...SOAP. Developed at Microsoft in 1998, the inappropriately-named "Simple Object Access Protocol" was designed to be a platform- and language-neutral alternative to previous middleware technologies like CORBA and DCOM. Its first public appearance was an Internet public draft (submitted to the IETF) in 1999; shortly thereafter, in December of 1999, SOAP 1.0 was released. In May of 2000 the 1.1 version was submitted to the W3C, where it formed the heart of the emerging Web Services technologies. The current version is 1.2, finalized in 2005. The examples given in this article will all be SOAP 1.2.
Together with WSDL and XML Schema, SOAP has become the standard for exchanging XML-based messages. SOAP was also designed from the ground up to be extensible, so that other standards could be integrated into it--and there have been many, often collectively referred to as WS-*: WS-Addressing, WS-Policy, WS-Security, WS-Federation, WS-ReliableMessaging, WS-Coordination, WS-AtomicTransaction, WS-RemotePortlets, and the list goes on. Hence much of the perceived complexity of SOAP, as in Java, comes from the multitude of standards which have evolved around it. This should not be reason to be too concerned: as with other things, you only have to use what you actually need.

The basic structure of SOAP is like any other message format (including HTML itself): header and body. In SOAP 1.2 this would look something like:

<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
<env:Header>
<!-- Header information here -->
</env:Header>
<env:Body>
<!-- Body or "Payload" here, a Fault if an error happened -->
</env:Body>
</env:Envelope> 

Note that the <Header> element is optional here, but the <Body> is mandatory.

The SOAP <Header>
SOAP uses special attributes in the standard "soap-envelope" namespace to handle the extensibility elements that can be defined in the header. The most important of these is the mustUnderstand attribute. By default, any element in the header can be safely ignored by the SOAP message recipient unless the mustUnderstand attribute on the element is set to "true" (or "1", which is the only value recognized in SOAP 1.1). A good example of this would be a security token element that authenticates the sender/requestor of the message. If for some reason the recipient is not able to process these elements, a fault should be delivered back to the sender with a fault code of MustUnderstand.
Because SOAP is designed to be used in a network environment with multiple intermediaries (SOAP "nodes", as identified by the <Node> element), it also defines two special XML attributes: role, which controls which intermediary should process a given header element, and relay, which indicates that the element should be passed on to the next node if it is not processed in the current one.
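
A small SAAJ sketch showing how these three attributes are set on a header element in Java (the security-token element and its namespace are hypothetical):

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class HeaderDemo {
    public static void main(String[] args) throws Exception {
        SOAPMessage msg = MessageFactory
                .newInstance(SOAPConstants.SOAP_1_2_PROTOCOL)
                .createMessage();

        // Hypothetical security-token header element
        QName token = new QName("http://example.org/security", "Token", "sec");
        SOAPHeaderElement header = msg.getSOAPHeader().addHeaderElement(token);

        header.setMustUnderstand(true);                       // fault if not understood
        header.setRole(SOAPConstants.URI_SOAP_1_2_ROLE_NEXT); // target the next node
        header.setRelay(true);                                // forward if unprocessed

        msg.writeTo(System.out);
    }
}
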
The SOAP <Body>
The SOAP body contains the "payload" of the message, which is defined by the WSDL's <Message> part. If there is an error that needs to be transmitted back to the sender, a single <Fault> element is used as a child of the <Body>.
The SOAP <Fault>
The <Fault> is the standard element for error handling. When present, it is the only child element of the SOAP <Body>. The structure of a fault looks like:

<env:Fault xmlns:m="http://www.example.org/timeouts">
<env:Code>
<env:Value>env:Sender</env:Value>
<env:Subcode>
<env:Value>m:MessageTimeout</env:Value>
</env:Subcode>
</env:Code>
<env:Reason>
<env:Text xml:lang="en">Sender Timeout</env:Text>
</env:Reason>
<env:Detail>
<m:MaxTime>P5M</m:MaxTime>
</env:Detail>
</env:Fault>
 
Here, only the <Code> and <Reason> child elements are required, and the <Subcode> child of <Code> is also optional. The body of the Code/Value element is a fixed enumeration with the values:
  • VersionMismatch: this indicates that the node that "threw" the fault found an invalid element in the SOAP envelope, either an incorrect namespace, incorrect local name, or both.
  • MustUnderstand: as discussed above, this code indicates that a header element with the attribute mustUnderstand="true" could not be processed by the node throwing the fault. A NotUnderstood header block should be provided to detail all of the elements in the original message which were not understood.
  • DataEncodingUnknown: the data encoding specified in the envelope's encodingStyle attribute is not supported by the node throwing the fault.
  • Sender: This is a "catch-all" code indicating that the message sent was not correctly formed or did not have the appropriate information to succeed.
  • Receiver: Another "catch-all" code indicating that the message could not be processed for reasons attributable to the processing of the message rather than to the contents of the message itself.
Subcodes, however, are not restricted and are application-defined; these will commonly be defined when the fault code is Sender or Receiver. The <Reason> element provides a human-readable explanation of the fault. The optional <Detail> element provides additional information about the fault, such as (in the example above) the timeout value. <Fault> also has optional children <Node> and <Role>, indicating which node threw the fault and the role that the node was operating in (see the role attribute above), respectively.
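
Here is a hedged SAAJ sketch that builds a fault equivalent to the timeout example above (the subcode and detail namespaces are taken from that example):

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPFault;
import javax.xml.soap.SOAPMessage;

public class FaultDemo {
    public static void main(String[] args) throws Exception {
        SOAPMessage msg = MessageFactory
                .newInstance(SOAPConstants.SOAP_1_2_PROTOCOL)
                .createMessage();

        // Standard Sender code plus an application-defined subcode,
        // mirroring the timeout fault shown above
        SOAPFault fault = msg.getSOAPBody().addFault(
                SOAPConstants.SOAP_SENDER_FAULT, "Sender Timeout");
        fault.appendFaultSubcode(
                new QName("http://www.example.org/timeouts", "MessageTimeout", "m"));
        fault.addDetail()
             .addDetailEntry(new QName("http://www.example.org/timeouts", "MaxTime", "m"))
             .addTextNode("P5M");

        msg.writeTo(System.out);
    }
}
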
SOAP Encoding
Section 5 of the SOAP 1.1 specification describes SOAP encoding, which was originally developed as a convenience for serializing and de-serializing data types to and from other sources, such as databases and programming languages. Problems, however, soon arose with complications in reconciling SOAP encoding and XML Schema, as well as with performance. The WS-I organization finally put the nail in the coffin of SOAP encoding in 2004 when it released the first version of the WS-I Basic Profile, declaring that only literal XML messages should be used (R2706). With the wide acceptance of WS-I, some of the more recent web service toolkits do not provide any support for (the previously ubiquitous) SOAP encoding at all.
A Simple SOAP Example
Putting it all together, below is an example of a simple request-response in SOAP for a stock quote. Here the transport binding is HTTP.

The request:

POST /StockPrice HTTP/1.1
Host: example.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: nnn

<?xml version="1.0"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
xmlns:s="http://www.example.org/stock-service">
<env:Body>
<s:GetStockQuote>
<s:TickerSymbol>IBM</s:TickerSymbol>
</s:GetStockQuote>
</env:Body>
</env:Envelope>
The response:

HTTP/1.1 200 OK
Content-Type: application/soap+xml; charset=utf-8
Content-Length: nnn

<?xml version="1.0"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
xmlns:s="http://www.example.org/stock-service">
<env:Body>
<s:GetStockQuoteResponse>
<s:StockPrice>45.25</s:StockPrice>
</s:GetStockQuoteResponse>
</env:Body>
</env:Envelope>
 
If you play your cards right, you may never have to actually see a SOAP message in action; every SOAP engine out there will do its best to hide it from you unless you really want to see it. If something goes wrong in your web service, however, it may be useful to know what one looks like for debugging purposes.
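
If you do want to build (or inspect) one by hand in Java, SAAJ is the low-level API for it. A minimal sketch that constructs and sends the request above (the endpoint URL is hypothetical, so this will only get a real response if a service is listening there):

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPMessage;

public class StockQuoteClient {
    public static void main(String[] args) throws Exception {
        SOAPMessage request = MessageFactory
                .newInstance(SOAPConstants.SOAP_1_2_PROTOCOL)
                .createMessage();

        // Build the GetStockQuote payload from the example above
        QName op = new QName("http://www.example.org/stock-service",
                "GetStockQuote", "s");
        SOAPBodyElement body = request.getSOAPBody().addBodyElement(op);
        body.addChildElement("TickerSymbol", "s").addTextNode("IBM");

        // Send it and print whatever comes back
        SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = conn.call(request,
                new URL("http://example.org/StockPrice")); // hypothetical endpoint
        response.writeTo(System.out);
        conn.close();
    }
}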

REST
Much in the way that Ruby on Rails was a reaction to more complex web application architectures, the emergence of the RESTful style of web services was a reaction to the more heavy-weight SOAP-based standards. In RESTful web services, the emphasis is on simple point-to-point communication over HTTP using plain old XML (POX).
The term "REST" comes from Roy Fielding's famous thesis describing the concept of Representational State Transfer (REST). REST is an architectural style that can be summed up as four verbs (GET, POST, PUT, and DELETE from HTTP 1.1) and the nouns, which are the resources available on the network (referenced in the URI). The verbs have the following operational equivalents:

HTTP     CRUD Equivalent
==============================
GET      read
POST     create, update, delete
PUT      create, update
DELETE   delete
A service to get the details of a user called 'dsmith', for example, would be handled using an HTTP GET to http://example.org/users/dsmith. Deleting the user would use an HTTP DELETE, and creating a new one would most likely be done with a POST. The need to reference other resources would be handled using hyperlinks (the XML equivalent of HTML's href, which is XLink's xlink:href) and separate HTTP request-responses.
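
A minimal Java sketch of this verb-to-resource mapping using plain HttpURLConnection (the URL is hypothetical; no toolkit is required, which is part of REST's appeal):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class UserClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.org/users/dsmith"); // hypothetical resource

        // Read the user: GET maps to "read"
        HttpURLConnection get = (HttpURLConnection) url.openConnection();
        get.setRequestMethod("GET");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(get.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();

        // Delete the user: DELETE maps to "delete"
        HttpURLConnection del = (HttpURLConnection) url.openConnection();
        del.setRequestMethod("DELETE");
        System.out.println("DELETE status: " + del.getResponseCode());
    }
}
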
A Simple RESTful Service
Re-writing the stock quote service above as a RESTful web service provides a nice illustration of the differences between SOAP and REST web services.

The request:

GET /StockPrice/IBM HTTP/1.1
Host: example.org
Accept: text/xml
Accept-Charset: utf-8
The response:

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: nnn

<?xml version="1.0"?>
<s:Quote xmlns:s="http://example.org/stock-service">
<s:TickerSymbol>IBM</s:TickerSymbol>
<s:StockPrice>45.25</s:StockPrice>
</s:Quote>
 
Though slightly modified (to include the ticker symbol in the response), the RESTful version is still simpler and more concise than the RPC-style SOAP version. In a sense, as well, RESTful web services are much closer in design and philosophy to the Web itself.

Defining the Contract
Traditionally, the big drawback of REST vis-a-vis SOAP was the lack of any way of specifying a description/contract for the web service. This, however, has changed with WSDL 2.0, which defines a full complement of non-SOAP bindings (all the HTTP methods, not just GET and POST), and with the emergence of WADL as an alternative to WSDL. This will be discussed in more detail in coming articles.

Summary and Pros/Cons
SOAP and RESTful web services have a very different philosophy from each other. SOAP is really a protocol for XML-based distributed computing, whereas REST adheres much more closely to a bare metal, web-based design. SOAP by itself is not that complex; it can get complex, however, when it is used with its numerous extensions (guilt by association).
To summarize their strengths and weaknesses:

** SOAP **
Pros:
  • Language, platform, and transport agnostic
  • Designed to handle distributed computing environments
  • Is the prevailing standard for web services, and hence has better support from other standards (WSDL, WS-*) and tooling from vendors
  • Built-in error handling (faults)
  • Extensibility
Cons:
  • Conceptually more difficult, more "heavy-weight" than REST
  • More verbose
  • Harder to develop, requires tools
** REST **
Pros:
  • Language and platform agnostic
  • Much simpler to develop than SOAP
  • Small learning curve, less reliance on tools
  • Concise, no need for additional messaging layer
  • Closer in design and philosophy to the Web
Cons:
  • Assumes a point-to-point communication model--not usable for distributed computing environments where a message may go through one or more intermediaries
  • Lack of standards support for security, policy, reliable messaging, etc., so services that have more sophisticated requirements are harder to develop ("roll your own")
  • Tied to the HTTP transport model
By Brennan Spies
Source : http://www.ajaxonomy.com/2008/xml/web-services-part-1-soap-vs-rest