Friday, October 1, 2010

How to mount NTFS drives in ubuntu

You need the ntfs-config package to do that. Run this in the terminal:

sudo apt-get install ntfs-config

Then go to System --> Administration --> NTFS Configuration Tool

There you will see a list of your NTFS drives. Check whichever ones you would like to mount and they will be mounted.

That's easy.

Thursday, September 9, 2010

how to reset mysql password

If you know the root password, but want to change it, see Section 12.4.1.6, “SET PASSWORD Syntax”.
If you set a root password previously, but have forgotten it, you can set a new password. The following sections provide instructions for Windows and Unix systems, as well as generic instructions that apply to any system.
B.5.4.1.1. Resetting the Root Password: Windows Systems
On Windows, use the following procedure to reset the password for all MySQL root accounts:
  1. Log on to your system as Administrator.
  2. Stop the MySQL server if it is running. For a server that is running as a Windows service, go to the Services manager: From the Start menu, select Control Panel, then Administrative Tools, then Services. Find the MySQL service in the list and stop it.
    If your server is not running as a service, you may need to use the Task Manager to force it to stop.
  3. Create a text file containing the following statements. Replace the password with the password that you want to use.

    UPDATE mysql.user SET Password=PASSWORD('MyNewPass') WHERE User='root';
    FLUSH PRIVILEGES;
    Write the UPDATE and FLUSH statements each on a single line. The UPDATE statement resets the password for all root accounts, and the FLUSH statement tells the server to reload the grant tables into memory so that it notices the password change.
  4. Save the file. For this example, the file will be named C:\mysql-init.txt.
  5. Open a console window to get to the command prompt: From the Start menu, select Run, then enter cmd as the command to be run.
  6. Start the MySQL server with the special --init-file option (notice that the backslash in the option value is doubled):

    C:\> C:\mysql\bin\mysqld --init-file=C:\\mysql-init.txt
    If you installed MySQL to a location other than C:\mysql, adjust the command accordingly.
    The server executes the contents of the file named by the --init-file option at startup, changing each root account password.
    You can also add the --console option to the command if you want server output to appear in the console window rather than in a log file.
    If you installed MySQL using the MySQL Installation Wizard, you may need to specify a --defaults-file option:

    C:\> "C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld.exe"
             --defaults-file="C:\\Program Files\\MySQL\\MySQL Server 5.1\\my.ini"
             --init-file=C:\\mysql-init.txt
    The appropriate --defaults-file setting can be found using the Services Manager: From the Start menu, select Control Panel, then Administrative Tools, then Services. Find the MySQL service in the list, right-click it, and choose the Properties option. The Path to executable field contains the --defaults-file setting.
  7. After the server has started successfully, delete C:\mysql-init.txt.
You should now be able to connect to the MySQL server as root using the new password. Stop the MySQL server, then restart it in normal mode again. If you run the server as a service, start it from the Windows Services window. If you start the server manually, use whatever command you normally use.
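If you want a quick sanity check from Java rather than the mysql client, a minimal JDBC sketch along these lines can verify the new password (it assumes MySQL Connector/J is on the classpath; the URL and password are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class CheckRootPassword {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // Connector/J driver class
        // adjust host, port and the password you put in mysql-init.txt
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mysql", "root", "MyNewPass");
        System.out.println("connected as root: " + !conn.isClosed());
        conn.close();
    }
}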

see :
http://dev.mysql.com/doc/refman/5.1/en/resetting-permissions.html#resetting-permissions-windows

Tuesday, August 31, 2010

How to parse a PDF

PDFBox is a Java API from Ben Litchfield that will let you access the contents of a PDF document. It comes with integration classes for Lucene to translate a PDF into a Lucene document.
 
JPedal is a Java API for extracting text and images from PDF documents.
 
PDFTextStream is a Java API for extracting text, metadata, and form data from PDF documents. It also comes with an integration module making it easier to convert a PDF document into a Lucene document.
 
XPDF is an open source tool that is licensed under the GPL. It's not a Java tool, but there is a utility called pdftotext that can translate PDF files into text files on most platforms from the command line.
 
Based on xpdf, there is a utility called pdftohtml that can translate PDF files into HTML files. This is also not a Java application.
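As a rough sketch of the PDFBox route (class and method names from the PDFBox 1.x API; the file name is just an example):

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.util.PDFTextStripper;

public class PdfTextExtractor {
    public static void main(String[] args) throws Exception {
        // load the PDF and pull out its plain text
        PDDocument document = PDDocument.load("sample.pdf");
        try {
            PDFTextStripper stripper = new PDFTextStripper();
            System.out.println(stripper.getText(document));
        } finally {
            document.close();
        }
    }
}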

How to change the encoding of a java String

A Java String itself has no encoding (internally it is always UTF-16); a charset only matters when converting between a String and bytes. So always pass the charset on both sides of the conversion, otherwise the platform default is used silently:

byte[] utf8Bytes = someString.getBytes("UTF-8");  // encode the text as UTF-8 bytes
String newStr = new String(utf8Bytes, "UTF-8");   // decode them with the same charset
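When the goal is to write the text out in a particular encoding, the charset belongs on the writer. A minimal sketch (the file name is arbitrary):

import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class WriteUtf8 {
    public static void main(String[] args) throws Exception {
        String someString = "text with non-ASCII characters";
        // the charset is applied here, when characters become bytes
        Writer writer = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8");
        try {
            writer.write(someString);
        } finally {
            writer.close();
        }
    }
}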

Monday, August 30, 2010

JAVA -- write a java.sql.blob to File

public void saveToFile(Blob blob) {
    FileOutputStream os = null;
    try {
        File file = new File("c:/someFileName.ext");
        os = new FileOutputStream(file);
        os.write(getBlobBytes(blob));
    } catch (Exception ex) {
        ex.printStackTrace();
        JOptionPane.showMessageDialog(null, "Error!");
    } finally {
        // close the stream even if writing fails
        try {
            if (os != null) os.close();
        } catch (IOException ioEx) {
            ioEx.printStackTrace();
        }
    }
}



public byte[] getBlobBytes(Blob blob) throws Exception {
        final int MAXBUFSIZE = 4096;
        if (blob != null) {
            try {
                BufferedInputStream bis = new BufferedInputStream(blob
                        .getBinaryStream());
                ByteArrayOutputStream bo = new ByteArrayOutputStream();
                byte[] buf = new byte[MAXBUFSIZE];
                int n = 0;
                while ((n = bis.read(buf, 0, MAXBUFSIZE)) != -1) {
                    bo.write(buf, 0, n);
                }
                bo.flush();
                bo.close();
                bis.close();
                buf = null;
                return bo.toByteArray();
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        return null;
    }

JDBC -- Inserting binary data

Inserting Data :

    public void saveMedia(File file, short type) {
        FileInputStream io = null;
        try {
            io = new FileInputStream(file);
            PreparedStatement statement = connection.prepareStatement(
                    "insert into media (content,mediatype,fileName) values(?,?,?)");
            statement.setBinaryStream(1, io, file.length());
            statement.setShort(2, type);
            statement.setString(3, file.getName());
            statement.executeUpdate();
            statement.close();
            connection.close();
        } catch (IOException ioEx) {
            ioEx.printStackTrace();
        } catch (SQLException sqlEx) {
            sqlEx.printStackTrace();
        } finally {
            // always release the file handle
            try {
                if (io != null) io.close();
            } catch (IOException ioEx) {
                ioEx.printStackTrace();
            }
        }
    }

Monday, August 23, 2010

Important matter in indexing --> Solr

Data sent to Solr is not immediately searchable, nor do deletions take immediate
effect. Like a database, changes must be committed first. Unlike a database, there
are no distinct sessions (that is, transactions) between each client, and instead there
is in effect one global modification state. This means that if more than one Solr client
were to submit modifications and commit them at similar times, it is possible for part
of one client's set of changes to be committed before that client told Solr to commit.
Usually, you will have just one process responsible for updating Solr. But if not, then
keep this in mind.
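For example, with the SolrJ client that ships with Solr, nothing submitted is searchable until the explicit commit at the end of this sketch (the server URL and field names are placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CommitExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "5432a");
        doc.addField("a_name", "some name");

        server.add(doc);  // pending: not yet visible to searches
        server.commit();  // now the change becomes searchable
    }
}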

From :
Solr 1.4 Enterprise Search Server (Packt, 2009, 1847195881) 

index-time-boosting while posting an xml to solr

Here is a sample XML file you can HTTP POST to Solr:

<add allowDups="false">
<doc boost="2.0">
<field name="id">5432a</field>
<field name="type" ...</field>
<field name="a_name" boost="0.5"></field>
<!-- the date/time syntax MUST look just like this (ISO-8601)-->
<field name="begin_date">2007-12-31T09:40:00Z</field>
</doc>
<doc>
<field name="id">5432a</field>
<field name="type" ...
<field name="begin_date">2007-12-31T09:40:00Z</field>
</doc>
<!-- more here as needed -->
</add>


The allowDups defaults to false to guarantee the uniqueness of values in the field
that you have designated as the unique field in the schema (assuming you have such
a field). If you were to add another document that has the same value for the unique
field, then this document would override the previous document, whether it is
pending a commit or it's already committed. You will not get an error.
If you are sure that you will be adding a document that is not
a duplicate, then you can set allowDups to true to get a
performance improvement.

Boosting affects the scores of matching documents in order to affect ranking in 
score-sorted search results. Providing a boost value, whether at the document or
field level, is optional. The default value is 1.0, which is effectively a non-boost.
Technically, documents are not boosted, only fields are. The effective boost value 
of a field is that specified for the document multiplied by that specified for the field.

Specifying boosts here is called index-time boosting, which is rarely
done as compared to the more flexible query-time boosting. Index-time
boosting is less flexible because such boosting decisions must be decided
at index-time and will apply to all of the queries.
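The same boosts can also be set from Java through SolrJ; a sketch with the Solr 1.4 client (URL and field values are placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IndexTimeBoost {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument doc = new SolrInputDocument();
        doc.setDocumentBoost(2.0f);                 // document-level boost
        doc.addField("id", "5432a");
        doc.addField("a_name", "some name", 0.5f);  // field-level boost
        doc.addField("begin_date", "2007-12-31T09:40:00Z");

        server.add(doc);
        server.commit();
    }
}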



From :
Solr 1.4 Enterprise Search Server (Packt, 2009, 1847195881)

Friday, August 20, 2010

The OSGi Architecture

The OSGi technology is a set of specifications that define a dynamic component system for Java. These specifications enable a development model where applications are (dynamically) composed of many different (reusable) components. The OSGi specifications enable components to hide their implementations from other components while communicating through services, which are objects that are specifically shared between components. This surprisingly simple model has far reaching effects for almost any aspect of the software development process.

Though components have been on the horizon for a long time, so far they failed to make good on their promises. OSGi is the first technology that actually succeeded with a component system that is solving many real problems in software development. Adopters of OSGi technology see significantly reduced complexity in almost all aspects of development. Code is easier to write and test, reuse is increased, build systems become significantly simpler, deployment is more manageable, bugs are detected early, and the runtime provides an enormous insight into what is running. Most important, it works as is testified by the wide adoption and use in popular applications like Eclipse and Spring. 

see :
http://www.osgi.org/About/WhatIsOSGi

Thursday, August 19, 2010

How to deploy Solr on Tomcat

  • Step 1 : Download the apache-solr-x.x.x distribution and extract it somewhere on your computer.
  • Step 2 : Make a folder somewhere on your computer and name it 'solr-home' (it can have any name). I assume that you have made a folder with the path : C:\solr-home.
  • Step 3 : Copy the following folders to your solr-home directory which you made in the last step. 
    1. apache-solr-x.x.x/example/lib
    2. apache-solr-x.x.x/example/solr/conf
    3. apache-solr-x.x.x/example/solr/bin
  • Step 4 : Copy the war file placed in the ./apache-solr-x.x.x/dist folder, which has a name like apache-solr-x.x.x.war (where x.x.x is the version of your Solr core), and paste it into your Tomcat webapps directory. 
  • Step 5 : Rename that war file to solr.war so that Tomcat deploys it under the /solr context.
  • Step 6 : Now you have to set the Solr home in order to tell Solr where to find its configuration and keep its indexes. The first way to approach this is to edit the web.xml file located in the WEB-INF directory inside the war (a war is an ordinary zip archive, so any archive tool can open it). Find the <env-entry> element (it should be commented out by default), copy the whole element, paste it near the bottom of the document (still inside <web-app>) and uncomment it, something like :
<env-entry>
<env-entry-name>solr/home</env-entry-name>
<env-entry-value>C:\solr-home</env-entry-value>
<env-entry-type>java.lang.String</env-entry-type>
</env-entry>
You can also set the Solr home directory in the Tomcat configuration panel. To do that, right-click the Tomcat icon in the notification area, select Configure, go to the Java tab, and add the following line to the Java options :
-Dsolr.solr.home=C:\solr-home

Friday, August 13, 2010

Java Message Service API (the JMS API)

General idea of messaging

Messaging is a form of loosely coupled distributed communication, where in this context the term 'communication' can be understood as an exchange of messages between software components. Message-oriented technologies attempt to relax tightly coupled communication (such as TCP network sockets, CORBA or RMI) by the introduction of an intermediary component, which in this case would be a queue. The latter approach allows software components to communicate 'indirectly' with each other. Benefits of this include message senders not needing to have precise knowledge of their receivers, since communication is performed using this queue. This is the first of two types: point to point and publish and subscribe.

Java Message Service API Overview

The Java Message Service (JMS) defines the standard for reliable Enterprise Messaging. Enterprise messaging, often also referred to as Messaging Oriented Middleware (MOM), is universally recognized as an essential tool for building enterprise applications. By combining Java technology with enterprise messaging, the JMS API provides a powerful tool for solving enterprise computing problems.

Enterprise messaging provides a reliable, flexible service for the asynchronous exchange of critical business data and events throughout an enterprise. The JMS API adds to this a common API and provider framework that enables the development of portable, message based applications in the Java programming language.

The JMS API improves programmer productivity by defining a common set of messaging concepts and programming strategies that will be supported by all JMS technology-compliant messaging systems.

The JMS API is an integral part of the Java 2, Enterprise Edition (J2EE) platform, and application developers can use messaging with components using J2EE APIs ("J2EE components").

Version 1.1 of the JMS API in the J2EE 1.4 platform has the following features:
  • Message-driven beans enable the asynchronous consumption of JMS messages.
  • Message sends and receives can participate in Java Transaction API (JTA) transactions.
  • J2EE Connector Architecture interfaces that allow JMS implementations from different vendors to be externally plugged into a J2EE 1.4 application server.
The addition of the JMS API enhances the J2EE platform by simplifying enterprise development, allowing loosely coupled, reliable, asynchronous interactions among J2EE components and legacy systems capable of messaging. As a developer, you can easily add new behavior to a J2EE application with existing business events by adding a new message-driven bean to operate on specific business events.

The J2EE platform's Enterprise JavaBeans (EJB) container architecture, moreover, enhances the JMS API in two ways:
  • By allowing for the concurrent consumption of messages
  • By providing support for distributed transactions, so that database updates, message processing, and connections to EIS systems using the J2EE Connector Architecture can all participate in the same transaction context.
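To make the point-to-point style concrete, a minimal JMS 1.1 queue sender could look like this sketch (the JNDI names are placeholders and depend on how your provider is configured):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class SimpleQueueSender {
    public static void main(String[] args) throws Exception {
        // look up the administered objects; the names depend on your JMS provider
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello queue");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}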

See : 
http://www.oracle.com/technetwork/java/overview-137943.html

See also: Message-oriented middleware and Message passing

And a complete, useful tutorial you can't miss :
http://download-llnw.oracle.com/javaee/1.3/jms/tutorial/1_3_1-fcs/doc/overview.html 

The Java Management Extensions (JMX) API

The JMX API is a standard API for management and monitoring of resources such as applications, devices, services, and the Java virtual machine.
Typical uses of the JMX technology include:
  • Consulting and changing application configuration.
  • Accumulating and publishing statistics about application behavior.
  • Notifying users or applications of state changes and erroneous conditions.
The JMX API includes remote access, so a remote management program can interact with a running application for the above purposes.
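A minimal sketch of a standard MBean registered on the platform MBean server (the domain name and attribute are made up for the example; the classes come from java.lang.management and javax.management):

public interface CounterMBean {
    int getCount();
    void reset();
}

public class Counter implements CounterMBean {
    private int count;
    public int getCount() { return count; }
    public void increment() { count++; }
    public void reset() { count = 0; }
}

// somewhere during application startup:
MBeanServer server = ManagementFactory.getPlatformMBeanServer();
server.registerMBean(new Counter(), new ObjectName("com.example:type=Counter"));
// the Count attribute and the reset operation are now visible in jconsole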

see :
http://openjdk.java.net/groups/jmx/
http://en.wikipedia.org/wiki/Java_Management_Extensions

tutorial for starting Spring Roo

This is where you can find a very good tutorial for Spring Roo version 1.0.2, creating a Roo-based project from scratch :

http://www.lalitbhatt.com/tiki-index.php?page=Spring+Roo

Tuesday, August 10, 2010

how to prevent running out of memory while filling large jasper reports

A JRSwapFileVirtualizer keeps filled report pages in a swap file on disk instead of holding them all in memory; pass it to the fill through the REPORT_VIRTUALIZER parameter :

        JRSwapFile swapFile =
                    new JRSwapFile(getServletContext().getRealPath("/report/swap/"), 1024 * 50/* 50 KB */, 2);
        virtualizer = new JRSwapFileVirtualizer(40, swapFile);
        virtualizer.setReadOnly(false);
        reportParam_.put(JRParameter.REPORT_VIRTUALIZER, virtualizer);
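After the report has been filled and exported, the swapped pages should be released; if I recall the JasperReports API correctly, the virtualizer has a cleanup() method for that, and it can be marked read-only once filling is done:

        // ... fill the report using reportParam_ ...
        virtualizer.setReadOnly(true);  // pages are final after the fill, so they can be shared while exporting
        // ... export the report ...
        virtualizer.cleanup();          // remove the swap data from disk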

Monday, August 9, 2010

Software versioning

Software versioning is the process of assigning either unique version names or unique version numbers to unique states of computer software. Within a given version number category (major, minor), these numbers are generally assigned in increasing order and correspond to new developments in the software.

see :
http://en.wikipedia.org/wiki/Software_versioning

Apache Incubator

The Incubator project is the entry path into The Apache Software Foundation (ASF) for projects and codebases wishing to become part of the Foundation's efforts. All code donations from external organisations and existing external projects wishing to join Apache enter through the Incubator.
The Apache Incubator has two primary goals:

see :
http://incubator.apache.org/

Spring Roo

Spring Roo is an open source software tool that uses convention-over-configuration principles to provide rapid application development of Java-based enterprise software[1]. The resulting applications use common Java technologies such as Spring Framework, Java Persistence API, Java Server Pages, Apache Maven and AspectJ[2]. Spring Roo is a member of the Spring portfolio of projects.

see :
http://en.wikipedia.org/wiki/Spring_Roo
http://www.springsource.org/roo

Convention over configuration

Convention over Configuration (also known as Coding by convention) is a software design paradigm which seeks to decrease the number of decisions that developers need to make, gaining simplicity, but not necessarily losing flexibility.
The phrase essentially means a developer only needs to specify unconventional aspects of the application. For example, if there's a class Sale in the model, the corresponding table in the database is called sales by default. It is only if one deviates from this convention, such as calling the table "products_sold", that one needs to write code regarding these names.
When the convention implemented by the tool you are using matches your desired behavior, you enjoy the benefits without having to write configuration files. When your desired behavior deviates from the implemented convention, then you configure your desired behavior.
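The same idea shows up in Java persistence: a JPA entity gets a sensible default mapping, and an annotation is only needed where you deviate from it. A purely illustrative sketch:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "products_sold")  // only needed because we deviate from the default table name
public class Sale {

    @Id
    private Long id;

    // with no @Table annotation, the provider would simply use its conventional default name
}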

see :
http://en.wikipedia.org/wiki/Convention_over_configuration

Sunday, July 4, 2010

how to convert a java.awt.Image to a BufferedImage

public static BufferedImage toBufferedImage(Image image) {
  if (image instanceof BufferedImage) {
    return (BufferedImage) image;
  }

  // This code ensures that all the pixels in the image are loaded
  image = new ImageIcon(image).getImage();

  // Determine if the image has transparent pixels
  boolean hasAlpha = hasAlpha(image);

  // Create a buffered image with a format that's compatible with the
  // screen
  BufferedImage bimage = null;
  GraphicsEnvironment ge = GraphicsEnvironment
  .getLocalGraphicsEnvironment();
  try {
      // Determine the type of transparency of the new buffered image
      int transparency = Transparency.OPAQUE;
      if (hasAlpha == true) {
        transparency = Transparency.BITMASK;
      }

      // Create the buffered image
      GraphicsDevice gs = ge.getDefaultScreenDevice();
      GraphicsConfiguration gc = gs.getDefaultConfiguration();
      bimage = gc.createCompatibleImage(image.getWidth(null), image
      .getHeight(null), transparency);
    } catch (HeadlessException e) {
  } // No screen

  if (bimage == null) {
    // Create a buffered image using the default color model
    int type = BufferedImage.TYPE_INT_RGB;
    if (hasAlpha == true) {
      type = BufferedImage.TYPE_INT_ARGB;
    }
    bimage = new BufferedImage(image.getWidth(null), image
    .getHeight(null), type);
  }

  // Copy image to buffered image
  Graphics g = bimage.createGraphics();

  // Paint the image onto the buffered image
  g.drawImage(image, 0, 0, null);
  g.dispose();

  return bimage;
}

public static boolean hasAlpha(Image image) {
  // If buffered image, the color model is readily available
  if (image instanceof BufferedImage) {
    return ((BufferedImage) image).getColorModel().hasAlpha();
  }

  // Use a pixel grabber to retrieve the image's color model;
  // grabbing a single pixel is usually sufficient
  PixelGrabber pg = new PixelGrabber(image, 0, 0, 1, 1, false);
  try {
    pg.grabPixels();
  } catch (InterruptedException e) {
  }

  // Get the image's color model
  return pg.getColorModel().hasAlpha();
}

Saturday, July 3, 2010

how to sign a jar file

build a keystore :

keytool -genkey -alias signFiles -keypass theKeyPass 
        -keystore theKeystore -storepass theStorePass



and sign your jar file using that keystore :


jarsigner -keystore theKeystore -signedjar sJarName.jar 
        JarName.jar signFiles 

You should run each of these commands on a single line; they are wrapped here only to fit the page.

Wednesday, June 23, 2010

What Is Solr?

Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich document (e.g., Word, PDF) handling. Solr is highly scalable, providing distributed search and index replication, and it powers the search and navigation features of many of the world's largest internet sites.
Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Tomcat. Solr uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use from virtually any programming language. Solr's powerful external configuration allows it to be tailored to almost any type of application without Java coding, and it has an extensive plugin architecture when more advanced customization is required.
See the complete feature list for more details.
For more information about Solr, please see the Solr wiki.



see:
http://lucene.apache.org/solr/

Comparison Between Solr And Sphinx Search Servers

Similarities

  • Both Solr and Sphinx satisfy all of your requirements. They're fast and designed to index and search large bodies of data efficiently.
  • Both have a long list of high-traffic sites using them (Solr, Sphinx)
  • Both offer commercial support. (Solr, Sphinx)
  • Both offer client API bindings for several platforms/languages (Sphinx, Solr)
  • Both can be distributed to increase speed and capacity (Sphinx, Solr)

The differences are covered point by point in the full comparison linked below.



see:
http://beerpla.net/2009/09/03/comparison-between-solr-and-sphinx-search-servers-solr-vs-sphinx-fight/

Tuesday, June 22, 2010

Get mouse pointer position in gwt

int mouseX;
int mouseY;
           

Event.addNativePreviewHandler(new NativePreviewHandler() {
                public void onPreviewNativeEvent(NativePreviewEvent event) {
                    if (event.getNativeEvent().getType().equals("mousemove")) {
                        mouseX = event.getNativeEvent().getClientX();
                        mouseY = event.getNativeEvent().getClientY();
                    }
                }
});

Friday, June 18, 2010

Crystal Reports for Eclipse

PRODUCT: Crystal Reports for Visual Studio 2010 or Eclipse.  Just a Click Away
COMPANY: SAP BusinessObjects
TERMS OF TRIAL LICENSE: See Test and Evaluation agreement off 'Download' link.

TARGET USER: Developers

TRY IT OUT:
http://go.techtarget.com/r/11750975/9846010/1

Enables developers to create professional reports without leaving the familiar environment of Visual Studio. Supports more than 35 data sources, major browsers, and operating systems.

The award-winning Crystal Reports designer is just a click away. For .NET and Java developers, Crystal Reports remains at your fingertips – no registration, no cost, and just a click away. Benefits of Crystal Reports for Visual Studio include:

* Integrating into .NET and Java applications
* Embeds into familiar Visual Studio and Eclipse environments
* Supports more than 35 data sources
* Offers simplified royalty-free runtime licensing
* And more

What Is AppScale?

AppScale is an open-source implementation of the Google App Engine cloud computing interface. It is being developed by researchers in the UC Santa Barbara RACELab. AppScale enables execution of Google App Engine (GAE) applications on virtualized cluster systems. In particular, AppScale enables users to execute GAE applications using their own clusters with greater scalability and reliability than the GAE SDK provides. Moreover, AppScale executes automatically and transparently over cloud infrastructures such as the Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and Eucalyptus, the open-source implementation of the AWS interfaces.

read more at :
    http://appscale.cs.ucsb.edu/
    http://code.google.com/p/appscale/

Cloud computing

Cloud computing is Internet-based computing, whereby shared resources, software and information, are provided to computers and other devices on-demand, like the electricity grid.

It is a paradigm shift following the shift from mainframe to client–server that preceded it in the early 1980s. Details are abstracted from the users who no longer have need of expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet. It is a byproduct and consequence of the ease-of-access to remote computing sites provided by the Internet.
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored on servers.

Most cloud computing infrastructure consists of services delivered through data centers and built on servers. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers and typically offer SLAs. The major cloud-only service providers include Bluelock, Salesforce, Amazon and Google.

read more at : http://en.wikipedia.org/wiki/Cloud_computing

What Is Google App Engine?

Google App Engine lets you run your web applications on Google's infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. With App Engine, there are no servers to maintain: You just upload your application, and it's ready to serve your users.
You can serve your app from your own domain name (such as http://www.example.com/) using Google Apps. Or, you can serve your app using a free name on the appspot.com domain. You can share your application with the world, or limit access to members of your organization.
Google App Engine supports apps written in several programming languages. With App Engine's Java runtime environment, you can build your app using standard Java technologies, including the JVM, Java servlets, and the Java programming language—or any other language using a JVM-based interpreter or compiler, such as JavaScript or Ruby. App Engine also features a dedicated Python runtime environment, which includes a fast Python interpreter and the Python standard library. The Java and Python runtime environments are built to ensure that your application runs quickly, securely, and without interference from other apps on the system.
With App Engine, you only pay for what you use. There are no set-up costs and no recurring fees. The resources your application uses, such as storage and bandwidth, are measured by the gigabyte, and billed at competitive rates. You control the maximum amounts of resources your app can consume, so it always stays within your budget.
App Engine costs nothing to get started. All applications can use up to 500 MB of storage and enough CPU and bandwidth to support an efficient app serving around 5 million page views a month, absolutely free. When you enable billing for your application, your free limits are raised, and you only pay for resources you use above the free levels.


The Application Environment

Google App Engine makes it easy to build an application that runs reliably, even under heavy load and with large amounts of data. App Engine includes the following features:
  • dynamic web serving, with full support for common web technologies
  • persistent storage with queries, sorting and transactions
  • automatic scaling and load balancing
  • APIs for authenticating users and sending email using Google Accounts
  • a fully featured local development environment that simulates Google App Engine on your computer
  • task queues for performing work outside of the scope of a web request
  • scheduled tasks for triggering events at specified times and regular intervals
Your application can run in one of two runtime environments: the Java environment, and the Python environment. Each environment provides standard protocols and common technologies for web application development.

The Sandbox

Applications run in a secure environment that provides limited access to the underlying operating system. These limitations allow App Engine to distribute web requests for the application across multiple servers, and start and stop servers to meet traffic demands. The sandbox isolates your application in its own secure, reliable environment that is independent of the hardware, operating system and physical location of the web server.
Examples of the limitations of the secure sandbox environment include:

  • An application can only access other computers on the Internet through the provided URL fetch and email services. Other computers can only connect to the application by making HTTP (or HTTPS) requests on the standard ports.
  • An application cannot write to the file system. An app can read files, but only files uploaded with the application code. The app must use the App Engine datastore, memcache or other services for all data that persists between requests.
  • Application code only runs in response to a web request, a queued task, or a scheduled task, and must return response data within 30 seconds in any case. A request handler cannot spawn a sub-process or execute code after the response has been sent.

The Java Runtime Environment

You can develop your application for the Java runtime environment using common Java web development tools and API standards. Your app interacts with the environment using the Java Servlet standard, and can use common web application technologies such as JavaServer Pages (JSPs).
The Java runtime environment uses Java 6. The App Engine Java SDK supports developing apps using either Java 5 or 6.
The environment includes the Java SE Runtime Environment (JRE) 6 platform and libraries. The restrictions of the sandbox environment are implemented in the JVM. An app can use any JVM bytecode or library feature, as long as it does not exceed the sandbox restrictions. For instance, bytecode that attempts to open a socket or write to a file will throw a runtime exception.
Your app accesses most App Engine services using Java standard APIs. For the App Engine datastore, the Java SDK includes implementations of the Java Data Objects (JDO) and Java Persistence API (JPA) interfaces. Your app can use the JavaMail API to send email messages with the App Engine Mail service. The java.net HTTP APIs access the App Engine URL fetch service. App Engine also includes low-level APIs for its services to implement additional adapters, or to use directly from the application. See the documentation for the datastore, memcache, URL fetch, mail, images and Google Accounts APIs.
Typically, Java developers use the Java programming language and APIs to implement web applications for the JVM. With the use of JVM-compatible compilers or interpreters, you can also use other languages to develop web applications, such as JavaScript, Ruby, or Scala.
For more information about the Java runtime environment, see The Java Runtime Environment.
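A minimal App Engine Java app really is just a servlet plus the usual web.xml mapping; a sketch:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloAppEngineServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // App Engine runs this exactly like any other servlet container would
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from App Engine");
    }
}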

The Python Runtime Environment

With App Engine's Python runtime environment, you can implement your app using the Python programming language, and run it on an optimized Python interpreter. App Engine includes rich APIs and tools for Python web application development, including a feature rich data modeling API, an easy-to-use web application framework, and tools for managing and accessing your app's data. You can also take advantage of a wide variety of mature libraries and frameworks for Python web application development, such as Django.
The Python runtime environment uses Python version 2.5.2. Additional support for Python 3 is being considered for a future release.
The Python environment includes the Python standard library. Of course, not all of the library's features can run in the sandbox environment. For instance, a call to a method that attempts to open a socket or write to a file will raise an exception. For convenience, several modules in the standard library whose core features are not supported by the runtime environment have been disabled, and code that imports them will raise an error.
Application code written for the Python environment must be written exclusively in Python. Extensions written in the C language are not supported.
The Python environment provides rich Python APIs for the datastore, Google Accounts, URL fetch, and email services. App Engine also provides a simple Python web application framework called webapp to make it easy to start building applications.
You can upload other third-party libraries with your application, as long as they are implemented in pure Python and do not require any unsupported standard library modules.
For more information about the Python runtime environment, see The Python Runtime Environment.

The Datastore

App Engine provides a powerful distributed data storage service that features a query engine and transactions. Just as the distributed web server grows with your traffic, the distributed datastore grows with your data.
The App Engine datastore is not like a traditional relational database. Data objects, or "entities," have a kind and a set of properties. Queries can retrieve entities of a given kind filtered and sorted by the values of the properties. Property values can be of any of the supported property value types.
Datastore entities are "schemaless." The structure of data entities is provided by and enforced by your application code. The Java JDO/JPA interfaces and the Python datastore interface include features for applying and enforcing structure within your app. Your app can also access the datastore directly to apply as much or as little structure as it needs.
The datastore is strongly consistent and uses optimistic concurrency control. An update of an entity occurs in a transaction that is retried a fixed number of times if other processes are trying to update the same entity simultaneously. Your application can execute multiple datastore operations in a single transaction which either all succeed or all fail, ensuring the integrity of your data.
The datastore implements transactions across its distributed network using "entity groups." A transaction manipulates entities within a single group. Entities of the same group are stored together for efficient execution of transactions. Your application can assign entities to groups when the entities are created.
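With the JDO interface mentioned above, an entity and a simple store operation look roughly like this sketch ("transactions-optional" is the persistence unit name used in the App Engine SDK examples):

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable
public class Greeting {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    @Persistent
    private String content;

    public Greeting(String content) {
        this.content = content;
    }

    public static void save(Greeting greeting) {
        PersistenceManagerFactory pmf =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");
        PersistenceManager pm = pmf.getPersistenceManager();
        try {
            pm.makePersistent(greeting);  // stored as a datastore entity of kind "Greeting"
        } finally {
            pm.close();
        }
    }
}

In a real app the PersistenceManagerFactory is expensive to create and is normally held in a singleton rather than built on every call.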

Google Accounts

App Engine supports integrating an app with Google Accounts for user authentication. Your application can allow a user to sign in with a Google account, and access the email address and displayable name associated with the account. Using Google Accounts lets the user start using your application faster, because the user may not need to create a new account. It also saves you the effort of implementing a user account system just for your application.
If your application is running under Google Apps, it can use the same features with members of your organization and Google Apps accounts.
The Users API can also tell the application whether the current user is a registered administrator for the application. This makes it easy to implement admin-only areas of your site.
For more information about integrating with Google Accounts, see the Users API reference.
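A rough sketch of what that looks like inside a servlet with the Users API (the greeting text and redirect target are arbitrary):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.users.User;
import com.google.appengine.api.users.UserService;
import com.google.appengine.api.users.UserServiceFactory;

public class WelcomeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        UserService userService = UserServiceFactory.getUserService();
        User user = userService.getCurrentUser();  // null if nobody is signed in
        resp.setContentType("text/plain");
        if (user != null) {
            resp.getWriter().println("Hello, " + user.getNickname());
        } else {
            // send the visitor to the Google Accounts sign-in page, then back here
            resp.sendRedirect(userService.createLoginURL(req.getRequestURI()));
        }
    }
}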

App Engine Services

App Engine provides a variety of services that enable you to perform common operations when managing your application. The following APIs are provided to access these services:

URL Fetch

Applications can access resources on the Internet, such as web services or other data, using App Engine's URL fetch service. The URL fetch service retrieves web resources using the same high-speed Google infrastructure that retrieves web pages for many other Google products.
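In the Java runtime this usually goes through the standard java.net classes, roughly as in this sketch (the address is just an example):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class FetchExample {
    // on App Engine, connections opened through java.net.URL are served by the URL fetch service
    public static String fetch(String address) throws IOException {
        URL url = new URL(address);
        BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()));
        try {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            return body.toString();
        } finally {
            reader.close();
        }
    }
}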

Mail

Applications can send email messages using App Engine's mail service. The mail service uses Google infrastructure to send email messages.

Memcache

The Memcache service provides your application with a high performance in-memory key-value cache that is accessible by multiple instances of your application. Memcache is useful for data that does not need the persistence and transactional features of the datastore, such as temporary data or data copied from the datastore to the cache for high speed access.
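The low-level Java API for it follows the usual check-the-cache-first pattern; a sketch (the key and the datastore lookup are placeholders):

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class CachedLookup {

    public static String cachedValue(String key) {
        MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
        String value = (String) cache.get(key);  // null on a cache miss
        if (value == null) {
            value = loadFromDatastore(key);      // hypothetical slow lookup
            cache.put(key, value);               // cache it for later requests
        }
        return value;
    }

    private static String loadFromDatastore(String key) {
        return "value for " + key;               // stand-in for a real datastore read
    }
}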

Image Manipulation

The Image service lets your application manipulate images. With this API, you can resize, crop, rotate and flip images in JPEG and PNG formats.

Scheduled Tasks and Task Queues

An application can perform tasks outside of responding to web requests. Your application can perform these tasks on a schedule that you configure, such as on a daily or hourly basis. Or, the application can perform tasks added to a queue by the application itself, such as a background task created while handling a request.
Scheduled tasks are also known as "cron jobs," handled by the Cron service. For more information on using the Cron service, see the Python or Java cron documentation.
Task queues are currently released as an experimental feature. At this time, only the Python runtime environment can use task queues. A task queue interface for Java applications will be released in the near future. For information about the task queue service and the Python API, see the Python documentation.

Development Workflow

The App Engine software development kits (SDKs) for Java and Python each include a web server application that emulates all of the App Engine services on your local computer. Each SDK includes all of the APIs and libraries available on App Engine. The web server also simulates the secure sandbox environment, including checks for attempts to access system resources disallowed in the App Engine runtime environment.
Each SDK also includes a tool to upload your application to App Engine. Once you have created your application's code, static files and configuration files, you run the tool to upload the data. The tool prompts you for your Google account email address and password.
When you build a new major release of an application that is already running on App Engine, you can upload the new release as a new version. The old version will continue to serve users until you switch to the new version. You can test the new version on App Engine while the old version is still running.
The Java SDK runs on any platform with Java 5 or Java 6. The SDK is available as a Zip file. If you use the Eclipse development environment, you can use the Google Plugin for Eclipse to create, test and upload App Engine applications. The SDK also includes command-line tools for running the development server and uploading your app.
The Python SDK is implemented in pure Python, and runs on any platform with Python 2.5, including Windows, Mac OS X and Linux. The SDK is available as a Zip file, and installers are available for Windows and Mac OS X.
The Administration Console is the web-based interface for managing your applications running on App Engine. You can use it to create new applications, configure domain names, change which version of your application is live, examine access and error logs, and browse an application's datastore.

Quotas and Limits

Not only is creating an App Engine application easy, it's free! You can create an account and publish an application that people can use right away at no charge, and with no obligation. An application on a free account can use up to 500MB of storage and up to 5 million page views a month. When you are ready for more, you can enable billing, set a maximum daily budget, and allocate your budget for each resource according to your needs.
You can register up to 10 applications per developer account.
Each app is allocated resources within limits, or "quotas." A quota determines how much of a given resource an app can use during a calendar day. In the near future, you will be able to adjust some of these quotas by purchasing additional resources.
Some features impose limits unrelated to quotas to protect the stability of the system. For example, when an application is called to serve a web request, it must issue a response within 30 seconds. If the application takes too long, the process is terminated and the server returns an error code to the user. The request timeout is dynamic, and may be shortened if a request handler reaches its timeout frequently to conserve resources.
Attempts to subvert or abuse quotas, such as by operating applications on multiple accounts that work in tandem, are a violation of the Terms of Service, and could result in apps being disabled or accounts being closed.
For a list of quotas and an explanation of the quota system, including which quotas can be increased by enabling billing, see Quotas.

see :
    http://code.google.com/appengine/
    http://code.google.com/appengine/docs/whatisgoogleappengine.html
    http://en.wikipedia.org/wiki/Google_App_Engine

Thursday, June 17, 2010

MyISAM Vs. InnoDB


  1. InnoDB recovers from a crash or other unexpected shutdown by replaying its logs. MyISAM must fully scan and repair or rebuild any indexes or possibly tables which had been updated but not fully flushed to disk. Since the InnoDB approach is approximately fixed time while the MyISAM time grows with the size of the data files, InnoDB offers greater availability and reliability as database sizes grow.
  2. MyISAM relies on the operating system for caching reads and writes to the data rows while InnoDB does this within the engine itself, combining the row caches with the index caches. Dirty (changed) database pages are not immediately sent to the operating system to be written by InnoDB, which can make it substantially faster than MyISAM in some situations.
  3. InnoDB stores rows in primary key order if a primary key is present, otherwise in the order of the first unique key. This can be significantly faster if the key is chosen to be good for common operations. If there is no primary key or unique key InnoDB will use an internally generated unique integer key and will physically store records in roughly insert order, as MyISAM does. Alternatively, an autoincrementing primary key field can be used to achieve the same effect.
  4. InnoDB currently does not provide the compression and terse row formats provided by MyISAM, so both the disk and cache RAM required may be larger. A lower overhead format is available for MySQL 5.0, reducing overhead by about 20% and use of page compression is planned for a future version.
  5. When operating in fully ACID-compliant modes, InnoDB must do a flush to disk at least once per transaction, though it will combine flushes for inserts from multiple connections. For typical hard drives or arrays, this will impose a limit of about 200 update transactions per second. For applications which require higher transaction rates, disk controllers with write caching and battery backup will be required in order to maintain transactional integrity. InnoDB also offers several modes which reduce this effect, naturally leading to a loss of transactional integrity guarantees, though still retaining greater reliability than MyISAM. MyISAM has none of this overhead, but only because it does not support transactions.
  6. MyISAM uses table-level locking on writes to any existing row, whereas InnoDB uses row-level locking. For large database applications where many rows are often updated, row-level locking is crucial because a single table-level lock significantly reduces concurrency in the database.
  7. MyISAM is still widely used in web applications as it has traditionally been perceived as faster than InnoDB in situations where most DB access is reads. Features like the adaptive hash index and insert buffer often mean that InnoDB is faster even if concurrency isn't an issue.
  8. Unlike InnoDB, MyISAM has built-in full-text search.


read more at : http://en.wikipedia.org/wiki/Myisam

    Monday, June 14, 2010

    How to zip a directory in java

    This is a sample method to zip a directory in java :



    public static boolean zipDirectory(File directory) {
        ZipOutputStream out = null;
        BufferedInputStream in = null;
        try {
            File zippedDirectory = new File(directory.getParent()
                    + File.separatorChar + directory.getName() + ".zip");
            out = new ZipOutputStream(new BufferedOutputStream(
                    new FileOutputStream(zippedDirectory)));
            byte[] data = new byte[1000];
            String files[] = directory.list();
            for (int i = 0; i < files.length; i++) {
                File entry = new File(directory.getPath() + File.separatorChar + files[i]);
                if (!entry.isFile()) {
                    // this simple version only zips plain files, not nested directories
                    continue;
                }
                in = new BufferedInputStream(new FileInputStream(entry), 1000);
                out.putNextEntry(new ZipEntry(files[i]));
                int count;
                while ((count = in.read(data, 0, 1000)) != -1) {
                    out.write(data, 0, count);
                }
                out.closeEntry();
                in.close();
            }
            return true;
        } catch (Exception ex) {
            ex.printStackTrace();
            return false;
        } finally {
            // guard against streams that were never opened
            try {
                if (in != null) {
                    in.close();
                }
                if (out != null) {
                    out.flush();
                    out.close();
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }

    delete a non-empty folder recursively in java

    How to delete a non-empty folder recursively in java: here is a sample recursive method that deletes a directory including all of its files and subfolders :

    private boolean deleteDirectory(File dir) {
        try {
            if (dir.isDirectory()) {
                for (File f : dir.listFiles()) {
                    deleteDirectory(f);
                }
            }
            System.out.println("deleting file... ==>" + dir.getPath());
            return dir.delete();
        } catch (Exception ex) {
            ex.printStackTrace();
            return false;
        }
    }

    Wednesday, April 28, 2010

    MySql 5 will not allow you to insert a large blob

    Using MySQL 5, I tried to insert a large stream and found out that I have to set a variable in the my.ini file. The variable takes a value in bytes, as follows :


    max_allowed_packet='value'



    for example, for 100MB :


    max_allowed_packet=104857600

    setting the jvm for eclipse

    There is a way to control exactly which JVM your Eclipse will use to run. By setting this configuration you specify the exact location of the JVM for Eclipse.

    On Windows you should add the following lines to your eclipse.ini file. Just make sure you put the -vm flag and the path to your java executable on separate lines, and make sure you put them before -vmargs.



    -vm
    C:\Java\JDK\1.5\bin\javaw.exe


    An example for Linux :

    -vm
    /opt/sun-jdk-1.6.0.02/bin/java



    An example for Mac OS X :

    On a Mac OS X system, you can find eclipse.ini by right-clicking (or Ctrl+clicking) the Eclipse executable in Finder, choosing Show Package Contents, and then locating eclipse.ini in the Contents/MacOS folder.


    -vm
    /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/bin/java

    Wednesday, February 17, 2010

    second one

    oh, I can't believe I left this place with just one post!!! I'm gonna be more active! haha