Category Archives: Building software

DPBuddy — Tool For DataPower Administrators and Developers

We’re pleased to announce the release of our new product, DataPower Buddy (dpbuddy). “dpbuddy” is a free command-line tool for automating administration, management and deployment of IBM WebSphere DataPower appliances. The tool supports export/import, file transfer, backups and many other functions.

dpbuddy is implemented as a set of custom tasks for the popular build tool, Apache Ant.

Here is a quick example of dpbuddy in action:
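
(The element and attribute names in this sketch are illustrative rather than dpbuddy’s exact syntax; see the product documentation for the real thing.)

<dp:copy todir="/myapp" cleanDirectories="true">
    <fileset dir="services" includes="**/*.xsl"/>
</dp:copy>
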
This Ant task will remove remote directories if they exist, reproduce the local directory tree (all folders under “services”) on the device and upload the necessary files based on the “includes” pattern.

dpbuddy is completely free; it can be downloaded from the dpbuddy product page.

dpbuddy provides many cool features, including:

* Response from the device is presented in a human-readable form as opposed to raw SOAP/XML messages. dpbuddy makes it easy to understand error and status messages.

* Powerful remote “copy” command that automatically reproduces the local directory tree on the device.

* Tight integration with Ant. Ant variables can be used inside deployment policies and configuration files.

* Easy-to-use alternative to deployment policies based on XPath.

* Ability to remotely “tail” device logs. It is even possible to automatically receive new log messages, similar to the Unix “tail -f” command. The “tail” task can also check for error patterns.

* “Export” based on naming patterns. You don’t need to know types (“classes”) of DataPower objects; simply specify a regexp pattern and dpbuddy will export all objects matching this pattern.

* Support for self-signed certificates. No need to add DataPower certificates to the JDK store.

* Support for arbitrary SOMA requests. You can use Ant variables inside a request.

* Parsing of all commands on the client. In case of XML errors, DataPower returns a cryptic “internal error” message, and the actual error then has to be extracted from the device logs. dpbuddy, on the other hand, validates management XML commands on the client and displays error messages right away.

Go to the dpbuddy product page to learn more.

ClassNotFoundException: A List of Dumb Things to Check

You deploy a new version of your application into the production environment, hit the application’s URL and get a 500 error with a long error stack and a nasty “java.lang.ClassNotFoundException” in bold at the top.

“Class Not Found” exceptions can be quite tricky to troubleshoot because of the complexity of Java Web applications and the application servers they run on. An average web application nowadays comes bundled with dozens of jar files (and probably thousands of classes). An average application server’s classpath is many pages long. Not to mention separately deployed libraries containing jar files shared by a group of applications. There should be little surprise that it is quite common for all these different jars and classloaders to clash with each other, get out of sync or become otherwise corrupt and misconfigured.

The list below represents a subset of all the possible causes of “ClassNotFoundException”. Hopefully it can serve as a starting point for attacking the problem. The list was inspired by “A List of Dumb Things to Check”:http://everythingsysadmin.com/dumb-things-to-check.html.

* To start, determine the type of the offending class. Is it an application class, a third-party library class, a class provided by the application server or a JDK class? Determine the jar file that should contain the class. Determine where that jar should be located on the file system. Is it part of the application installation, the application server installation or some shared library installation? You may need to search for the class within multiple jars. Here is a command to do it (source): find . -name '*.jar' -print -exec jar -tvf {} \; | awk '/YOURSEARCHSTRING/ || /jar/ {print}' (note: it won’t search within EAR and WAR files)
* Does the jar that’s supposed to contain the class exist on the file system?
* Are you able to “unjar” the jar using jar -xvf? Does the jar indeed contain the package and class in question?
* Check the version of the jar if you can’t find the class there. To determine the version, look at the jar’s MANIFEST.MF. Usually (but, unfortunately, not always) you will find some version information there. You can also compare the file size with the “baseline”.
* Does the account that the application server’s JVM was started with have read access to the jar? An application server usually runs under some sort of a system account. The jar might have been copied to the file system using a personal account from a different group.
* Have all application jars been updated during deployment? Are all the jars (including shared libraries) at the right version? Manual deployment processes are quite common, so failing to update a jar is always a possibility.
* Is the class in question packaged with the application (e.g., under WEB-INF/lib) while being loaded by one of the parent classloaders? Most application servers utilize a classloader hierarchy where the WAR file’s classloader is a child of the EAR classloader, which in turn is a child of the system (JVM) classloader. Parent classloaders can’t go down to request a class from a child classloader. The problem occurs if, for example, some jars were moved to a shared library but they still depend on classes packaged with the application.
In order to diagnose this situation, you’ll need a good understanding of your application server’s classloader hierarchy; the snippet after this list shows how to check which classloader actually loaded a class. “Here”:http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/crun_classload.html is the information for WebSphere and “here”:http://download.oracle.com/docs/cd/E15523_01/web.1111/e13706/classloading.htm is the WebLogic documentation on classloaders.
* Are any of the jars packaged with the application also present on a parent classloader’s classpath? Running different versions of the same jar or library can cause all kinds of issues, including ClassNotFoundException. Some app servers allow overriding the default classloader behavior so that the jars packaged with the application are loaded first. This could fix the problem.
* If the jar with the class in question is part of a shared library (as opposed to packaged with the application), check if this library was made available to the application via the classloader configuration. For example, WebSphere configuration involves setting up a separate classloader for the library and explicitly associating it with the application.
* Is the version and patch level of the application server correct? Does it match your development environment? Look at the detailed version information for all the different components of your app server and also get a list of installed patches. E.g., for WebSphere, run the versionInfo -long command.
* Is the application server running under the right JDK? E.g., check if the server startup script relies on JAVA_HOME and see which JDK the JAVA_HOME points to.
* If the application runs in a cluster, does the problem occur on all nodes or just on some? Are you trying to troubleshoot the problem on the right node?
* If the class name is derived from a string, either in Java source or some other file, have you spelled the class name correctly? (“Steve Loughran”:http://www.1060.org/blogxter/publish/5)
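
Once you know the hierarchy, a quick way to check which classloader actually loaded a class, and from which jar, is to run a snippet like this from a scratch JSP or test case (the class name is just an example; wrap it in try/catch as needed):

// where is this class really coming from?
Class<?> clazz = Class.forName("com.example.SomeClass");
System.out.println("Loaded by: " + clazz.getClassLoader());
// getCodeSource() may return null for classes loaded by the bootstrap classloader
System.out.println("Loaded from: " + clazz.getProtectionDomain().getCodeSource().getLocation());

If the location is not what you expected, you’ve likely found your culprit.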

Once again, this is by no means a complete list. If anybody wants to contribute, please add a comment below and I’ll update the post.

Automating Your Builds? Don’t Forget About Testing

You’re part of a development team that just started working on a brand new Java EE application and you’re asked to put together a build script for this app. “Nothing can be easier” you think and you quickly put together a [simple Ant script](http://techtracer.com/2007/04/16/the-great-ant-tutorial-a-great-jump-start/) or a Maven POM file. Your build compiles Java code, runs JUnit tests and creates a WAR file for your app. Your job is done and you move on to more exciting and important tasks.

Unfortunately, your simple build, while being a good starting point, does not accomplish much. Contrary to what many developers think, the purpose of an automated build is not to automate production of executable code (be it a WAR file or an “exe”). Its purpose is to verify correctness of the code and to discover as many problems and defects as possible as quickly as possible.

It is common knowledge that it is much less costly to fix a defect during construction than during the testing phase:

![Cost to fix software defects](/files/images/defect_cost.gif "Cost to fix software defects")

According to the chart above (source: [Six Sigma](http://software.isixsigma.com/library/content/c060719b.asp) and IBM Systems Sciences Institute), it is two times more costly to fix a bug during testing than during implementation. I think the difference is actually much higher. Our short-term (working) memory is extremely volatile. According to [some studies](http://www.cs.umd.edu/class/fall2002/cmsc838s/tichi/attention.html), short-term memory begins decaying after eighteen seconds. The cost of “context switching” for the brain [is very high](http://www.codinghorror.com/blog/archives/000691.html). In most organizations the testing cycle takes at least a few weeks. This means that the bug you just introduced will not be discovered for another few weeks at the earliest. When (or if!) it is finally discovered, most likely you’ll be working on something entirely different. It will take at least a few hours just to recall all the details associated with the bug.

So the fact that your code compiles serves as a very weak indicator of code quality (although catching compilation problems early is important too, especially for large teams with a high check-in volume). Automated testing must be done as part of every build. Most developers implement some automated testing using XUnit. In the majority of cases, these tests do not run against a deployed application, e.g., they do not hit a Web server. This kind of testing is useful, but it has its limitations. The main limitation is that we are not testing the application from the standpoint of its end users. For example, we’re not testing AJAX logic running in a browser. Also, we’re not testing functionality that depends on an application server. Mock object frameworks help to a degree, but emulating an application server’s behavior can take some effort. Not to mention that the “emulated” app server won’t account for the quirks of your “real” application server. In many cases there are subtle differences in app server behavior, very often caused by differences in how the classloader hierarchy is implemented. Reproducing these nuances using mock frameworks or even an embeddable servlet container, such as [jetty](http://www.mortbay.org/jetty/), is impossible.

The bottom line is that your automated build has to be able to deploy your application and run tests against it. Using a browser-based testing tool such as [Selenium](http://seleniumhq.org/) will allow you to test your application as if it were used by end users, including testing of all your fancy AJAX features. Automating application deployments and testing does take some effort. Developing a comprehensive automated test suite can be a daunting task. But it is [certainly possible](http://timothyfitz.wordpress.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/) and well worth it.
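
As a rough illustration, here is what a browser-driven test might look like using Selenium’s WebDriver Java API (the URL and element locators are invented for the example):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        // drive a real browser against the actually deployed application
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:9080/myapp/login.jsp");
            driver.findElement(By.name("username")).sendKeys("testuser");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("login")).click();
            // verify that the post-login page, including AJAX-rendered parts, came back
            if (!driver.getPageSource().contains("Welcome")) {
                throw new AssertionError("login page did not render as expected");
            }
        } finally {
            driver.quit();
        }
    }
}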

PAnt 1.5 is Released

This is a major update to our popular Python-based Ant wrapper.

Notable changes in this release:

* Support for positional (as opposed to named) arguments, e.g., pant.echo('message').
* Support for lists to express repeating elements.
* Support for “ant_” prefix to avoid conflicts with python keywords.

More information is available from the PAnt project page.

Please subscribe to our feed or follow us on twitter to continue receiving updates about PAnt; a new version is coming shortly.

Eliminate the Need to Redeploy Your Web Files

Some application servers require that the location of the development workspace be different from the location of the deployed application. For example, you can easily point Tomcat to the root of your Web application using the “docBase” attribute of the “Context” element. But you’re out of luck with WebSphere Application Server (WAS). You have to go through a separate application update process (using the admin console or Rational Application Developer tooling) to synchronize your deployed application with the workspace. In my view, this update (a.k.a. “deployment”) step should never be required in a local development environment. It is one thing to have to deploy to a test or a production environment that consists of multiple servers segregated from the machine hosting the build artifacts. But when both the code and the application server sit on the same machine, the deployment step is redundant. We should be able to simply tell the app server where the code is and let it do whatever is needed to load the code into the JVM.

Luckily, we can get pretty close to this vision with a few very simple (and free) tools.

In my previous post I explained how to enable dynamic class reloading for WebSphere Application Server and avoid having to deploy your Java changes altogether. But what about changes to JSPs and other non-Java resources? How can we synchronize the directory used by the application server with the development workspace?

Turns out, there is an Eclipse plugin that does exactly that: the Filesync plugin developed by Andrei Loskutov.

As the name implies, the plugin automatically synchronizes workspace directories with external directories by doing a one-way copy of changed files. It allows you to specify multiple directory pairs, define include/exclude patterns and even use variable substitution.

To enable automatic updates of JSPs in the deployed application directory, all you need to do is define a folder pair that links the web root in your workspace with the location of the exploded WAR directory in WAS (usually located under profile_root/installedApps/cell_name/app_name.ear/app_name.war).
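
For instance, a folder pair might map (the paths are purely illustrative):

WebContent -> C:/IBM/WebSphere/profiles/AppSrv01/installedApps/myNode01Cell/MyApp.ear/MyApp.war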

With WAS you need to watch out for static “<%@ include %>” directives in your JSPs: WAS will not reload included files unless you also update the including JSP. A workaround is to turn everything into “jsp:include” actions or use JSTL’s “c:import”. There might be a slight performance penalty for doing that, but the improved productivity is well worth it.
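
For example, instead of the static directive

<%@ include file="header.jsp" %>

use the include action

<jsp:include page="header.jsp"/>

or JSTL’s import:

<c:import url="header.jsp"/>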

You can use the Filesync plugin to synchronize your class files as well. This provides an alternative to the resource link-based approach that I described in the previous post. I still like using resource links better because they can be defined using Eclipse variables, which makes it easier to share the configuration within a team. As far as I can tell, with Filesync you have to use absolute paths.

Here’s what the Filesync configuration screen looks like:
Filesync configuration

Another good use of Filesync is to pull jar files from an external directory. Projects typically have a repository-like location where all third-party jars are checked in (or it could be a full-blown Maven repository). You can easily add an external jar to your classpath in Eclipse. But how do you get it into “WEB-INF/lib”, where it needs to end up for the application server? With Filesync it can be done easily by adding yet another folder pair.

In short, Filesync allows you to assemble your application “on the fly” without having to run an external build process. It also completely eliminates the need to explicitly update deployed applications.

Instantly Redeploy Your Classes to WebSphere Application Server

Any developer wants to see the code changes instantaneously reflected in the application server.
However, when using WebSphere Application Server (WAS), developers usually have to go through the process of deploying an application to the server. Even though the deployment support is integrated into Rational Application Developer (RAD) or Eclipse WTP, it still introduces delays and impedes productivity. Not to mention that Eclipse WTP does not actually support WAS 6.1 runtimes, only 6.0.

This is unfortunate because WAS 6.1 actually has good support for dynamic reloading. With dynamic reloading turned on, WAS monitors changes on the file system and automatically reloads the module (i.e., all classes loaded by the module’s classloader) when it detects a change. The reloading is almost instantaneous for simple modules. For complex modules with a lot of classes or initialization logic, the reloading step can take a little time, but it is still faster than redeploying an entire application (you should check out Java Rebel if you want truly instantaneous deployment).

With dynamic reloading, all we need to do to make our changes available to the server is update the class files in the location where the deployed application resides. This is especially straightforward for web applications and classes under WEB-INF/classes, since WAS always explodes web application archives during deployment. In the case of jar files (say, the ones under WEB-INF/lib) the situation is a bit more complicated.

Unfortunately, the location of the deployed application is usually different from the workspace where a developer makes changes. By default, deployed binaries are located under profile_root/installedApps/cell_name. While this location can be changed, the directory structure will still be somewhat different from how code is organized in the workspace.

We could write a simple Ant script to copy changes, but this again introduces a special “pseudo-deployment” step. It would be nice if we could simply make a change in Eclipse, save it and let dynamic reloading kick in without any extra steps.

Turns out that it is quite possible to make WAS and Eclipse behave this way.

First, let’s configure WAS:

* Log in to the WAS admin console and make sure that “Run in development mode” is checked for your server. This is the default for standalone installations.
* Deploy your application to WAS using WAS admin console.
* For convenience, you may want to specify a non-standard location for application binaries during installation to shorten the path, e.g., “was_installed_apps”. This step is optional.
* Go to “Enterprise applications/your_app/Class loading and update detection”.
* Make sure that “reload classes” is checked.
* Set the reload interval to some reasonable number, say “3”. By default it is set to “0”, which means “never”. IBM recommends 3 seconds as an optimal interval, although I’ve been using 1 second without any issues (for relatively small modules, though).
* Stop and start the application.

Now let’s configure Eclipse. We will have to create a resource link pointing to the deployed application and configure the project to compile classes to the deployed location.

* Go to “Java Build Path” of the project. Click on “Browse” next to “Default output folder”.
* Click “Create New Folder…”, “Advanced”, check “Link to folder in the file system”.
* Click on “Browse” and locate the root of the exploded WAR file in the deployed application location. For example, for application “HelloWorldWeb” the path will be “profile_root/installedApps/cell_name/HelloWorldWeb.ear/HelloWorldWeb.war”. Give the link a meaningful name, e.g., “deployment”. Note: if you share .project and .classpath files with other developers, use Eclipse variables instead of the absolute path.
* Click OK. This will create a resource link that you can use to specify the output folder.
* Change the output folder to point to “project_name/link_name/WEB-INF/classes”, e.g., “HelloWorldWeb/deployment/WEB-INF/classes”. Click OK.
* Eclipse will recompile your project.
* From this point forward any class change will trigger dynamic reloading on the server.
* The resource link is also available in your package explorer, so you can browse and edit the deployed files. Be careful editing JSPs or other files that way, as they will be overwritten by the next full redeployment.

This technique takes care of class files only. Dynamic reloading of JSP files is a different story.

Note: This has been tested only with Eclipse 3.4 and WAS 6.1 and on modules with a relatively small code base. I’d be curious to know how effective this approach is for large modules.

This post is part of the series on WebSphere Application Server administration. Please subscribe to our blog if you’d like to receive updates.

Note: We offer professional services in the area of WebSphere architecture, implementation and operations. If you’re looking for help with any of these tasks, please let us know.

Dynamic Ant Tasks without Setters

Ant uses reflection to pass data from XML to the Java class that implements an Ant task. For every attribute in XML, you have to define a setter in the task’s Java class.

This works fine most of the time; however, in some cases you may need a dynamic list of attributes. For example, a task may pass attribute values to some external tool that has its own set of parameters that you don’t want to hardcode in Ant. Or you may simply like the flexibility of dynamic attributes as opposed to predefined setters.

In order to implement dynamic attributes, you first need to override the “maybeConfigure” method in your Ant task and have it do nothing:


public void maybeConfigure() throws BuildException {
}

Then in your “execute” method you can access the map of attributes (that represents all attributes set in XML) as follows:


RuntimeConfigurable configurator = getRuntimeConfigurableWrapper();
Map attributes = configurator.getAttributeMap();
String attr1 = (String) attributes.get("attr1");

Note that in this case Ant does not do property substitution (expanding ${...} references), so you need to explicitly invoke project.replaceProperties for each attribute value.
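
A minimal sketch building on the snippet above (the attribute name is arbitrary):

// expand ${...} references manually since Ant's normal configuration step was skipped
String attr1 = getProject().replaceProperties((String) attributes.get("attr1"));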

Exception Handling in WSAdmin Scripts

Using AdminTask in wsadmin often results in ugly stack traces. In fact, AdminTask always produces a full Java stack trace even when the error is fairly innocuous (e.g., a resource name was mistyped). The stack trace in this situation is pretty useless; it could actually confuse operations staff, as it might be interpreted as a problem in IBM code.

It is, in fact, quite easy to deal with this situation in Jython and suppress the annoying stack trace:


import sys
from com.ibm.ws.scripting import ScriptingException
...
    try:
        AdminTask....
    except ScriptingException:
        # note that we can't use "as" because of python 2.1 version, have to use sys
        print "Error:\n"+str(sys.exc_info()[1])

Building Windows NT

I’ve been reading a relatively old but nevertheless fascinating book called “Showstopper”:http://www.amazon.com/Show-Stopper-Breakneck-Generation-Microsoft/dp/0029356717/ref=sr_1_9?ie=UTF8&s=books&qid=1228778060&sr=8-9 about the development of Windows NT. I was struck by the author’s account of NT’s build process, specifically its low degree of automation.

NT was obviously a high-intensity, almost death-march kind of project, and so builds had to be churned out at a quick pace:

…the number of builds grew from a couple to a half dozen some weeks…

This may not sound like much, but since NT was getting quite big and complex, it kept the guys in the build lab busy. The builds were so critical that at some point the technical lead of the project, Dave Cutler, had to take over the build lab. This, however, did not improve the way builds were done. One of the members of the build team remembers:

He is not giving us the list, he’s basically saying, ‘Go to this directory and sync this file.’ He’s saying, ‘Pick up this file, do this, do that’.

The release process was pretty haphazard too according to another team member:

We have all these cowboy developers, just slinging code like crazy, calling out: “We need another build!”

And, of course, continuous integration was not invented yet:

We’d think we were all done for the day, then test the build and it wouldn’t boot. We’d run around looking for the programmer whose code broke it.

I don’t think this situation was unique to Microsoft back then. But I also think that the attitude toward CM and development process automation has changed over the last 16 years. Today, automated builds are pretty much the norm for all but the smallest projects. Continuous integration and automated testing are becoming widespread. There is a dizzying array of build systems, “build servers”:/yet-another-build-server, version control systems and other CM and development tools.

There is a long way to go however. Implementing solid build/deployment and release management automation is still hard. Most large projects end up having to dedicate multiple highly skilled people to solving this problem. Home-grown script-based automation is still pretty much the state of the art. This is going to change. The tools will become more intelligent and advanced. I hope it won’t take another 16 years.

XML Alternatives and YAML

The need for a more human-friendly alternative to XML is apparent to many people, myself included. This is the reason why quite a few different “light-weight markup languages”:http://en.wikipedia.org/wiki/List_of_lightweight_markup_languages have been created over the last several years. I guess they are called “lightweight” because they don’t use XML-like tags that tend to clutter documents. I’ve looked at several of them and found “YAML”:http://yaml.org to be the most mature of the bunch, as well as quite human-readable (as opposed to, say, JSON) and easy to understand. You can find some very good side-by-side XML vs. YAML comparisons “here”:http://yaml.kwiki.org/index.cgi?YamlExamples or “here”:http://www.ibm.com/developerworks/xml/library/x-matters23.html; the difference in readability is stunning.

From what I understand, YAML is popular in the Ruby world and is used by various “PHP projects”:http://www.symfony-project.org/book/1_0/08-Inside-the-Model-Layer. However, it is almost unknown in Java/J2EE circles, which is a shame. While annotations have somewhat limited the spread of “XML hell” in Java applications, XML still remains the de-facto configuration file format. I would venture to say that, except for a few outliers, YAML would be a better option as a format for configuration files. Why? One reason is that YAML simplifies application support. Developers often say that they don’t care about the readability of XML since their IDEs and editors hide its complexity. Indeed, being able to work with XML in a nice tree view-based editor is appealing. But that does not help when application configuration needs to be quickly analyzed, and potentially updated, on some remote machine that most likely has only VI or notepad, which is usually the case in production environments. (I find this very ironic: shouldn’t the production machine have the most advanced editors and analysis tools, to make troubleshooting as efficient as possible?) For configuration files, readability and ease of understanding are key.

Of course, there is also the old trusty property/name-value format. It is, however, very limited, since it does not support any kind of nesting or scoping. So all properties become essentially global, and haven’t we learned already that global variables are not a good thing?

YAML, on the other hand, allows for expressing arbitrarily complex models. Anything that can be expressed in XML can also be expressed in YAML.
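
A contrived example: an XML configuration fragment like

<datasource name="orders">
  <url>jdbc:db2://localhost:50000/ORDERS</url>
  <pool min="1" max="10"/>
</datasource>

could be written in YAML as:

datasource:
  name: orders
  url: jdbc:db2://localhost:50000/ORDERS
  pool:
    min: 1
    max: 10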

On the downside, YAML does not have a very broad ecosystem. There are not that many “editors that support YAML”:http://www.digitalhobbit.com/archives/2005/09/15/yaml-editor-support/. There is a “YAML Eclipse plugin”:http://code.google.com/p/yamleditor/, but it only gives you color highlighting, no validation (here is “another plugin”:http://noy.cc/symfoclipse/download.html which I have not tried yet). There is no metadata support, at least for Java, although there is a “schema validator”:http://www.kuwata-lab.com/kwalify/ for Ruby (its Java port seems to be dead). There is also no XSLT equivalent.

There are two YAML parsers for Java – “jvyaml”:https://jvyaml.dev.java.net/ and “JYaml”:http://jyaml.sourceforge.net/index.html. They kinda work, but there is certainly room for improvement in terms of error messages and the ability to detect and reject an incorrect document. Since YAML is supposed to be a language with a minimal learning curve, the parsing has to be intuitive and bulletproof.

I still think that despite these shortcomings YAML is the way to go. Perhaps I will take a closer look at one of these parsers and see if I can tweak it a bit.

Why are Environments So Poorly Supported?

The concept of an “environment” permeates the software development lifecycle. No application is released into production directly from developers’ PCs. There has to be a place where an application can go through various stages of testing. We use different environments for that purpose, e.g., a “QA environment” or an “acceptance environment”.

An “environment” is just a collection of resources which could include middleware and OS/filesystem resources. In the simplest case, an environment for a J2EE web application consists of a single application server. Complex applications consisting of multiple components could utilize many different resources, including several different middleware products (e.g., app server, Web server, messaging infrastructure, ESB).

For any IT organization it is important to know how its resources are used. ITIL has the concept of a “CMDB”:http://en.wikipedia.org/wiki/CMDB that’s supposed to contain all IT resources. However, the granularity of CMDB implementations is usually too coarse (typically, the server level), which makes them difficult to use for software development. Also, the CMDB is not really integrated with development processes and tools; it’s kind of a thing on its own.

Ideally, the environment concept would be supported by development, version control, change management and build/deploy tools. Environment metadata could be used to automatically install an application in a given environment. Testing tools could use this metadata to generate “smoke” tests or to adjust existing test cases (e.g., by using different URLs/endpoints). There should be a capability to produce various reports showing which version of which application is installed in which environment.

Sadly, all these wonderful features are mostly missing from modern development and CM tools. Developers rely on scripting and informal use of environment variables. Essentially, today, each application has its own “selfish” view of what an environment is. This makes providing consistent operations and support difficult. This is especially true in virtualized environments where each logical environment may consist of many different VMs.

A case in point: build servers and build tools. I looked at several build servers and found explicit environment support only in “AntHill”:http://www.anthillpro.com/html/products/anthillpro/default.html. All the others I looked at (several; I won’t name them) omit the environment concept completely (except for some lame support of environment variables). To me, this is really odd. While build servers have their roots in continuous integration, their key selling point in an enterprise is actually release management (at least for commercial products; there are many great open source build servers to choose from if continuous integration is the only goal). So how can a release process be automated if the foundation of this process is missing entirely from the tool that’s supposed to help with the automation?

Build tools are the same way. There is some deployment support in both Maven (the “Cargo plugin”:http://cargo.codehaus.org/Maven2+plugin) and Ant, but no way of supporting an environment as an entity.

Updated Jython Ant Task

I’ve updated the Ant Jython task with a number of new features:

* Jython path is now handled by a separate JythonPath task.

* The Jython interpreter is now scoped to the Ant project. This means that you can have multiple Jython calls within the same project that share common imports and variables.

* The Jython task now supports nested text, similar to the “script” task.

Ant Jython Tasks (PAnt Tasks)

The PAnt build tool comes with several Ant tasks to facilitate the use of Jython/Python from Ant.

PAnt tasks have a number of advantages over the built-in <script language="jython"> way of invoking Jython from Ant:

* More graceful exception handling. Jython code invoked using “script” generates a long error stack that contains the full stack trace of the “script” task itself. Sifting through the traces trying to distinguish the Java trace from the Python trace is quite painful. The PAnt “jython” task produces a brief, readable, Python-only error stack.
* You can use Ant properties as parameters (“jython” task makes them available in the local namespace of the calling script).
* Convenience “import” attribute.
* “jythonInit” task allows for setting python.path using Ant path structure.
* Jython interpreter is initialized once per Ant project. All scripts invoked from the same Ant project reuse the same built-in namespace. So you can define variables and imports in one call and use them in a subsequent call.
* The task name (the name that prefixes all console output from Ant for a given task) is generated automatically based on the supplied Python code.
* The “verbose.jython” property triggers verbose output for jython-related tasks only. This is much easier than trying to scan through hundreds of lines of a general “ant -v” verbose log.

Example:

Ant code:

<jythonInit>
    <pythonPath>
        <!-- directory containing testmodule.py; the location is illustrative -->
        <pathelement location="pylib"/>
    </pythonPath>
</jythonInit>

<property name="testProp" value="test value"/>

<jython import="testmodule" exec="testmodule.test(testProp)"/>

<jython>
print "Property from ant:", testProp
# define a var that we can use in other scripts
s="test"
</jython>

<jython>
print "Var created earlier: ",s
</jython>

“testmodule” python code:


from pant.pant import project 
def test (prop):
    print "Passed parameter: ",prop
    print "Test property: ", project.properties["testProp"]

Please refer to this build.xml file for more examples.

The tasks can be used independently of PAnt’s Python code.

PAnt Ant Tasks Reference

Getting Started

Download PAnt, extract pant.jar and create the “taskdef” as described here.

“jythonInit” Task

This task initializes the Jython interpreter. Because of the overhead, the interpreter is initialized only once, even if jythonInit is invoked multiple times; repeated calls are simply ignored.
jythonInit automatically adds the pant.pant module to PYTHONPATH.

Attributes:

* pythonPathRef – reference to an Ant path defining python.path. Required if the nested “pythonPath” element is not provided.
* cacheDir – directory used for caching packages (optional). Defaults to ${java.io.tmpdir}/jython_cache (note: this is different from the default jython behavior).

Nested elements:

pythonPath – python.path defined using an Ant path-like structure. Required if the “pythonPathRef” attribute was not provided.

Special properties:

log.python.path – if set to “true”, jythonInit will print the python path to the Ant log. Default: false.

“jython” Task

Invokes Python code.
Note: by default, jython does not print the Python stack trace in case of an exception. To see the trace, run Ant in verbose mode using “-v” or set the “-Dverbose.jython=true” property.

Attributes:

* exec – a Python code snippet to execute. Typically this is a function from a module available on python.path. It has to be a single line, e.g., mod.fun(), although you can combine multiple statements separated by “;”. Required if “execfile” was not provided.
* import – a convenience attribute for providing an “import” statement. Its only purpose is to make the task invocation more readable. Alternatively, you can make “import” part of the “exec”, e.g., exec="import mod;mod.fun()". Optional.
* execfile – path to a python script file. Required if “exec” was not provided.

Nested elements:

Inline text with python code.

Special properties:

verbose.jython – if set to “true”, jython will print additional information about executing python code to Ant log. Default: false.

“pimport” Task

Creates Ant targets from a Python module. Functions that will be used as targets have to be marked with the “@target” decorator as described here.
The Python module name is used as the Ant project name. Target overriding works the same way as with the Ant “import” task. In other words, targets defined using pimport will override targets previously defined using “import” or “pimport” tasks.

Attributes:
* module – the Python module to create targets from. The module has to be available from the python.path specified using jythonInit.
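
A minimal usage sketch (assuming “mytargets” is a module on python.path whose functions are decorated with “@target”):

<pimport module="mytargets"/>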

Jython in WebSphere Portal

Most developers and administrators working with WebSphere Application Server (WAS) know that both the JACL and Jython languages can be used for various WAS administration and configuration tasks. However, JACL has always been the preferred choice, simply because it is the default language of the product’s admin tool (wsadmin) and also because JACL examples and documentation are more complete.

Using JACL might have been a valid option just a few years back (when WAS just came out), given the uncertainty surrounding the Jython project. Today, however, Jython is clearly alive and well; an alpha version supporting Python 2.5 was announced recently. Therefore there is really no point in using JACL any longer, except maybe for shops with a large collection of existing JACL scripts. JACL syntax is quite arcane compared with Python, and the language is clearly not as widely used.

IBM confirmed this view by releasing a JACL-to-Jython converter a couple of years back.

Unfortunately, up until recently Jython was not officially supported in another IBM product, WebSphere Portal, which comes with the wpscript tool for managing pages, deployable modules and other portal artifacts.

But since portal scripting relies on wsadmin’s shell, Jython is in fact fully supported by the product; it’s just not documented.
All you need to do to switch to Jython is invoke wsadmin with “-lang jython” and “-wsadmin_classpath” followed by the list of portal jars (you can copy the classpath from the SCRPATH variable definition in wpscript.sh).
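
For example, the invocation could look roughly like this (the script name is made up; the classpath is whatever SCRPATH expands to in your installation):

./wsadmin.sh -lang jython -wsadmin_classpath "$SCRPATH" -f clean_pages.py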

As an example, I put together a simple Jython script for cleaning up a portal page hierarchy. Removing pages before applying an XMLAccess script with page definitions allows you to start the portal configuration from a clean, “known” state. Very often, especially in a development environment, an application’s page hierarchy gets polluted with various “test” pages created by developers. The script gets rid of them.

In WebSphere Portal 6.1, Jython is finally a first-class citizen. The product’s documentation proclaims that JACL support will be phased out and that Jython is the way to go. Surprisingly, though, all the examples still use good old JACL. I assume it’s just a matter of time before they are converted.