You're part of a development team that has just started working on a brand new Java EE application, and you're asked to put together a build script for this app. "Nothing could be easier," you think, and you quickly put together a simple Ant script or a Maven POM file. Your build compiles Java code, runs JUnit tests, and creates a WAR file for your app. Your job is done, and you move on to more exciting and important tasks.
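A starting point like that can be sketched in a handful of lines. Here is a minimal Maven POM of the kind described, assuming the standard Maven directory layout; the `groupId` and `artifactId` are placeholders, not names from any real project:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- placeholder coordinates for illustration only -->
  <groupId>com.example</groupId>
  <artifactId>myapp</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- "war" packaging makes "mvn package" compile the code,
       run JUnit tests via the Surefire plugin, and build the WAR -->
  <packaging>war</packaging>
</project>
```

With this in place, `mvn package` does everything the paragraph above describes, which is exactly why it feels like the job is done.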
Unfortunately, your simple build, while being a good starting point, does not accomplish much. Contrary to what many developers think, the purpose of an automated build is not to automate the production of executable code (be it a WAR file or an "exe"). Its purpose is to verify the correctness of the code and to discover as many problems and defects as possible, as quickly as possible.
It is common knowledge that it is much less costly to fix a defect during construction than during the testing phase:
According to the chart above (source: Six Sigma and IBM Systems Sciences Institute), it is two times more costly to fix a bug during testing than during implementation. I think this difference is actually much higher. Our short-term (working) memory is extremely volatile; according to some studies, short-term memory begins decaying after eighteen seconds. The cost of "context switching" for the brain is very high. In most organizations, the testing cycle takes at least a few weeks. This means that the bug you just introduced will not be discovered for another few weeks at the earliest. When (or if!) it is finally discovered, you'll most likely be working on something entirely different. It will take at least a few hours just to recall all the details associated with the bug.
So the fact that your code compiles is a very weak indicator of code quality (although catching compilation problems early is important too, especially for large teams with a high check-in volume). Automated testing must be part of every build. Most developers implement some automated testing using an xUnit framework. In the majority of cases, these tests do not run against a deployed application, e.g., they do not hit a Web server. This kind of testing is useful, but it has its limitations. The main limitation is that we are not testing the application from the standpoint of its end users. For example, we're not testing AJAX logic running in a browser. Nor are we testing functionality that depends on an application server. Mock object frameworks help to a degree, but emulating an application server's behavior can take considerable effort. Not to mention that the "emulated" app server won't account for the quirks of your "real" application server. In many cases there are subtle differences in app server behavior, very often caused by differences in how the classloader hierarchy is implemented. Reproducing these nuances with mock frameworks, or even with an embeddable servlet container such as Jetty, is impossible.
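To make the trade-off concrete, here is a sketch of the kind of container-independent unit test the paragraph above describes, using a hand-rolled mock instead of a mock framework. All the names (`UserRepository`, `GreetingService`) are hypothetical examples invented for this illustration, not part of any real library:

```java
// Hypothetical collaborator that would normally talk to a database.
interface UserRepository {
    String findNameById(int id);
}

// Hypothetical business logic under test.
class GreetingService {
    private final UserRepository repository;

    GreetingService(UserRepository repository) {
        this.repository = repository;
    }

    String greet(int userId) {
        return "Hello, " + repository.findNameById(userId) + "!";
    }
}

public class GreetingServiceTest {
    public static void main(String[] args) {
        // The mock stands in for the real data source, so the test runs fast
        // and needs no server -- but note that nothing here exercises the app
        // server, its classloader hierarchy, or any JavaScript in a browser.
        UserRepository mock = id -> "Alice";
        String greeting = new GreetingService(mock).greet(42);
        if (!greeting.equals("Hello, Alice!")) {
            throw new AssertionError("unexpected greeting: " + greeting);
        }
        System.out.println("unit test passed: " + greeting);
    }
}
```

Tests like this are valuable precisely because they are cheap, but everything the mock replaces is exactly the territory where the container-specific bugs live.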
The bottom line is that your automated build has to be able to deploy your application and run tests against it. Using a browser-based testing tool such as Selenium allows you to test your application as if it were being used by end users, including testing all of your fancy AJAX features. Automating application deployment and testing does take some effort, and developing a comprehensive automated test suite can be a daunting task. But it is certainly possible, and well worth it.
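The shape of such a test can be sketched with nothing but the JDK. In this illustration a tiny stub server from the standard `com.sun.net.httpserver` package stands in for the freshly deployed application (a real build would point the test at the app server's URL instead), and the test issues a real HTTP request the way an end user's browser would:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DeployedAppTest {
    public static void main(String[] args) throws Exception {
        // Stand-in for the deployed application; in a real suite this would
        // be the WAR your build just deployed to an actual app server.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello from the app".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // The "test": a real HTTP round trip against the running application.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/hello").openConnection();
        int status = conn.getResponseCode();
        String body = new String(conn.getInputStream().readAllBytes(),
                StandardCharsets.UTF_8);
        server.stop(0);

        if (status != 200 || !"Hello from the app".equals(body)) {
            throw new AssertionError("deployed-app check failed: "
                    + status + " / " + body);
        }
        System.out.println("HTTP test passed");
    }
}
```

A Selenium-based version would replace the raw `HttpURLConnection` call with WebDriver commands that drive a real browser, so the page's JavaScript and AJAX logic actually execute instead of being bypassed.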