Long and unwieldy stack traces are a common occurrence when dealing with Java EE application servers. Here is "an example":/files/wps_error_example.txt. Many (if not all) of these products re-throw the same exception multiple times, which complicates things even further. Figuring out the root cause of an exception becomes a major undertaking.
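For what it's worth, digging the root cause out programmatically is simple enough; the hard part is making sense of it. A minimal sketch (the class and method names are mine):

    final class StackTraceUtil {
        // Walk the getCause() chain to the end; the last exception with
        // no cause of its own is the one that actually started it all.
        static Throwable rootCauseOf(Throwable t) {
            Throwable root = t;
            while (root.getCause() != null) {
                root = root.getCause();
            }
            return root;
        }
    }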
Of course, most of the trace is useless when using proprietary products, since it points to classes that you don't have the source code for. And not only you: level 1 support most likely can't get to the source code either. As a result, 90% of the trace has little to no immediate value.
As a rule, the more complex the product, the longer the stack trace. Makes sense, right? You've got more layers and components, and each layer thinks it is its duty to dump the whole thing to the log and re-throw.
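Here is the pattern in miniature, with made-up layer names: every layer catches, dumps the full trace to the log, wraps and re-throws, so a single connection failure shows up in the log once per layer.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    class LayeredRethrow {
        static final Logger LOG = Logger.getLogger("app");

        static void dataLayer() throws Exception {
            throw new Exception("connection refused");       // the actual problem
        }

        static void serviceLayer() throws Exception {
            try {
                dataLayer();
            } catch (Exception e) {
                LOG.log(Level.SEVERE, "service failed", e);  // full trace, dump #1
                throw new Exception("service failed", e);    // wrap and re-throw
            }
        }

        static void webLayer() throws Exception {
            try {
                serviceLayer();
            } catch (Exception e) {
                LOG.log(Level.SEVERE, "request failed", e);  // same trace again, dump #2
                throw new Exception("request failed", e);    // and up it goes
            }
        }
    }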
Maybe we should start using stack trace length as a code complexity metric. It would be much more telling than cyclomatic complexity.
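If anyone wants to try that at home, a toy version of the metric could just count frames across the whole cause chain (the method name is mine):

    final class TraceMetric {
        // Total number of frames across an exception and all of its causes;
        // getCause() returns null at the end of the chain.
        static int traceDepth(Throwable t) {
            int frames = 0;
            for (Throwable c = t; c != null; c = c.getCause()) {
                frames += c.getStackTrace().length;
            }
            return frames;
        }
    }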
I also think there is a correlation between the average length of a stack trace and the average consulting rate that users of the product pay for development and support. So maybe, at the end of the day, developers and administrators should not grumble about it too much, and I should just shut up.
Mmm, we produce some pretty big stack traces too. A root cause is the fact that Java interfaces/base classes restrict any exception that isn't descended from RuntimeException to the select few declared in the method signature. If you subclass other people's code, or wrap other people's code in your framework: nested stacks.
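Something like this, with made-up names: the interface pins down the checked exception type, so anything thrown underneath has to be wrapped, and the stacks pile up.

    import java.sql.SQLException;

    // Hypothetical framework interface: implementations may only throw ServiceException.
    interface UserStore {
        String lookup(String id) throws ServiceException;
    }

    class ServiceException extends Exception {
        ServiceException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Wrapping someone else's code: the SQLException can't escape as-is,
    // so it gets nested inside ServiceException, adding one more stack to the trace.
    class JdbcUserStore implements UserStore {
        public String lookup(String id) throws ServiceException {
            try {
                return queryDatabase(id);                     // third-party JDBC code
            } catch (SQLException e) {
                throw new ServiceException("lookup failed for " + id, e);
            }
        }

        private String queryDatabase(String id) throws SQLException {
            throw new SQLException("table USERS not found"); // stand-in for real work
        }
    }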
The nice thing about OSS code is that the recipient gets to handle the trace: paste it into the IDE and see what went wrong. For closed source, all you get to do is paste the text into Google and see who else got the same error. Which is why you should never put stack traces, error messages or Windows error codes into blog entries, except as screenshots. Otherwise you end up fielding the support calls from everyone whose search turns up your blog entry.